How to decode H.264 video frames in a Java environment

Does anyone know how to decode an H.264 video frame in a Java environment?
My network camera products support RTP/RTSP streaming.
The camera serves standard RTP/RTSP, and it also supports “RTP/RTSP over HTTP”.
RTSP : TCP 554
RTP Start Port: UDP 5000

Or use Xuggler. It works with RTP, RTMP, HTTP and other protocols, can decode and encode H.264 and most other codecs, and is actively maintained, free, and open-source (LGPL).
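For reference, here is a minimal Xuggler-based sketch (the RTSP URL and the listener body are illustrative; you would add your own frame handling):
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IVideoPictureEvent;
import java.awt.image.BufferedImage;

public class XugglerRtspSketch {
    public static void main(String[] args) {
        // URL is a placeholder; point it at your camera's RTSP endpoint
        IMediaReader reader = ToolFactory.makeReader("rtsp://192.168.0.10:554/stream");
        // Ask the reader to hand back decoded frames as BufferedImages
        reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);
        reader.addListener(new MediaListenerAdapter() {
            @Override
            public void onVideoPicture(IVideoPictureEvent event) {
                BufferedImage frame = event.getImage(); // decoded H.264 frame
                // process or display the frame here
            }
        });
        // readPacket() decodes and dispatches events until the stream ends
        while (reader.readPacket() == null) {
        }
    }
}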

I found a very simple and straightforward solution based on JavaCV's FFmpegFrameGrabber class. This library lets you play streaming media by wrapping ffmpeg in Java.
How to use it?
First, download and install the library using Maven or Gradle.
Here is a StreamingClient class that calls a SimplePlayer class, which uses a thread to play the video.
public class StreamingClient extends Application implements GrabberListener
{
public static void main(String[] args)
{
launch(args);
}
private Stage primaryStage;
private ImageView imageView;
private SimplePlayer simplePlayer;
@Override
public void start(Stage stage) throws Exception
{
String source = "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov"; // the video is weird for 1 minute then becomes stable
primaryStage = stage;
imageView = new ImageView();
StackPane root = new StackPane();
root.getChildren().add(imageView);
imageView.fitWidthProperty().bind(primaryStage.widthProperty());
imageView.fitHeightProperty().bind(primaryStage.heightProperty());
Scene scene = new Scene(root, 640, 480);
primaryStage.setTitle("Streaming Player");
primaryStage.setScene(scene);
primaryStage.show();
simplePlayer = new SimplePlayer(source, this);
}
@Override
public void onMediaGrabbed(int width, int height)
{
primaryStage.setWidth(width);
primaryStage.setHeight(height);
}
@Override
public void onImageProcessed(Image image)
{
LogHelper.e(TAG, "image: " + image);
Platform.runLater(() -> {
imageView.setImage(image);
});
}
@Override
public void onPlaying() {}
@Override
public void onGainControl(FloatControl gainControl) {}
@Override
public void stop() throws Exception
{
simplePlayer.stop();
}
}
The SimplePlayer class uses FFmpegFrameGrabber to decode frames, which are converted into images and displayed in your Stage:
public class SimplePlayer
{
private static volatile Thread playThread;
private AnimationTimer timer;
private SourceDataLine soundLine;
private int counter;
public SimplePlayer(String source, GrabberListener grabberListener)
{
if (grabberListener == null) return;
if (source.isEmpty()) return;
counter = 0;
playThread = new Thread(() -> {
try {
FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(source);
grabber.start();
grabberListener.onMediaGrabbed(grabber.getImageWidth(), grabber.getImageHeight());
if (grabber.getSampleRate() > 0 && grabber.getAudioChannels() > 0) {
AudioFormat audioFormat = new AudioFormat(grabber.getSampleRate(), 16, grabber.getAudioChannels(), true, true);
DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
soundLine = (SourceDataLine) AudioSystem.getLine(info);
soundLine.open(audioFormat);
soundLine.start();
}
Java2DFrameConverter converter = new Java2DFrameConverter();
while (!Thread.interrupted()) {
Frame frame = grabber.grab();
if (frame == null) {
break;
}
if (frame.image != null) {
Image image = SwingFXUtils.toFXImage(converter.convert(frame), null);
Platform.runLater(() -> {
grabberListener.onImageProcessed(image);
});
} else if (frame.samples != null) {
ShortBuffer channelSamplesShortBuffer = (ShortBuffer) frame.samples[0];
channelSamplesShortBuffer.rewind();
ByteBuffer outBuffer = ByteBuffer.allocate(channelSamplesShortBuffer.capacity() * 2);
for (int i = 0; i < channelSamplesShortBuffer.capacity(); i++) {
short val = channelSamplesShortBuffer.get(i);
outBuffer.putShort(val);
}
// write the converted samples to the audio line so the sound is actually played
if (soundLine != null) {
soundLine.write(outBuffer.array(), 0, outBuffer.capacity());
}
}
}
grabber.stop();
grabber.release();
Platform.exit();
} catch (Exception exception) {
exception.printStackTrace(); // log the failure instead of exiting silently
System.exit(1);
}
});
playThread.start();
}
public void stop()
{
playThread.interrupt();
}
}

You can use a pure Java library called JCodec ( http://jcodec.org ).
Decoding one H.264 frame is as easy as:
ByteBuffer bb = ... // Your frame data is stored in this buffer
H264Decoder decoder = new H264Decoder();
Picture out = Picture.create(1920, 1088, ColorSpace.YUV_420); // Allocate output frame of max size
Picture real = decoder.decodeFrame(bb, out.getData());
BufferedImage bi = JCodecUtil.toBufferedImage(real); // If you prefer an AWT image
If you want to read a frame from a container (like MP4), you can use the handy helper class FrameGrab:
int frameNumber = 150;
BufferedImage frame = FrameGrab.getFrame(new File("filename.mp4"), frameNumber);
ImageIO.write(frame, "png", new File("frame_150.png"));
Finally, here's a more sophisticated sample:
private static void avc2png(String in, String out) throws IOException {
SeekableByteChannel sink = null;
SeekableByteChannel source = null;
try {
source = readableFileChannel(in);
sink = writableFileChannel(out);
MP4Demuxer demux = new MP4Demuxer(source);
H264Decoder decoder = new H264Decoder();
Transform transform = new Yuv420pToRgb(0, 0);
MP4DemuxerTrack inTrack = demux.getVideoTrack();
VideoSampleEntry ine = (VideoSampleEntry) inTrack.getSampleEntries()[0];
Picture target1 = Picture.create((ine.getWidth() + 15) & ~0xf, (ine.getHeight() + 15) & ~0xf,
ColorSpace.YUV420);
Picture rgb = Picture.create(ine.getWidth(), ine.getHeight(), ColorSpace.RGB);
ByteBuffer _out = ByteBuffer.allocate(ine.getWidth() * ine.getHeight() * 6);
BufferedImage bi = new BufferedImage(ine.getWidth(), ine.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
AvcCBox avcC = Box.as(AvcCBox.class, Box.findFirst(ine, LeafBox.class, "avcC"));
decoder.addSps(avcC.getSpsList());
decoder.addPps(avcC.getPpsList());
Packet inFrame;
int totalFrames = (int) inTrack.getFrameCount();
for (int i = 0; (inFrame = inTrack.getFrames(1)) != null; i++) {
ByteBuffer data = inFrame.getData();
Picture dec = decoder.decodeFrame(splitMOVPacket(data, avcC), target1.getData());
transform.transform(dec, rgb);
_out.clear();
AWTUtil.toBufferedImage(rgb, bi);
ImageIO.write(bi, "png", new File(format(out, i)));
if (i % 100 == 0)
System.out.println((i * 100 / totalFrames) + "%");
}
} finally {
if (sink != null)
sink.close();
if (source != null)
source.close();
}
}

I think the best solution is using "JNI + ffmpeg". In my current project, I need to play several full screen videos at the same time in a java openGL game based on libgdx. I have tried almost all the free libs but none of them has acceptable performance. So finally I decided to write my own jni C codes to work with ffmpeg. Here is the final performance on my laptop:
Environment: CPU: Core i7 Q740 @1.73GHz, Video: nVidia GeForce GT 435M,
OS: Windows 7 64bit, Java: Java7u60 64bit
Video: h264rgb / h264 encoded, no sound, resolution: 1366 * 768
Solution: Decode: JNI + ffmpeg v2.2.2, Upload to GPU:
update openGL texture using lwjgl
Performance: Decoding speed:
700-800FPS, Texture Uploading: about 1ms per frame.
It only took me several days to complete the first version. But the first version's decoding speed was only about 120FPS, and the uploading time was about 5ms per frame. After several months of optimization, I got this final performance and some additional features. Now I can play several HD videos at the same time without any slowness.
Most videos in my game have a transparent background. This kind of transparent video is an mp4 file with two video streams: one stream stores h264rgb-encoded RGB data, the other stores h264-encoded alpha data. So to play an alpha video, I need to decode the two video streams, merge them together, and then upload the result to the GPU. As a result, I can play several transparent HD videos on top of an opaque HD video at the same time in my game.
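As an illustration of the merge step described above (this is not the author's actual JNI code, just a sketch of the idea in plain Java), combining a decoded RGB frame and a decoded alpha frame into one RGBA buffer ready for texture upload could look like this:
// Sketch: rgb holds 3 bytes per pixel from the color stream, alpha holds
// 1 byte per pixel (e.g. the luma plane of the second stream). The result
// is an interleaved RGBA buffer suitable for a GL_RGBA texture upload.
static byte[] mergeRgbAndAlpha(byte[] rgb, byte[] alpha, int width, int height) {
    byte[] rgba = new byte[width * height * 4];
    for (int i = 0, p = 0; i < width * height; i++) {
        rgba[p++] = rgb[i * 3];     // R
        rgba[p++] = rgb[i * 3 + 1]; // G
        rgba[p++] = rgb[i * 3 + 2]; // B
        rgba[p++] = alpha[i];       // A taken from the alpha stream
    }
    return rgba;
}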

Take a look at the Java Media Framework (JMF) - http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/formats.html
I used it a while back and it was a bit immature, but they may have beefed it up since then.

Related

Android MLKit face detection not detecting faces when using Bitmap

I have an XR app, where the display shows the camera (rear) feed. As such, capturing the screen is pretty much the same as capturing the camera feed...
So I take screenshots (Bitmaps) and then try to detect faces within them using Google's ML Kit.
I'm following the official guide to detect faces.
To do this, I first init my face detector:
FaceDetector detector;
public MyFaceDetector(){
FaceDetectorOptions realTimeOpts =
new FaceDetectorOptions.Builder()
.setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
.build();
detector = FaceDetection.getClient(realTimeOpts);
}
I then have a function into which a bitmap is passed. I first convert the bitmap to a byte array; I do this because InputImage.fromBitmap is very slow, and ML Kit actually tells me that I should use a byte array:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 85, byteArrayOutputStream);
byte[] byteArray = byteArrayOutputStream.toByteArray();
Next I make a mutable copy of the Bitmap (so that I can draw onto it), and set up a Canvas object, along with a color that will be used when drawing on to the Bitmap:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
Bitmap bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length, options);
Canvas canvas = new Canvas(bmp);
Paint p = new Paint();
p.setColor(Color.RED);
After all is set up, I create an InputImage (used by the FaceDetector), using the byte array:
InputImage image = InputImage.fromByteArray(byteArray, bmp.getWidth(), bmp.getHeight(),0, InputImage.IMAGE_FORMAT_NV21);
Note the image format... There is an InputImage.IMAGE_FORMAT_BITMAP, but using this throws an IllegalArgumentException. Anyway, I next try to process the Bitmap, detect faces, fill each detected face with the color defined earlier, and then save the Bitmap to disk:
Task<List<Face>> result = detector.process(image).addOnSuccessListener(
new OnSuccessListener<List<Face>>() {
@Override
public void onSuccess(List<Face> faces) {
Log.e("FACE DETECTION APP", "NUMBER OF FACES: " + faces.size());
Thread processor = new Thread(new Runnable() {
@Override
public void run() {
for (Face face : faces) {
Rect destinationRect = face.getBoundingBox();
canvas.drawRect(destinationRect, p);
canvas.save();
Log.e("FACE DETECTION APP", "WE GOT SOME FACCES!!!");
}
File file = new File(someFilePath);
try {
FileOutputStream fOut = new FileOutputStream(file);
bmp.compress(Bitmap.CompressFormat.JPEG, 85, fOut);
fOut.flush();
fOut.close();
} catch (Exception e) {
e.printStackTrace();
}
}
});
processor.start();
}
})
.addOnFailureListener(
new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
// Task failed with an exception
// ...
}
});
}
While this code runs (i.e. no exceptions) and the bitmap is correctly written to disk, no faces are ever detected (faces.size() is always 0). I've tried rotating the image. I've tried changing the quality of the Bitmap. I've tried with and without the thread to process any detected faces. I've tried everything I can think of.
Anyone have any ideas?
ML Kit's InputImage.fromByteArray only supports the YV12 and NV21 formats. You will need to convert the bitmap to one of these formats for the ML Kit pipeline to process it. Also, if the original image you have is a bitmap, you can probably just use InputImage.fromBitmap to construct an InputImage; it shouldn't be slower than your current approach.
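For the conversion mentioned above, a rough sketch of turning an ARGB_8888 Bitmap into an NV21 byte array (using the usual BT.601 coefficients; this helper is hypothetical, not part of ML Kit) might look like this:
// Hypothetical helper: convert an ARGB_8888 Bitmap to NV21 so it can be
// passed to InputImage.fromByteArray(...).
private static byte[] bitmapToNv21(Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int[] argb = new int[width * height];
    bitmap.getPixels(argb, 0, width, 0, 0, width, height);
    byte[] nv21 = new byte[width * height * 3 / 2];
    int yIndex = 0;
    int uvIndex = width * height;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int c = argb[j * width + i];
            int r = (c >> 16) & 0xff, g = (c >> 8) & 0xff, b = c & 0xff;
            int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
            int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
            nv21[yIndex++] = (byte) Math.max(0, Math.min(255, y));
            if (j % 2 == 0 && i % 2 == 0) { // NV21: interleaved V,U at half resolution
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
            }
        }
    }
    return nv21;
}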
I was having the same issue; use InputImage.fromMediaImage(..., ...):
override fun analyze(image: ImageProxy) {
val mediaImage: Image = image.image.takeIf { it != null } ?: run {
image.close()
return
}
val inputImage = InputImage.fromMediaImage(mediaImage, image.imageInfo.rotationDegrees)
// TODO: Your ML Code
}
Check here for more details
https://developers.google.com/ml-kit/vision/image-labeling/android

Android LibVLC getBitmap from a TextureView

I am trying to retrieve a frame from a video that is playing back using LibVLC in Android. For reference, this is how I am starting LibVLC (ffmpegSv is a TextureView):
public void startMediaPlayer() {
ArrayList<String> options = new ArrayList<>();
options.add("--no-drop-late-frames");
options.add("--no-skip-frames");
options.add("-vvv");
options.add("--no-osd");
options.add("--rtsp-tcp");
options.add("--no-snapshot-preview");
options.add("--no-video-title");
options.add("--no-spu");
videoVlc = new LibVLC(getActivity(), options);
TextureView surfaceView = (TextureView) getActivity().findViewById(R.id.streamView);
newVideoMediaPlayer = new org.videolan.libvlc.MediaPlayer(videoVlc);
final IVLCVout vOut = newVideoMediaPlayer.getVLCVout();
vOut.setVideoSurface(ffmpegSv.getSurfaceTexture());
vOut.setWindowSize(ffmpegSv.getWidth(), ffmpegSv.getHeight());
vOut.attachViews();
Media videoMedia = new Media (videoVlc, Uri.parse("rtsp://1.1.1.1/abc.mov"));
newVideoMediaPlayer.setMedia(videoMedia);
newVideoMediaPlayer.play();
}
And this is how I am attempting to get the bitmap from it. I should note this method worked correctly when using the Android MediaPlayer.
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
if (mStream != null) {
if (idx++ % 10 == 0) {
(new Runnable() {
@Override
public void run() {
FileOutputStream out = null;
Bitmap b = ffmpegSv.getBitmap(ffmpegSv.getWidth(), ffmpegSv.getHeight());
Bitmap bm = Bitmap.createScaledBitmap(b2, 640, 480, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
bm.compress(Bitmap.CompressFormat.JPEG, 50, bos);
byte[] arr = bos.toByteArray();
mStream.onJpegFrame(arr, 0L);
b.recycle();
bm.recycle();
}
}).run();
idx = 0;
}
}
}
However, the image that is being produced has a sliver of the original image from the TextureView around the edge almost like a border, but the rest of the image is obscured by a black box.
The only thing I can think of is that VLC uses some sort of overlay for subtitles etc that when pulled out with getBitmap() is losing its transparency. However, I am not 100% sure this is the case. Is there a way to check if this is the case, or disable any sort of overlays that VLC could be adding?
EDIT : I have added a sample image to demonstrate the problem:
You can just make out the bottom, right and top of the background image and a clear rectangle over the top of it.
Bitmap b = ffmpegSv.getBitmap(ffmpegSv.getWidth(), ffmpegSv.getHeight());
Bitmap bm = Bitmap.createScaledBitmap(b2, 640, 480, true);
Aren't you scaling something else here?
What is b2?

Libgdx images from pixmaps are drawn solid black

For starters, I know libgdx is primarily used for gaming. I have just found out about it and wanted to try to give it another (possible) purpose, for example a simple photo frame. The code below is just part of a proof of concept and, once it runs as it should, it will evolve into a bigger app.
Below I have posted a very simple class showing what I'm up to now. Every [num] seconds, in a different thread, it loads an image from disk, puts it in a pixmap and creates a texture from it on the GL thread (if I understand everything correctly).
I came to this code after a lot of trial and error. It took me an hour to find out that a texture should be created on the OpenGL thread. When the texture was created outside that thread, the images were just big black boxes without the loaded texture.
Well, when I ran this version of the class with textures created on the GL thread, I finally saw the images showing, nicely fading every [num] seconds.
But after 15 executions the images start to appear as black boxes again, as if the texture were created outside the GL thread. I'm not getting any exceptions printed in the console.
The application is running on a Raspberry Pi with memory split 128/128. The images are jpeg images in 1920*1080 (not progressive). Memory usage is as follows according to top:
VIRT: 249m
RES: 37m
SHR: 10m
Command line is: java -Xmx128M -DPI=true -DLWJGJ_BACKEND=GLES -Djava.library.path=libs:/opt/vc/lib:. -classpath *:. org.pidome.raspberry.mirrorclient.BootStrapper
I see the RES rising when a new image is loaded but after loading it is back to 37.
The System.out.println("Swap counter: " +swapCounter); keeps giving me output when the thread is run.
Could one of you guys point me in the right direction for solving the issue where, after 15 iterations, the textures are no longer shown and the images are solid black?
Here is my current code (the name PhotosActor is misleading, a leftover from first trying to make it an Actor):
public class PhotosActor {
List<Image> images = new ArrayList<>();
private String imgDir = "appimages/photos/";
List<String> fileSet = new ArrayList<>();
private final ScheduledExecutorService changeExecutor = Executors.newSingleThreadScheduledExecutor();
Stage stage;
int swapCounter = 0;
public PhotosActor(Stage stage) {
this.stage = stage;
}
public final void preload(){
loadFileSet();
changeExecutor.scheduleAtFixedRate(switchimg(), 10, 10, TimeUnit.SECONDS);
}
private Runnable switchimg(){
Runnable run = () -> {
try {
swapCounter++;
FileInputStream input = new FileInputStream(fileSet.get(new Random().nextInt(fileSet.size())));
Gdx2DPixmap gpm = new Gdx2DPixmap(input, Gdx2DPixmap.GDX2D_FORMAT_RGB888);
input.close();
Pixmap map = new Pixmap(gpm);
Gdx.app.postRunnable(() -> {
System.out.println("Swap counter: " +swapCounter);
Texture tex = new Texture(map);
map.dispose();
Image newImg = new Image(tex);
newImg.addAction(Actions.sequence(Actions.alpha(0),Actions.fadeIn(1f),Actions.delay(5),Actions.run(() -> {
if(images.size()>1){
Image oldImg = images.remove(1);
oldImg.getActions().clear();
oldImg.remove();
}
})));
images.add(0,newImg);
stage.addActor(newImg);
newImg.toBack();
if(images.size()>1){ images.get(1).toBack(); }
});
} catch (Exception ex) {
Logger.getLogger(PhotosActor.class.getName()).log(Level.SEVERE, null, ex);
}
};
return run;
}
private void loadFileSet(){
File[] files = new File(imgDir).listFiles();
for (File file : files) {
if (file.isFile()) {
System.out.println("Loading: " + imgDir + file.getName());
fileSet.add(imgDir + file.getName());
}
}
}
}
Thanks in advance and cheers,
John.
I was able to resolve this myself. A couple of minutes ago it struck me that I have to dispose of the texture. I was under the impression that removing the image also removed the texture, which it clearly did not (or I have to update to a more recent version).
So what I did was create a new class extending the Image class:
public class PhotoImage extends Image {
Texture tex;
public PhotoImage(Texture tex){
super(tex);
this.tex = tex;
}
public void dispose(){
try {
this.tex.dispose();
} catch(Exception ex){
System.out.println(ex.getMessage());
}
}
}
Everywhere I was referring to the Image class, I changed it to this PhotoImage class. The modified class now looks like this:
public class PhotosActor {
List<PhotoImage> images = new ArrayList<>();
private String imgDir = "appimages/photos/";
List<String> fileSet = new ArrayList<>();
private final ScheduledExecutorService changeExecutor = Executors.newSingleThreadScheduledExecutor();
Stage stage;
int swapCounter = 0;
public PhotosActor(Stage stage) {
this.stage = stage;
}
public final void preload(){
loadFileSet();
changeExecutor.scheduleAtFixedRate(switchimg(), 10, 10, TimeUnit.SECONDS);
}
private Runnable switchimg(){
Runnable run = () -> {
try {
swapCounter++;
byte[] byteResult = readLocalRandomFile();
Pixmap map = new Pixmap(byteResult, 0, byteResult.length);
Gdx.app.postRunnable(() -> {
System.out.println("Swap counter: " +swapCounter);
Texture tex = new Texture(map);
map.dispose();
PhotoImage newImg = new PhotoImage(tex);
images.add(0,newImg);
stage.addActor(newImg);
addTransform(newImg);
});
} catch (Exception ex) {
Logger.getLogger(PhotosActor.class.getName()).log(Level.SEVERE, null, ex);
}
};
return run;
}
public void addTransform(Image img){
switch(new Random().nextInt(3)){
case 0:
img.toBack();
if(images.size()>1){ images.get(1).toBack(); }
img.addAction(Actions.sequence(Actions.alpha(0),Actions.fadeIn(1f),Actions.delay(5),Actions.run(() -> {
removeOldImg();
})));
break;
case 1:
img.toBack();
if(images.size()>1){ images.get(1).toBack(); }
img.setPosition(1920f, 1080f);
img.addAction(Actions.sequence(Actions.moveTo(0f, 0f, 5f),Actions.run(() -> {
removeOldImg();
})));
break;
case 2:
img.toBack();
if(images.size()>1){ images.get(1).toBack(); }
img.setScale(0f, 0f);
img.setPosition(960f, 540f);
img.addAction(Actions.sequence(Actions.parallel(Actions.scaleTo(1f, 1f, 5f), Actions.moveTo(0f, 0f, 5f)),Actions.run(() -> {
removeOldImg();
})));
break;
}
}
private void removeOldImg(){
if(images.size()>1){
PhotoImage oldImg = images.remove(1);
oldImg.remove();
oldImg.getActions().clear();
oldImg.dispose();
}
System.out.println("Amount of images: " + images.size());
}
private byte[] readLocalRandomFile() throws Exception{
FileInputStream input = null;
try {
input = new FileInputStream(fileSet.get(new Random().nextInt(fileSet.size())));
ByteArrayOutputStream out;
try (InputStream in = new BufferedInputStream(input)) {
out = new ByteArrayOutputStream();
byte[] buf = new byte[1024];
int n = 0;
while (-1 != (n = in.read(buf))) {
out.write(buf, 0, n);
}
out.close();
return out.toByteArray();
} catch (IOException ex) {
Logger.getLogger(PhotosActor.class.getName()).log(Level.SEVERE, null, ex);
}
} catch (FileNotFoundException ex) {
Logger.getLogger(PhotosActor.class.getName()).log(Level.SEVERE, null, ex);
}
throw new Exception("No data");
}
private void loadFileSet(){
File[] files = new File(imgDir).listFiles();
for (File file : files) {
if (file.isFile()) {
System.out.println("Loading: " + imgDir + file.getName());
fileSet.add(imgDir + file.getName());
}
}
}
}
In the remove function I have now added
oldImg.dispose();
to get rid of the texture. Image transitions are now happily running at 50+ fps on the Raspberry Pi, and the image rotation counter is at 88 now. To everyone who was thinking along: thanks for your time!

ANDROID ZXING: Saving a photo in onPreviewFrame saves a photo every frame. How to only save a single photo upon scan?

For the last few weeks I have been attempting to alter ZXing to take a photo immediately upon scan. Thanks to help received here, I am at a point where I can consistently save an image from the onPreviewFrame method within PreviewCallback.java.
The code I use within the onPreviewFrame method follows, and then a short rundown of how my app works.
public void onPreviewFrame(byte[] data, Camera camera) {
Point cameraResolution = configManager.getCameraResolution();
Handler thePreviewHandler = previewHandler;
android.hardware.Camera.Parameters parameters = camera.getParameters();
android.hardware.Camera.Size size = parameters.getPreviewSize();
int height = size.height;
int width = size.width;
System.out.println("HEIGHT IS" + height);
System.out.println("WIDTH IS" + width);
if (cameraResolution != null && thePreviewHandler != null) {
YuvImage im = new YuvImage(data, ImageFormat.NV21, width,
height, null);
Rect r = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
im.compressToJpeg(r, 50, baos);
try {
FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
output.write(baos.toByteArray());
output.flush();
output.close();
System.out.println("Attempting to save file");
System.out.println(data);
} catch (FileNotFoundException e) {
System.out.println("Saving to file failed");
} catch (IOException e) {
System.out.println("Saving to file failed");
}
Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
cameraResolution.y, data);
message.sendToTarget();
previewHandler = null;
} else {
Log.d(TAG, "Got preview callback, but no handler or resolution available");
}}
My application centers around its own GUI and functionality, but can engage ZXing via an intent (ZXing is built into the app's build path; yes, this is bad, as it can interfere if ZXing is already installed). Once ZXing has scanned a QR code, the information encoded on it is returned to my app and stored, and then after a short delay ZXing is automatically re-initiated.
My current code saves an image every frame while ZXing is running; the functionality I would like is for only the frame captured on a successful scan to be saved. Although ZXing stops saving images in the short window where my app takes over again, ZXing is quickly re-initialized and I may not have time to manipulate the data. A possible workaround is quickly renaming the saved file so that ZXing doesn't start overwriting it and manipulation can be performed in the background. Nevertheless, saving an image every frame is a waste of resources and less than preferable.
How do I only save an image upon scan?
Thanks in advance.
Updated to show the instances of multiFormatReader that I found, as requested:
private final CaptureActivity activity;
private final MultiFormatReader multiFormatReader;
private boolean running = true;
DecodeHandler(CaptureActivity activity, Map<DecodeHintType,Object> hints) {
multiFormatReader = new MultiFormatReader();
multiFormatReader.setHints(hints);
this.activity = activity;
}
@Override
public void handleMessage(Message message) {
if (!running) {
return;
}
if (message.what == R.id.decode) {
decode((byte[]) message.obj, message.arg1, message.arg2);
} else if (message.what == R.id.quit) {
running = false;
Looper.myLooper().quit();
}}
private void decode(byte[] data, int width, int height) {
long start = System.currentTimeMillis();
Result rawResult = null;
PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
if (source != null) {
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
//here?
try {
rawResult = multiFormatReader.decodeWithState(bitmap);
} catch (ReaderException re) {
// continue
} finally {
multiFormatReader.reset();
}
}
ZXing decodes every received frame until it finds valid information. The point at which to save the image is when ZXing returns a non-null result string. In addition, you can save the file under a different name, such as "timestamp + .jpg", so that the previous file won't be overwritten.
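A minimal sketch of that idea (this is not the stock ZXing source; the save path and logging are illustrative) would move the saving out of onPreviewFrame and into decode(), guarded by a non-null result:
private void decode(byte[] data, int width, int height) {
    Result rawResult = null;
    PlanarYUVLuminanceSource source =
            activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // no barcode in this frame
        } finally {
            multiFormatReader.reset();
        }
    }
    if (rawResult != null) {
        // Successful scan: save only this frame, with a timestamped name so a
        // later scan does not overwrite it.
        YuvImage im = new YuvImage(data, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        im.compressToJpeg(new Rect(0, 0, width, height), 50, baos);
        try (FileOutputStream out = new FileOutputStream(
                "/sdcard/scan_" + System.currentTimeMillis() + ".jpg")) {
            out.write(baos.toByteArray());
        } catch (IOException e) {
            Log.d("DecodeHandler", "Saving scanned frame failed", e);
        }
    }
}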

Capturing image from webcam in java?

How can I continuously capture images from a webcam?
I want to experiment with object recognition (maybe by using the Java Media Framework).
I was thinking of creating two threads
one thread:
Node 1: capture live image
Node 2: save image as "1.jpg"
Node 3: wait 5 seconds
Node 4: repeat...
other thread:
Node 1: wait until image is captured
Node 2: using the "1.jpg", get colors from every pixel
Node 3: save data in arrays
Node 4: repeat...
This JavaCV implementation works fine.
Code:
import org.bytedeco.javacv.*;
import org.bytedeco.opencv.opencv_core.IplImage;
import java.io.File;
import static org.bytedeco.opencv.global.opencv_core.cvFlip;
import static org.bytedeco.opencv.helper.opencv_imgcodecs.cvSaveImage;
public class Test implements Runnable {
final int INTERVAL = 100;///you may use interval
CanvasFrame canvas = new CanvasFrame("Web Cam");
public Test() {
canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
}
public void run() {
new File("images").mkdir();
FrameGrabber grabber = new OpenCVFrameGrabber(0); // 1 for next camera
OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
IplImage img;
int i = 0;
try {
grabber.start();
while (true) {
Frame frame = grabber.grab();
img = converter.convert(frame);
//the grabbed frame will be flipped, re-flip to make it right
cvFlip(img, img, 1);// l-r = 90_degrees_steps_anti_clockwise
//save
cvSaveImage("images" + File.separator + (i++) + "-aa.jpg", img);
canvas.showImage(converter.convert(img));
Thread.sleep(INTERVAL);
}
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
Test gs = new Test();
Thread th = new Thread(gs);
th.start();
}
}
There is also a post on configuration for JavaCV.
You can modify the code to save the images at a regular interval and do the rest of the processing you want.
Some time ago I created a generic Java library which can be used to take pictures with a PC webcam. The API is very simple, not over-featured, and can work standalone, but it also supports additional webcam drivers like OpenIMAJ, JMF, FMJ, LTI-CIVIL, etc., and some IP cameras.
Link to the project is https://github.com/sarxos/webcam-capture
Example code (take a picture and save it to test.jpg):
Webcam webcam = Webcam.getDefault();
webcam.open();
BufferedImage image = webcam.getImage();
ImageIO.write(image, "JPG", new File("test.jpg"));
It is also available in Maven Central Repository or as a separate ZIP which includes all required dependencies and 3rd party JARs.
JMyron is very simple to use.
http://webcamxtra.sourceforge.net/
myron = new JMyron();
myron.start(imgw, imgh);
myron.update();
int[] img = myron.image();
Here is a similar question with some - as yet unaccepted - answers. One of them mentions FMJ as a Java alternative to JMF.
This kind of builds on gt_ebuddy's answer using JavaCV, but my video output is at a much higher quality than his. I've also added some other random improvements (such as closing the program when ESC or CTRL+C is pressed, and making sure to properly close down the resources the program uses).
import java.awt.event.ActionEvent;
import java.awt.event.KeyEvent;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.awt.image.BufferedImage;
import javax.swing.AbstractAction;
import javax.swing.ActionMap;
import javax.swing.InputMap;
import javax.swing.JComponent;
import javax.swing.JFrame;
import javax.swing.KeyStroke;
import com.googlecode.javacv.CanvasFrame;
import com.googlecode.javacv.OpenCVFrameGrabber;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
public class HighRes extends JComponent implements Runnable {
private static final long serialVersionUID = 1L;
private static CanvasFrame frame = new CanvasFrame("Web Cam");
private static boolean running = false;
private static int frameWidth = 800;
private static int frameHeight = 600;
private static OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
private static BufferedImage bufImg;
public HighRes()
{
// setup key bindings
ActionMap actionMap = frame.getRootPane().getActionMap();
InputMap inputMap = frame.getRootPane().getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW);
for (Keys direction : Keys.values())
{
actionMap.put(direction.getText(), new KeyBinding(direction.getText()));
inputMap.put(direction.getKeyStroke(), direction.getText());
}
frame.getRootPane().setActionMap(actionMap);
frame.getRootPane().setInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW, inputMap);
// setup window listener for close action
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.addWindowListener(new WindowAdapter()
{
public void windowClosing(WindowEvent e)
{
stop();
}
});
}
public static void main(String... args)
{
HighRes webcam = new HighRes();
webcam.start();
}
@Override
public void run()
{
try
{
grabber.setImageWidth(frameWidth);
grabber.setImageHeight(frameHeight);
grabber.start();
while (running)
{
final IplImage cvimg = grabber.grab();
if (cvimg != null)
{
// cvFlip(cvimg, cvimg, 1); // mirror
// show image on window
bufImg = cvimg.getBufferedImage();
frame.showImage(bufImg);
}
}
grabber.stop();
grabber.release();
frame.dispose();
}
catch (Exception e)
{
e.printStackTrace();
}
}
public void start()
{
running = true; // set the flag before starting the thread so run() doesn't exit immediately
new Thread(this).start();
}
public void stop()
{
running = false;
}
private class KeyBinding extends AbstractAction {
private static final long serialVersionUID = 1L;
public KeyBinding(String text)
{
super(text);
putValue(ACTION_COMMAND_KEY, text);
}
@Override
public void actionPerformed(ActionEvent e)
{
String action = e.getActionCommand();
if (action.equals(Keys.ESCAPE.toString()) || action.equals(Keys.CTRLC.toString())) stop();
else System.out.println("Key Binding: " + action);
}
}
}
enum Keys
{
ESCAPE("Escape", KeyStroke.getKeyStroke(KeyEvent.VK_ESCAPE, 0)),
CTRLC("Control-C", KeyStroke.getKeyStroke(KeyEvent.VK_C, KeyEvent.CTRL_DOWN_MASK)),
UP("Up", KeyStroke.getKeyStroke(KeyEvent.VK_UP, 0)),
DOWN("Down", KeyStroke.getKeyStroke(KeyEvent.VK_DOWN, 0)),
LEFT("Left", KeyStroke.getKeyStroke(KeyEvent.VK_LEFT, 0)),
RIGHT("Right", KeyStroke.getKeyStroke(KeyEvent.VK_RIGHT, 0));
private String text;
private KeyStroke keyStroke;
Keys(String text, KeyStroke keyStroke)
{
this.text = text;
this.keyStroke = keyStroke;
}
public String getText()
{
return text;
}
public KeyStroke getKeyStroke()
{
return keyStroke;
}
@Override
public String toString()
{
return text;
}
}
You can also try the Java Webcam SDK library.
An SDK demo applet is available at the link.
I have used JMF in a videoconference application and it worked well on two laptops: one with an integrated webcam and another with an old USB webcam. It requires JMF to be installed and configured beforehand, but once you're done you can access the hardware via Java code fairly easily.
You can try the Marvin Framework. It provides an interface for working with cameras. Moreover, it also provides a set of real-time video processing features, like object tracking and filtering.
Take a look!
Real-time Video Processing Demo:
http://www.youtube.com/watch?v=D5mBt0kRYvk
You can use the source below. Just save a frame using MarvinImageIO.saveImage() every 5 seconds (a sketch of this is shown after the demo below).
Webcam video demo:
public class SimpleVideoTest extends JFrame implements Runnable{
private MarvinVideoInterface videoAdapter;
private MarvinImage image;
private MarvinImagePanel videoPanel;
public SimpleVideoTest(){
super("Simple Video Test");
videoAdapter = new MarvinJavaCVAdapter();
videoAdapter.connect(0);
videoPanel = new MarvinImagePanel();
add(videoPanel);
new Thread(this).start();
setSize(800,600);
setVisible(true);
}
@Override
public void run() {
while(true){
// Request a video frame and set into the VideoPanel
image = videoAdapter.getFrame();
videoPanel.setImage(image);
}
}
public static void main(String[] args) {
SimpleVideoTest t = new SimpleVideoTest();
t.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
}
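As mentioned above, a sketch of saving a frame every five seconds inside that loop (the output path is illustrative) could be:
@Override
public void run() {
    long lastSave = 0;
    while (true) {
        image = videoAdapter.getFrame();
        videoPanel.setImage(image);
        long now = System.currentTimeMillis();
        if (now - lastSave >= 5000) { // save at most one frame every 5 seconds
            MarvinImageIO.saveImage(image, "./res/frame_" + now + ".jpg");
            lastSave = now;
        }
    }
}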
For those who just want to take a single picture:
WebcamPicture.java
public class WebcamPicture {
public static void main(String[] args) {
try{
MarvinVideoInterface videoAdapter = new MarvinJavaCVAdapter();
videoAdapter.connect(0);
MarvinImage image = videoAdapter.getFrame();
MarvinImageIO.saveImage(image, "./res/webcam_picture.jpg");
} catch(MarvinVideoInterfaceException e){
e.printStackTrace();
}
}
}
I used the Webcam Capture API. You can download it from here.
webcam = Webcam.getDefault();
webcam.open();
if (webcam.isOpen()) { //if web cam open
BufferedImage image = webcam.getImage();
JLabel imageLbl = new JLabel();
imageLbl.setSize(640, 480); //show captured image
imageLbl.setIcon(new ImageIcon(image));
int showConfirmDialog = JOptionPane.showConfirmDialog(null, imageLbl, "Image Viewer", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, new ImageIcon(""));
if (showConfirmDialog == JOptionPane.YES_OPTION) {
JFileChooser chooser = new JFileChooser();
chooser.setDialogTitle("Save Image");
chooser.setFileFilter(new FileNameExtensionFilter("IMAGES ONLY", "png", "jpeg", "jpg")); //this file extentions are shown
int showSaveDialog = chooser.showSaveDialog(this);
if (showSaveDialog == 0) { //if pressed 'Save' button
String filePath = chooser.getCurrentDirectory().toString().replace("\\", "/");
String fileName = chooser.getSelectedFile().getName(); //get user entered file name to save
ImageIO.write(image, "PNG", new File(filePath + "/" + fileName + ".png"));
}
}
}
http://grack.com/downloads/school/enel619.10/report/java_media_framework.html
Using the Player with Swing
The Player can be easily used in a Swing application as well. The following code creates a Swing-based TV capture program with the video output displayed in the entire window:
import javax.media.*;
import javax.swing.*;
import java.awt.*;
import java.net.*;
import java.awt.event.*;
import javax.swing.event.*;
public class JMFTest extends JFrame {
Player _player;
JMFTest() {
addWindowListener( new WindowAdapter() {
public void windowClosing( WindowEvent e ) {
_player.stop();
_player.deallocate();
_player.close();
System.exit( 0 );
}
});
setBounds( 0, 0, 320, 260 ); // JFrame has no setExtent(); setBounds gives the intended window size
JPanel panel = (JPanel)getContentPane();
panel.setLayout( new BorderLayout() );
String mediaFile = "vfw://1";
try {
MediaLocator mlr = new MediaLocator( mediaFile );
_player = Manager.createRealizedPlayer( mlr );
if (_player.getVisualComponent() != null)
panel.add("Center", _player.getVisualComponent());
if (_player.getControlPanelComponent() != null)
panel.add("South", _player.getControlPanelComponent());
}
catch (Exception e) {
System.err.println( "Got exception " + e );
}
}
public static void main(String[] args) {
JMFTest jmfTest = new JMFTest();
jmfTest.show();
}
}
Java usually doesn't like accessing hardware, so you will need a driver program of some sort, as goldenmean said. I've done this on my laptop by finding a command-line program that snaps a picture. Then it's the same as goldenmean explained: you run the command-line program from your Java program in the takepicture() routine, and the rest of your code runs the same.
As for reading pixel values into an array, you might be better served by saving the file as a BMP, which is nearly that format already, and then using the standard Java image libraries on it.
Using a command-line program adds a dependency to your program and makes it less portable, but so is the webcam, right?
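A rough sketch of that approach (the capture command, its arguments, and the file name are placeholders for whatever command-line tool you actually use):
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class CommandLineCapture {
    public static void main(String[] args) throws Exception {
        while (true) {
            // Run the external capture tool (placeholder command and arguments)
            Process p = new ProcessBuilder("capture-tool", "--out", "1.bmp").start();
            p.waitFor();
            // Read the saved BMP with the standard Java image libraries
            BufferedImage img = ImageIO.read(new File("1.bmp"));
            int[] pixels = img.getRGB(0, 0, img.getWidth(), img.getHeight(),
                    null, 0, img.getWidth());
            System.out.println("Captured " + pixels.length + " pixels");
            Thread.sleep(5000); // wait 5 seconds, as in the question
        }
    }
}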
I believe the webcam application software which comes along with the webcam, or your native Windows webcam software, can be run in a batch script (Windows/DOS script) after turning the webcam on (i.e. if it needs an external power supply). In the batch script you can add an appropriate delay to capture after a certain time period, and keep executing the capture command in a loop.
I guess this should be possible.
-AD
There's a pretty nice interface for this in Processing, which is kind of a pidgin Java designed for graphics. It gets used in some image-recognition work, such as that link.
Depending on what you need out of it, you might be able to load the video library that's used there in Java, or if you're just playing around with it you might be able to get by using Processing itself.
FMJ can do this, as can the supporting library it uses, LTI-CIVIL. Both are on sourceforge.
I recommend using FMJ for multimedia-related Java apps.
Try using JMyron (How To Use Webcam Using Java). I think JMyron is the easiest way to access a webcam from Java. I tried to use it with a 64-bit processor, but it gave me an error; it worked just fine on a 32-bit processor, though.
