For starters, I know libGDX is primarily used for games. I only just found out about it and wanted to try giving it another (possible) purpose, for example a simple photo frame. The code below is just part of a proof of concept; when it works as it should, it will evolve into a bigger app.
Below I have posted a very simple class showing where I am now. Every [num] seconds, on a separate thread, it loads an image from disk, puts it in a Pixmap, and creates a texture from it on the GL thread (if I understand everything correctly).
I came to this code after a lot of trial and error. It took me an hour to find out that a texture has to be created on the OpenGL thread. When the texture was created outside that thread, the images were just big black boxes without the loaded texture.
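In short, the pattern that works (a minimal sketch with a made-up file path; the full class is below) is to decode the Pixmap on the background thread and defer only the texture creation to the render thread:
Pixmap map = new Pixmap(Gdx.files.internal("appimages/photos/example.jpg")); // decoding is safe off the GL thread
Gdx.app.postRunnable(() -> {
    Texture tex = new Texture(map); // needs the GL context, so run it on the render thread
    map.dispose();                  // the pixel data now lives on the GPU
    // ... hand tex to an Image/Actor here
});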
When I ran this version of the class, with textures created on the GL thread, I finally saw the images showing up, nicely fading in every [num] seconds.
But after 15 executions the images start to appear as black boxes again, as if the textures were created outside the GL thread. I'm not getting any exceptions printed to the console.
The application runs on a Raspberry Pi with a 128/128 memory split. The images are 1920×1080 JPEGs (not progressive). Memory usage according to top is as follows:
VIRT: 249m
RES: 37m
SHR: 10m
Command line is: java -Xmx128M -DPI=true -DLWJGJ_BACKEND=GLES -Djava.library.path=libs:/opt/vc/lib:. -classpath *:. org.pidome.raspberry.mirrorclient.BootStrapper
I see RES rising while a new image is being loaded, but after loading it drops back to 37m.
The System.out.println("Swap counter: " + swapCounter); keeps giving me output every time the thread runs.
Could one of you point me in the right direction to solve the issue that, after 15 iterations, the textures are no longer shown and the images are solid black?
Here is my current code (the name PhotosActor is misleading, a leftover from first trying to make it an Actor):
public class PhotosActor {

    List<Image> images = new ArrayList<>();
    private String imgDir = "appimages/photos/";
    List<String> fileSet = new ArrayList<>();
    private final ScheduledExecutorService changeExecutor = Executors.newSingleThreadScheduledExecutor();
    Stage stage;
    int swapCounter = 0;

    public PhotosActor(Stage stage) {
        this.stage = stage;
    }

    public final void preload() {
        loadFileSet();
        changeExecutor.scheduleAtFixedRate(switchimg(), 10, 10, TimeUnit.SECONDS);
    }

    private Runnable switchimg() {
        Runnable run = () -> {
            try {
                swapCounter++;
                // Decode the JPEG into a Pixmap on the executor thread.
                FileInputStream input = new FileInputStream(fileSet.get(new Random().nextInt(fileSet.size())));
                Gdx2DPixmap gpm = new Gdx2DPixmap(input, Gdx2DPixmap.GDX2D_FORMAT_RGB888);
                input.close();
                Pixmap map = new Pixmap(gpm);
                // Texture creation needs the GL context, so hand it off to the render thread.
                Gdx.app.postRunnable(() -> {
                    System.out.println("Swap counter: " + swapCounter);
                    Texture tex = new Texture(map);
                    map.dispose();
                    Image newImg = new Image(tex);
                    newImg.addAction(Actions.sequence(Actions.alpha(0), Actions.fadeIn(1f), Actions.delay(5), Actions.run(() -> {
                        if (images.size() > 1) {
                            Image oldImg = images.remove(1);
                            oldImg.getActions().clear();
                            oldImg.remove();
                        }
                    })));
                    images.add(0, newImg);
                    stage.addActor(newImg);
                    newImg.toBack();
                    if (images.size() > 1) { images.get(1).toBack(); }
                });
            } catch (Exception ex) {
                Logger.getLogger(PhotosActor.class.getName()).log(Level.SEVERE, null, ex);
            }
        };
        return run;
    }

    private void loadFileSet() {
        File[] files = new File(imgDir).listFiles();
        for (File file : files) {
            if (file.isFile()) {
                System.out.println("Loading: " + imgDir + file.getName());
                fileSet.add(imgDir + file.getName());
            }
        }
    }
}
Thanks in advance and cheers,
John.
I was able to resolve this myself. A couple of minutes ago it struck me that I have to dispose the texture. I was under the impression that removing the image also removed the texture, which it clearly did not (or I have to update to a more recent version).
So what I did was create a new class extending the Image class:
public class PhotoImage extends Image {

    Texture tex;

    public PhotoImage(Texture tex) {
        super(tex);
        this.tex = tex;
    }

    public void dispose() {
        try {
            this.tex.dispose();
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }
}
Everywhere I was referring to the Image class I changed it to this PhotoImage class. The modified class now looks like:
public class PhotosActor {

    List<PhotoImage> images = new ArrayList<>();
    private String imgDir = "appimages/photos/";
    List<String> fileSet = new ArrayList<>();
    private final ScheduledExecutorService changeExecutor = Executors.newSingleThreadScheduledExecutor();
    Stage stage;
    int swapCounter = 0;

    public PhotosActor(Stage stage) {
        this.stage = stage;
    }

    public final void preload() {
        loadFileSet();
        changeExecutor.scheduleAtFixedRate(switchimg(), 10, 10, TimeUnit.SECONDS);
    }

    private Runnable switchimg() {
        Runnable run = () -> {
            try {
                swapCounter++;
                byte[] byteResult = readLocalRandomFile();
                Pixmap map = new Pixmap(byteResult, 0, byteResult.length);
                Gdx.app.postRunnable(() -> {
                    System.out.println("Swap counter: " + swapCounter);
                    Texture tex = new Texture(map);
                    map.dispose();
                    PhotoImage newImg = new PhotoImage(tex);
                    images.add(0, newImg);
                    stage.addActor(newImg);
                    addTransform(newImg);
                });
            } catch (Exception ex) {
                Logger.getLogger(PhotosActor.class.getName()).log(Level.SEVERE, null, ex);
            }
        };
        return run;
    }

    public void addTransform(Image img) {
        switch (new Random().nextInt(3)) {
            case 0:
                img.toBack();
                if (images.size() > 1) { images.get(1).toBack(); }
                img.addAction(Actions.sequence(Actions.alpha(0), Actions.fadeIn(1f), Actions.delay(5), Actions.run(() -> {
                    removeOldImg();
                })));
                break;
            case 1:
                img.toBack();
                if (images.size() > 1) { images.get(1).toBack(); }
                img.setPosition(1920f, 1080f);
                img.addAction(Actions.sequence(Actions.moveTo(0f, 0f, 5f), Actions.run(() -> {
                    removeOldImg();
                })));
                break;
            case 2:
                img.toBack();
                if (images.size() > 1) { images.get(1).toBack(); }
                img.setScale(0f, 0f);
                img.setPosition(960f, 540f);
                img.addAction(Actions.sequence(Actions.parallel(Actions.scaleTo(1f, 1f, 5f), Actions.moveTo(0f, 0f, 5f)), Actions.run(() -> {
                    removeOldImg();
                })));
                break;
        }
    }

    private void removeOldImg() {
        if (images.size() > 1) {
            PhotoImage oldImg = images.remove(1);
            oldImg.remove();
            oldImg.getActions().clear();
            oldImg.dispose();
        }
        System.out.println("Amount of images: " + images.size());
    }

    private byte[] readLocalRandomFile() throws Exception {
        FileInputStream input = null;
        try {
            input = new FileInputStream(fileSet.get(new Random().nextInt(fileSet.size())));
            ByteArrayOutputStream out;
            try (InputStream in = new BufferedInputStream(input)) {
                out = new ByteArrayOutputStream();
                byte[] buf = new byte[1024];
                int n = 0;
                while (-1 != (n = in.read(buf))) {
                    out.write(buf, 0, n);
                }
                out.close();
                return out.toByteArray();
            } catch (IOException ex) {
                Logger.getLogger(PhotosActor.class.getName()).log(Level.SEVERE, null, ex);
            }
        } catch (FileNotFoundException ex) {
            Logger.getLogger(PhotosActor.class.getName()).log(Level.SEVERE, null, ex);
        }
        throw new Exception("No data");
    }

    private void loadFileSet() {
        File[] files = new File(imgDir).listFiles();
        for (File file : files) {
            if (file.isFile()) {
                System.out.println("Loading: " + imgDir + file.getName());
                fileSet.add(imgDir + file.getName());
            }
        }
    }
}
In the remove function I have now added oldImg.dispose(); to get rid of the texture. Image transitions are now happily running at 50+ fps on the Raspberry Pi, and the image rotation counter is at 88 now. To everyone who was thinking along: thanks for your time!
I have an application where I want to show a FlowPane with PDF page thumbnails drawn into canvases. I'm using PDFBox and FXGraphics2D to render the pages.
My current implementation creates as many canvases as there are pages, adds them to the FlowPane, and then spins up a number of async tasks to draw the page content into the canvases.
I'm not sure if async drawing is the recommended way, but the idea is to keep the PDF parsing off the JavaFX thread so the application doesn't freeze.
Now, my issue is this: I can see from the logs that all the rendering tasks have finished and the document is closed. The UI shows some rendered pages, but it stays unresponsive for ~10 seconds. After that the application revives, all pages are rendered, and everything works nicely.
I tried to profile it and I think I found the relevant part (profiler screenshot not included here), but I have limited knowledge of what is going on under the hood and I couldn't figure out what I'm doing wrong. Do you have an idea or hint about what is wrong with my approach/code and how it can be improved? Ideally I'd like the application to stay fast and responsive while the page thumbnails are being filled in.
I'm on Linux with a pretty decent machine, but I also tested on Windows and got the same behavior. I also tried replacing the FlowPane with an HBox or VBox, but the same thing happened.
Here is some ugly code to reproduce the behavior:
public class TestApp extends Application {

    public static void main(String[] args) {
        Application.launch(TestApp.class, args);
    }

    @Override
    public void start(Stage primaryStage) {
        try {
            BorderPane root = new BorderPane();
            Scene scene = new Scene(root, 400, 400);
            var flow = new FlowPane();
            flow.setOnDragOver(e -> {
                if (e.getDragboard().hasFiles()) {
                    e.acceptTransferModes(TransferMode.COPY);
                }
                e.consume();
            });
            flow.setOnDragDropped(e -> {
                var tasks = new ArrayList<TestApp.RenderTask>();
                try {
                    var document = PDDocument.load(e.getDragboard().getFiles().get(0));
                    var renderer = new PDFRenderer(document);
                    for (int i = 1; i <= document.getNumberOfPages(); i++) {
                        var page = document.getPage(i - 1);
                        var cropbox = page.getCropBox();
                        var thumbnailDimension = new Dimension2D(400,
                                400 * (cropbox.getHeight() / cropbox.getWidth()));
                        var thumbnail = new Canvas(thumbnailDimension.getWidth(), thumbnailDimension.getHeight());
                        var gs = thumbnail.getGraphicsContext2D();
                        var pop = gs.getFill();
                        gs.setFill(Color.WHITE);
                        gs.fillRect(0, 0, thumbnailDimension.getWidth(), thumbnailDimension.getHeight());
                        gs.setFill(pop);
                        tasks.add(new TestApp.RenderTask(renderer, thumbnail, i));
                        flow.getChildren().add(new Group(thumbnail));
                    }
                    var exec = Executors.newSingleThreadExecutor();
                    tasks.forEach(exec::submit);
                    exec.submit(() -> {
                        try {
                            document.close();
                            System.out.println("close");
                        } catch (IOException ioException) {
                            ioException.printStackTrace();
                        }
                    });
                } catch (Exception ioException) {
                    ioException.printStackTrace();
                }
                e.setDropCompleted(true);
                e.consume();
            });
            var scroll = new ScrollPane(flow);
            scroll.setFitToHeight(true);
            scroll.setFitToWidth(true);
            root.setCenter(scroll);
            primaryStage.setScene(scene);
            primaryStage.show();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static record RenderTask(PDFRenderer renderer, Canvas canvas, int page) implements Runnable {
        @Override
        public void run() {
            var gs = new FXGraphics2D(canvas.getGraphicsContext2D());
            gs.setBackground(WHITE);
            try {
                renderer.renderPageToGraphics(page - 1, gs);
            } catch (IOException e) {
                e.printStackTrace();
            }
            gs.dispose();
        }
    }
}
And this is the 310-page PDF file I'm using to test it.
Thanks
I finally managed to get what I wanted: the application responds, the thumbnails are populated as they become ready, and memory usage stays limited.
The issue
I thought I was drawing the canvases off of the FX thread, but the way Canvas works is that you fill a buffer of drawing instructions, and they are executed once the canvas becomes part of the scene (I think). What was happening is that I was quickly filling the canvases with a lot of drawing instructions and adding all the canvases to the FlowPane, and then the application was spending a lot of time actually executing the drawing instructions for 310 canvases, becoming unresponsive for ~10 seconds.
After some reading and suggestions from the JavaFX community here and on Twitter, I tried switching to an ImageView implementation: telling PDFBox to create a BufferedImage of the page and using it to create an ImageView. It worked nicely, but memory usage was 10 times that of the Canvas implementation (memory chart not included here).
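That variant looked roughly like this (a sketch, assuming the same renderer, page loop, and scale as in the code below):
// Render the page off the FX thread, then attach an ImageView for it.
var bufferedImage = renderer.renderImage(page - 1, scale, ImageType.RGB, RenderDestination.VIEW);
var fxImage = SwingFXUtils.toFXImage(bufferedImage, null); // each page keeps its full-size pixels alive
Platform.runLater(() -> flow.getChildren().add(new ImageView(fxImage)));
Keeping one decoded Image per page alive is what drove the memory usage up.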
The best of both worlds
In my final solution I add white canvases to the FlowPane, create a BufferedImage of the page, convert it to an Image off of the FX thread, draw the image to the Canvas, and discard the Image.
Memory usage for the same 310-page PDF file and the application responsiveness are both fine now (charts not included here).
EDIT (added code to reproduce)
Here is the code I used (with SAMBox instead of PDFBox):
public class TestAppCanvasImageMixed extends Application {

    public static void main(String[] args) {
        Application.launch(TestAppCanvasImageMixed.class, args);
    }

    @Override
    public void start(Stage primaryStage) {
        try {
            BorderPane root = new BorderPane();
            Scene scene = new Scene(root, 400, 400);
            var button = new Button("render");
            button.setDisable(true);
            root.setTop(new HBox(button));
            var flow = new FlowPane();
            flow.setOnDragOver(e -> {
                if (e.getDragboard().hasFiles()) {
                    e.acceptTransferModes(TransferMode.COPY);
                }
                e.consume();
            });
            flow.setOnDragDropped(e -> {
                ArrayList<TestAppCanvasImageMixed.RenderTaskToImage> tasks = new ArrayList<>();
                try {
                    var document = PDDocument.load(e.getDragboard().getFiles().get(0));
                    var renderer = new PDFRenderer(document);
                    for (int i = 1; i <= document.getNumberOfPages(); i++) {
                        var page = document.getPage(i - 1);
                        var cropbox = page.getCropBox();
                        var thumbnailDimension = new Dimension2D(400,
                                400 * (cropbox.getHeight() / cropbox.getWidth()));
                        var thumbnail = new Canvas(thumbnailDimension.getWidth(), thumbnailDimension.getHeight());
                        var gs = thumbnail.getGraphicsContext2D();
                        var clip = new Rectangle(thumbnailDimension.getWidth(), thumbnailDimension.getHeight());
                        clip.setArcHeight(15);
                        clip.setArcWidth(15);
                        thumbnail.setClip(clip);
                        var pop = gs.getFill();
                        gs.setFill(WHITE);
                        gs.fillRect(0, 0, thumbnailDimension.getWidth(), thumbnailDimension.getHeight());
                        gs.setFill(pop);
                        var g = new Group();
                        tasks.add(new TestAppCanvasImageMixed.RenderTaskToImage(renderer, 400 / cropbox.getWidth(), i, thumbnail,
                                () -> g.getChildren().setAll(thumbnail)));
                        // Add the placeholder group now; the render task attaches the canvas to it when the page is ready.
                        flow.getChildren().add(g);
                    }
                    button.setOnAction(a -> {
                        var exec = Executors.newFixedThreadPool(1);
                        tasks.forEach(exec::submit);
                        exec.submit(() -> {
                            try {
                                document.close();
                                System.out.println("close");
                            } catch (IOException ioException) {
                                ioException.printStackTrace();
                            }
                        });
                    });
                    button.setDisable(false);
                } catch (Exception ioException) {
                    ioException.printStackTrace();
                }
                e.setDropCompleted(true);
                e.consume();
            });
            var scroll = new ScrollPane(new StackPane(flow));
            scroll.setFitToHeight(true);
            scroll.setFitToWidth(true);
            root.setCenter(scroll);
            primaryStage.setScene(scene);
            primaryStage.show();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static record RenderTaskToImage(PDFRenderer renderer, float scale, int page, Canvas canvas, Runnable r) implements Runnable {
        @Override
        public void run() {
            try {
                // Render to a BufferedImage and convert it off the FX thread.
                var bufferedImage = renderer.renderImage(page - 1, scale, ImageType.ARGB, RenderDestination.VIEW);
                var image = SwingFXUtils.toFXImage(bufferedImage, null);
                var gc = canvas.getGraphicsContext2D();
                gc.drawImage(image, 0, 0);
                // Attach the finished canvas to the scene on the FX thread.
                Platform.runLater(() -> r.run());
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Objective:
I'm trying to draw bounding boxes around NPCs from a game client. I need to save the screenshots to a folder and the box coordinates to a text file.
Problem:
When I rotate the camera, the boxes drawn on the canvas no longer align with the NPCs. It looks like the NPC coordinates lag behind the screenshot being drawn on. Some of the screenshots also appear to be half previous frame and half new frame; I assume this may be related to multiple images being sent off to be saved before the first one completes. Another thing to note is that the boxes don't finish drawing before the next frame appears on the canvas.
How can I force the screenshots to sync with the box coordinates? Is there some way I can grab the game data and the screenshot at the same moment and then force the client to stop drawing until the task is complete?
This is the first time I've ever worked with Java (the data this script collects is intended for use with a Python project I'm working on), so please simplify answers as much as possible.
Update:
After messing around with things, it appears that screen tearing may be the main issue. The delay looks like it's caused by the tear: the boxes are drawn at the correct position on the parts of the screen that didn't get updated before the image was saved.
Performing at its worst (tearing): https://media.giphy.com/media/39yEawmLBT4zKpPoz3/giphy.gif
public class Pair {
    public String key;
    public Rectangle value;

    public Pair(String key, Rectangle value) {
        this.key = key;
        this.value = value;
    }
}
public class MyPaint extends JPanel {

    private static final long serialVersionUID = 9091790544960515120L;
    private BufferedImage paintImage = getClient().getCanvasImage();

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(paintImage, 0, 0, null);
    }

    public void updatePaint() {
        synchronized (this) {
            Graphics g = paintImage.createGraphics();
            List<NPC> rsNpcs = new NPCs(getClient()).all()
                    .stream()
                    .filter(i -> i.getName().equals(npcName)
                            && i.isOnScreen()
                            && !inTile(i.getTile()))
                    .collect(Collectors.toList());
            // Classifying the NPC in the box for the text file.
            for (NPC rsNpc : rsNpcs) {
                Rectangle bbox = rsNpc.getBoundingBox();
                g.drawRect(bbox.x, bbox.y, bbox.width, bbox.height);
                if (rsNpc.isInCombat()) {
                    Pair rsNpcPair = new Pair("CombatChicken", bbox);
                    onScreenRsNpcs.add(rsNpcPair);
                } else {
                    Pair rsNpcPair = new Pair("RegularChicken", bbox);
                    onScreenRsNpcs.add(rsNpcPair);
                }
            }
            // This bit saves the boxes to a text file:
            for (Pair rsNpc : onScreenRsNpcs) {
                String data = (String.valueOf(rsNpc.value) + "Class Name: " + rsNpc.key + " Image Name: " + String.valueOf(count));
                globalData.add(data);
            }
            g.dispose();
            repaint();
        }
    }

    public void save() throws IOException {
        // Note: this writes PNG-encoded data into a file with a .jpg extension.
        ImageIO.write(paintImage, "PNG", new File(String.format("%s/%s.jpg", getManifest().name(), count)));
        count = count + 1;
    }

    public void load() throws IOException {
        paintImage = ImageIO.read(new File(String.format("%s/%s.jpg", getManifest().name(), count)));
        repaint();
    }
}
//...
private MyPaint paint;

public void onStart() {
    paint = new MyPaint();
}
//...

@Override
public int onLoop() {
    try {
        paint.updatePaint();
        paint.save();
        paint.load();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return 1;
}
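One possible way to get a consistent frame (a sketch, assuming getClient().getCanvasImage() returns the live buffer the client keeps painting into) is to copy the frame into a private image first, and then draw the boxes on and save that copy:
// Snapshot the live canvas into a private copy before annotating it.
BufferedImage live = getClient().getCanvasImage();
BufferedImage snapshot = new BufferedImage(live.getWidth(), live.getHeight(),
        BufferedImage.TYPE_INT_RGB);
Graphics2D g2 = snapshot.createGraphics();
g2.drawImage(live, 0, 0, null); // copy the pixels once, at (nearly) a single instant
// ... draw the NPC bounding boxes on snapshot here ...
g2.dispose();
ImageIO.write(snapshot, "PNG", new File(String.format("%s/%s.png", getManifest().name(), count)));
The copy itself is not perfectly atomic either, but it decouples the boxes and the saved file from the buffer the client is still drawing into, which is where the half-old/half-new frames come from.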
Let's say I want to load a .shp file, do my stuff on it, and save the map as an image.
To save the image I am using:
public void saveImage(final MapContent map, final String file, final int imageWidth) {
    GTRenderer renderer = new StreamingRenderer();
    renderer.setMapContent(map);
    Rectangle imageBounds = null;
    ReferencedEnvelope mapBounds = null;
    try {
        mapBounds = map.getMaxBounds();
        double heightToWidth = mapBounds.getSpan(1) / mapBounds.getSpan(0);
        imageBounds = new Rectangle(0, 0, imageWidth, (int) Math.round(imageWidth * heightToWidth));
    } catch (Exception e) {
        // Failed to access map layers
        throw new RuntimeException(e);
    }
    BufferedImage image = new BufferedImage(imageBounds.width, imageBounds.height, BufferedImage.TYPE_INT_RGB);
    Graphics2D gr = image.createGraphics();
    gr.setPaint(Color.WHITE);
    gr.fill(imageBounds);
    try {
        renderer.paint(gr, imageBounds, mapBounds);
        File fileToSave = new File(file);
        ImageIO.write(image, "png", fileToSave);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
But, let's say I am doing something like this:
...
MapContent map = new MapContent();
map.setTitle("TEST");
map.addLayer(layer);
map.addLayer(shpLayer);
// zoom into the line
MapViewport viewport = new MapViewport(featureCollection.getBounds());
map.setViewport(viewport);
saveImage(map, "/tmp/img.png", 800);
1) The problem is that the zoom level isn't preserved in the image file. Is there a way to save it?
2) When I do MapViewport(featureCollection.getBounds());, is there a way to extend the boundaries a little in order to get a better visual representation?
...
The reason you aren't saving the map at the current zoom level is that your saveImage method contains the line:
mapBounds = map.getMaxBounds();
which always uses the full extent of the map. You can change this to:
mapBounds = map.getViewport().getBounds();
You can expand a bounding box by something like:
ReferencedEnvelope bounds = featureCollection.getBounds();
double delta = bounds.getWidth() / 20.0; // 5% on each side
bounds.expandBy(delta);
MapViewport viewport = new MapViewport(bounds);
map.setViewport(viewport);
A quicker (and easier) way to save a map from the GUI is to use a method like this, which saves exactly what is on the screen:
public void drawMapToImage(File outputFile, String outputType, JMapPane mapPane) {
    ImageOutputStream outputImageFile = null;
    FileOutputStream fileOutputStream = null;
    try {
        fileOutputStream = new FileOutputStream(outputFile);
        outputImageFile = ImageIO.createImageOutputStream(fileOutputStream);
        RenderedImage bufferedImage = mapPane.getBaseImage();
        ImageIO.write(bufferedImage, outputType, outputImageFile);
    } catch (IOException ex) {
        ex.printStackTrace();
    } finally {
        try {
            if (outputImageFile != null) {
                outputImageFile.flush();
                outputImageFile.close();
                fileOutputStream.flush();
                fileOutputStream.close();
            }
        } catch (IOException e) {
            // don't care now
        }
    }
}
For the last few weeks I have been attempting to alter ZXing to take a photo immediately upon scan. Thanks to some help I am at a point where I can consistently save an image from the onPreviewFrame method within PreviewCallback.java.
The code I use within the onPreviewFrame method follows, and then a short rundown of how my app works.
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    android.hardware.Camera.Parameters parameters = camera.getParameters();
    android.hardware.Camera.Size size = parameters.getPreviewSize();
    int height = size.height;
    int width = size.width;
    System.out.println("HEIGHT IS" + height);
    System.out.println("WIDTH IS" + width);
    if (cameraResolution != null && thePreviewHandler != null) {
        YuvImage im = new YuvImage(data, ImageFormat.NV21, width, height, null);
        Rect r = new Rect(0, 0, width, height);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        im.compressToJpeg(r, 50, baos);
        try {
            FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
            output.write(baos.toByteArray());
            output.flush();
            output.close();
            System.out.println("Attempting to save file");
            System.out.println(data);
        } catch (FileNotFoundException e) {
            System.out.println("Saving to file failed");
        } catch (IOException e) {
            System.out.println("Saving to file failed");
        }
        Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                cameraResolution.y, data);
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
My application centers around its own GUI and functionality, but can engage ZXing via an intent (ZXing is built into the app's build path; yes, this is bad, as it can interfere if ZXing is already installed). Once ZXing has scanned a QR code, the information encoded in it is returned to my app and stored, and after a short delay ZXing is automatically re-initiated.
My current code saves an image every frame while ZXing is running; what I would like is for only the frame at the moment of a successful scan to be saved. ZXing stops saving images in the short window where my app takes over again, but ZXing is quickly re-initialized, so I may not have time to manipulate the data. A possible workaround is to quickly rename the saved file so that ZXing doesn't start overwriting it and the manipulation can be performed in the background. Nevertheless, saving an image every frame is a waste of resources and less than preferable.
How do I save an image only upon scan?
Thanks in advance.
Update, showing the found instances of multiFormatReader, as requested:
private final CaptureActivity activity;
private final MultiFormatReader multiFormatReader;
private boolean running = true;

DecodeHandler(CaptureActivity activity, Map<DecodeHintType, Object> hints) {
    multiFormatReader = new MultiFormatReader();
    multiFormatReader.setHints(hints);
    this.activity = activity;
}

@Override
public void handleMessage(Message message) {
    if (!running) {
        return;
    }
    if (message.what == R.id.decode) {
        decode((byte[]) message.obj, message.arg1, message.arg2);
    } else if (message.what == R.id.quit) {
        running = false;
        Looper.myLooper().quit();
    }
}

private void decode(byte[] data, int width, int height) {
    long start = System.currentTimeMillis();
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        // here?
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
    }
ZXing inspects every received frame until it finds valid information. The point to save the image is when ZXing returns a non-null result. In addition, you can save the file under a different name, e.g. timestamp + ".jpg", so the previous file won't be overwritten.
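A sketch of that idea inside the decode() method shown above (saveFrame() is a hypothetical helper wrapping the YuvImage-to-JPEG code from the question):
try {
    rawResult = multiFormatReader.decodeWithState(bitmap);
} catch (ReaderException re) {
    // no barcode in this frame
} finally {
    multiFormatReader.reset();
}
if (rawResult != null) {
    // Successful scan: save exactly this frame, under a unique name.
    String name = "/sdcard/scan_" + System.currentTimeMillis() + ".jpg";
    saveFrame(data, width, height, name); // hypothetical helper
}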
Does anyone know how to decode H.264 video frames in a Java environment?
My network camera products support RTP/RTSP streaming.
The standard RTP/RTSP service from my network camera is available, and it also supports RTP/RTSP over HTTP.
RTSP: TCP 554
RTP start port: UDP 5000
Or use Xuggler. It works with RTP, RTMP, HTTP and other protocols, and can decode and encode H.264 and most other codecs. It is actively maintained, free, and open source (LGPL).
I found a very simple and straightforward solution based on JavaCV's FFmpegFrameGrabber class. This library lets you play streaming media by wrapping ffmpeg in Java.
How to use it?
First, download and install the library, using Maven or Gradle.
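For example, with Maven (the javacv-platform artifact pulls in the native ffmpeg binaries; the version below is just one known release, so check for the latest):
<dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>javacv-platform</artifactId>
    <version>1.5.9</version>
</dependency>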
Here you have a StreamingClient class that calls a SimplePlayer class, which uses a Thread to play the video.
public class StreamingClient extends Application implements GrabberListener {

    public static void main(String[] args) {
        launch(args);
    }

    private Stage primaryStage;
    private ImageView imageView;
    private SimplePlayer simplePlayer;

    @Override
    public void start(Stage stage) throws Exception {
        String source = "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov"; // the video is weird for 1 minute, then becomes stable
        primaryStage = stage;
        imageView = new ImageView();
        StackPane root = new StackPane();
        root.getChildren().add(imageView);
        imageView.fitWidthProperty().bind(primaryStage.widthProperty());
        imageView.fitHeightProperty().bind(primaryStage.heightProperty());
        Scene scene = new Scene(root, 640, 480);
        primaryStage.setTitle("Streaming Player");
        primaryStage.setScene(scene);
        primaryStage.show();
        simplePlayer = new SimplePlayer(source, this);
    }

    @Override
    public void onMediaGrabbed(int width, int height) {
        primaryStage.setWidth(width);
        primaryStage.setHeight(height);
    }

    @Override
    public void onImageProcessed(Image image) {
        LogHelper.e(TAG, "image: " + image);
        Platform.runLater(() -> {
            imageView.setImage(image);
        });
    }

    @Override
    public void onPlaying() {}

    @Override
    public void onGainControl(FloatControl gainControl) {}

    @Override
    public void stop() throws Exception {
        simplePlayer.stop();
    }
}
The SimplePlayer class uses FFmpegFrameGrabber to decode frames; each frame is converted into an image and displayed on your Stage.
public class SimplePlayer {

    private static volatile Thread playThread;
    private AnimationTimer timer;
    private SourceDataLine soundLine;
    private int counter;

    public SimplePlayer(String source, GrabberListener grabberListener) {
        if (grabberListener == null) return;
        if (source.isEmpty()) return;
        counter = 0;
        playThread = new Thread(() -> {
            try {
                FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(source);
                grabber.start();
                grabberListener.onMediaGrabbed(grabber.getImageWidth(), grabber.getImageHeight());
                if (grabber.getSampleRate() > 0 && grabber.getAudioChannels() > 0) {
                    AudioFormat audioFormat = new AudioFormat(grabber.getSampleRate(), 16, grabber.getAudioChannels(), true, true);
                    DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
                    soundLine = (SourceDataLine) AudioSystem.getLine(info);
                    soundLine.open(audioFormat);
                    soundLine.start();
                }
                Java2DFrameConverter converter = new Java2DFrameConverter();
                while (!Thread.interrupted()) {
                    Frame frame = grabber.grab();
                    if (frame == null) {
                        break;
                    }
                    if (frame.image != null) {
                        Image image = SwingFXUtils.toFXImage(converter.convert(frame), null);
                        Platform.runLater(() -> {
                            grabberListener.onImageProcessed(image);
                        });
                    } else if (frame.samples != null) {
                        ShortBuffer channelSamplesFloatBuffer = (ShortBuffer) frame.samples[0];
                        channelSamplesFloatBuffer.rewind();
                        ByteBuffer outBuffer = ByteBuffer.allocate(channelSamplesFloatBuffer.capacity() * 2);
                        for (int i = 0; i < channelSamplesFloatBuffer.capacity(); i++) {
                            short val = channelSamplesFloatBuffer.get(i);
                            outBuffer.putShort(val);
                        }
                    }
                }
                grabber.stop();
                grabber.release();
                Platform.exit();
            } catch (Exception exception) {
                System.exit(1);
            }
        });
        playThread.start();
    }

    public void stop() {
        playThread.interrupt();
    }
}
You can use a pure Java library called JCodec (http://jcodec.org).
Decoding one H.264 frame is as easy as:
ByteBuffer bb = ...; // Your frame data is stored in this buffer
H264Decoder decoder = new H264Decoder();
Picture out = Picture.create(1920, 1088, ColorSpace.YUV_420); // Allocate an output frame of maximum size
Picture real = decoder.decodeFrame(bb, out.getData());
BufferedImage bi = JCodecUtil.toBufferedImage(real); // If you prefer an AWT image
If you want to read a frame from a container (like MP4), you can use the handy helper class FrameGrab:
int frameNumber = 150;
BufferedImage frame = FrameGrab.getFrame(new File("filename.mp4"), frameNumber);
ImageIO.write(frame, "png", new File("frame_150.png"));
Finally, here's a full, more sophisticated sample:
private static void avc2png(String in, String out) throws IOException {
    SeekableByteChannel sink = null;
    SeekableByteChannel source = null;
    try {
        source = readableFileChannel(in);
        sink = writableFileChannel(out);
        MP4Demuxer demux = new MP4Demuxer(source);
        H264Decoder decoder = new H264Decoder();
        Transform transform = new Yuv420pToRgb(0, 0);
        MP4DemuxerTrack inTrack = demux.getVideoTrack();
        VideoSampleEntry ine = (VideoSampleEntry) inTrack.getSampleEntries()[0];
        Picture target1 = Picture.create((ine.getWidth() + 15) & ~0xf, (ine.getHeight() + 15) & ~0xf,
                ColorSpace.YUV420);
        Picture rgb = Picture.create(ine.getWidth(), ine.getHeight(), ColorSpace.RGB);
        ByteBuffer _out = ByteBuffer.allocate(ine.getWidth() * ine.getHeight() * 6);
        BufferedImage bi = new BufferedImage(ine.getWidth(), ine.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
        AvcCBox avcC = Box.as(AvcCBox.class, Box.findFirst(ine, LeafBox.class, "avcC"));
        decoder.addSps(avcC.getSpsList());
        decoder.addPps(avcC.getPpsList());
        Packet inFrame;
        int totalFrames = (int) inTrack.getFrameCount();
        for (int i = 0; (inFrame = inTrack.getFrames(1)) != null; i++) {
            ByteBuffer data = inFrame.getData();
            Picture dec = decoder.decodeFrame(splitMOVPacket(data, avcC), target1.getData());
            transform.transform(dec, rgb);
            _out.clear();
            AWTUtil.toBufferedImage(rgb, bi);
            ImageIO.write(bi, "png", new File(format(out, i)));
            if (i % 100 == 0)
                System.out.println((i * 100 / totalFrames) + "%");
        }
    } finally {
        if (sink != null)
            sink.close();
        if (source != null)
            source.close();
    }
}
I think the best solution is using JNI + ffmpeg. In my current project, I need to play several full-screen videos at the same time in a Java OpenGL game based on libGDX. I tried almost all the free libraries, but none of them had acceptable performance. So finally I decided to write my own JNI C code to work with ffmpeg. Here is the final performance on my laptop:
Environment: CPU: Core i7 Q740 @ 1.73 GHz; GPU: nVidia GeForce GT 435M; OS: Windows 7 64-bit; Java: Java 7u60 64-bit
Video: h264rgb / h264 encoded, no sound, resolution 1366×768
Solution: decoding with JNI + ffmpeg v2.2.2; uploading to the GPU by updating an OpenGL texture via LWJGL
Performance: decoding speed 700-800 FPS; texture uploading about 1 ms per frame
It only took a few days to complete the first version, but its decoding speed was only about 120 FPS and uploading took about 5 ms per frame. After several months of optimization I reached this final performance plus some additional features. Now I can play several HD videos at the same time without any slowness.
Most videos in my game have a transparent background. This kind of transparent video is an MP4 file with two video streams: one stores h264rgb-encoded RGB data, the other stores h264-encoded alpha data. So to play an alpha video I need to decode the two video streams, merge them together, and then upload the result to the GPU. As a result, I can play several transparent HD videos on top of an opaque HD video at the same time in my game.
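To illustrate the merge step (a simplified sketch; the real implementation is JNI/C, and rgb / alpha here stand for the raw byte data of the two decoded frames):
// Hypothetical sketch: interleave an RGB frame and the matching alpha frame
// (same width/height) into one RGBA buffer ready for an OpenGL texture upload.
ByteBuffer rgba = ByteBuffer.allocateDirect(width * height * 4);
for (int i = 0; i < width * height; i++) {
    rgba.put(rgb[i * 3]);       // R
    rgba.put(rgb[i * 3 + 1]);   // G
    rgba.put(rgb[i * 3 + 2]);   // B
    rgba.put(alpha[i * 3]);     // A: the alpha stream is grayscale, so any channel works
}
rgba.flip(); // ready to hand to glTexSubImage2D via LWJGL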
Take a look at the Java Media Framework (JMF) - http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/formats.html
I used it a while back and it was a bit immature, but they may have beefed it up since then.