I would like to encode video from a sequence of images using only Java in my current Android project, i.e., without any external tools such as the NDK.
Are there any Java libraries available for encoding video from a sequence of images?
You can use a pure Java library called JCodec ( http://jcodec.org ). It contains a primitive yet working H.264 (AVC) encoder and a fully functional MP4 (ISO BMF) muxer.
Here's a corrected code sample that uses the low-level API:
public void imageToMP4(BufferedImage bi) {
// A transform to convert RGB to YUV colorspace
RgbToYuv420 transform = new RgbToYuv420(0, 0);
// A JCodec native picture that would hold source image in YUV colorspace
Picture toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
// Perform conversion
transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);
// Create MP4 muxer; 'sink' is a writable SeekableByteChannel for the output file
MP4Muxer muxer = new MP4Muxer(sink, Brand.MP4);
// Add a video track
CompressedTrack outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);
// Create H.264 encoder; 'rc' is a rate control implementation
H264Encoder encoder = new H264Encoder(rc);
// Allocate a buffer that would hold an encoded frame
ByteBuffer _out = ByteBuffer.allocate(bi.getWidth() * bi.getHeight() * 6);
// Allocate storage for SPS/PPS, they need to be stored separately in a special place of MP4 file
List<ByteBuffer> spsList = new ArrayList<ByteBuffer>();
List<ByteBuffer> ppsList = new ArrayList<ByteBuffer>();
// Encode image into H.264 frame, the result is stored in '_out' buffer
ByteBuffer result = encoder.encodeFrame(_out, toEncode);
// Based on the frame above form correct MP4 packet
H264Utils.encodeMOVPacket(result, spsList, ppsList);
// Add packet to video track
outTrack.addFrame(new MP4Packet(result, 0, 25, 1, 0, true, null, 0, 0));
// Push saved SPS/PPS to a special storage in MP4
outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
// Write MP4 header and finalize recording
muxer.writeHeader();
}
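Note that 'sink' and 'rc' are left undeclared in the sample above. As a minimal sketch (the file name and the use of NIOUtils here are my assumptions, not part of the original sample), the output channel could be opened and closed around the call like this:
// Hypothetical setup/teardown around imageToMP4(); the path is illustrative only
SeekableByteChannel sink = NIOUtils.writableFileChannel(new File("video.mp4"));
try {
    // ... run the encoding code above ...
} finally {
    sink.close(); // close the channel once muxer.writeHeader() has been called
}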
You can download JCodec library from a project web site or via Maven, for this add the below snippet to your pom.xml:
<dependency>
<groupId>org.jcodec</groupId>
<artifactId>jcodec</artifactId>
<version>0.1.3</version>
</dependency>
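If you are pulling JCodec into an Android project through Gradle instead, the equivalent dependency line (same Maven coordinates, assuming a standard Gradle setup) would be:
implementation 'org.jcodec:jcodec:0.1.3'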
[UPDATE 1] Android users can use something like the code below to convert an Android Bitmap object to a JCodec native Picture:
public static Picture fromBitmap(Bitmap src) {
Picture dst = Picture.create(src.getWidth(), src.getHeight(), ColorSpace.RGB);
fromBitmap(src, dst);
return dst;
}
public static void fromBitmap(Bitmap src, Picture dst) {
int[] dstData = dst.getPlaneData(0);
int[] packed = new int[src.getWidth() * src.getHeight()];
src.getPixels(packed, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
for (int i = 0, srcOff = 0, dstOff = 0; i < src.getHeight(); i++) {
for (int j = 0; j < src.getWidth(); j++, srcOff++, dstOff += 3) {
int rgb = packed[srcOff];
dstData[dstOff] = (rgb >> 16) & 0xff;
dstData[dstOff + 1] = (rgb >> 8) & 0xff;
dstData[dstOff + 2] = rgb & 0xff;
}
}
}
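A short usage sketch combining the two pieces above (the 'bitmap' variable is assumed; everything else follows the earlier snippets):
// Convert an Android Bitmap to an RGB Picture, then to YUV 4:2:0 for the encoder
Picture rgb = fromBitmap(bitmap);
Picture toEncode = Picture.create(rgb.getWidth(), rgb.getHeight(), ColorSpace.YUV420);
new RgbToYuv420(0, 0).transform(rgb, toEncode);
// 'toEncode' can now be passed to encoder.encodeFrame(_out, toEncode) as shown earlier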
The question is related to the (print) DPI of various image formats (e.g., JPEG or PNG).
This question does NOT relate to SCREEN DPI or the sizes of various Android devices. It is also NOT about showing the Bitmap on the device at screen size.
This will set the image's resolution metadata to 300 dots per inch by patching the JFIF header: byte 13 selects dots-per-inch as the density unit, and bytes 14-17 hold the X and Y densities as 16-bit big-endian values.
Uint8List resultBytes = provider.bytes;
print(resultBytes.toString());
var dpi = 300;
resultBytes[13] = 1;
resultBytes[14] = (dpi >> 8);
resultBytes[15] = (dpi & 0xff);
resultBytes[16] = (dpi >> 8);
resultBytes[17] = (dpi & 0xff);
print(resultBytes.toString());
String tempPath = (await getTemporaryDirectory()).path;
String imgName = "IMG" + DateTime.now().microsecondsSinceEpoch.toString()+".jpg";
File file = File('$tempPath/$imgName');
await file.writeAsBytes(resultBytes);
/** You will not be able to see the image in Android local storage, so re-save the file using the code below to make it appear in the Pictures directory of Android storage. Note: ImageGallerySaver is a plugin; just add the dependency and have a go. */
final result1 = await ImageGallerySaver.saveFile(file.path);
print(result1);
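For anyone doing the same from Java on Android, a minimal sketch of the equivalent JFIF patch (like the Dart snippet above, this assumes the JPEG starts with an SOI marker followed immediately by a JFIF APP0 segment; the file path is illustrative):
import java.io.RandomAccessFile;

// Patch the JFIF density fields in place to 300 DPI
RandomAccessFile raf = new RandomAccessFile("/path/to/image.jpg", "rw");
int dpi = 300;
raf.seek(13);
raf.write(1);                  // density units: dots per inch
raf.write((dpi >> 8) & 0xff);  // X density, high byte
raf.write(dpi & 0xff);         // X density, low byte
raf.write((dpi >> 8) & 0xff);  // Y density, high byte
raf.write(dpi & 0xff);         // Y density, low byte
raf.close();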
I want to save the depth info from ARCore to storage.
Here is the example from the developer guide:
public int getMillimetersDepth(Image depthImage, int x, int y) {
// The depth image has a single plane, which stores depth for each
// pixel as 16-bit unsigned integers.
Image.Plane plane = depthImage.getPlanes()[0];
int byteIndex = x * plane.getPixelStride() + y * plane.getRowStride();
ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
short depthSample = buffer.getShort(byteIndex);
return depthSample;
}
So I want to save this ByteBuffer to a local file, but my output .txt file is not readable. How can I fix this?
Here is what I have
Image depthImage = frame.acquireDepthImage();
Image.Plane plane = depthImage.getPlanes()[0];
int format = depthImage.getFormat();
ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
byte[] data = new byte[buffer.remaining()];
buffer.get(data);
File mypath=new File(super.getExternalFilesDir("depDir"),Long.toString(lastPointCloudTimestamp)+".txt");
FileChannel fc = new FileOutputStream(mypath).getChannel();
fc.write(buffer);
fc.close();
depthImage.close();
I tried to decode them with
String s = new String(data, "UTF-8");
System.out.println(StandardCharsets.UTF_8.decode(buffer).toString());
but the output is still strange like this
.03579:<=>#ABCDFGHJKMNOPRQSUWY]_b
In order to obtain the depth data provided by the ARCore session, you need to write the raw bytes into your local file. A Buffer object is a container: it contains a finite sequence of elements of a specific primitive type (here, bytes for a ByteBuffer). So what you need to write to your file is your data variable, which holds the information previously stored in the buffer (via buffer.get(data)).
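A minimal sketch of the corrected write, reusing the variables from the question (the try-with-resources block also takes care of closing the stream; see the note about IOException below):
// Write the depth bytes already copied into 'data', not the drained ByteBuffer
File mypath = new File(getExternalFilesDir("depDir"), Long.toString(lastPointCloudTimestamp) + ".txt");
try (FileOutputStream fos = new FileOutputStream(mypath)) {
    fos.write(data);
}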
It works fine for me; I managed to draw the resulting depth map with the following Python code (the idea behind it can easily be adapted to Java):
import numpy as np
import cv2 as cv

depthData = np.fromfile('depthdata.txt', dtype = np.uint16)
H = 120
W = 160
def extractDepth(x):
depthConfidence = (x >> 13) & 0x7
if (depthConfidence > 6): return 0
return x & 0x1FFF
depthMap = np.array([extractDepth(x) for x in depthData]).reshape(H,W)
depthMap = cv.rotate(depthMap, cv.ROTATE_90_CLOCKWISE)
For further details, read the information concerning the format of the depthmap (DEPTH16) here: https://developer.android.com/reference/android/graphics/ImageFormat#DEPTH16
You must also be aware that the depth map resolution is set to 160x120 pixels and that it is oriented in landscape format.
Also make sure to surround your code with a try/catch block to handle a possible IOException.
I have converted my Keras model into a *.tflite file. After trying to figure out how to use TensorFlow Lite in Android Studio, I am stuck at pre-processing data from my Android phone (or its camera) and then running classification (it is a CNN model with 4 softmax output nodes; the input image size is (1, 256, 256, 3)). TensorFlow and other sites do not give much information about the input and output of tflite.run(input, output) (their types, etc.) for generating predictions from images that come from the phone's gallery or camera, and I am also new to Java application development, so I hope you can help me figure this out and complete the application. Thanks.
I have included the tflite model and can open an image from the gallery, but I don't know how to pre-process it, as Java is new to me.
You can get a better understanding of how to perform the pre-processing for image files (at least for ImageNet-style models) from the TFLite Android example here.
They convert a Bitmap using this function:
private void convertBitmapToByteBuffer(Bitmap bitmap) {
if (imgData == null) {
return;
}
imgData.rewind();
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
// Convert the image to floating point.
int pixel = 0;
long startTime = SystemClock.uptimeMillis();
for (int i = 0; i < getImageSizeX(); ++i) {
for (int j = 0; j < getImageSizeY(); ++j) {
final int val = intValues[pixel++];
addPixelValue(val);
}
}
long endTime = SystemClock.uptimeMillis();
LOGGER.v("Timecost to put values into ByteBuffer: " + (endTime - startTime));
}
To be precise, the per-pixel preprocessing is done as follows:
protected void addPixelValue(int pixelValue) {
imgData.putFloat((((pixelValue >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat((((pixelValue >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
}
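To tie this to the question's (1, 256, 256, 3) float input with 4 softmax outputs, a rough sketch of allocating the input buffer and invoking the interpreter could look like this (the IMAGE_MEAN/IMAGE_STD values and the float32 input type are assumptions; adjust them to your training pipeline):
// Assumed constants for a 256x256 RGB float model (adjust to your model)
int imgSizeX = 256, imgSizeY = 256;
float IMAGE_MEAN = 127.5f, IMAGE_STD = 127.5f;
// 1 batch * 256 * 256 * 3 channels * 4 bytes per float
ByteBuffer imgData = ByteBuffer.allocateDirect(4 * imgSizeX * imgSizeY * 3);
imgData.order(ByteOrder.nativeOrder());
int[] intValues = new int[imgSizeX * imgSizeY];
// Scale the Bitmap to 256x256, then fill imgData with convertBitmapToByteBuffer(bitmap) as shown above
float[][] output = new float[1][4]; // the 4 softmax probabilities
tflite.run(imgData, output);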
I am currently implementing an Android PrintService that is able to print PDFs via thermal printers. I managed to convert the PDF to a bitmap using PdfRenderer and I am even able to print the document.
The thing is, the document (bitmap) is not full page width.
I am receiving the document at 297x420 resolution and I am using a printer with 58mm paper.
This is how I process the document (written in C#, using Xamarin):
// Create PDF renderer
var pdfRenderer = new PdfRenderer(fileDescriptor);
// Open page
PdfRenderer.Page page = pdfRenderer.OpenPage(index);
// Create bitmap for page
Bitmap bitmap = Bitmap.CreateBitmap(page.Width, page.Height, Bitmap.Config.Argb8888);
// Now render page into bitmap
page.Render(bitmap, null, null, PdfRenderMode.ForPrint);
And then, converting the bitmap into ESC/POS:
// Initialize result
List<byte> result = new List<byte>();
// Init ESC/POS
result.AddRange(new byte[] { 0x1B, 0x33, 0x21 });
// Init ESC/POS bmp commands (will be repeated)
byte[] escBmp = new byte[] { 0x1B, 0x2A, 0x01, (byte)(bitmap.Width % 256), (byte)(bitmap.Height / 256) };
// Iterate height
for (int i = 0; i < (bitmap.Height / 24 + 1); i++)
{
// Add bitmap commands to result
result.AddRange(escBmp);
// Init pixel color
int pixelColor;
// Iterate width
for (int j = 0; j < bitmap.Width; j++)
{
// Init data
byte[] data = new byte[] { 0x00, 0x00, 0x00 };
for (int k = 0; k < 24; k++)
{
if (((i * 24) + k) < bitmap.Height)
{
// Get pixel color
pixelColor = bitmap.GetPixel(j, (i * 24) + k);
// Check pixel color
if (pixelColor != 0)
{
data[k / 8] += (byte)(128 >> (k % 8));
}
}
}
// Add data to result
result.AddRange(data);
}
// Add some... other stuff
result.AddRange(new byte[] { 0x0D, 0x0A });
}
// Return data
return result.ToArray();
Current result looks like this:
Thank you all in advance.
There is no magic "scale-to-page-width" command in the ESC/POS command set. You need to know the max width of your printer (available in the manual), and then you can:
Double the width and height for some image output commands -- you are using ESC *, which supports a low-density mode, but height and width change in different ratios.
Render the PDF wider to begin with - match the Bitmap size to the printer page width, not the PDF page width. The same problem is solved at PDFrenderer setting scale to screen
You can also simply stretch the image before you send it, if you are happy with the lower quality. See: How to Resize a Bitmap in Android?
As an aside, your ESC * implementation is incorrect: there are two bytes for the width. Check the ESC/POS manual for the correct usage, or read over the correct implementations in PHP or Python that I've linked in another question: ESC POS command ESC* for printing bit image on printer
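For illustration, a small sketch of how the ESC * stripe header could be built with the width split across two bytes (shown in Java for consistency with the rest of this page; mode 33, 24-dot double density, is an assumption chosen to match the 24 rows per stripe in the question's loop):
// ESC * m nL nH: number of horizontal dots = nL + nH * 256
int width = bitmapWidth;  // dots per stripe row ('bitmapWidth' is an assumed variable)
byte[] escBmp = new byte[] {
    0x1B, 0x2A, 33,          // ESC *, mode 33 = 24-dot double density
    (byte) (width % 256),    // nL: low byte of the width
    (byte) (width / 256)     // nH: high byte of the width
};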
As the title says, how do I convert a bitmap's RGB back to a YUV byte[] using ScriptIntrinsicColorMatrix? Below is the sample code (its output cannot be decoded by ZXing):
public byte[] getYUVBytes(Bitmap src, boolean initOutAllocOnce){
if(!initOutAllocOnce){
outYUV = null;
}
if(outYUV == null){
outYUV = Allocation.createSized(rs, Element.U8(rs), src.getByteCount());
}
byte[] yuvData = new byte[src.getByteCount()];
Allocation in;
in = Allocation.createFromBitmap(rs, src,
Allocation.MipmapControl.MIPMAP_NONE,
Allocation.USAGE_SCRIPT);
scriptColor.setRGBtoYUV();
scriptColor.forEach(in, outYUV);
outYUV.copyTo(yuvData);
return yuvData;
}
One thing I notice is that the original camera YUV frame is 3110400 bytes, but after the ScriptIntrinsicColorMatrix conversion it becomes 8294400 bytes, which I think is wrong.
The reason for YUV -> BW -> YUV is that I want to convert the image to black and white (not grayscale) and back to YUV so that ZXing can decode it, while at the same time showing the black-and-white frame in a SurfaceView (like a custom camera filter).
I tried the code below, but it is a bit slow (its output can be decoded by ZXing).
int[] intArray = null;
intArray = new int[bmp.getWidth() * bmp.getHeight()];
bmp.getPixels(intArray, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
LuminanceSource source = new RGBLuminanceSource(cameraResolution.x, cameraResolution.y, intArray);
data = source.getMatrix();
Is there any other fast alternative for RGB to YUV, if it cannot be done with the ScriptIntrinsicColorMatrix class?
Please and thank you.