TensorFlow Lite and its deployment to Android Studio - Java

I have converted my Keras model into a *.tflite file. After trying to figure out how to use TensorFlow Lite in Android Studio, I am stuck at the point of pre-processing data from my Android phone (or from its camera) and then making a classification (it is a CNN model with 4 softmax output nodes, and the input image size is (1, 256, 256, 3)). TensorFlow's documentation and other sites do not give much information about the inputs and outputs (their types, etc.) of tflite.run(input, output) when generating predictions from images that come from the phone's gallery or its camera, and I am also new to Java application development. I hope you can help me figure this out and complete the application, thanks. (Sorry for my bad grammar.)
I have included the tflite model and can open an image from the gallery, but I don't know how to pre-process it, as Java is new to me.

You can get a better understanding of how to perform the pre-processing for image files (at least for ImageNet-style models) from the TFLite Android example here.
They convert a Bitmap using this function:
private void convertBitmapToByteBuffer(Bitmap bitmap) {
  if (imgData == null) {
    return;
  }
  imgData.rewind();
  bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
  // Convert the image to floating point.
  int pixel = 0;
  long startTime = SystemClock.uptimeMillis();
  for (int i = 0; i < getImageSizeX(); ++i) {
    for (int j = 0; j < getImageSizeY(); ++j) {
      final int val = intValues[pixel++];
      addPixelValue(val);
    }
  }
  long endTime = SystemClock.uptimeMillis();
  LOGGER.v("Timecost to put values into ByteBuffer: " + (endTime - startTime));
}
The per-pixel preprocessing, to be precise, is done as follows:
protected void addPixelValue(int pixelValue) {
  // Extract the R, G and B channels from the packed ARGB int and normalize each.
  imgData.putFloat((((pixelValue >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
  imgData.putFloat((((pixelValue >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
  imgData.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
}

Related

How to convert image DPI (dots per inch) in Dart/Flutter?

The question is about the (print) DPI of various image formats (e.g., JPEG or PNG).
This question does NOT relate to SCREEN DPI or the sizes of various Android devices. It is also NOT about showing a Bitmap on the device at screen size.
The following will set the image resolution to 300 dots per inch:
Uint8List resultBytes = provider.bytes;
print(resultBytes.toString());
var dpi = 300;
// For a standard JFIF JPEG, byte 13 of the file is the density-units field
// (1 = dots per inch) and bytes 14-17 hold the X and Y densities (big-endian).
resultBytes[13] = 1;
resultBytes[14] = (dpi >> 8);
resultBytes[15] = (dpi & 0xff);
resultBytes[16] = (dpi >> 8);
resultBytes[17] = (dpi & 0xff);
print(resultBytes.toString());
String tempPath = (await getTemporaryDirectory()).path;
String imgName = "IMG" + DateTime.now().microsecondsSinceEpoch.toString() + ".jpg";
File file = File('$tempPath/$imgName');
await file.writeAsBytes(resultBytes);
// You will not be able to see the image in Android local storage, so rewrite the
// file using the line below to make it show up in the Pictures directory of
// Android storage. Note: ImageGallerySaver is a plugin; just add the dependency
// and have a go.
final result1 = await ImageGallerySaver.saveFile(file.path);
print(result1);
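For reference, here is a hedged Java equivalent of the same byte patch. It makes the same assumption as above (a standard JFIF JPEG whose bytes 13-17 hold the density-units and X/Y density fields); the method name setJpegDpi is just illustrative.

  static byte[] setJpegDpi(byte[] jpeg, int dpi) {
      jpeg[13] = 1;                   // density units: 1 = dots per inch
      jpeg[14] = (byte) (dpi >> 8);   // X density, big-endian high byte
      jpeg[15] = (byte) (dpi & 0xFF); // X density, low byte
      jpeg[16] = (byte) (dpi >> 8);   // Y density, big-endian high byte
      jpeg[17] = (byte) (dpi & 0xFF); // Y density, low byte
      return jpeg;
  }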

How to save a 16-bit depth image to a file in ARCore (Java)

I want to save the depth info from ARCore to storage.
Here is the example from the developer guide:
public int getMillimetersDepth(Image depthImage, int x, int y) {
  // The depth image has a single plane, which stores depth for each
  // pixel as 16-bit unsigned integers.
  Image.Plane plane = depthImage.getPlanes()[0];
  int byteIndex = x * plane.getPixelStride() + y * plane.getRowStride();
  ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
  short depthSample = buffer.getShort(byteIndex);
  return depthSample;
}
So I want to save this ByteBuffer to a local file, but my output txt file is not readable. How can I fix this?
Here is what I have:
Image depthImage = frame.acquireDepthImage();
Image.Plane plane = depthImage.getPlanes()[0];
int format = depthImage.getFormat();
ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
byte[] data = new byte[buffer.remaining()];
buffer.get(data);
File mypath = new File(super.getExternalFilesDir("depDir"), Long.toString(lastPointCloudTimestamp) + ".txt");
FileChannel fc = new FileOutputStream(mypath).getChannel();
fc.write(buffer);
fc.close();
depthImage.close();
I tried to decode them with
String s = new String(data, "UTF-8");
System.out.println(StandardCharsets.UTF_8.decode(buffer).toString());
but the output is still strange, like this:
.03579:<=>#ABCDFGHJKMNOPRQSUWY]_b
In order to keep the depth data provided by the ARCore session, you need to write the bytes into your local file. A Buffer object is a container: it contains a finite sequence of elements of a specific primitive type (here, bytes for a ByteBuffer). So what you need to write into your file is your data variable, which holds the information previously stored in the buffer (via buffer.get(data)). Writing buffer itself fails because buffer.get(data) has already advanced its position to the end.
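A minimal sketch of that fix, reusing the variables from the question (write the extracted byte array, not the already-drained buffer):

  // 'data' already holds the raw DEPTH16 bytes copied out of the plane's buffer.
  try (FileOutputStream fos = new FileOutputStream(mypath)) {
      fos.write(data);
  }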
It works fine for me; I managed to draw the resulting depth map with the following Python code (the idea behind it can easily be adapted to Java):
import numpy as np
import cv2 as cv

depthData = np.fromfile('depthdata.txt', dtype=np.uint16)

H = 120
W = 160

def extractDepth(x):
    # Upper 3 bits: confidence; lower 13 bits: depth in millimeters.
    depthConfidence = (x >> 13) & 0x7
    if depthConfidence > 6:
        return 0
    return x & 0x1FFF

depthMap = np.array([extractDepth(x) for x in depthData]).reshape(H, W)
depthMap = cv.rotate(depthMap, cv.ROTATE_90_CLOCKWISE)
For further details, read the information about the format of the depth map (DEPTH16) here: https://developer.android.com/reference/android/graphics/ImageFormat#DEPTH16
You should also be aware that the depth map resolution is set to 160x120 pixels and is oriented in landscape format.
Also make sure to surround your code with a try/catch block in case of an IOException.
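For illustration, here is the same DEPTH16 unpacking as the Python above, sketched in Java. Per the DEPTH16 documentation, the top 3 bits of each 16-bit sample carry the confidence and the low 13 bits the depth in millimeters; the confidence cutoff mirrors the Python snippet.

  static int extractDepthMillimeters(short depthSample) {
      int confidence = (depthSample >> 13) & 0x7; // top 3 bits: confidence
      if (confidence > 6) {
          return 0; // same filtering rule as the Python snippet above
      }
      return depthSample & 0x1FFF; // low 13 bits: depth in millimeters
  }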

Print bitmap full page width using ESC/POS

I am currently implementing an Android PrintService that is able to print PDFs via thermal printers. I managed to convert a PDF to a bitmap using PdfRenderer, and I am even able to print the document.
The thing is, the document (bitmap) is not full page width.
I receive the document at 297x420 resolution, and I am using a printer with 58mm paper.
This is how I process the document (written in C#, using Xamarin):
// Create PDF renderer
var pdfRenderer = new PdfRenderer(fileDescriptor);
// Open page
PdfRenderer.Page page = pdfRenderer.OpenPage(index);
// Create bitmap for page
Bitmap bitmap = Bitmap.CreateBitmap(page.Width, page.Height, Bitmap.Config.Argb8888);
// Now render page into bitmap
page.Render(bitmap, null, null, PdfRenderMode.ForPrint);
And then, converting the bitmap into ESC/POS:
// Initialize result
List<byte> result = new List<byte>();
// Init ESC/POS
result.AddRange(new byte[] { 0x1B, 0x33, 0x21 });
// Init ESC/POS bmp commands (will be repeated)
byte[] escBmp = new byte[] { 0x1B, 0x2A, 0x01, (byte)(bitmap.Width % 256), (byte)(bitmap.Height / 256) };
// Iterate height
for (int i = 0; i < (bitmap.Height / 24 + 1); i++)
{
    // Add bitmap commands to result
    result.AddRange(escBmp);
    // Init pixel color
    int pixelColor;
    // Iterate width
    for (int j = 0; j < bitmap.Width; j++)
    {
        // Init data
        byte[] data = new byte[] { 0x00, 0x00, 0x00 };
        for (int k = 0; k < 24; k++)
        {
            if (((i * 24) + k) < bitmap.Height)
            {
                // Get pixel color
                pixelColor = bitmap.GetPixel(j, (i * 24) + k);
                // Check pixel color
                if (pixelColor != 0)
                {
                    data[k / 8] += (byte)(128 >> (k % 8));
                }
            }
        }
        // Add data to result
        result.AddRange(data);
    }
    // Add some... other stuff
    result.AddRange(new byte[] { 0x0D, 0x0A });
}
// Return data
return result.ToArray();
Current result looks like this: (screenshot omitted)
Thank you all in advance.
There is no magic "scale-to-page-width" command in the ESC/POS command set. You need to know the max width of your printer, available in the manual, and then you can:
Double the width and height for some image output commands. You are using ESC *, which supports low density, but height and width change in different ratios.
Render the PDF wider to begin with: match the Bitmap size to the printer page width, not the PDF page width. The same problem is solved at PDFrenderer setting scale to screen.
You can also simply stretch the image before you send it, if you are happy with the lower quality. See: How to Resize a Bitmap in Android?
Aside: your ESC * implementation is incorrect. There are two bytes for the width. Check the ESC/POS manual for the correct usage, or read over the correct implementations in PHP or Python that I've linked in another question: ESC POS command ESC* for printing bit image on printer
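As a hedged sketch of that last point (in Java here, though the same applies to the C# above): since the loop packs 24 dots per column, a 24-dot mode byte is assumed; the key fix is that both bytes after the mode encode the horizontal dot count, not the height. Consult your printer's manual for the exact mode value.

  int width = bitmap.getWidth();
  byte[] escBmp = new byte[] {
      0x1B, 0x2A, 33,        // ESC * m, with m = 33 (24-dot double density, assumed)
      (byte) (width % 256),  // nL: low byte of the horizontal dot count
      (byte) (width / 256)   // nH: high byte of the horizontal dot count
  };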

How do I apply the hanning function to my audio sample?

I am a university student developing a music identification system for my final-year project. According to the research paper "Robust Audio Fingerprint Extraction Algorithm Based on 2-D Chroma", the following stages need to be included in my system:
Capture Audio Signal ----> Framing Window (Hanning window) -----> FFT ----->
High Pass Filter -----> etc.
I was able to write the audio capture function and I have also applied the FFT API to the code. But I am confused about how to apply the Hanning window function to my code. Can someone please help me with this? Tell me where I need to add this function and how to add it to the code.
Here is my audio capturing code with the FFT applied:
private class RecordAudio extends AsyncTask<Void, double[], Void> {
    @Override
    protected Void doInBackground(Void... params) {
        started = true;
        try {
            DataOutputStream dos = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(
                            recordingFile)));
            int bufferSize = AudioRecord.getMinBufferSize(frequency,
                    channelConfiguration, audioEncoding);
            audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    frequency, channelConfiguration, audioEncoding,
                    bufferSize);
            short[] buffer = new short[blockSize];
            double[] toTransform = new double[blockSize];
            long t = System.currentTimeMillis();
            long end = t + 15000;
            audioRecord.startRecording();
            double[] w = new double[blockSize];
            while (started && System.currentTimeMillis() < end) {
                int bufferReadResult = audioRecord.read(buffer, 0,
                        blockSize);
                for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                    toTransform[i] = (double) buffer[i] / 32768.0;
                    dos.writeShort(buffer[i]);
                }
                // new part
                toTransform = hanning(toTransform);
                transformer.ft(toTransform);
                publishProgress(toTransform);
            }
            audioRecord.stop();
            dos.close();
        } catch (Throwable t) {
            Log.e("AudioRecord", "Recording Failed");
        }
        return null;
    }
These links provide the Hanning window algorithm and code snippets:
WindowFunction.java
Hanning - MATLAB
The following is the code I used to apply the Hanning function in my application, and it works for me:
public double[] hanningWindow(double[] recordedData) {
    // Iterate over the whole data buffer (starting at n = 0 so the
    // first sample is windowed too; the Hann coefficient there is 0)
    for (int n = 0; n < recordedData.length; n++) {
        // Multiply each sample by the Hann window coefficient
        recordedData[n] *= 0.5 * (1 - Math.cos((2 * Math.PI * n)
                / (recordedData.length - 1)));
    }
    // Return the windowed buffer to the FFT function
    return recordedData;
}
First, I think you should consider keeping your FFT length fixed. If I understand your code correctly, you are currently using some kind of minimum buffer size as the FFT length as well. The FFT length has a huge effect on the performance and resolution of your calculation.
Your link to WindowFunction.java can generate an array that should be the same length as your FFT length (blockSize in your case, I think). You should then multiply each sample of your buffer by the value at the same index in that array.
This should be done before the FFT.
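A minimal sketch of that suggestion, assuming a fixed blockSize as the FFT length: precompute the window once, then multiply each frame by it before calling the FFT.

  // Precompute a Hann window of length blockSize (done once, outside the loop).
  double[] window = new double[blockSize];
  for (int n = 0; n < blockSize; n++) {
      window[n] = 0.5 * (1 - Math.cos(2 * Math.PI * n / (blockSize - 1)));
  }

  // Inside the recording loop, before transformer.ft(toTransform):
  for (int n = 0; n < blockSize; n++) {
      toTransform[n] *= window[n];
  }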

Video encode from sequence of images from java android [duplicate]

This question already has answers here:
Encode a series of Images into a Video
(4 answers)
Closed 8 years ago.
I would like to encode video from a sequence of images with Java only in my current Android project, i.e. without using any external tools such as the NDK.
Also, are there any Java libraries available for encoding video from a sequence of images?
You can use a pure Java library called JCodec (http://jcodec.org). It contains a primitive yet working H.264 (AVC) encoder and a fully functioning MP4 (ISO BMF) muxer.
Here's a CORRECTED code sample that uses the low-level API:
public void imageToMP4(BufferedImage bi) {
    // A transform to convert RGB to YUV colorspace
    RgbToYuv420 transform = new RgbToYuv420(0, 0);
    // A JCodec native picture that will hold the source image in YUV colorspace
    Picture toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
    // Perform conversion
    transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);
    // Create MP4 muxer ('sink' is a SeekableByteChannel opened on the output file)
    MP4Muxer muxer = new MP4Muxer(sink, Brand.MP4);
    // Add a video track
    CompressedTrack outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);
    // Create H.264 encoder ('rc' is a rate-control instance)
    H264Encoder encoder = new H264Encoder(rc);
    // Allocate a buffer that will hold an encoded frame
    ByteBuffer _out = ByteBuffer.allocate(bi.getWidth() * bi.getHeight() * 6);
    // Allocate storage for SPS/PPS; they need to be stored separately in a special place of the MP4 file
    List<ByteBuffer> spsList = new ArrayList<ByteBuffer>();
    List<ByteBuffer> ppsList = new ArrayList<ByteBuffer>();
    // Encode image into an H.264 frame; the result is stored in the '_out' buffer
    ByteBuffer result = encoder.encodeFrame(_out, toEncode);
    // Based on the frame above, form a correct MP4 packet
    H264Utils.encodeMOVPacket(result, spsList, ppsList);
    // Add packet to video track
    outTrack.addFrame(new MP4Packet(result, 0, 25, 1, 0, true, null, 0, 0));
    // Push saved SPS/PPS to a special storage in MP4
    outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
    // Write MP4 header and finalize recording
    muxer.writeHeader();
}
You can download the JCodec library from the project web site or via Maven; for the latter, add the snippet below to your pom.xml:
<dependency>
    <groupId>org.jcodec</groupId>
    <artifactId>jcodec</artifactId>
    <version>0.1.3</version>
</dependency>
[UPDATE 1] Android users can use something like the below to convert an Android Bitmap object to JCodec's native format:
public static Picture fromBitmap(Bitmap src) {
    Picture dst = Picture.create(src.getWidth(), src.getHeight(), ColorSpace.RGB);
    fromBitmap(src, dst);
    return dst;
}

public static void fromBitmap(Bitmap src, Picture dst) {
    int[] dstData = dst.getPlaneData(0);
    int[] packed = new int[src.getWidth() * src.getHeight()];
    src.getPixels(packed, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
    for (int i = 0, srcOff = 0, dstOff = 0; i < src.getHeight(); i++) {
        for (int j = 0; j < src.getWidth(); j++, srcOff++, dstOff += 3) {
            // Unpack the ARGB pixel into separate R, G, B components
            int rgb = packed[srcOff];
            dstData[dstOff] = (rgb >> 16) & 0xff;
            dstData[dstOff + 1] = (rgb >> 8) & 0xff;
            dstData[dstOff + 2] = rgb & 0xff;
        }
    }
}
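A hypothetical usage sketch tying the pieces together on Android, reusing names from the snippets above (bitmap, encoder and _out are assumed to already exist in scope):

  Picture rgb = fromBitmap(bitmap);  // RGB Picture via the helper above
  Picture yuv = Picture.create(rgb.getWidth(), rgb.getHeight(), ColorSpace.YUV420);
  new RgbToYuv420(0, 0).transform(rgb, yuv);          // colorspace conversion
  ByteBuffer frame = encoder.encodeFrame(_out, yuv);  // encode one H.264 frame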
