This is my first time posting a question. I've been building an object classification program using TensorFlow Lite on an Android device (Java). I already built a program with the same functionality in Python with Keras, converted the model to tflite form, and used it on Android, but the result is quite different from what I got in Python. I suspect the image preprocessing before inference is incorrect.
In Python, the image processing before inference is as follows:
import cv2
import numpy as np

img_list = []
img = cv2.imread(image_dir) / 255.  # BGR float64, scaled to [0, 1]
img = cv2.resize(img, (64, 64), interpolation=cv2.INTER_AREA)
img_list.append(img)
img_list = np.array(img_list).astype(np.float32)
model.predict(img_list, batch_size=512, verbose=0)
In Android (Java), the image processing before inference is as follows:
Mat img = new Mat();
Utils.bitmapToMat(bitmap, img, true);
Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2GRAY); // to grayscale
Size size = new Size(64, 64);
Imgproc.resize(img, img, size, 0, 0, Imgproc.INTER_AREA); // resize; the interpolation flag must be the 6th argument
Bitmap dst = Bitmap.createBitmap(img.width(), img.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(img, dst); // convert to bitmap
int imageTensorIndex = 0;
// tensorFlowLiteModel is the Interpreter (loaded from XXXX.tflite) that also runs inference
DataType imageDataType = tensorFlowLiteModel.getInputTensor(imageTensorIndex).dataType();
TensorImage tfImage = new TensorImage(imageDataType);
tfImage.load(dst);
int probabilityTensorIndex = 0;
int[] probabilityShape = tensorFlowLiteModel.getOutputTensor(probabilityTensorIndex).shape();
DataType probabilityDataType = tensorFlowLiteModel.getOutputTensor(probabilityTensorIndex).dataType();
outputProbabilityBuffer = TensorBuffer.createFixedSize(probabilityShape, probabilityDataType);
tensorFlowLiteModel.run(tfImage.getBuffer(), outputProbabilityBuffer.getBuffer().rewind());
It actually runs and produces a result, but the result differs from Python (Keras), so I suspect the image preprocessing is wrong. I also know how to do the preprocessing with a ByteBuffer, but that approach failed with an OOM (out of memory) error, which is why I want to use the TensorImage class. If you know how to deal with this problem, please let me know.
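For what it's worth, a common source of exactly this kind of mismatch is value scaling: TensorImage.load(bitmap) keeps pixel values in the 0..255 range, while the Python pipeline divides by 255. Below is a minimal sketch using the TFLite Support Library's ImageProcessor, assuming a float32 input tensor. Two further differences to check: the Python code shown never converts to grayscale while the Java code does, so the channel count must match whatever the model expects, and ResizeOp only offers BILINEAR or NEAREST_NEIGHBOR, not INTER_AREA, which can still cause small numeric differences.
import org.tensorflow.lite.support.common.ops.NormalizeOp;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;

// Resize and scale inside the TensorImage pipeline instead of with OpenCV.
ImageProcessor imageProcessor = new ImageProcessor.Builder()
        .add(new ResizeOp(64, 64, ResizeOp.ResizeMethod.BILINEAR)) // closest available to INTER_AREA
        .add(new NormalizeOp(0f, 255f)) // (x - 0) / 255, mirroring the /255. in Python
        .build();

TensorImage tfImage = new TensorImage(imageDataType); // should be DataType.FLOAT32 to match the Keras floats
tfImage.load(bitmap); // the original bitmap; no manual resize needed
tfImage = imageProcessor.process(tfImage);
tensorFlowLiteModel.run(tfImage.getBuffer(), outputProbabilityBuffer.getBuffer().rewind());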
Related
I'm in the draw loop of an Android view:
Bitmap bitmap = Bitmap.createBitmap(this.getWidth(), this.getHeight(), Bitmap.Config.ARGB_4444);
Canvas newCanvas = new Canvas(bitmap);
super.draw(newCanvas);
Log.d("AndroidUnity","Canvas Drawn!");
mImageView.setImageBitmap(bitmap);
And the above code shows me the correct drawing on the attached Image Viewer.
When I convert the bitmap to a byte array:
ByteBuffer byteBuffer = ByteBuffer.allocate(bitmap.getByteCount());
bitmap.copyPixelsToBuffer(byteBuffer);
byte[] bytes = byteBuffer.array();
importing the bytes into Unity does not work (it shows a black image on my RawImage):
imageTexture2D = new Texture2D(width, height, TextureFormat.ARGB4444, false);
imageTexture2D.LoadRawTextureData(bytes);
imageTexture2D.Apply();
RawImage.texture = imageTexture2D;
Any ideas on how to get the Java byte[] to display as a texture/image in Unity? I've tested that the bytes are transferred correctly, i.e. when I push a byte array of {1,2,3,4} from Android, I get {1,2,3,4} on the Unity side.
I should also mention that Unity throws an error when trying to transfer the bytes as a byte[], so instead I have to follow this advice on the C# side:
void ReceiveAndroidBytes(AndroidJavaObject jo) {
    AndroidJavaObject bufferObject = jo.Get<AndroidJavaObject>("Buffer");
    byte[] bytes = AndroidJNIHelper.ConvertFromJNIArray<byte[]>(bufferObject.GetRawObject());
}
and a trivial byte[] container class "Buffer" on the Java side.
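The container class itself isn't shown in the question; a minimal sketch of what it needs, assuming the only requirement is a public byte[] field whose name matches the "Buffer" string passed to jo.Get (the class and constructor names here are hypothetical):
// Hypothetical Java-side container; only the field name "Buffer" matters,
// because the C# side looks it up with jo.Get<AndroidJavaObject>("Buffer").
public class BytesContainer {
    public byte[] Buffer;

    public BytesContainer(byte[] data) {
        this.Buffer = data;
    }
}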
I was trying to do exactly the same thing and my initial attempts also produced a black texture. I do the array conversion with AndroidJNIHelper.ConvertFromJNIArray as you do, except that I used sbyte[] instead of byte[]. To set the actual image data I ended up using
imageTexture2D.SetPixelData(bytes, 0);
If I'm not mistaken, LoadRawTextureData expects data even rawer than an array of pixel values; it may be the format in which graphics cards store textures with compression. If that is true, raw pixel data isn't in the right format, so it can't be decoded.
As the title says: how do I convert bitmap RGB back to a YUV byte[] using ScriptIntrinsicColorMatrix? Below is the sample code (its output cannot be decoded by ZXing):
public byte[] getYUVBytes(Bitmap src, boolean initOutAllocOnce) {
    if (!initOutAllocOnce) {
        outYUV = null;
    }
    if (outYUV == null) {
        outYUV = Allocation.createSized(rs, Element.U8(rs), src.getByteCount());
    }
    byte[] yuvData = new byte[src.getByteCount()];
    Allocation in = Allocation.createFromBitmap(rs, src,
            Allocation.MipmapControl.MIPMAP_NONE,
            Allocation.USAGE_SCRIPT);
    scriptColor.setRGBtoYUV();
    scriptColor.forEach(in, outYUV);
    outYUV.copyTo(yuvData);
    return yuvData;
}
One thing I notice: the original camera YUV frame is 3,110,400 bytes, but after the ScriptIntrinsicColorMatrix conversion it becomes 8,294,400 bytes, which I think is wrong (src.getByteCount() counts 4 bytes per pixel for an ARGB_8888 bitmap, whereas NV21 camera frames use 1.5 bytes per pixel).
The reason for YUV -> BW -> YUV is that I want to convert the image to black and white (not grayscale) and back to YUV so ZXing can decode it, while at the same time showing the black-and-white image in a SurfaceView (like a custom camera filter).
I tried the code below, but it's a bit slow (its output can be decoded by ZXing).
int[] intArray = new int[bmp.getWidth() * bmp.getHeight()];
bmp.getPixels(intArray, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
LuminanceSource source = new RGBLuminanceSource(cameraResolution.x, cameraResolution.y, intArray);
data = source.getMatrix();
Is there any other fast alternative for RGB to YUV, in case it can't be done with the ScriptIntrinsicColorMatrix class?
Please and thank you.
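For reference, a plain-Java alternative (no RenderScript) is the standard BT.601 integer conversion to NV21. This is only a sketch and not guaranteed to be faster, but its output has the 1.5-bytes-per-pixel layout that camera frames and ZXing's PlanarYUVLuminanceSource expect, unlike the 4-bytes-per-pixel buffer that comes out of an ARGB_8888-sized allocation:
// Sketch: convert ARGB pixels (e.g. from Bitmap.getPixels) to an NV21 byte[].
// Uses the common BT.601 integer approximation; clamping keeps values in 0..255.
static byte[] argbToNv21(int[] argb, int width, int height) {
    byte[] nv21 = new byte[width * height * 3 / 2];
    int yIndex = 0;
    int uvIndex = width * height; // V/U plane starts after the full-size Y plane
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int p = argb[j * width + i];
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            nv21[yIndex++] = (byte) Math.max(0, Math.min(255, y));
            if (j % 2 == 0 && i % 2 == 0) { // chroma is 2x2 subsampled, V before U in NV21
                int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
            }
        }
    }
    return nv21;
}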
I recently started working with the Java bindings for OpenCV to make a quick and dirty template matching project. Basically, I am trying to read a set of jpg images (saved in MS Paint) into Mats and then use template matching to find their locations in a screenshot taken with java.awt.Robot.
When it comes time to do the template matching, this error is thrown:
OpenCV Error: Assertion failed ((depth == CV_8U || depth == CV_32F)
&& type == _templ.type() && _img.dims() <= 2) in cv::matchTemplate
After searching, it looks like the issue is that the two Mats I am trying to use do not have the same "type". What I am not sure of is what this refers to. I assume it is the Mat's CvType; if I print out the CvType of the image I get a type() of 4 == CvType.CV_32SC1, and for my template I get a type() of 20 == CvType.CV_32SC3.
But I feel like this is not the correct type() to compare; I have a feeling it refers to the data type of how the data is stored in the Mat, but I have no good links to back this up, just remembrances from many SO searches.
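For what it's worth, type() packs both the depth and the channel count into one integer: type = depth + ((channels - 1) << 3). A quick check:
// Demonstrates how the CvType values mentioned above decode into depth + channels.
public class CvTypeDemo {
    public static void main(String[] args) {
        System.out.println(org.opencv.core.CvType.CV_32SC1); // 4  = 32-bit signed, 1 channel
        System.out.println(org.opencv.core.CvType.CV_8UC3);  // 16 = 8-bit unsigned, 3 channels
        System.out.println(org.opencv.core.CvType.CV_32SC3); // 20 = 32-bit signed, 3 channels
    }
}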
Here is my code for loading my jpg images into a Mat:
Mat pic_ = Imgcodecs.imread("MyPath\\image.jpg");
pic_.convertTo(pic_, CvType.CV_32SC1);
Here the second line turns my type() from 16 to 20 (convertTo changes the depth but keeps the channel count), though as per my last comment I don't think this is the proper way to alter the Mat to match the image... because convertTo'ing this Mat to match the type of the screenshot (below) does not fix the error.
Here is how I am creating the image Mat
Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage screenShot = rob.createScreenCapture(screenRect);
Mat screenImage = bufferedImageToMat(screenShot);
So I first take a screenshot with Robot.createScreenCapture, and then convert it to a Mat with:
private Mat bufferedImageToMat(BufferedImage inBuffImg)
{
    BufferedImage image = new BufferedImage(inBuffImg.getWidth(), inBuffImg.getHeight(), BufferedImage.TYPE_INT_RGB);
    Graphics2D g2d = image.createGraphics();
    g2d.drawImage(inBuffImg, 0, 0, null);
    g2d.dispose();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_32SC1);
    int[] data = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    mat.put(0, 0, data);
    return mat;
}
From what I could tell, the BufferedImage created by Robot is of type BufferedImage.TYPE_3BYTE_BGR, which gives me the error "DataBufferInt cannot be cast to DataBufferByte" when trying to get the pixel data. So, per the linked question, I redraw the BufferedImage as type BufferedImage.TYPE_INT_RGB and pull the data out as a DataBufferInt.
So, all in all, should I be trying to match the Mat.type()s, or does my problem lie elsewhere? If not, how can I alter either of the Mats so that they can be used with Imgproc.matchTemplate properly?
I feel like the easiest solution would be to convert the image loaded from file to match the screenshot Mat.
EDIT: The exact section of code that gives the error is below
// Mat imageTemplate is a function argument; the loaded jpg image
// Take a picture of the screen
Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage screenShot = rob.createScreenCapture(screenRect);
Mat screenImage = bufferedImageToMat(screenShot);
// Create the result matrix
int result_cols = screenImage.cols() - imageTemplate.cols() + 1;
int result_rows = screenImage.rows() - imageTemplate.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32SC1);
newStatus("ScreenType: " + screenImage.type());
newStatus("TemplaType: " + imageTemplate.type());
// Choose a matching method
int matchMethod = Imgproc.TM_SQDIFF_NORMED;
// Do the Matching and Normalize
Imgproc.matchTemplate(screenImage, imageTemplate, result, matchMethod);
// Error occurs on previous line
As @Miki pointed out in the comments, the answer was getting the channel type to match for the image and template. I ended up changing my bufferedImageToMat function.
private Mat bufferedImageToMat(BufferedImage inBuffImg)
{
    // TYPE_3BYTE_BGR gives one byte per channel in BGR order, matching CV_8UC3
    BufferedImage image = new BufferedImage(inBuffImg.getWidth(), inBuffImg.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
    Graphics2D g2d = image.createGraphics();
    g2d.drawImage(inBuffImg, 0, 0, null);
    g2d.dispose();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
    byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    mat.put(0, 0, data);
    return mat;
}
My templates are read in as CvType.CV_8UC3, so it was just a matter of creating a Mat from the screen image with this type!
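For anyone following along, once both Mats are CV_8UC3 the matching step itself follows the standard pattern; a hedged sketch (names follow the question's code, with the usual OpenCV Java imports):
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

// Run the match and find the best location in the result map.
Mat result = new Mat();
Imgproc.matchTemplate(screenImage, imageTemplate, result, Imgproc.TM_SQDIFF_NORMED);
Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
Point matchLoc = mmr.minLoc; // for TM_SQDIFF_NORMED the best match is the minimum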
I am trying to get frames from a gif using OpenCV. I found "Convert each animated GIF frame to a separate BufferedImage" and used the second suggestion. I modified it slightly to return an array of Mats instead of BufferedImages.
I tried two methods to get BufferedImages from the gif. Each presented different problems.
With the previous thread's suggestion,
BufferedImage fImage=ir.read(i);
the program throws an "ArrayIndexOutOfBoundsException: 4096".
With the original code from the previous thread,
BufferedImage fImage=ir.getRawImageType(i).createBufferedImage(ir.getWidth(i),ir.getHeight(i));
each frame is a monotone color (not all black, though), and the Mat derived from the BufferedImage is empty.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
ArrayList<Mat> frames = new ArrayList<Mat>();
ImageReader ir = new GIFImageReader(new GIFImageReaderSpi());
ir.setInput(ImageIO.createImageInputStream(new File("ronPaulTestImage.gif")));
for (int i = 0; i < ir.getNumImages(true); i++) {
    BufferedImage fImage = ir.read(i);
    //BufferedImage fImage = ir.getRawImageType(i).createBufferedImage(ir.getWidth(i), ir.getHeight(i));
    fImage = toBufferedImageOfType(fImage, BufferedImage.TYPE_3BYTE_BGR);
    //byte[] pixels = ((DataBufferByte) fImage.getRaster().getDataBuffer()).getData();
    Mat m = new Mat(); // default constructor: the Mat has no size or type yet
    //m.put(0, 0, pixels);
    m.put(0, 0, ((DataBufferByte) fImage.getRaster().getDataBuffer()).getData());
    if (i == 40) {
        // a test: writes the mat and the image at the specified frame to files, exits
        ImageIO.write(fImage, "jpg", new File("TestError.jpg"));
        Imgcodecs.imwrite("TestErrorMat.jpg", m);
        System.exit(0);
    }
    frames.add(m);
}
Here is the gif I used
Following Spektre's advice, I found a better gif, which fixed the monochromatic BufferedImages. The lack of viewable Mats was caused by my use of the default constructor when declaring the Mat.
Working Code
public static ArrayList<Mat> getFrames(File gif) throws IOException {
    ArrayList<Mat> frames = new ArrayList<Mat>();
    ImageReader ir = new GIFImageReader(new GIFImageReaderSpi());
    ir.setInput(ImageIO.createImageInputStream(gif));
    for (int i = 0; i < ir.getNumImages(true); i++) {
        BufferedImage fImage = ir.read(i);
        fImage = toBufferedImageOfType(fImage, BufferedImage.TYPE_3BYTE_BGR);
        byte[] pixels = ((DataBufferByte) fImage.getRaster().getDataBuffer()).getData();
        Mat m = new Mat(fImage.getHeight(), fImage.getWidth(), CvType.CV_8UC3); // sized constructor this time
        m.put(0, 0, pixels);
        if (i == 15) { // a test: writes the mat and the image at the specified frame to files, exits
            ImageIO.write(fImage, "jpg", new File("TestError.jpg"));
            Imgcodecs.imwrite("TestErrorMat.jpg", m);
            System.exit(0);
        }
        frames.add(m);
    }
    return frames;
}
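The toBufferedImageOfType helper used above comes from the linked thread and isn't reproduced here; a minimal sketch consistent with how it is called:
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Redraws the image into a new BufferedImage of the requested type
// (e.g. TYPE_3BYTE_BGR) so its raster is backed by a DataBufferByte.
public static BufferedImage toBufferedImageOfType(BufferedImage original, int type) {
    if (original.getType() == type) {
        return original;
    }
    BufferedImage converted = new BufferedImage(original.getWidth(), original.getHeight(), type);
    Graphics2D g = converted.createGraphics();
    g.drawImage(original, 0, 0, null);
    g.dispose();
    return converted;
}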
I am not using libs for gif, nor Java, nor OpenCV, but ArrayIndexOutOfBoundsException: 4096
means that the dictionary is not cleared properly. Your gif is buggy; I tested it, and it contains errors: not enough clear codes are present for some frames. If your GIF decoder does not check/handle such a case, it simply crashes because its dictionary grows beyond the GIF limit of 4096 entries (12-bit codes).
Try another GIF, not a buggy one...
I have tested your gif: it has around 7 clear codes per frame and contains 941 errors in total (the absence of a clear code results in a dictionary overrun).
If you have source code for the GIF decoder,
then just find the part of the decoder where a new item is added to the dictionary and add
if (dictionary_items < 4096)
before it... If you ignore the wrong entries, the image still looks OK; most likely the encoder this gif was created with was not properly coded.
With bytedeco OpenCV, a simple way to get the first frame of a gif:
import java.awt.image.{BufferedImage, DataBufferByte}
import java.io.ByteArrayInputStream
import com.sun.imageio.plugins.gif.{GIFImageReader, GIFImageReaderSpi}
import javax.imageio.ImageIO
import org.bytedeco.javacv.{CanvasFrame, OpenCVFrameConverter}
import org.bytedeco.opencv.opencv_core.Mat
import org.opencv.core.CvType

def toMat(bi: BufferedImage): Mat = {
  val convertedImage = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_3BYTE_BGR)
  val graphics = convertedImage.getGraphics
  graphics.drawImage(bi, 0, 0, null)
  graphics.dispose()
  val data = convertedImage.getRaster.getDataBuffer.asInstanceOf[DataBufferByte].getData
  val mat = new Mat(convertedImage.getHeight, convertedImage.getWidth, CvType.CV_8UC3)
  mat.data.put(data: _*)
  mat
}

def show(image: Mat, title: String): Unit = {
  val canvas = new CanvasFrame(title, 1)
  canvas.setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE)
  canvas.showImage(new OpenCVFrameConverter.ToMat().convert(image))
}

val imageByteArrayOfGif = Array(....) // can be downloaded with java.net.HttpURLConnection
val ir = new GIFImageReader(new GIFImageReaderSpi())
val in = new ByteArrayInputStream(imageByteArrayOfGif)
ir.setInput(ImageIO.createImageInputStream(in))
val bi = ir.read(0) // first frame of the gif
val mat = toMat(bi)
show(mat, "buffered2mat")
I am working on Android development: once I get a byte array from a Google Glass frame, I am trying to scan the array using the ZXing library to detect a 1D barcode (UPC code).
I have tried this code snippet.
BufferedImage image = ImageIO.read(game);
BufferedImageLuminanceSource bils = new BufferedImageLuminanceSource(image);
HybridBinarizer hb = new HybridBinarizer(bils);
BitMatrix bm = hb.getBlackMatrix();
MultiDetector detector = new MultiDetector(bm);
DetectorResult dResult = detector.detect();
if (dResult == null)
{
    System.out.println("Image does not contain any barcode");
}
else
{
    BitMatrix QRImageData = dResult.getBits();
    Decoder decoder = new Decoder();
    DecoderResult decoderResult = decoder.decode(QRImageData);
    String QRString = decoderResult.getText();
    System.out.println(QRString);
}
It works fine for QR codes; it detects and decodes them well, but it does not detect UPC codes.
I also tried this code snippet,
InputStream barCodeInputStream = new FileInputStream(game);
BufferedImage barCodeBufferedImage = ImageIO.read(barCodeInputStream);
BufferedImage image = ImageIO.read(game);
LuminanceSource source = new BufferedImageLuminanceSource(image);
BinaryBitmap bitmap = new BinaryBitmap(new GlobalHistogramBinarizer(source));
RSSExpandedReader rssExpandedReader = new RSSExpandedReader();
int rowNumber = bitmap.getHeight() / 2;
BitArray row = bitmap.getBlackRow(0, null);
Result theResult = rssExpandedReader.decodeRow(rowNumber, row, new Hashtable());
In both cases I am getting "Exception in thread "main" com.google.zxing.NotFoundException".
Does anyone know how to fix this issue?
getBlackMatrix() -
Converts a 2D array of luminance data to 1 bit. As above, assume this method is expensive and do not call it repeatedly. This method is intended for decoding 2D barcodes and may or may not apply sharpening. Therefore, a row from this matrix may not be identical to one fetched using getBlackRow(), so don't mix and match between them.
getBlackRow() -
Converts one row of luminance data to 1 bit data. May actually do the conversion, or return cached data. Callers should assume this method is expensive and call it as seldom as possible. This method is intended for decoding 1D barcodes and may choose to apply sharpening.
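Two further things stand out in the second snippet: bitmap.getBlackRow(0, null) reads row 0 even though decodeRow is told the row index is bitmap.getHeight()/2, and RSSExpandedReader only handles RSS Expanded symbols, not UPC. Rather than driving the detector and binarizer internals by hand, a hedged alternative sketch is to let ZXing's MultiFormatReader try both 1D and 2D formats ("game" is the file from the question; the hint values shown are assumptions about which formats are wanted):
import java.awt.image.BufferedImage;
import java.util.Arrays;
import java.util.EnumMap;
import java.util.Map;
import javax.imageio.ImageIO;
import com.google.zxing.*;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

BufferedImage image = ImageIO.read(game);
LuminanceSource source = new BufferedImageLuminanceSource(image);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));

// Restrict the search to the formats of interest and try harder on noisy frames.
Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
hints.put(DecodeHintType.POSSIBLE_FORMATS,
        Arrays.asList(BarcodeFormat.UPC_A, BarcodeFormat.UPC_E, BarcodeFormat.QR_CODE));
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);

try {
    Result result = new MultiFormatReader().decode(bitmap, hints); // handles 1D and 2D
    System.out.println(result.getText());
} catch (NotFoundException e) {
    System.out.println("Image does not contain any barcode");
}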