We are developing code for Android devices to normalise a bitmap; an external team has already implemented the same image normalisation in Python.
We are using the Android PyTorch library (version 1.10.0) to apply the normalisation to a bitmap image via a tensor, as shown in the code below:
public static float[] TORCHVISION_NORM_MEAN_RGB = new float[] {0.485f, 0.456f, 0.406f};
public static float[] TORCHVISION_NORM_STD_RGB = new float[] {0.229f, 0.224f, 0.225f};
Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(bmp,
        TORCHVISION_NORM_MEAN_RGB, TORCHVISION_NORM_STD_RGB);
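For reference, bitmapToFloat32Tensor is expected to apply torchvision-style per-channel normalisation. A minimal sketch of that math, assuming the standard (value / 255 - mean) / std formula (worth verifying against the Python side):

// Assumption: standard torchvision normalisation; not confirmed by this post.
// Each 0..255 channel value v becomes (v / 255 - mean) / std for its channel.
static float normalizeChannel(int v, float mean, float std) {
    return (v / 255.0f - mean) / std;
}
// Example: the red channel of a white pixel -> (1.0f - 0.485f) / 0.229f ≈ 2.25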
However, when we convert the tensor array back to a bitmap, we get the same image as the original, so we suspect the normalisation is not being applied to the tensor properly.
We expect the bluish-tinted output that the Python pipeline generates, for example:
[bluish Python output image]
But after normalisation the image we get looks identical to the original one, for example:
[Android output image]
We are using the code below to convert the tensor (inputTensor) to a bitmap:
final float[] scoreInput = inputTensor.getDataAsFloatArray();
Bitmap outBitmap = TensorToBitmap.floatArrayToBitmap(scoreInput, 300, 300);
fun floatArrayToBitmap(floatArray: FloatArray, width: Int, height: Int): Bitmap {
    // Requires: import android.graphics.Color.rgb and import kotlin.math.roundToInt
    // Create an empty bitmap in ARGB format
    val bmp: Bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    // One packed ARGB int per pixel
    val pixels = IntArray(width * height)
    // Map the smallest value in the array to 0 and the largest to 255
    val maxValue = floatArray.maxOrNull() ?: 1.0f
    val minValue = floatArray.minOrNull() ?: -1.0f
    val delta = (maxValue - minValue).takeIf { it != 0f } ?: 1.0f // avoid divide-by-zero
    val conversion = { v: Float -> ((v - minValue) / delta * 255.0f).roundToInt() }
    // The tensor is laid out channel-first (CHW): all R values, then all G, then all B
    for (i in 0 until width * height) {
        val r = conversion(floatArray[i])
        val g = conversion(floatArray[i + width * height])
        val b = conversion(floatArray[i + 2 * width * height])
        pixels[i] = rgb(r, g, b)
    }
    bmp.setPixels(pixels, 0, width, 0, 0, width, height)
    return bmp
}
Can anyone help us generate the bluish normalised image with Android PyTorch?
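A possible explanation for the missing tint (our reading, not confirmed in the post): floatArrayToBitmap rescales by the global min/max of the whole float array, which is a single affine map applied identically to all three channels, so on screen it largely undoes the per-channel shift introduced by the normalisation. A hypothetical fixed-range mapping that clamps instead of rescaling would keep each channel's offset; the [-2.5, 2.5] range below is purely illustrative:

// Hypothetical replacement for the min/max 'conversion' lambda: map a fixed
// normalised range to 0..255 and clamp, so each channel keeps its own offset.
static int toByteRange(float v) {
    int x = Math.round((v + 2.5f) / 5.0f * 255.0f);
    return Math.max(0, Math.min(255, x));
}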
So I'm trying to port some C++ code from a colleague that grabs image data over a Bluetooth serial port (I'm using an Android phone). From that data I need to generate a bitmap.
Before testing the ported code, I wrote this quick function that is supposed to generate a pure red rectangle. However, BitmapFactory.decodeByteArray() always fails and returns a null bitmap. I've checked for both of the exceptions it can throw, and neither one is thrown.
byte[] pixelData = new byte[225 * 160 * 4];
for (int i = 0; i < 225 * 160; i++) {
    pixelData[i * 4 + 0] = (byte) 255;
    pixelData[i * 4 + 1] = (byte) 255;
    pixelData[i * 4 + 2] = (byte) 0;
    pixelData[i * 4 + 3] = (byte) 0;
}
Bitmap image = null;

logBox.append("Creating bitmap from pixel data...\n");

BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
options.outWidth = 225;
options.outHeight = 160;

try {
    image = BitmapFactory.decodeByteArray(pixelData, 0, pixelData.length, options);
} catch (IllegalArgumentException e) {
    logBox.append(e.toString() + '\n');
}

//pixelData = null;

logBox.append("Bitmap generation complete\n");
decodeByteArray() code:
public static Bitmap decodeByteArray(byte[] data, int offset, int length, Options opts) {
    if ((offset | length) < 0 || data.length < offset + length) {
        throw new ArrayIndexOutOfBoundsException();
    }

    Bitmap bm;

    Trace.traceBegin(Trace.TRACE_TAG_GRAPHICS, "decodeBitmap");
    try {
        bm = nativeDecodeByteArray(data, offset, length, opts);

        if (bm == null && opts != null && opts.inBitmap != null) {
            throw new IllegalArgumentException("Problem decoding into existing bitmap");
        }
        setDensityFromOptions(bm, opts);
    } finally {
        Trace.traceEnd(Trace.TRACE_TAG_GRAPHICS);
    }

    return bm;
}
I would presume that it's nativeDecodeByteArray() that is failing.
I also notice the log message:
D/skia: --- SkImageDecoder::Factory returned null
Anyone got any ideas?
decodeByteArray() of BitmapFactory actually decodes an encoded image, i.e. one stored in a format such as JPEG or PNG. decodeFile() and decodeStream() make a little more sense, since your encoded image would typically come from a file or a server.
You don't want to decode anything. You are trying to get raw image data into a bitmap. Looking at your code it appears you are generating a 225 x 160 bitmap with 4 bytes per pixel, formatted ARGB. So this code should work for you:
int width = 225;
int height = 160;
int size = width * height;
int[] pixelData = new int[size];

for (int i = 0; i < size; i++) {
    // pack 4 bytes into an int for ARGB_8888
    pixelData[i] = ((0xFF & (byte) 255) << 24)  // alpha, 8 bits
                 | ((0xFF & (byte) 255) << 16)  // red, 8 bits
                 | ((0xFF & (byte) 0) << 8)     // green, 8 bits
                 |  (0xFF & (byte) 0);          // blue, 8 bits
}

Bitmap image = Bitmap.createBitmap(pixelData, width, height, Bitmap.Config.ARGB_8888);
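If the bit arithmetic feels noisy, the same packing can be written with the android.graphics.Color helper (a readability variant, not part of the original answer):

// Equivalent: pack an opaque red pixel via the Color utility
pixelData[i] = Color.argb(255, 255, 0, 0);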
I've been working on an extension of a current application to stream webcam data to an Android device. I can obtain the raw image data, in the form of an RGB byte array; the color space is sRGB. I need to send that array over the network to an Android client, which constructs it into a Bitmap image to display on the screen. My problem is that the color data is skewed. The arrays have the same hashcode before and after being sent, so I'm positive this isn't a data-loss problem. I've attached a sample image of how the color looks: skin tones and darker colors reconstruct okay, but lighter colors end up with a lot of yellow/red artifacts.
Server (Windows 10) code:

while (socket.isConnected()) {
    byte[] bufferArray = new byte[width * height * 3];

    ByteBuffer buff = cam.getImageBytes();
    for (int i = 0; i < bufferArray.length; i++) {
        bufferArray[i] = buff.get();
    }

    out.write(bufferArray);
    out.flush();
}
Client (Android) code:

while (socket.isConnected()) {
    int[] colors = new int[width * height];
    byte[] pixels = new byte[(width * height) * 3];
    int bytesRead = 0;

    for (int i = 0; i < (width * height * 3); i++) {
        int temp = in.read();
        if (temp == -1) {
            Log.d("WARNING", "Problem reading");
            break;
        } else {
            pixels[i] = (byte) temp;
            bytesRead++;
        }
    }

    int colorIndex = 0;
    for (int i = 0; i < pixels.length; i += 3) {
        int r = pixels[i];
        int g = pixels[i + 1];
        int b = pixels[i + 2];
        colors[colorIndex] = Color.rgb(r, g, b);
        colorIndex++;
    }

    Bitmap image = Bitmap.createBitmap(colors, width, height, Bitmap.Config.ARGB_8888);
    publishProgress(image);
}
The cam.getImageBytes() call is from an external library, but I have tested it and it works properly. Reconstructing the raw data into a BufferedImage works perfectly, using this code:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
image.getRaster().setPixels(0, 0, width, height, pixels);
But, of course, BufferedImages are not supported on Android.
I'm about to tear my hair out with this one, I've tried everything I can think of, so any and all insight would be extremely helpful!
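One likely culprit worth checking (a hypothesis on our part, not confirmed in the post): Java's byte is signed, so channel values above 127 come back out of pixels[] as negative ints, and Color.rgb() does not mask its arguments; that would corrupt exactly the lighter colors. Masking with & 0xFF before packing restores the 0..255 range:

// Sketch: mask each signed byte back to an unsigned 0..255 value before packing
int r = pixels[i] & 0xFF;
int g = pixels[i + 1] & 0xFF;
int b = pixels[i + 2] & 0xFF;
colors[colorIndex] = Color.rgb(r, g, b);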
I've written a Java method, but I need to use it in an Android project. Can someone help me convert it to Android, or tell me what I should do?
public Image getImage() {
    ColorModel cm = grayColorModel();
    if (n == 1) { // in case it's an 8 bit/pixel image
        return Toolkit.getDefaultToolkit().createImage(
                new MemoryImageSource(w, h, cm, pixData, 0, w));
    }
    return null; // no image when n != 1
}

protected ColorModel grayColorModel() {
    byte[] r = new byte[256];
    for (int i = 0; i < 256; i++)
        r[i] = (byte) (i & 0xff);
    return new IndexColorModel(8, 256, r, r, r);
}
For instance, to convert a grayscale image (byte array, imageSrc) to drawable:
byte[] imageSrc = [...];

// That's where the RGBA array goes.
byte[] imageRGBA = new byte[imageSrc.length * 4];

for (int i = 0; i < imageSrc.length; i++) {
    // Invert the source bits
    imageRGBA[i * 4] = imageRGBA[i * 4 + 1] = imageRGBA[i * 4 + 2] = (byte) ~imageSrc[i];
    imageRGBA[i * 4 + 3] = -1; // 0xff, that's the alpha
}

// Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(imageRGBA));
The exact code may differ depending on the input format.
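If a Drawable is what's ultimately needed, the resulting Bitmap can be wrapped like this (our addition, assuming a Context is available):

// Wrap the Bitmap in a Drawable for use in views
Drawable drawable = new BitmapDrawable(context.getResources(), bm);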
How can I convert a BufferedImage to a Mat in OpenCV?
I'm using the Java wrapper for OpenCV (not JavaCV). As I am new to OpenCV, I have some problems understanding how Mat works.
I want to do something like this (based on Ted W.'s reply):
BufferedImage image = ImageIO.read(b.getClass().getResource("Lena.png"));

int rows = image.getWidth();
int cols = image.getHeight();
int type = CvType.CV_16UC1;

Mat newMat = new Mat(rows, cols, type);

for (int r = 0; r < rows; r++) {
    for (int c = 0; c < cols; c++) {
        newMat.put(r, c, image.getRGB(r, c));
    }
}

Highgui.imwrite("Lena_copy.png", newMat);
This doesn't work. Lena_copy.png is just a black picture with the correct dimensions.
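Two details worth flagging in that snippet (our observations, not from the original post): Mat's constructor takes (rows, cols), i.e. (height, width), so the two are swapped above, and getRGB() returns a packed 32-bit ARGB int, which does not map onto a single 16-bit channel (CV_16UC1):

// Mat dimensions are (rows, cols) = (height, width); note the swap:
int rows = image.getHeight();
int cols = image.getWidth();
// getRGB() packs ARGB into one int, so an 8-bit, 3-channel Mat is the natural target:
Mat newMat = new Mat(rows, cols, CvType.CV_8UC3);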
I was also trying to do the same thing, because I needed to combine images processed with two different libraries. Instead of putting the RGB values in one at a time, I tried putting a byte[] into the Mat, and it worked! So what I did was:
1. Convert the BufferedImage to a byte array:
byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
2. Then simply put it into a Mat whose type is set to CV_8UC3:
image_final.put(0, 0, pixels);
Edit:
You can also do the inverse conversion in a similar way (see the Mat-to-BufferedImage code further down).
Don't want to deal with big pixel array? Simply use this
BufferedImage to Mat
public static Mat BufferedImage2Mat(BufferedImage image) throws IOException {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    ImageIO.write(image, "jpg", byteArrayOutputStream);
    byteArrayOutputStream.flush();
    return Imgcodecs.imdecode(new MatOfByte(byteArrayOutputStream.toByteArray()),
            Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
}
Mat to BufferedImage
public static BufferedImage Mat2BufferedImage(Mat matrix) throws IOException {
    MatOfByte mob = new MatOfByte();
    Imgcodecs.imencode(".jpg", matrix, mob);
    return ImageIO.read(new ByteArrayInputStream(mob.toArray()));
}
Note: this way you get a reliable solution, but it uses encoding + decoding, so you lose some performance, generally 10 to 20 milliseconds (very negligible in most cases). JPG encoding also loses some image quality and is slow (10 to 20 ms). BMP is lossless and fast (1 or 2 ms) but requires a little more memory (negligible). PNG is lossless but takes a little longer to encode than BMP. BMP should fit most cases, I think.
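If you take the BMP suggestion, the change is just the format string on both sides (our adaptation of the methods above):

ImageIO.write(image, "bmp", byteArrayOutputStream); // in BufferedImage2Mat
Imgcodecs.imencode(".bmp", matrix, mob);            // in Mat2BufferedImage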
This one worked fine for me, and it takes 0 to 1 ms to run.
public static Mat bufferedImageToMat(BufferedImage bi) {
    Mat mat = new Mat(bi.getHeight(), bi.getWidth(), CvType.CV_8UC3);
    byte[] data = ((DataBufferByte) bi.getRaster().getDataBuffer()).getData();
    mat.put(0, 0, data);
    return mat;
}
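One caveat worth adding (our note, same issue an answer further down calls out): the DataBufferByte cast only works when the BufferedImage is backed by a byte buffer, e.g. TYPE_3BYTE_BGR; for TYPE_INT_RGB images the buffer is an int[] and the cast throws a ClassCastException.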
I use the following code in my program.
protected Mat img2Mat(BufferedImage in) {
    Mat out;
    byte[] data;
    int r, g, b;

    if (in.getType() == BufferedImage.TYPE_INT_RGB) {
        out = new Mat(in.getHeight(), in.getWidth(), CvType.CV_8UC3);
        data = new byte[in.getWidth() * in.getHeight() * (int) out.elemSize()];
        int[] dataBuff = in.getRGB(0, 0, in.getWidth(), in.getHeight(), null, 0, in.getWidth());
        for (int i = 0; i < dataBuff.length; i++) {
            // getRGB() packs pixels as 0xAARRGGBB; write them in OpenCV's BGR order
            data[i * 3] = (byte) (dataBuff[i] & 0xFF);              // blue
            data[i * 3 + 1] = (byte) ((dataBuff[i] >> 8) & 0xFF);   // green
            data[i * 3 + 2] = (byte) ((dataBuff[i] >> 16) & 0xFF);  // red
        }
    } else {
        out = new Mat(in.getHeight(), in.getWidth(), CvType.CV_8UC1);
        data = new byte[in.getWidth() * in.getHeight() * (int) out.elemSize()];
        int[] dataBuff = in.getRGB(0, 0, in.getWidth(), in.getHeight(), null, 0, in.getWidth());
        for (int i = 0; i < dataBuff.length; i++) {
            // Bits 16-23 are red, 8-15 green, 0-7 blue; keep them as plain ints
            // (a byte cast here would make values above 127 negative)
            r = (dataBuff[i] >> 16) & 0xFF;
            g = (dataBuff[i] >> 8) & 0xFF;
            b = dataBuff[i] & 0xFF;
            // Approximate luminance weights for grayscale
            data[i] = (byte) ((0.21 * r) + (0.71 * g) + (0.07 * b));
        }
    }

    out.put(0, 0, data);
    return out;
}
I found a solution here.
The solution is similar to Andriy's.
Camera c;
c.Connect();
c.StartCapture();

Image f2Img, cf2Img;
c.RetrieveBuffer(&f2Img);
f2Img.Convert(FlyCapture2::PIXEL_FORMAT_BGR, &cf2Img);

unsigned int rowBytes = (double) cf2Img.GetReceivedDataSize() / (double) cf2Img.GetRows();
cv::Mat opencvImg = cv::Mat(cf2Img.GetRows(), cf2Img.GetCols(), CV_8UC3, cf2Img.GetData(), rowBytes);
To convert from BufferedImage to Mat I use the method below:
public static Mat img2Mat(BufferedImage image) {
    image = convertTo3ByteBGRType(image);
    byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
    mat.put(0, 0, data);
    return mat;
}
Before converting to Mat, I change the type of the BufferedImage to TYPE_3BYTE_BGR, because for some BufferedImage types the buffer behind ((DataBufferByte) image.getRaster().getDataBuffer()).getData(); is an int[] rather than a byte[], and the cast would break the code.
Below is the method for converting to TYPE_3BYTE_BGR.
private static BufferedImage convertTo3ByteBGRType(BufferedImage image) {
    BufferedImage convertedImage = new BufferedImage(image.getWidth(), image.getHeight(),
            BufferedImage.TYPE_3BYTE_BGR);
    convertedImage.getGraphics().drawImage(image, 0, 0, null);
    return convertedImage;
}
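A quick usage sketch (the file name is a placeholder):

// Safe for any BufferedImage type, since img2Mat converts to TYPE_3BYTE_BGR first
BufferedImage img = ImageIO.read(new File("some/image.png"));
Mat mat = img2Mat(img);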
When you use the JavaCV wrapper from the bytedeco library (version 1.5.3), you can use Java2DFrameUtils.
Simple usage is:
import org.bytedeco.javacv.Java2DFrameUtils;
...
BufferedImage img = ImageIO.read(new File("some/image.jpg"));
Mat mat = Java2DFrameUtils.toMat(img);
Note: don't mix different wrappers; the bytedeco Mat is a different class from the OpenCV Mat.
One simple way would be to create a new Mat using
Mat newMat = new Mat(rows, cols, type);
then get the pixel values from your BufferedImage and put them into newMat using
newMat.put(row, col, pixel);
You can do it in OpenCV as follows:
File f4 = new File("aa.png");
Mat mat = Highgui.imread(f4.getAbsolutePath());