Android's Google Maps API v2 provides a TileOverlay to which TileProviders can be added. A TileProvider generates a Tile object given the tile's x and y coordinates and its zoom level. To construct a Tile, one must supply a width (easy), a height (easy), and an image represented as a byte array (the part confusing me). If I wanted to 'draw' a simple object and then turn it into a byte array, how would I do this?
For instance, I am looking for something like this:
Canvas canvas = new Canvas();
...
canvas.drawRect(); //Or something like this (just an example)
...
byte[] bytes = canvas.SomeConversionFunctionOrProcessThatIDontKnow();
return new Tile(1,1,bytes);
public byte[] getByteArray(String image) throws IOException {
    File yourImg = new File(image);
    BufferedImage bufferedImage = ImageIO.read(yourImg);
    WritableRaster wRaster = bufferedImage.getRaster();
    DataBufferByte data = (DataBufferByte) wRaster.getDataBuffer();
    // note: this returns the raw (uncompressed) raster bytes, not an encoded image file
    return data.getData();
}
This should do the trick ; )
Tile#data should be compressed image data in one of the supported image formats. In other words, the raw contents of an image file. If you already have a decoded Bitmap, use Bitmap#compress(...) to write it to a ByteArrayOutputStream, then get the byte[] from that.
@Override
public Tile getTile(int x, int y, int zoom) {
    // TILE_DIMENSION is the tile size in pixels (typically 256 or 512)
    Bitmap bitmap = Bitmap.createBitmap(TILE_DIMENSION, TILE_DIMENSION, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    ......
    //draw something
    ......
    Tile tile = convertBitmap(bitmap);
    bitmap.recycle();
    return tile;
}

private Tile convertBitmap(Bitmap bitmap) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    // compress to PNG: Tile#data expects encoded image bytes, not raw pixels
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
    byte[] bitmapData = stream.toByteArray();
    return new Tile(TILE_DIMENSION, TILE_DIMENSION, bitmapData);
}
Related
I am trying to use a new feature of CameraX Image Analysis (version 1.1.0-alpha08): with setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888), images sent to the analyzer are in RGBA format.
See this for reference: https://developer.android.com/reference/androidx/camera/core/ImageAnalysis#OUTPUT_IMAGE_FORMAT_RGBA_8888
I need to turn the image sent to the analyzer into a Bitmap so that I can input it to a TensorFlow classifier.
Without this new feature, I would receive the image in the standard YUV_420_888 format and would then have to use one of the several solutions that can be googled to turn YUV_420_888 into RGBA and then into a Bitmap. Like this: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/.
I assume getting the Media Image directly in RGBA format should help me avoid those painful solutions (which I have actually tried, and which have not worked very well for me so far).
The problem is that I don't know how to turn this RGBA Media Image into a Bitmap. I have noticed that calling mediaImage.getFormat() returns 1, which is not an ImageFormat value but a PixelFormat one, the one logically corresponding to the RGBA_8888 format. This is in line with the documentation: "All ImageProxy sent to ImageAnalysis.Analyzer.analyze(ImageProxy) will have format PixelFormat.RGBA_8888".
I have tried this:
private Bitmap toBitmapRGBA(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    buffer.rewind();
    int size = buffer.remaining();
    byte[] bytes = new byte[size];
    buffer.get(bytes);
    Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
    return bitmapImage;
}
This returns null, indicating that decodeByteArray does not work here (which makes sense in hindsight: decodeByteArray expects encoded image data such as PNG or JPEG, not raw pixels). I also notice the image has only one plane.
private Bitmap toBitmapRGBA2(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    buffer.rewind();
    bitmap.copyPixelsFromBuffer(buffer);
    return bitmap;
}
This returns a Bitmap, but it looks like nothing but noise.
Please help!
Kind regards
Mickael
I actually found a solution myself, so I post it here in case anyone is interested:
private Bitmap toBitmap(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer = planes[0].getBuffer();
    int pixelStride = planes[0].getPixelStride();
    int rowStride = planes[0].getRowStride();
    // rows may be padded beyond width * pixelStride
    int rowPadding = rowStride - pixelStride * image.getWidth();
    // make the bitmap wide enough to absorb the padding, then copy the raw buffer
    Bitmap bitmap = Bitmap.createBitmap(image.getWidth() + rowPadding / pixelStride,
            image.getHeight(), Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    return bitmap;
}
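Note that the bitmap produced this way is a few pixels wider than the frame whenever rowPadding is non-zero. If the exact frame size matters, you can crop it afterwards; a small sketch:
// crop away the padding columns so the bitmap matches the frame size
Bitmap cropped = Bitmap.createBitmap(bitmap, 0, 0, image.getWidth(), image.getHeight());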
If you want to process the pixel array further without creating a Bitmap object, you can do something like this:
// assumed helper, since ByteBuffer has no built-in toByteArray()
fun ByteBuffer.toByteArray(): ByteArray {
    rewind()
    return ByteArray(remaining()).also { get(it) }
}

val data = imageProxy.planes[0].buffer.toByteArray()
// repack the R, G, B, A bytes into ARGB_8888 ints
val pixels = IntArray(data.size / imageProxy.planes[0].pixelStride) {
    var index = it * imageProxy.planes[0].pixelStride
    (data[index++].toInt() and 0xff).shl(16) or  // R
    (data[index++].toInt() and 0xff).shl(8) or   // G
    (data[index++].toInt() and 0xff).shl(0) or   // B
    (data[index].toInt() and 0xff).shl(24)       // A
}
And then you can create a bitmap this way:
Bitmap.createBitmap(
pixels,
0,
imageProxy.planes[0].rowStride / imageProxy.planes[0].pixelStride,
imageProxy.width,
imageProxy.height,
Bitmap.Config.ARGB_8888
)
As the title says, how do I convert an RGB bitmap back to a YUV byte[] using ScriptIntrinsicColorMatrix? Below is my sample code (its output cannot be decoded by zxing):
public byte[] getYUVBytes(Bitmap src, boolean initOutAllocOnce) {
    if (!initOutAllocOnce) {
        outYUV = null;
    }
    if (outYUV == null) {
        outYUV = Allocation.createSized(rs, Element.U8(rs), src.getByteCount());
    }
    byte[] yuvData = new byte[src.getByteCount()];
    Allocation in = Allocation.createFromBitmap(rs, src,
            Allocation.MipmapControl.MIPMAP_NONE,
            Allocation.USAGE_SCRIPT);
    scriptColor.setRGBtoYUV();
    scriptColor.forEach(in, outYUV);
    outYUV.copyTo(yuvData);
    return yuvData;
}
One thing I notice: the original camera YUV frame is 3,110,400 bytes, but after the ScriptIntrinsicColorMatrix conversion it becomes 8,294,400 bytes, which I think is wrong. (The numbers are consistent with a 1920x1080 frame: 1920 x 1080 x 1.5 = 3,110,400 bytes for NV21, while 1920 x 1080 x 4 = 8,294,400, i.e. one output byte per RGBA byte.)
The reason for the YUV -> BW -> YUV round trip is that I want to convert the image to black and white (not grayscale), show the black-and-white frames in a SurfaceView (like a custom camera filter), and at the same time hand the frames back as YUV for zxing to decode.
I tried the code below, but it's a bit slow (its output can be decoded by zxing):
int[] intArray = new int[bmp.getWidth() * bmp.getHeight()];
bmp.getPixels(intArray, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
LuminanceSource source = new RGBLuminanceSource(cameraResolution.x, cameraResolution.y, intArray);
data = source.getMatrix();
Is there any other fast alternative for RGB to YUV, if it can't be done with the ScriptIntrinsicColorMatrix class? Please and thank you.
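For what it's worth, one alternative is to skip the RGB round trip entirely and threshold the NV21 luma plane in place; the same buffer then works for both the preview and zxing. A rough sketch (assuming the frames really are NV21; width, height, and the threshold value are placeholders):
// turn an NV21 frame into pure black/white by thresholding the Y plane
void toBlackAndWhiteNv21(byte[] yuv, int width, int height, int threshold) {
    int lumaSize = width * height;
    for (int i = 0; i < lumaSize; i++) {
        yuv[i] = (byte) ((yuv[i] & 0xff) < threshold ? 0 : 255);
    }
    // neutral chroma so the preview shows no color cast
    for (int i = lumaSize; i < yuv.length; i++) {
        yuv[i] = (byte) 128;
    }
}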
I have a problem showing a picture from the database. The database saves the picture as a blob; when I fetch the data, the blob comes back as a byte[], and then I do the following to show the image. Why doesn't it work?
Select_1 xp = new Select_1();
byte[] img=xp.Select_1(username);
InputStream in = new ByteArrayInputStream(img);
BufferedImage image = ImageIO.read(in);
BufferedImage resizedImage=resize(image,204,204);
ImageIcon icon=new ImageIcon(resizedImage);
lblavatar.setIcon(icon);
Edit according to the comment:
Originally, the image was written using the following methods:
blob = (Blob) connect.createBlob();
ImageIcon ii = new ImageIcon(ficheiro);
ObjectOutputStream oos;
oos = new ObjectOutputStream(blob.setBinaryStream(1));
oos.writeObject(ii);
oos.close();
psInsert.setBlob(4, blob);
You are not serializing a BufferedImage, but an ImageIcon.
In order to create an image from the blob data, you have to do "the opposite" of what you have been doing to create the blob. In this case, you'll have to do something along the lines of
byte[] img=xp.Select_1(username);
InputStream in = new ByteArrayInputStream(img);
ObjectInputStream ois = new ObjectInputStream(in);
ImageIcon imageIcon = (ImageIcon)ois.readObject();
Now, you have an ImageIcon, from which you can obtain the Image. For many cases, this image can be used directly. If you really need a BufferedImage, then you can do
BufferedImage bi = new BufferedImage(204,204,BufferedImage.TYPE_INT_ARGB);
Graphics g = bi.createGraphics();
g.drawImage(imageIcon.getImage(), 0, 0, 204, 204, null);
g.dispose();
Then the buffered image will contain your image (already scaled to the desired target size of 204,204 in this case).
In any case, you should consider not storing the image as a serialized ImageIcon. Instead, write your image into a byte array as a PNG or JPG file, and store the resulting byte array as the blob in the database, as shown, for example, in Java: BufferedImage to byte array and back
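A rough sketch of that approach (assuming image is a BufferedImage, psInsert is the PreparedStatement from above, and blobBytes stands for the byte[] later read back from the blob):
// store the image as encoded PNG bytes instead of a serialized ImageIcon
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(image, "png", baos);
psInsert.setBytes(4, baos.toByteArray());
// reading it back is then a single call
BufferedImage restored = ImageIO.read(new ByteArrayInputStream(blobBytes));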
I have an image in the database in byte array format, and I want to display it in the browser. I don't know how to write the image using an OutputStream. Here is my code:
byte[] imageInBytes = (byte[]) obj; // from Database
InputStream in = new ByteArrayInputStream(imageInBytes);
Image img = ImageIO.read(in).getScaledInstance(50, -1, Image.SCALE_SMOOTH);
OutputStream o = resp.getOutputStream(); // HttpServletResponse
o.write(imgByte);
You may try something like this:
File f=new File("image.jpg");
BufferedImage o=ImageIO.read(f);
ByteArrayOutputStream b=new ByteArrayOutputStream();
ImageIO.write(o, "jpg", b);
byte[] img=b.toByteArray();
You have to set the content type of the response to the image type you are sending.
Suppose your image was stored as a JPEG. Then:
resp.setContentType("image/jpeg"); // set the header before writing the body
OutputStream o = resp.getOutputStream(); // HttpServletResponse
o.write(img); // img is the byte[] built above
would send the browser an image. (The browser understands from the header information that the data you just sent is a JPEG image.)
You could try using ImageIO.write...
ImageIO.write(img, "jpg", o);
But this will require you to use BufferedImage when reading...
BufferedImage img = ImageIO.read(in);
You could then use AffineTransform to scale the image...
BufferedImage scaled = new BufferedImage(img.getWidth() / 2, img.getHeight() / 2, img.getType());
Graphics2D g2d = scaled.createGraphics();
g2d.setTransform(AffineTransform.getScaledInstance(0.5, 0.5));
g2d.drawImage(img, 0, 0, null);
g2d.dispose();
img = scaled;
This, obviously, only scales the image by 50%, so you'll need to calculate the required scaling factor based on the original size of the image against your desired size...
Take a look at Java: maintaining aspect ratio of JPanel background image for some ideas on scaling images...
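For instance, a uniform scale factor that fits the image inside some target bounds could be computed like this (targetWidth and targetHeight are hypothetical):
// pick the smaller ratio so both dimensions fit within the target bounds
double scale = Math.min((double) targetWidth / img.getWidth(),
        (double) targetHeight / img.getHeight());
int w = (int) Math.round(img.getWidth() * scale);
int h = (int) Math.round(img.getHeight() * scale);
BufferedImage scaled = new BufferedImage(w, h, img.getType());
Graphics2D g2d = scaled.createGraphics();
g2d.drawImage(img, 0, 0, w, h, null);
g2d.dispose();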
I'm making a program which receives image data as a byte array from a server. I convert this data into 24-bit BMP format (whether it arrives as JPEG, PNG, or BMP at 8, 24, or 32 bpp). First I save it to my HD, and then I load it into a JLabel's Icon. This works perfectly, though there are some cases in which I get the following exception:
java.io.EOFException at
javax.imageio.stream.ImageInputStreamImpl.readFully(ImageInputStreamImpl.java:353) at
com.sun.imageio.plugins.bmp.BMPImageReader.read24Bit(BMPImageReader.java:1188) at
com.sun.imageio.plugins.bmp.BMPImageReader.read(BMPImageReader.java:843) at
javax.imageio.ImageIO.read(ImageIO.java:1448) at
javax.imageio.ImageIO.read(ImageIO.java:1308)
For this line (the second)
File imgFile = new File("d:/image.bmp");
BufferedImage image = ImageIO.read(imgFile);
In these cases:
the image does not load into the JLabel, but it can be found on my HD
the conversion is not proper, because something "slips"
the picture is like when you use italics in a word document
First, I thought maybe the bpp was the problem; then I thought maybe the pictures were too large. But I have cases where it works and cases where it doesn't under both theories. I'm a little stuck here and would be glad for ideas.
the picture is like .. when You use italics in a word document
Think I finally got what this bullet item meant now.. ;-)
Speculative answer, but here goes:
If the image you write looks "skewed", it's probably due to missing padding for each row as the BMP format specifies (or an incorrect width field in the BMP header). I assume, then, that the images you get EOF exceptions for are those whose width is not a multiple of 4.
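For reference, the per-row padding the format expects works out like this (a quick sketch of the arithmetic):
// each 24-bit BMP scanline is padded up to the next multiple of 4 bytes
int bytesPerRow = ((width * 3 + 3) / 4) * 4;
int padding = bytesPerRow - width * 3; // 0 to 3 extra bytes per row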
Try to write the BMPs using ImageIO to see if that helps:
private static BufferedImage createRGBImage(byte[] bytes, int width, int height) {
    DataBufferByte buffer = new DataBufferByte(bytes, bytes.length);
    ColorModel cm = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
            new int[]{8, 8, 8}, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
    return new BufferedImage(cm, Raster.createInterleavedRaster(buffer, width, height,
            width * 3, 3, new int[]{0, 1, 2}, null), false, null);
}
...
byte[] bytes = ...; // Your image bytes
OutputStream stream = ...; // Your output
BufferedImage image = createRGBImage(bytes, width, height);
try {
ImageIO.write(image, "BMP", stream);
}
finally {
stream.close();
}
Call it by class name, like ClassName.byteArrayToImage(bytes):
public static BufferedImage byteArrayToImage(byte[] bytes) {
    BufferedImage bufferedImage = null;
    try {
        InputStream inputStream = new ByteArrayInputStream(bytes);
        bufferedImage = ImageIO.read(inputStream);
    } catch (IOException ex) {
        System.out.println(ex.getMessage());
    }
    return bufferedImage;
}
You can use this code to read the image blob from the ResultSet into a byte array and write it out to a file:
Blob b = rs.getBlob(2);
byte[] barr = b.getBytes(1, (int) b.length()); // read the whole blob into a byte array
FileOutputStream fout = new FileOutputStream("D:\\sonoo.jpg");
fout.write(barr);
fout.close();