Bitmap to byteArray, pixel logic (how to speed up algorithm) - java

In my app I capture images and, on a button press, apply a filter (a function) to them.
SOLVED:
The problem is that when I convert the bitmap to a byteArray I get RGBA byte format, so the RED value is the first of every group of 4 bytes. That's simple, but what I didn't understand is why black is 0 and not -128, and white is -1 and not 127:
//4 bytes: R G B A
//4 bytes: 0 1 2 3
//byte => -128 to 127
//black -> grey -> greyer -> white
// 0 -> 127 -> -128 -> -1
So, any ideas how to get a number from 0 to 255 (or -128 to 127) corresponding to the black-to-white transition?
SOLVED by masking: value = value & 0xFF
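For anyone puzzled by the same thing: Java's byte is signed two's complement, so the bit pattern 0xFF reads back as -1; masking with & 0xFF widens the byte to an int and keeps only the low 8 bits. A minimal standalone sketch:

```java
public class ByteMaskDemo {
    public static void main(String[] args) {
        byte white = (byte) 255; // stored as -1 in two's complement
        byte black = (byte) 0;
        System.out.println(white);        // -1
        System.out.println(white & 0xFF); // 255 after widening to int
        System.out.println(black & 0xFF); // 0
    }
}
```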
this.actualPosition = 1; // index of the green byte of the first pixel
for (int j = 0; j < bmpHeight; j++) {
    for (int i = 0; i < bmpBytesPerPixel / 4; i++) { // bmpBytesPerPixel = bytes per row
        byte midColor = (byte) ((byteArray[actualPosition - 1] & 0xFF) * 0.30
                              + (byteArray[actualPosition]     & 0xFF) * 0.59
                              + (byteArray[actualPosition + 1] & 0xFF) * 0.11);
        byteArray[actualPosition - 1] = midColor;
        byteArray[actualPosition]     = midColor;
        byteArray[actualPosition + 1] = midColor;
        actualPosition += 4;
    }
}
I'm trying to make the fastest algorithm possible. This one takes about 2.7 s on an HD bitmap, where the byteArray length is 4 * 1080 * 720 = 3,110,400 bytes (and I touch 3 of every 4 bytes).
Here is how I convert the bitmap to a byteArray and back:
private void getArrayFromBitmap() {
    // Conversion: Bitmap -> byteArray
    int size = this.bmpBytesPerPixel * this.bmpHeight;
    ByteBuffer byteBuffer = ByteBuffer.allocate(size);
    this.bmp.copyPixelsToBuffer(byteBuffer);
    this.byteArray = byteBuffer.array();
}

private void getBitmapFromArray() {
    // Conversion: byteArray -> Bitmap
    Bitmap.Config configBmp = Bitmap.Config.valueOf(this.bmp.getConfig().name());
    this.bmp = Bitmap.createBitmap(this.bmpWidth, this.bmpHeight, configBmp);
    ByteBuffer buffer = ByteBuffer.wrap(this.byteArray);
    this.bmp.copyPixelsFromBuffer(buffer);
    System.out.println("\n DONE " + configBmp);
}
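On the speed question: a common way to cut the ~2.7 s is to skip the byte view entirely and work on packed ARGB ints (obtained on Android via Bitmap.getPixels / setPixels), replacing the three floating-point multiplies with fixed-point integer weights. A hedged sketch of just the per-pixel arithmetic, on a plain int[] so it runs outside Android (the sample pixel values are made up for the demo):

```java
public class GrayscaleInts {
    // Grayscale packed ARGB pixels in place using fixed-point weights
    // (77 + 150 + 29 = 256, approximating 0.30 R + 0.59 G + 0.11 B).
    static void toGray(int[] pixels) {
        for (int p = 0; p < pixels.length; p++) {
            int c = pixels[p];
            int r = (c >> 16) & 0xFF;
            int g = (c >> 8) & 0xFF;
            int b = c & 0xFF;
            int y = (77 * r + 150 * g + 29 * b) >> 8; // divide by 256 via shift
            pixels[p] = (c & 0xFF000000) | (y << 16) | (y << 8) | y;
        }
    }

    public static void main(String[] args) {
        int[] px = { 0xFFFF0000, 0xFF00FF00, 0xFF0000FF }; // red, green, blue
        toGray(px);
        for (int c : px) {
            System.out.printf("%08X%n", c); // FF4C4C4C, FF959595, FF1C1C1C
        }
    }
}
```

The integer weights avoid per-pixel double arithmetic and byte-to-double conversions, which is usually where most of the time goes in loops like the one above.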

Related

BitmapFactory.decodeByteArray() always returns null (manually-created byte array)

So I'm trying to port some C++ code from a colleague that grabs image data over a Bluetooth serial port (I'm using an Android phone). From the data I will need to generate a bitmap.
Before testing the ported code, I wrote this quick function to supposedly generate a pure red rectangle. However, BitmapFactory.decodeByteArray() always fails and returns a null bitmap. I've checked for both of the exceptions it can throw, and neither one is thrown.
byte[] pixelData = new byte[225 * 160 * 4];
for (int i = 0; i < 225 * 160; i++) {
    pixelData[i * 4 + 0] = (byte) 255;
    pixelData[i * 4 + 1] = (byte) 255;
    pixelData[i * 4 + 2] = (byte) 0;
    pixelData[i * 4 + 3] = (byte) 0;
}

Bitmap image = null;
logBox.append("Creating bitmap from pixel data...\n");

BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
options.outWidth = 225;
options.outHeight = 160;
try {
    image = BitmapFactory.decodeByteArray(pixelData, 0, pixelData.length, options);
} catch (IllegalArgumentException e) {
    logBox.append(e.toString() + '\n');
}
//pixelData = null;
logBox.append("Bitmap generation complete\n");
decodeByteArray() code:
public static Bitmap decodeByteArray(byte[] data, int offset, int length, Options opts) {
    if ((offset | length) < 0 || data.length < offset + length) {
        throw new ArrayIndexOutOfBoundsException();
    }
    Bitmap bm;

    Trace.traceBegin(Trace.TRACE_TAG_GRAPHICS, "decodeBitmap");
    try {
        bm = nativeDecodeByteArray(data, offset, length, opts);
        if (bm == null && opts != null && opts.inBitmap != null) {
            throw new IllegalArgumentException("Problem decoding into existing bitmap");
        }
        setDensityFromOptions(bm, opts);
    } finally {
        Trace.traceEnd(Trace.TRACE_TAG_GRAPHICS);
    }
    return bm;
}
I would presume that it's nativeDecodeByteArray() that is failing.
I also notice the log message:
D/skia: --- SkImageDecoder::Factory returned null
Anyone got any ideas?
decodeByteArray of BitmapFactory actually decodes an encoded image, i.e. one that has been compressed in a format such as JPEG or PNG. decodeFile and decodeStream make a little more sense, since your encoded image would probably be coming from a file or a server or something.
You don't want to decode anything. You are trying to get raw image data into a bitmap. Looking at your code, it appears you are generating a 225 x 160 bitmap with 4 bytes per pixel, formatted ARGB. So this code should work for you:
int width = 225;
int height = 160;
int size = width * height;
int[] pixelData = new int[size];
for (int i = 0; i < size; i++) {
    // pack 4 bytes into an int for ARGB_8888
    pixelData[i] = ((0xFF & (byte) 255) << 24)  // alpha, 8 bits
                 | ((0xFF & (byte) 255) << 16)  // red, 8 bits
                 | ((0xFF & (byte) 0)   << 8)   // green, 8 bits
                 |  (0xFF & (byte) 0);          // blue, 8 bits
}
Bitmap image = Bitmap.createBitmap(pixelData, width, height, Bitmap.Config.ARGB_8888);

Different code (.java file) for different platforms?

I have code where image data is passed from a bitmap to an FFmpeg frame recorder and converted to a video. But I need to make small changes when running it on an LG G3 (armv7) versus an Asus Zenfone 5 (x86).
The following class variables create the issue (declared in class MainActivity):
inputWidth = 1024;
inputHeight = 650;
Here is the method where the issue occurs:
byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {
    int[] argb = new int[inputWidth * inputHeight];
    bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    return yuv;
}

void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used, obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;
            // well-known RGB to YUV algorithm
            Y = ((  66 * R + 129 * G +  25 * B + 128) >> 8) + 16;
            U = (( -38 * R -  74 * G + 112 * B + 128) >> 8) + 128;
            V = (( 112 * R -  94 * G -  18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y and interleaved planes of VU, each subsampled
            // by a factor of 2: for every 4 Y pixels there are 1 V and 1 U.
            // Note the sampling is every other pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
Working code:
LG G3: I can use the above variables anywhere in the code to get the required output. Bitmap size returned = 2734200.
Asus Zenfone 5: Except when creating the bitmap, I have to use bitmap.getHeight() and bitmap.getWidth() everywhere else to get the required output. Surprisingly, here the bitmap size returned = 725760. (So it is not sized according to the set bitmap parameters?)
Incorrect code:
LG G3: If I use bitmap.getHeight() and bitmap.getWidth(), I get java.lang.ArrayIndexOutOfBoundsException: length = 102354, index = 102354 in the getNV21 method.
Asus Zenfone 5: If I use inputWidth and inputHeight, I get java.lang.IllegalArgumentException: x + width must be <= bitmap.width() in the getNV21 method.
How can I generalize the above code for both phones?
In cases like this you can use the Strategy pattern.
The Strategy pattern allows you to change algorithms at runtime based on your environment. Basically, you define an interface for your strategy, something like this:
interface MyStrategy {
    byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap);
}
Then you make multiple implementations of your interface: one for LG, one for Asus and, for example, one for all other devices (device-neutral):
class MyStrategyForLG implements MyStrategy {
    public byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {
        // ...
    }
}

class MyStrategyForAsus implements MyStrategy {
    public byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {
        // ...
    }
}

class DefaultMyStrategy implements MyStrategy {
    public byte[] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {
        // ...
    }
}
You can create a factory for MyStrategy so you can avoid if-else chains in your MainActivity. Something like this (note the return type must be MyStrategy, not void):
class MyStrategyFactory {
    public MyStrategy createMyStrategy() {
        // ...
        if (deviceIsAsus) {
            return new MyStrategyForAsus();
        }
        if (deviceIsLg) {
            return new MyStrategyForLG();
        }
        return new DefaultMyStrategy();
    }
}
In your MainActivity you can invoke your strategy like this:
// ...
MyStrategy strategy = new MyStrategyFactory().createMyStrategy();
byte[] bytes = strategy.getNV21(width, height, image);
// ...
The advantage of this approach is that you do not need to modify the calling site when you add another device, for example when you notice that Samsung is also a bit weird. Instead, you implement MyStrategyForSamsung and change the factory to return it when the code is executed on a Samsung device.
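A compilable sketch of that factory, with two simplifications of mine: the Bitmap-typed method is reduced to a name() marker so it runs off-device, and the device check is keyed on a manufacturer string (on Android this would come from android.os.Build.MANUFACTURER):

```java
public class StrategyFactoryDemo {
    interface MyStrategy { String name(); }

    static class MyStrategyForLG implements MyStrategy {
        public String name() { return "lg"; }
    }
    static class MyStrategyForAsus implements MyStrategy {
        public String name() { return "asus"; }
    }
    static class DefaultMyStrategy implements MyStrategy {
        public String name() { return "default"; }
    }

    // On a real device, pass android.os.Build.MANUFACTURER here.
    static MyStrategy createMyStrategy(String manufacturer) {
        String m = manufacturer.toLowerCase();
        if (m.contains("asus")) return new MyStrategyForAsus();
        if (m.contains("lg"))   return new MyStrategyForLG();
        return new DefaultMyStrategy();
    }

    public static void main(String[] args) {
        System.out.println(createMyStrategy("asus").name());    // asus
        System.out.println(createMyStrategy("LGE").name());     // lg
        System.out.println(createMyStrategy("samsung").name()); // default
    }
}
```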

Is it possible to use java.awt.Image in an Android application?

I've written a Java method, but I have to use it in an Android project. Can someone help me convert it to Android, or tell me what I should do?
public Image getImage() {
    ColorModel cm = grayColorModel();
    if (n == 1) { // in case it's an 8 bit/pixel image
        return Toolkit.getDefaultToolkit().createImage(
                new MemoryImageSource(w, h, cm, pixData, 0, w));
    }
}

protected ColorModel grayColorModel() {
    byte[] r = new byte[256];
    for (int i = 0; i < 256; i++)
        r[i] = (byte) (i & 0xff);
    return new IndexColorModel(8, 256, r, r, r);
}
For instance, to convert a grayscale image (a byte array, imageSrc) to a drawable:
byte[] imageSrc = [...];
// That's where the RGBA array goes.
byte[] imageRGBA = new byte[imageSrc.length * 4];
int i;
for (i = 0; i < imageSrc.length; i++) {
    // Invert the source bits
    imageRGBA[i * 4] = imageRGBA[i * 4 + 1] = imageRGBA[i * 4 + 2] = ((byte) ~imageSrc[i]);
    imageRGBA[i * 4 + 3] = -1; // 0xff, that's the alpha
}
// Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(imageRGBA));
The code may differ depending on the input format.

image.getRaster().getDataBuffer() returns array of negative values

This answer suggests that it's over 10 times faster to loop over the pixel array instead of using BufferedImage.getRGB. Such a difference is too important to be ignored in my computer vision program, so I rewrote my integralImage method to calculate the integral image using the pixel array:
/* Generate an integral image. Every pixel of such an image contains the sum of the colors
   of all the pixels before it, and itself. */
public static double[][][] integralImage(BufferedImage image) {
    // Cache width and height in variables
    int w = image.getWidth();
    int h = image.getHeight();
    // Create the 2D array as large as the image is
    // Notice that I use [Y, X] coordinates to comply with the formula
    double integral_image[][][] = new double[h][w][3];
    // Variables for the image pixel array looping
    final int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    //final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    // If the image has alpha, there will be 4 elements per pixel
    final boolean hasAlpha = image.getAlphaRaster() != null;
    final int pixel_size = hasAlpha ? 4 : 3;
    // If there's alpha it's the first of 4 values, so we skip it
    final int pixel_offset = hasAlpha ? 1 : 0;
    // Coordinates, will be iterated too
    // It's faster than calculating them using % and multiplication
    int x = 0;
    int y = 0;
    int pixel = 0;
    // Tmp storage for color
    int[] color = new int[3];
    // Loop through the pixel array
    for (int i = 0, l = pixels.length; i < l; i += pixel_size) {
        // Prepare all the colors in advance
        color[2] = ((int) pixels[pixel + pixel_offset] & 0xff);     // blue
        color[1] = ((int) pixels[pixel + pixel_offset + 1] & 0xff); // green
        color[0] = ((int) pixels[pixel + pixel_offset + 2] & 0xff); // red
        // For every color, calculate the integrals
        for (int j = 0; j < 3; j++) {
            // Calculate the integral image field
            double A = (x > 0 && y > 0) ? integral_image[y - 1][x - 1][j] : 0;
            double B = (x > 0) ? integral_image[y][x - 1][j] : 0;
            double C = (y > 0) ? integral_image[y - 1][x][j] : 0;
            integral_image[y][x][j] = -A + B + C + color[j];
        }
        // Iterate coordinates
        x++;
        if (x >= w) {
            x = 0;
            y++;
        }
    }
    // Return the array
    return integral_image;
}
The problem is that if I use this debug output in the for loop:
if (x == 0) {
    System.out.println("rgb[" + pixels[pixel + pixel_offset + 2] + ", "
            + pixels[pixel + pixel_offset + 1] + ", " + pixels[pixel + pixel_offset] + "]");
    System.out.println("rgb[" + color[0] + ", " + color[1] + ", " + color[2] + "]");
}
This is what I get:
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
...
So how should I properly retrieve pixel array for BufferedImage images?
A bug in the code above that is easily missed is that the for loop doesn't loop as you'd expect: the loop updates i, while the loop body uses pixel for its array indexing. Thus, you will only ever read array elements 1, 2 and 3.
Apart from that:
The "problem" with the negative pixel values is most likely that the code assumes a BufferedImage that stores its pixels in "pixel interleaved" form; however, they are stored "pixel packed". That is, all samples (R, G, B and A) for one pixel are stored in a single array element, an int. This is the case for all the BufferedImage.TYPE_INT_* types (while the BufferedImage.TYPE_nBYTE_* types are stored interleaved).
It's completely normal to have negative values in the raster; this will happen for any pixel that is less than 50% transparent (more than or equal to 50% opaque), because of how the 4 samples are packed into the int, and because int is a signed type in Java.
In this case, use:
int[] color = new int[3];

for (int i = 0; i < pixels.length; i++) {
    // Assuming TYPE_INT_RGB, TYPE_INT_ARGB or TYPE_INT_ARGB_PRE
    // For TYPE_INT_BGR, you need to reverse the colors.
    // You seem to ignore alpha, is that correct?
    color[0] = ((pixels[i] >> 16) & 0xff); // red
    color[1] = ((pixels[i] >>  8) & 0xff); // green
    color[2] = ( pixels[i]        & 0xff); // blue
    // The rest of the computations...
}
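As a sanity check on the -16777216 values in the debug output: that is exactly 0xFF000000 (opaque black, alpha 255) read back as a signed int, and the shift-and-mask unpacking recovers the individual channels:

```java
public class PackedPixelDemo {
    public static void main(String[] args) {
        int opaqueBlack = 0xFF000000;                   // A=255, R=G=B=0
        System.out.println(opaqueBlack);                // -16777216
        System.out.println((opaqueBlack >> 24) & 0xFF); // alpha: 255
        System.out.println((opaqueBlack >> 16) & 0xFF); // red:   0
    }
}
```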
Another possibility is that you have created a custom type image (BufferedImage.TYPE_CUSTOM) that really uses a 32-bit unsigned int per sample. This is possible; however, int is still a signed entity in Java, so you need to mask off the sign bit. To complicate this a little, in Java -1 & 0xFFFFFFFF == -1, because any computation on an int is still an int unless you explicitly say otherwise (doing the same on a byte or short value would have widened to int). To get a positive value, you need to use a long mask like this: -1 & 0xFFFFFFFFL (which is 4294967295).
In this case, use:
long[] color = new long[3];

for (int i = 0; i < pixels.length; i += pixel_size) {
    // Somehow assuming BGR order in input, and RGB output (color)
    // Still ignoring alpha
    color[0] = (pixels[i + pixel_offset + 2] & 0xFFFFFFFFL); // red
    color[1] = (pixels[i + pixel_offset + 1] & 0xFFFFFFFFL); // green
    color[2] = (pixels[i + pixel_offset    ] & 0xFFFFFFFFL); // blue
    // The rest of the computations...
}
I don't know what type of image you have, so I can't say for sure which one is the problem, but it's one of those. :-)
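The int-vs-long masking behaviour mentioned above can be checked directly:

```java
public class MaskWidthDemo {
    public static void main(String[] args) {
        System.out.println(-1 & 0xFFFFFFFF);  // -1: int & int stays a (signed) int
        System.out.println(-1 & 0xFFFFFFFFL); // 4294967295: the long mask widens first
    }
}
```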
PS: BufferedImage.getAlphaRaster() is possibly an expensive and also inaccurate way to tell whether the image has alpha. It's better to just use image.getColorModel().hasAlpha(). See also hasAlpha vs getAlphaRaster.

Waveform code - I need help, please

Can you please explain to me (in words) what this code does?
Thank you.
My concerns are actually these two parts:
1)
double y_new = (double) (h * (128 - my_byte) / 256);
lines.add(new Line2D.Double(x, y_last, x, y_new));
y_last = y_new;
2) The for loop, which I don't understand:
What's 32768? And my_byte?
int numChannels = format.getChannels();
for (double x = 0; x < w && audioData != null; x++) {
    int idx = (int) (frames_per_pixel * numChannels * x);
    // if it's 8-bit, it's immediate
    if (format.getSampleSizeInBits() == 8) {
        my_byte = (byte) audioData[idx];
    } else {
        my_byte = (byte) (128 * audioData[idx] / 32768);
Here's the code. It was taken from here (class SamplingGraph):
http://www.koders.com/java/fid3508156A13C80A263E7CE65C4C9D6F5D8651AF5D.aspx?s=%22David+Anderson%22
int frames_per_pixel = audioBytes.size() / format.getFrameSize() / w;
byte my_byte = 0;
double y_last = 0;
int numChannels = format.getChannels();
for (double x = 0; x < w && audioData != null; x++) {
    // choose which byte to display
    int idx = (int) (frames_per_pixel * numChannels * x);
    // if it's 8-bit, it's immediate
    if (format.getSampleSizeInBits() == 8) {
        my_byte = (byte) audioData[idx];
    } else {
        my_byte = (byte) (128 * audioData[idx] / 32768);
    }
    double y_new = (double) (h * (128 - my_byte) / 256);
    lines.add(new Line2D.Double(x, y_last, x, y_new));
    y_last = y_new;
}
repaint();
repaint();
Don't know if it helps, but
128 * someInt / 32768
is essentially the same as
someInt << 7 >> 15
(they differ for some negative values, since division truncates toward zero while the arithmetic shift rounds toward negative infinity).
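The equivalence, and where it breaks for small negative samples, can be checked directly:

```java
public class ShiftVsDivide {
    public static void main(String[] args) {
        int[] samples = { 32767, 1000, 0, -1, -32768 };
        for (int s : samples) {
            // division truncates toward zero; the shifts round toward -infinity
            System.out.println((128 * s / 32768) + " vs " + (s << 7 >> 15));
        }
        // prints: 127 vs 127, 3 vs 3, 0 vs 0, 0 vs -1, -128 vs -128
    }
}
```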
You should first learn how a sound file is organized, but anyway:
double y_new = (double) (h * (128 - my_byte) / 256);
lines.add(new Line2D.Double(x, y_last, x, y_new));
y_last = y_new;
The Y positions of the lines that this example draws represent the sample values of the sound file. As you may know, one sample can be 8/16/32... bits. In this example they scale all bit depths down to 8 bits (1 byte). The new Y has its center at the middle of the screen (it's a signed sound file). h is the screen height; we want to scale it so that 127 maps to the top of the screen and -127 to the lowest pixel.
2) 32768 (2^15) is the full-scale magnitude of a signed 16-bit integer, and thus the maximum sample magnitude for a 16-bit sound file. my_byte: a sound file is saved as a byte stream, so even if you use 16-bit samples, you have to assemble your integer (16-bit; 32-bit in Java) from 2 bytes of the stream. This example works only with 8-bit sample data, so if it reads a 16-bit sound file it converts the sample values to 8 bits in this line:
my_byte = (byte) (128 * audioData[idx] / 32768);
So my_byte holds the sample value of frame "idx".
double y_new = (double) (h * (128 - my_byte) / 256);
This line of code is part of a method that plots a sequence of bytes in a 'window' (rectangle, coordinate system, as you like):
h := the height of the drawable area in pixels
my_byte := a byte in [-128, 127] (an 8-bit sample from an audio file?)
(128 - my_byte) converts the input range [-128, 127] to [256, 1]. Division by 256 transforms this range to [1, 1/256], and multiplying by the height h gives the expected mapping from [-128, 127] to [~0, h].
The following two lines draw a line from the point representing the previous byte to the current point. The current point then becomes the previous point for the next iteration.
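Plugging the extreme and middle sample values into that formula shows the mapping concretely (h = 100 is an assumed height; note the division inside the cast is integer division, exactly as in the original):

```java
public class WaveformMapping {
    public static void main(String[] args) {
        int h = 100; // assumed drawable height in pixels
        byte[] samples = { 127, 0, -128 };
        for (byte b : samples) {
            double y = (double) (h * (128 - b) / 256); // integer division, as in the original
            System.out.println(b + " -> y = " + y);
        }
        // prints: 127 -> y = 0.0 (top), 0 -> y = 50.0 (middle), -128 -> y = 100.0 (bottom)
    }
}
```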
my_byte = (byte) (128 * audioData[idx] / 32768);
This line of code is executed if the sample size is 16 bits, but we need an 8-bit value just for the plotting. An int in Java is 32 bits; we have a 16-bit sample, so the upper 16 bits are 0 and the lower 16 bits store the audio sample value. The algorithm now shifts the value 7 positions to the left, then 15 to the right:
00000000 00000000 11111111 11111111
-> 00000000 01111111 11111111 10000000 // * 128, << 7
-> 00000000 00000000 00000000 11111111 // / 32768, >> 15
Honestly, I do not know why the author didn't just divide by 256 once; that should give the same result.
But anyway, the result is a byte that represents the eight most significant bits of the sample. Pretty inaccurate, by the way, because if you have a very quiet 16-bit recording you'll see nothing on the wave panel.