Bitmap LockBits on Android? - java

My Android program uses an algorithm that makes heavy use of setPixel and getPixel, so it's very slow. On .NET, I can use LockBits to make this faster. Is there LockBits or something similar in Java or on Android?
EDIT: After some searching, I found copyPixelsToBuffer and copyPixelsFromBuffer; I wonder if they are what I need?

Yes, those two methods are what you want. First copy all the bitmap data into a ByteBuffer. Then copy the buffer's contents into a byte array and do all your ARGB manipulation within that array. When you're done, wrap the byte array in a newly allocated ByteBuffer and copy the pixels from that buffer back into the original bitmap.
Here's a sample, where "bmpData" is your Bitmap object holding the image's pixel data:
// getRowBytes() already includes the bytes per pixel, so don't multiply by 4
int size = bmpData.getRowBytes() * bmpData.getHeight();
ByteBuffer buf = ByteBuffer.allocate(size);
bmpData.copyPixelsToBuffer(buf);
byte[] byt = buf.array();
for (int ctr = 0; ctr < size; ctr += 4)
{
    // For an ARGB_8888 bitmap the bytes are laid out R, G, B, A:
    // byt[ctr] is 'r', byt[ctr + 1] is 'g', byt[ctr + 2] is 'b', byt[ctr + 3] is 'a'
}
ByteBuffer retBuf = ByteBuffer.wrap(byt);
bmpData.copyPixelsFromBuffer(retBuf);
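If it helps, here is a self-contained sketch of the same round trip wrapped in a helper method; the class and method names are made up for illustration, colour inversion stands in for whatever per-pixel work you need, and a mutable ARGB_8888 bitmap is assumed:

import java.nio.ByteBuffer;
import android.graphics.Bitmap;

public final class BitmapPixels {
    // Inverts the R, G and B channels of every pixel in place; assumes ARGB_8888.
    public static void invert(Bitmap bmp) {
        int size = bmp.getRowBytes() * bmp.getHeight();
        ByteBuffer buf = ByteBuffer.allocate(size);
        bmp.copyPixelsToBuffer(buf);
        byte[] px = buf.array();
        for (int i = 0; i < size; i += 4) {
            px[i]     = (byte) (255 - (px[i]     & 0xFF)); // R
            px[i + 1] = (byte) (255 - (px[i + 1] & 0xFF)); // G
            px[i + 2] = (byte) (255 - (px[i + 2] & 0xFF)); // B
            // px[i + 3] is alpha; leave it untouched
        }
        bmp.copyPixelsFromBuffer(ByteBuffer.wrap(px));
    }
}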

Related

Why do some Java functions require a byte array's length when the byte array object is already provided as an argument?

While writing Java code, I often wonder why some functions require a byte array's length as an argument when the byte array object itself is the first argument. Why don't they get the length from the object provided?
For example:
// E.g.: 1. Bitmap
byte[] bytes = task.getResult();
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
// E.g.: 2. Datagram
byte[] data = new byte[1024];
DatagramPacket request = new DatagramPacket(data, data.length);
If they want the length, why don't they use data.length?
The byte array is a buffer into which data is read; that data may be shorter than the buffer itself. The length parameter defines how many bytes in the buffer are relevant. You're not supposed to pass the length of the buffer, which would indeed be redundant; you're supposed to pass the number of bytes in the buffer that contain actual data.
The API documentation of DatagramPacket, for example, reveals this.
length - the number of bytes to read
The simple answer is: most read methods (in Java, and in other languages) that operate on buffer arrays have to tell you the exact number of bytes that were actually read.
Keep in mind: that array is a buffer. The default behaviour is that up to buffer.length bytes may be read into it. So knowing how long the buffer is doesn't help you; you have to know how many bytes were actually put into the buffer.
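The classic InputStream read loop makes this visible. Here is a minimal sketch; the method name copyAll is made up for illustration:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

static void copyAll(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[4096];
    int bytesRead; // how many bytes THIS call produced, not buffer.length
    while ((bytesRead = in.read(buffer)) != -1) {
        // Only buffer[0] .. buffer[bytesRead - 1] hold fresh data.
        out.write(buffer, 0, bytesRead);
    }
}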
Broadly speaking, a buffer is temporary storage used while data is being loaded.
You fill the buffer up to its size or less, but of course never beyond its capacity.
The DatagramPacket javadoc confirms this:
The length argument must be less than or equal to buf.length.
And one thing to keep in mind: conceptually, you use a buffer because the data has to be loaded progressively, or because only a specific part of it is needed.
In some cases you will read as much data as the buffer's maximal capacity, but in other cases you need to read only the first X bytes, or the bytes from offset X to Y.
So buffer-oriented classes generally provide multiple ways to read into the buffer, such as:
public DatagramPacket(byte buf[], int length);
public DatagramPacket(byte buf[], int offset, int length);
Now, conceptually you are not wrong: sometimes you want to fill the whole buffer because you know you will need to read exactly that much data. The JDK's own java.net.DatagramSocket source confirms that:
public synchronized void receive(DatagramPacket p) throws IOException {
...
tmp = new DatagramPacket(new byte[1024], 1024);
...
}
So an additional overload such as:
public DatagramPacket(byte buf[]);
would make sense.
After all, the data you want to read can be less than or equal to buf.length.
Below is the API documentation:
public DatagramPacket(byte[] buf, int length)
Constructs a DatagramPacket for receiving packets of length length.
The length argument must be less than or equal to buf.length.
Parameters:
buf - buffer for holding the incoming datagram.
length - the number of bytes to read.
https://docs.oracle.com/javase/7/docs/api/java/net/DatagramPacket.html
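To make this concrete, here is a small sketch of the receiving side (the port number is just an example); the point is that after receive() returns, packet.getLength() tells you how many of the 1024 buffer bytes actually hold data:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

static String receiveOne() throws IOException {
    try (DatagramSocket socket = new DatagramSocket(4445)) { // example port
        byte[] buffer = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet); // blocks until a datagram arrives
        // Use only the bytes that were actually filled in, not buffer.length:
        return new String(packet.getData(), 0, packet.getLength());
    }
}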

How do I convert mixed Java data types into a Java byte array?

I need to construct a Java byte array out of mixed data types, but I don't know how to do this. These are my types:
byte version = 1; // at offset 0
short message_length = // the size of the byte[] message I am constructing here, at offset 1
short sub_version = 15346; // at offset 3
byte message_id = 2; // at offset 5
int flag1 = 10; // at offset 6
int flag2 = 0; // at offset 10
int flag3 = 0; // at offset 14
int flag4 = 0; // at offset 18
String message = "the quick brown fox jumps over the lazy dog"; // at offset 22
I know for the String, I can use
message.getBytes("US_ASCII");
I know for the int values, I can use
Integer.byteValue();
I know for the short values, I can use
Short.byteValue();
And the byte values are already bytes, I am just not sure of how to combine all of these into a single byte array. I have read about
System.arraycopy();
Is this the correct process: do I just convert all the data to bytes and start "concatenating" the byte arrays with arraycopy?
I am communicating with some distant server I have no control over, and this is the message process they require.
Wrap a DataOutputStream around a ByteArrayOutputStream. This way you can write all the primitive types like int and short directly to the DataOutputStream, which converts them to bytes and forwards them to the ByteArrayOutputStream, from which you can then retrieve the whole thing as one byte array:
ByteArrayOutputStream bOut = new ByteArrayOutputStream();
DataOutputStream dOut = new DataOutputStream(bOut);
dOut.writeByte(version);
dOut.writeShort(message_length);
dOut.writeShort(sub_version);
dOut.writeByte(message_id);
dOut.writeInt(flag1);
dOut.writeInt(flag2);
dOut.writeInt(flag3);
dOut.writeInt(flag4);
dOut.write(message.getBytes(StandardCharsets.US_ASCII)); // write every encoded byte; char count and byte count can differ
dOut.flush();
byte[] result = bOut.toByteArray();
The best thing about this is that you can do the exact opposite (extracting values from a byte array) with DataInputStream and ByteArrayInputStream, completely analogously to the above code.
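For example, a sketch of the reverse direction might look like the following; the field names mirror the ones above, the 22-byte figure is just the size of the fixed header in this particular layout, and exception handling is omitted for brevity:

// needs java.io.ByteArrayInputStream, java.io.DataInputStream, java.nio.charset.StandardCharsets
DataInputStream dIn = new DataInputStream(new ByteArrayInputStream(result));
byte version = dIn.readByte();
short message_length = dIn.readShort();
short sub_version = dIn.readShort();
byte message_id = dIn.readByte();
int flag1 = dIn.readInt();
int flag2 = dIn.readInt();
int flag3 = dIn.readInt();
int flag4 = dIn.readInt();
byte[] messageBytes = new byte[result.length - 22]; // 22 = bytes consumed by the fixed header
dIn.readFully(messageBytes);
String message = new String(messageBytes, StandardCharsets.US_ASCII);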
If by 'mixed types' you mean a class with different member field types, then one approach is to make your class serializable and use Apache Commons Lang's SerializationUtils:
byte[] data = SerializationUtils.serialize(yourObject);
All, I wanted to post my own solution to my problem here. A quick Google search on how to insert a short into a Java byte array turned up Java's ByteBuffer. After some reading, I determined this was the best and quickest way to get the results I needed. One section in the Java API documentation that really sold me on ByteBuffer was this:
Methods in this class that do not otherwise have a value to return are specified to return the buffer upon which they are invoked. This allows method invocations to be chained. The sequence of statements
bb.putInt(0xCAFEBABE);
bb.putShort(3);
bb.putShort(45);
can, for example, be replaced by the single statement
bb.putInt(0xCAFEBABE).putShort(3).putShort(45);
So, that is what I did:
byte version = 1;
short message_length = 72;
short sub_version = 15346;
byte message_id = 2;
int flag1 = 10;
int flag2 = 0;
int flag3 = 0;
int flag4 = 0;
String message = "the quick brown fox jumps over the lazy dog";
ByteBuffer messageBuffer = ByteBuffer.allocate(message_length);
messageBuffer.put(version)
             .putShort(message_length)
             .putShort(sub_version)
             .put(message_id)
             .putInt(flag1)
             .putInt(flag2)
             .putInt(flag3)
             .putInt(flag4)
             .put(message.getBytes());
byte[] myArray = messageBuffer.array();
That was fast and easy, and just what I needed. Thank you all who took the time to read and reply.
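One detail worth noting when building wire formats with ByteBuffer: it defaults to big-endian (network byte order). Whether that matches what the distant server expects is an assumption you'd need to verify against its protocol spec; if it wants little-endian fields, switch the order before the puts:

messageBuffer.order(ByteOrder.LITTLE_ENDIAN); // requires java.nio.ByteOrder; must be set before writing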
Certainly you can concatenate these values with System.arraycopy, as you've suggested.
You can also append your bytes onto a ByteArrayOutputStream.
The key is to understand exactly what the receiving system is expecting. How does it know where one field ends and the next begins? How does it know what type it's reading at a given position in the stream? There are lots of ways they could have chosen to do that - with length headers in the protocol; with type headers; with null-termination of strings; with a set order of fields and their lengths; and so on.
Whatever method you choose, write unit tests that check for edge cases like negative numbers, very large numbers, non-ASCII text and so on. It's easy to get stung when everything has been working fine, then suddenly the server chokes on a Unicode character or a negative number that it interprets as a very large number.
One other option -- perhaps slight overkill for your needs, but flexible and with high performance -- is Google's protocol buffers library.

How do I change the LSB of each pixel according to my message

I am trying to implement a simple encoding program where I can hide a message in the LSB of the pixels of an image. So far I've got the byte array from the message
private static byte[] ConvertMessageToByte(String message,
byte[] messageBytes) {
// takes in the message and stores them into bytes
// returns message byte array
byte[] messageByteArray = message.getBytes();
return messageByteArray;
}
I have also got the byte array for the corresponding image that I want to encode onto:
private static byte[] getPixelByteArray(BufferedImage bufferedImage) {
WritableRaster raster = bufferedImage.getRaster();
DataBufferByte buffer = (DataBufferByte) raster.getDataBuffer();
return buffer.getData();
}
Up to this point I follow, but I don't quite understand the steps that come after. Do I iterate through the image byte array and store each pixel's ARGB values in another byte array? Also, how would I apply the message bit values to the pixels?
private static byte[] ConvertMessageToByte(String message, byte[] messageBytes) {
byte[] messageByteArray = message.getBytes();
return messageByteArray;
}
Regarding this method: convertMessageToBytes would be a better name, since a lowercase first letter is conventional and you are producing an array of multiple bytes, not just one. The method also does not need the second byte[] parameter; it can be simplified to return message.getBytes(); with the same effect. Furthermore, String.getBytes() is typically called directly in the parent function, as a one-liner is not usually considered worth wrapping. In conclusion: remove this method and use byte[] ba = s.getBytes(); in your main code instead.
Personally, I process images as 3-byte RGB, as that is how they are most commonly thought of, how they are represented on a physical monitor or printer, and how they are stored in many image formats. Note that LSB here means the least significant bit of each channel, not a colour model like HSL; you should be thinking in terms of the RGB channels.
Instead of dealing with the image as a byte[], use int BufferedImage.getRGB(int x, int y) as it's easier to do and easier to access.
The following function may be of use. I'll let you write the reverse.
private static int[] getColourAt(BufferedImage image, int x, int y) {
    int rgb = image.getRGB(x, y);
    // Mask with 0xFF to strip the alpha and higher channels from each byte.
    int r = (rgb >> 16) & 0xFF;
    int g = (rgb >> 8) & 0xFF;
    int b = rgb & 0xFF;
    return new int[] {r, g, b};
}
From there, you should loop through each pixel and adjust as you like.
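To make the "adjust as you like" step concrete, here is a minimal sketch of embedding message bits into the least significant bit of each pixel's blue channel; the method name is made up for illustration, and extraction is just the reverse (read the LSBs back in the same pixel order):

import java.awt.image.BufferedImage;

private static void embedMessage(BufferedImage image, byte[] message) {
    int bitIndex = 0;
    int totalBits = message.length * 8;
    outer:
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            if (bitIndex >= totalBits) break outer;
            // Next message bit, most significant bit of each byte first.
            int bit = (message[bitIndex / 8] >> (7 - bitIndex % 8)) & 1;
            int rgb = image.getRGB(x, y);
            rgb = (rgb & ~1) | bit; // overwrite the LSB of the blue channel
            image.setRGB(x, y, rgb);
            bitIndex++;
        }
    }
    // A real encoder would also embed the message length (or a terminator)
    // so the decoder knows where to stop reading.
}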

Signed 16 bit PCM transformations aren't working. Why?

For the past 2 days I've been trying to manipulate 16-bit PCM data on Android with little success. I'm currently using WAV recorder to capture audio. In the onPeriodicNotification(AudioRecord recorder) method, before the buffer is written with the randomAccessWriter, I send the buffer to a custom class to manipulate the samples and save them back into the buffer. The method in my custom class is as follows:
As the buffer is a byte array, I first convert it into shorts, so that one short represents a frame (there's only one channel). The FFT algorithms I will be implementing, once I get past this hurdle, need the input to be a float array, so I convert each short into a float. Now, the randomAccessWriter that writes the data into the WAV file accepts a byte array and expects each frame to be 2 bytes, so I convert each float back into a short and use a ByteBuffer to reconstruct a byte array, which is then returned. When I run my recorder app with the buffer passing through the above code, everything is fine.
To test whether the recording is actually modified, I tried a simple voice modulation algorithm, placed where the TODO comment is:
Now, if I used the above code on my iPhone, the audio samples would be transformed (although the data there is natively 32-bit floats). However, on Android, when I re-run the recorder app with the above code inserted, all that's produced is white noise. Until I can successfully modify the samples with the above code, I can't proceed with my FFT algorithms.
Why is this occurring? I would be grateful if someone with knowledge of the topic could shed some light on it.
SOLVED - By Bjorn Roche
Underlying cause: the recording delivers data in little-endian, whereas Java shorts are big-endian; applying a function across the two different forms produces white noise. The code below shows how to take in a little-endian byte array, convert it to a big-endian float array, and convert back to a little-endian byte array. While the samples are floats you can do whatever you please with them; I'll now be running my FFT algorithms:
public byte[] manipulateSamples(byte[] data,
                                int samplingRate,
                                int numFrames,
                                short numChannels) {
    // Convert byte[] to short[] (16 bit) to float[] (32 bit). (End result: big-endian)
    ShortBuffer sbuf = ByteBuffer.wrap(data).asShortBuffer();
    short[] audioShorts = new short[sbuf.capacity()];
    sbuf.get(audioShorts);

    float[] audioFloats = new float[audioShorts.length];
    for (int i = 0; i < audioShorts.length; i++) {
        // reverseBytes swaps each little-endian sample into Java's big-endian form;
        // dividing by 0x8000 normalises the 16-bit sample to [-1.0, 1.0).
        audioFloats[i] = ((float) Short.reverseBytes(audioShorts[i]) / 0x8000);
    }

    // Do your tasks here.

    // Convert float[] to short[] to byte[]. (End result: little-endian)
    audioShorts = new short[audioFloats.length];
    for (int i = 0; i < audioFloats.length; i++) {
        audioShorts[i] = Short.reverseBytes((short) (audioFloats[i] * 0x8000));
    }

    byte[] byteArray = new byte[audioShorts.length * 2];
    ByteBuffer buffer = ByteBuffer.wrap(byteArray);
    sbuf = buffer.asShortBuffer();
    sbuf.put(audioShorts);
    return buffer.array();
}
Your problem is that shorts in Java are big-endian, but if you got your data from a WAV file, the data is little-endian.
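As an aside, a slightly simpler route, sketched here under the same assumptions (16-bit mono little-endian samples), is to declare the byte order on the ByteBuffer itself, which removes the need for the per-sample Short.reverseBytes calls:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Read: interpret the raw bytes as little-endian shorts directly.
ShortBuffer sbuf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
short[] audioShorts = new short[sbuf.capacity()];
sbuf.get(audioShorts); // sample values are already correct; no byte swapping needed

// ...process the samples here...

// Write: the short view writes little-endian bytes back into the array.
ByteBuffer out = ByteBuffer.allocate(audioShorts.length * 2).order(ByteOrder.LITTLE_ENDIAN);
out.asShortBuffer().put(audioShorts);
byte[] result = out.array();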

Convert byte[] to Buffer type

I am working in Android. I need to convert a byte[] into a Buffer type. I have seen that some Android functions require a Buffer argument, but my data source is of type byte[].
Take a look at ByteBuffer.wrap:
byte[] bytes = ...;
Buffer buf = ByteBuffer.wrap(bytes);
There's also a ByteBuffer.wrap(byte[] array, int start, int byteCount) if you only want to wrap part of an array.
