java.lang.UnsupportedOperationException at java.nio.ByteBuffer.array(ByteBuffer.java:959) - java

The following Java code compiles, but there's an error at runtime:
# javac ByteBufTest.java
# java ByteBufTest
Exception in thread "main" java.lang.UnsupportedOperationException
at java.nio.ByteBuffer.array(ByteBuffer.java:959)
at ByteBufTest.<init>(ByteBufTest.java:12)
at ByteBufTest.main(ByteBufTest.java:33)
#
Why does this happen?
Note: I later need to pass mDirectBuffer to JNI, so I have to allocate it with ByteBuffer.allocateDirect(TEST_BUFFER_SIZE).
ByteBufTest.java:
import java.nio.ByteBuffer;

public class ByteBufTest {
    public static final int TEST_BUFFER_SIZE = 128;
    private ByteBuffer mDirectBuffer;

    public ByteBufTest() {
        mDirectBuffer = ByteBuffer.allocateDirect(TEST_BUFFER_SIZE);
        byte[] buf = mDirectBuffer.array(); // throws: a direct buffer has no accessible backing array
        buf[1] = 100;
    }

    public void test() {
        printBuffer("nativeInitDirectBuffer", mDirectBuffer.array());
    }

    private void printBuffer(String tag, byte[] buffer) {
        StringBuffer sBuffer = new StringBuffer();
        for (int i = 0; i < buffer.length; i++) {
            sBuffer.append(buffer[i]);
            sBuffer.append(" ");
        }
        //System.out.println(tag + sBuffer);
    }

    public static void main(String[] args) throws Exception {
        ByteBufTest item = new ByteBufTest();
        item.test();
    }
}

This is the expected behaviour. The Javadoc states
throws UnsupportedOperationException - If this buffer is not backed by an accessible array
You should try another approach or search for another implementation, e.g.
mDirectBuffer = ByteBuffer.wrap(new byte[TEST_BUFFER_SIZE]);

This exception occurs at runtime whenever the buffer is not backed by an accessible array. You can try the allocate() method instead, which returns a heap buffer backed by an array (but note that it is not a direct buffer, so it may not suit the JNI requirement).

To write clean and portable code, you should call java.nio.ByteBuffer.hasArray() to ensure that java.nio.ByteBuffer.array() will succeed, as stated in the Java API documentation:
If this method returns true then the array and arrayOffset methods may safely be invoked
You can allocate a direct, writable NIO byte buffer by calling java.nio.ByteBuffer.allocateDirect(int), as you already do, and then call java.nio.ByteBuffer.get(byte[]) to copy the content of the buffer into an array; this method is supported on Android too. Keep in mind that get(byte[]) is a relative operation: it advances the position of the NIO buffer.
Another approach would be to use the NIO buffer as-is, without any conversion, but I'm not sure that suits your needs.
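Putting the suggestions above together, here is a minimal sketch (one way among several, not the only correct one) that keeps the direct buffer for JNI and still gets its contents into a byte[]:

import java.nio.ByteBuffer;

public class DirectBufferCopy {
    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(128);
        direct.put(1, (byte) 100);                 // absolute put: does not move the position

        byte[] contents;
        if (direct.hasArray()) {
            contents = direct.array();             // only safe when a backing array exists
        } else {
            contents = new byte[direct.capacity()];
            ByteBuffer view = direct.duplicate();  // independent position, shared content
            view.rewind();
            view.get(contents);                    // relative bulk get copies the bytes out
        }
        System.out.println(contents[1]);           // prints 100
    }
}

Note that the copy is a snapshot: changes made later to the direct buffer (for example, from the native side) will not show up in the array.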

Related

What is the meaning of the "readlimit" parameter of the mark method in Java?

BufferedInputStream.mark(int readlimit)
I read the Javadoc but I don't understand when we use the "readlimit" parameter.
In this code, I don't understand the difference between mark(1) and mark(100):
public static void main(String[] args) throws Exception {
    String s = "123456789ABCDEFGHIJKLMNOPQRSDVWXYZ";
    byte byteArray[] = s.getBytes();
    ByteArrayInputStream BArrayIS = new ByteArrayInputStream(byteArray);
    BufferedInputStream BIS = new BufferedInputStream(BArrayIS);
    BIS.mark(1);
    System.out.println(BIS.read());
}
It has no effect in your code because the readlimit passed to mark only matters when you later call reset(), which you never do. readlimit is the number of bytes the stream promises to buffer after the mark: once you read more than that many bytes, the mark may be invalidated and reset() may fail. The Javadoc for reset() states:
Repositions this stream to the position at the time the mark method was last called on this input stream.
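To see readlimit matter, you have to call reset() after reading past the limit. Here is a minimal sketch; note that exactly when the mark is invalidated is an implementation detail (the contract only guarantees reset() works if you read at most readlimit bytes after mark()), and the tiny 2-byte internal buffer below is chosen deliberately so the stock JDK BufferedInputStream invalidates the mark:

import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class MarkDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = "123456789".getBytes();

        BufferedInputStream in =
                new BufferedInputStream(new ByteArrayInputStream(data), 2);
        in.mark(2);     // promise: at most 2 bytes will be read before reset()
        in.read();      // '1'
        in.read();      // '2'
        in.read();      // '3' -- breaks the promise
        try {
            in.reset(); // the stream is allowed to fail here, and the JDK one does
        } catch (IOException e) {
            System.out.println("mark invalidated: " + e.getMessage());
        }

        BufferedInputStream in2 =
                new BufferedInputStream(new ByteArrayInputStream(data), 2);
        in2.mark(100);  // generous limit: the internal buffer grows as needed
        in2.read();
        in2.read();
        in2.read();
        in2.reset();    // ok, within the promised 100 bytes
        System.out.println((char) in2.read()); // prints '1' again
    }
}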

How can I get my actual bytes that I used to make a big byte array?

I have a method which builds one byte array in the format below.
First it gets avroBytes.
Then it snappy-compresses them.
Then it builds another byte array in the particular format shown below.
Here is the method:
public static byte[] serialize(final Record record, final int clientId,
        final Map<String, String> holderMap) throws IOException {
    byte[] avroBytes = getAvroBytes(holderMap, record);
    byte[] snappyCompressed = Snappy.compress(avroBytes);
    int size = (2 + 8 + 4) + snappyCompressed.length;
    ByteBuffer buffer = ByteBuffer.allocate(size);
    buffer.order(ByteOrder.BIG_ENDIAN);
    buffer.putShort((short) clientId);
    buffer.putLong(System.currentTimeMillis());
    buffer.putInt(snappyCompressed.length);
    buffer.put(snappyCompressed);
    buffer.rewind();
    byte[] bytesToStore = new byte[size];
    buffer.get(bytesToStore);
    return bytesToStore;
}
Now I want to get my actual avroBytes once I have bytesToStore
byte[] bytesToStore = serialize(......);
// now how can I get actual `avroBytes` using bytesToStore?
Is there any way to get it back?
Based on the code, the compressed data starts at bytesToStore[14] (the header is 2 + 8 + 4 = 14 bytes), so one simple, though not necessarily the most efficient, way would be to make a copy of the bytes from that offset onward and call Snappy.uncompress(bytes).
Something like this:
public static int HEADER_SIZE = 2 + 8 + 4;

public static byte[] extractAvroBytes(byte[] bytesToStore) throws IOException {
    byte[] bytes = Arrays.copyOfRange(bytesToStore, HEADER_SIZE, bytesToStore.length);
    return Snappy.uncompress(bytes);
}
I haven't tested this, so some tweaking may be required.
Depending on the Java interface to snappy that you are using, there may be methods available to decompress data directly from the serialized bytes without making an intermediate copy.
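If you also want the header fields back, here is a minimal sketch of the full decode, mirroring the layout written by serialize() (it assumes the same Snappy class used in the question, e.g. xerial's snappy-java):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import org.xerial.snappy.Snappy;

public final class RecordDecoder {
    // Header layout written by serialize(): short clientId, long timestamp, int length
    public static byte[] extractAvroBytes(byte[] bytesToStore) throws IOException {
        ByteBuffer buffer = ByteBuffer.wrap(bytesToStore).order(ByteOrder.BIG_ENDIAN);
        short clientId = buffer.getShort();      // bytes 0-1
        long timestamp = buffer.getLong();       // bytes 2-9
        int compressedLength = buffer.getInt();  // bytes 10-13
        byte[] compressed = new byte[compressedLength];
        buffer.get(compressed);                  // bytes 14 onwards
        return Snappy.uncompress(compressed);    // the original avroBytes
    }
}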
From the code, it looks like there is already a method that returns avroBytes, e.g.:
byte[] avroBytes = getAvroBytes(holderMap, record);
This method needs holderMap and record as arguments, and looking at the code where serialize is called, you already have those two values. So, if possible, you can call getAvroBytes before calling serialize and pass the result as an argument to the serialize method.

How to use ByteBuffer properly in a multithreaded environment?

In one data class, class A, I have the following:
class A {
    private byte[] coverInfo = new byte[CoverInfo.SIZE];
    private ByteBuffer coverInfoByteBuffer = ByteBuffer.wrap(coverInfo);
    ...
}
In the CoverInfo class, I have a few fields:
class CoverInfo {
    public static final int SIZE = 48;
    private byte[] name = new byte[DataConstants.Cover_NameLength];
    private byte[] id = new byte[DataConstants.Cover_IdLength];
    private byte[] sex = new byte[DataConstants.Cover_SexLength];
    private byte[] age = new byte[DataConstants.Cover_AgeLength];
}
When class A gets the coverInfo data, I create an instance of CoverInfo and populate it from the buffer, inside class A:
public void createCoverInfo() {
    CoverInfo tempObj = new CoverInfo();
    tempObj.populate(coverInfoByteBuffer);
    ....
}
In the populate() method of the CoverInfo class, I have the following:
public void populate(ByteBuffer dataBuf) {
    dataBuf.rewind();
    dataBuf.get(name, 0, DataConstants.Cover_NameLength);
    dataBuf.get(id, 0, DataConstants.Cover_IdLength);
    dataBuf.get(sex, 0, DataConstants.Cover_SexLength);
    dataBuf.get(age, 0, DataConstants.Cover_AgeLength);
}
The populate() method always throws an exception on Windows, but it works on Linux:
java.nio.BufferUnderflowException
java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
com.bowing.uiapp.common.socketdata.message.out.CoverInfo.populate(CoverInfo.java:110)
The exception does not always occur at the same line.
It is running in a multi-threaded environment.
If I use a duplicated (read-only is fine) ByteBuffer, the issue is resolved:
tempObj.populate(coverInfoByteBuffer.duplicate());
A few questions about this:
Why does it work on Linux but not on Windows? Is it just a timing issue?
I guess the issue is that the limit/position/mark values are changed by other threads while this CoverInfo object is reading the ByteBuffer. Is duplicate() the preferred way to handle this situation?
If the ByteBuffer's slice() is used, how can data integrity be guaranteed when more than one user modifies the ByteBuffer?
From the Javadoc of the Buffer class:
Thread safety
Buffers are not safe for use by multiple concurrent threads. If a buffer is to be used by more than one thread then access to the buffer should be controlled by appropriate synchronization.
That's what the spec says. As you said, creating multiple views of the buffer with their own independent positions, etc. can work. Also, using absolute reads (where you specify a position) might also work. None of these are guaranteed to work according to the documentation and might work only on some buffer implementations.
I guess the problem is that you have multiple threads all trying to work on the buffer at the same time: even though none of them modifies the data in the buffer, they all change the state of the buffer, specifically the read/write position.
Solutions:
Only allow one thread at a time to interact with the buffer...
public void populate(ByteBuffer dataBuf) {
    synchronized (dataBuf) {
        dataBuf.rewind();
        dataBuf.get(name, 0, DataConstants.Cover_NameLength);
        dataBuf.get(id, 0, DataConstants.Cover_IdLength);
        dataBuf.get(sex, 0, DataConstants.Cover_SexLength);
        dataBuf.get(age, 0, DataConstants.Cover_AgeLength);
    }
}
OR
Create a new ByteBuffer view for each caller.
public void populate(ByteBuffer dataBuf) {
    ByteBuffer myDataBuf = dataBuf.asReadOnlyBuffer();
    myDataBuf.rewind(); // the view inherits dataBuf's current position, so rewind it first
    myDataBuf.get(name, 0, DataConstants.Cover_NameLength);
    myDataBuf.get(id, 0, DataConstants.Cover_IdLength);
    myDataBuf.get(sex, 0, DataConstants.Cover_SexLength);
    myDataBuf.get(age, 0, DataConstants.Cover_AgeLength);
}
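A third option, hinted at in the quoted answer above, is to use absolute reads, which never touch the shared position. This is only a sketch: the field offsets are an assumption based on the declaration order in CoverInfo, and it is still only safe if no thread concurrently writes the underlying data.

public void populate(ByteBuffer dataBuf) {
    int off = 0;
    // Absolute get(int) reads a byte at an index without moving the buffer's position.
    for (int i = 0; i < DataConstants.Cover_NameLength; i++) name[i] = dataBuf.get(off++);
    for (int i = 0; i < DataConstants.Cover_IdLength; i++)   id[i]   = dataBuf.get(off++);
    for (int i = 0; i < DataConstants.Cover_SexLength; i++)  sex[i]  = dataBuf.get(off++);
    for (int i = 0; i < DataConstants.Cover_AgeLength; i++)  age[i]  = dataBuf.get(off++);
}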

WriteObject not properly writing a Set?

I hope I didn't just find a bug in Java! I am running JDK 7u11 (mostly because that is the sanctioned JVM allowed by my employer) and I am noticing a very odd issue.
Namely, I am chunking data into a LinkedHashSet and writing it to a file using an ObjectOutputStream daisy-chained through a GZIPOutputStream (mentioning this just in case it matters).
Now, when I get to the other side of the program and readObject(), I notice that the size always reads 68, which is the size of the first set written. The underlying set can hold many more or fewer than 68 entries, but the .size() method always returns 68. More troubling, when I try to manually iterate the underlying Set, it also stops at 68.
while(...) {
    oos.writeInt(p_rid);
    oos.writeObject(wptSet);
    wptSet.clear();
    // wptSet = new LinkedHashSet<>(); // This somehow causes the heap size to increase dramatically, but it does solve the problem
}
And when reading it
Set<Coordinate> coordinates = (Set<Coordinate>) ois.readObject();
coordinates.size() always returns 68. Now, I could work around this by also writing the size with .writeInt(), but I can still only iterate through 68 members!
Notice that the wptSet = new LinkedHashSet<>() line actually solves the issue. The main problem with that is that it makes my heap size skyrocket when watching the program in JVisualVM.
Update:
I actually just found a viable workaround that fixes the memory growth caused by re-instantiating wptSet: calling System.gc() after each call to .clear() actually keeps the memory usage down.
Either way, I shouldn't have to do this and shipping out the LinkedHashSet should not exhibit this behavior.
Alright, I think I understand what you are asking.
Here is an example to reproduce...
import java.util.*;
import java.io.*;

class Example {
    public static void main(String[] args) throws Exception {
        Set<Object> theSet = new LinkedHashSet<>();
        final int size = 3;
        for (int i = 0; i < size; ++i) {
            theSet.add(i);
        }

        ByteArrayOutputStream bytesOut = new ByteArrayOutputStream();
        ObjectOutputStream objectsOut = new ObjectOutputStream(bytesOut);
        for (int i = 0; i < size; ++i) {
            objectsOut.writeObject(theSet);
            theSet.remove(i); // mutate theSet for each write
        }

        ObjectInputStream objectsIn = new ObjectInputStream(
                new ByteArrayInputStream(bytesOut.toByteArray()));
        for (;;) {
            try {
                System.out.println(((Set<?>) objectsIn.readObject()).size());
            } catch (EOFException e) {
                break;
            }
        }
    }
}
The output is
3
3
3
What is going on here is that ObjectOutputStream detects that you are writing the same object every time. The first time theSet is written, its contents are serialized; each subsequent write only records a back-reference to the already-written object, so the same object, with its original contents, is deserialized each time. This is explained in the documentation:
Multiple references to a single object are encoded using a reference sharing mechanism so that graphs of objects can be restored to the same shape as when the original was written.
In this case you should use writeUnshared(Object) instead of writeObject(Object); it bypasses this mechanism for the top-level object.
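Applied to the example above, changing the write loop as below makes the program print 3, 2, 1 as expected:

for (int i = 0; i < size; ++i) {
    objectsOut.writeUnshared(theSet); // serializes a fresh copy of theSet each time
    theSet.remove(i);
}

As a side note, calling objectsOut.reset() between writes also clears the stream's handle table, which additionally releases the stream's internal references to previously written objects; that may be relevant to the heap growth mentioned in the question.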

How to declare a byte array of infinite/dynamic size in Java?

I am declaring a byte array whose size is unknown to me, as it keeps on growing; how can I declare a byte array of infinite/variable size?
You cannot declare an array of infinite size, as that would require infinite memory. Additionally, all the allocation calls deal with numbers, not infinite quantities.
You can allocate a byte buffer that resizes on demand. I believe the easiest choice would be a ByteArrayOutputStream.
ByteBuffer has an API which makes manipulation of the buffer easier, but you would have to build the resize functionality yourself. The easiest way will be to allocate a new, larger array, copy the old contents in, and swap the new buffer for the old.
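For instance, a minimal sketch of the ByteArrayOutputStream approach (the class name and byte count here are just for illustration):

import java.io.ByteArrayOutputStream;

public class GrowingBytes {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream(); // grows on demand
        for (int i = 0; i < 1000; i++) {
            out.write(i & 0xFF);                 // append one byte at a time
        }
        byte[] snapshot = out.toByteArray();     // copy of everything written so far
        System.out.println(snapshot.length);     // 1000
    }
}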
Other answers have mentioned using a List<Byte> of some sort. It is worth noting that if you create a bunch of new Byte() objects, you can dramatically increase memory consumption. Byte.valueOf sidesteps this problem, but you have to ensure that it is consistently used throughout your code. If you intend to use this list in many places, I might consider writing a simple List decorator which interns all the elements. For example:
public class InterningList extends AbstractList<Byte> {
    ...
    @Override
    public boolean add(Byte b) {
        return super.add(Byte.valueOf(b));
    }
    ...
}
This is not a complete (or even tested) example, just something to start with...
Arrays in Java are not dynamic. You can use a List instead.
List<Byte> list = new ArrayList<Byte>();
Thanks to autoboxing, you can freely add either Byte objects or primitive bytes to this list.
To build a byte array of varying length, use the Apache Commons IO IOUtils class instead of assigning a manual length like:
byte[] b = new byte[50];
You can pass your input stream to an IOUtils function which reads the whole stream, so the byte array ends up exactly as long as needed.
For example:
byte[] b = IOUtils.toByteArray(inputstream);
ByteArrayOutputStream allows writing to a dynamic byte array. However, methods such as remove, replace, and insert are not available; you have to extract the byte array and then manipulate it directly.
Your best bet is to use an ArrayList, as it resizes as you fill it:
List<Byte> array = new ArrayList<Byte>();
The obvious solution would be to use an ArrayList.
But this is a bad solution if you need performance or are constrained in memory, as it doesn't really store bytes but Bytes (that is, objects).
For any real application, the answer is simple: you have to manage the byte array yourself, using methods that make it grow as necessary. You may wrap it in a specific class if needed:
public class AlmostInfiniteByteArray {
    private byte[] array;
    private int size;

    public AlmostInfiniteByteArray(int cap) {
        array = new byte[cap];
        size = 0;
    }

    public int get(int pos) {
        if (pos >= size) throw new ArrayIndexOutOfBoundsException();
        return array[pos];
    }

    public void set(int pos, byte val) {
        if (pos >= size) {
            if (pos >= array.length) {
                byte[] newarray = new byte[(pos + 1) * 5 / 4];
                System.arraycopy(array, 0, newarray, 0, size);
                array = newarray;
            }
            size = pos + 1;
        }
        array[pos] = val;
    }
}
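For instance (the index is arbitrary; any position beyond the current capacity triggers a resize):

AlmostInfiniteByteArray bytes = new AlmostInfiniteByteArray(16);
bytes.set(100000, (byte) 42);            // grows the backing array on demand
System.out.println(bytes.get(100000));   // 42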
Use an ArrayList or any other implementation of List.
The different implementations of List let you do different things with the list (e.g. different traversal strategies, different performance characteristics, etc.).
The initial capacity of an ArrayList is 10. You can change it with new ArrayList<>(5000).
An ArrayList grows its capacity when needed (it creates a new, larger array and copies the old one into it); in the stock JDK it grows by about half its current size rather than doubling.
I would tweak slightly other people's answers.
Create a LargeByteArray class to manage your array. It will have get and set methods, etc, whatever you will need.
Behind the scenes that class will use a long to hold the current length and use an ArrayList to store the contents of the array.
I would store byte[8192] or byte[16384] arrays in the ArrayList. That gives a reasonable trade-off between wasted space and the frequency of resizing.
You can even make the array 'sparse', i.e. only allocate the list.get(index/8192) entry if there is a non-zero value stored in that box.
Such a structure can give you significantly more storage in some cases.
Another strategy is to compress the byte[] boxes after write and uncompress before read (using an LRU cache for reading), which can allow storing twice or more than the available RAM, though that depends on the compression strategy.
After that you can look at paging some boxes out to disk...
That's as close to an infinite array as I can get you ;-)
You can make use of IOUtils, as Prashant already mentioned.
Here's a small excerpt from it which can solve the task (you will need IOUtils.toByteArray):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class IOUtils {
    private static final int DEFAULT_BUFFER_SIZE = 1024 * 4;

    public static byte[] toByteArray(InputStream input) throws IOException {
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        copy(input, output);
        return output.toByteArray();
    }

    public static int copy(InputStream input, OutputStream output)
            throws IOException {
        long count = copyLarge(input, output);
        if (count > Integer.MAX_VALUE) {
            return -1;
        }
        return (int) count;
    }

    public static long copyLarge(InputStream input, OutputStream output)
            throws IOException {
        byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
        long count = 0;
        int n = 0;
        while (-1 != (n = input.read(buffer))) {
            output.write(buffer, 0, n);
            count += n;
        }
        return count;
    }
}
