I'm trying to build a simple parser, and since InputStream doesn't have a peek-like method, I'm using mark and reset.
But I suspect that successive calls to mark invalidate the previous ones. Is that the case?
Is it possible to do something like
foo.mark(1);
...
foo.mark(2);
...
foo.reset();
...
foo.reset();
If not, is there some other way to simulate this or the peek method?
Thanks.
Your suspicion is correct: the InputStream.mark(int readlimit) method only lets you reposition the stream to the most recently marked position, provided you have read fewer than readlimit bytes since. If you want a "peekable" InputStream, consider PushbackInputStream. It doesn't offer peek functionality explicitly, but it does let you "push back" bytes you have already read.
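For example, a minimal peek helper built on PushbackInputStream might look like this (the peek method is my own sketch, not part of the JDK):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.PushbackInputStream;

public class PeekDemo {
    // Returns the next byte without consuming it, or -1 at end of stream.
    static int peek(PushbackInputStream in) throws IOException {
        int b = in.read();
        if (b != -1) {
            in.unread(b); // push the byte back so the next read() sees it again
        }
        return b;
    }

    public static void main(String[] args) throws IOException {
        PushbackInputStream in = new PushbackInputStream(
                new ByteArrayInputStream(new byte[] {1, 2, 3}));
        System.out.println(peek(in));  // 1
        System.out.println(in.read()); // 1 again: peek did not consume it
    }
}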
Marks don't nest.
If you want to reread the stream several times, you might need to copy (a portion of) the stream into a byte array, and make a ByteArrayInputStream of it. You still can't have multiple marks, but you can have multiple ByteArrayInputStreams. (Or just forget about ByteArrayInputStream and pick bytes off the array directly.)
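As a sketch of that approach (class and variable names here are illustrative): buffer the stream once, then open as many independent views over the array as you need.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MultiPass {
    static void readTwice(InputStream source) throws IOException {
        byte[] data = source.readAllBytes(); // Java 9+; buffer the stream once
        InputStream pass1 = new ByteArrayInputStream(data);
        InputStream pass2 = new ByteArrayInputStream(data);
        // pass1 and pass2 each start at position 0 and are read independently,
        // which simulates having several marks on the original stream.
    }
}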
I simply use
IOUtils.copy(myInputStream, myOutputStream);
And I see that before calling IOUtils.copy the input stream is available to read, and afterwards it is not:
flux.available()
(int) 1368181 (before)
(int) 0 (after)
I saw an explanation in this post, and I see I can copy the bytes from my InputStream to a ByteArrayInputStream and then use mark(0) and reset() in order to read an input stream multiple times.
Here is the resulting code (which works).
I find this code very verbose, and I'd like to know if there is a better solution.
ByteArrayInputStream fluxResetable = new ByteArrayInputStream(IOUtils.toByteArray(myInputStream));
fluxResetable.mark(0);
IOUtils.copy(fluxResetable, myOutputStream);
fluxResetable.reset();
An InputStream, unless otherwise stated, is single shot: you consume it once and that's it.
If you want to read it many times, that isn't just a stream any more, it's a stream with a buffer. Your solution reflects that accurately, so it is acceptable. The one thing I would probably change is to store the byte array and always create a new ByteArrayInputStream from it when needed, rather than resetting the same one:
byte[] content = IOUtils.toByteArray(myInputStream);
IOUtils.copy(new ByteArrayInputStream(content), myOutputStream);
doSomethingElse(new ByteArrayInputStream(content));
The effect is more or less the same but it's slightly easier to see what you're trying to do.
I'm working on a Java program where I'm reading from a file in dynamic, unknown blocks. That is, each block of data will not always be the same size and the size is determined as data is being read. For I/O I'm using a MappedByteBuffer (the file inputs are on the order of MB).
My goal:
Find an efficient way to store each complete block during the input phase so that I can process it.
My constraints:
I am reading one byte at a time from the buffer
My processing method takes a primitive byte array as input
Each block gets processed before the next block is read
What I've tried:
I played around with dynamic structures like Lists, but a List<Byte> has no primitive backing array, and the cost of converting it to a byte[] concerns me
I also thought about using a String to store each block and then getBytes() to get the byte[], but it is too slow
Reading the file multiple times in order to find the block size first, and then grabbing the relevant bytes
I am trying to find an approach that doesn't defeat the purpose of fast I/O. Any advice would be greatly appreciated.
Additional Info:
I'm using a rolling hash to decide where blocks should end
Here's a bit of pseudo-code:
circular_buffer = read first 128 bytes
rolling_hash = hash(circular_buffer)
block_storage = ??? // this is the data structure I'd like to use
while file has more bytes:
    b = next byte
    add b to block_storage
    add b at next index in circular_buffer (wrapping to the front when the end is reached)
    shift rolling_hash one byte to the right
    if hash has a certain characteristic:
        process block_storage as a byte[] // should contain the entire block of data
As you can see, I'm reading one byte at a time, and storing/overwriting that one byte repeatedly. However, once I get to the processing stage, I want to be able to access all of the info in the block. There is no predetermined max size of a block either, so I can't pre-allocate.
It seems to me that you require a dynamically growing buffer. You can use the built-in ByteArrayOutputStream to achieve that. It will automatically grow to store all data written to it. You can use write(int b) and toByteArray() to realize "add b to block_storage" and "process block_storage as a byte[]".
But take care: this stream grows unbounded. You should implement some sanity checks around it to avoid using up all memory (e.g. count the bytes written and throw an exception when the count exceeds a reasonable limit). Also make sure to close the stream and throw away the reference after consuming each block, to allow the GC to free the memory.
edit: As #marcman pointed out, the buffer can be reset().
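A minimal sketch of that loop shape (the boundary test and process step are placeholders, not a real rolling hash):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BlockReader {
    // Reads the stream one byte at a time, accumulating the current block
    // in a growable buffer until some boundary condition fires.
    static void readBlocks(InputStream in) throws IOException {
        ByteArrayOutputStream block = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            block.write(b);                   // "add b to block_storage"
            if (isBoundary(b)) {              // placeholder for the rolling-hash test
                process(block.toByteArray()); // "process block_storage as a byte[]"
                block.reset();                // reuse the buffer for the next block
            }
        }
        if (block.size() > 0) {
            process(block.toByteArray());     // flush the final, unterminated block
        }
    }

    static boolean isBoundary(int b) { return b == '\n'; } // illustrative only
    static void process(byte[] block) { /* handle one complete block */ }
}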
I am implementing a facility that uses a generic reading interface. I am using the java.lang.Readable interface, which writes the data it reads into a CharBuffer.
What the documentation doesn't say is whether the read call will block. It does return the number of characters written into the buffer, but to me that could also mean the buffer didn't have enough space left for all of the waiting input, so only part of it was written. But what happens if the buffer has plenty of space and no input (or fewer characters than the buffer can hold) is available? Will read return zero (or a small integer), or will it block?
Yes, it blocks. Once the operation completes, the method returns the number of char values added to the buffer, or -1 if the source of characters is at its end.
If it were non-blocking, there would have to be a mechanism to notify the caller when the source becomes readable; Readable provides none, so it can't be non-blocking.
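A small sketch of that contract (a StringReader never actually blocks, but every java.io.Reader implements Readable, and the return values illustrate the behaviour):

import java.io.IOException;
import java.io.StringReader;
import java.nio.CharBuffer;

public class ReadableDemo {
    public static void main(String[] args) throws IOException {
        Readable source = new StringReader("hello");
        CharBuffer buf = CharBuffer.allocate(16);
        int n = source.read(buf);             // blocks until some input is available
        System.out.println(n);                // 5: chars actually transferred
        System.out.println(source.read(buf)); // -1: the source is at its end
    }
}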
I am learning how to use an InputStream. I was trying to use mark with a BufferedInputStream, but when I try to reset I get this exception:
java.io.IOException: Resetting to invalid mark
I think this means that my mark read limit is set wrong. I actually don't know how to set the read limit in mark(). I tried like this:
is = new BufferedInputStream(is);
is.mark(is.available());
This is wrong too. I also tried:
is.mark(16);
This also throws the same exception.
How do I know what read limit I am supposed to set? Since I will be reading different file sizes from the input stream.
mark is sometimes useful if you need to inspect a few bytes beyond what you've read to decide what to do next; then you reset back to the mark and call the routine that expects the file pointer to be at the beginning of that logical part of the input. I don't think it is really intended for much else.
If you look at the javadoc for BufferedInputStream it says
The mark operation remembers a point in the input stream and the reset operation causes all the bytes read since the most recent mark operation to be reread before new bytes are taken from the contained input stream.
The key thing to remember here is once you mark a spot in the stream, if you keep reading beyond the marked length, the mark will no longer be valid, and the call to reset will fail. So mark is good for specific situations and not much use in other cases.
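For instance, a minimal sketch of that peek-then-reset pattern (the stream contents and byte counts are illustrative):

import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MarkDemo {
    public static void main(String[] args) throws IOException {
        InputStream is = new BufferedInputStream(
                new ByteArrayInputStream("HEADERbody".getBytes()));
        is.mark(6);              // valid as long as at most 6 bytes are read before reset
        byte[] header = new byte[6];
        is.read(header);         // peek at the first 6 bytes to decide what to do
        is.reset();              // still within the read limit, so this succeeds
        // the stream is now back at the start of this logical section
    }
}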
This will read 5 times from the same BufferedInputStream.
for (int i=0; i<5; i++) {
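// caveat: available() is only an estimate of the remaining bytes; this
// relies on the underlying stream reporting its full remaining size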
inputStream.mark(inputStream.available()+1);
// Read from input stream
Thumbnails.of(inputStream).forceSize(160, 160).toOutputStream(out);
inputStream.reset();
}
The value you pass to mark() is the distance backwards that you will need to reset. If you need to reset to the beginning of the stream, you will need a buffer as big as the entire stream. This is probably not a great design, as it will not scale well to large streams. If you need to read the stream twice and you don't know the source of the data (e.g. if it's a file, you could just re-open it), then you should probably copy it to a temp file so you can re-read it at will.
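A sketch of the temp-file approach (the processing step is a placeholder):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TempFileCopy {
    // Copies the stream to a temp file once, then re-reads it as often as needed.
    static void readTwice(InputStream source) throws IOException {
        Path tmp = Files.createTempFile("stream", ".tmp");
        try {
            Files.copy(source, tmp, StandardCopyOption.REPLACE_EXISTING);
            for (int pass = 0; pass < 2; pass++) {
                try (InputStream in = Files.newInputStream(tmp)) {
                    // process in ...
                }
            }
        } finally {
            Files.delete(tmp);
        }
    }
}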
Question
What (if any) terminating characters or byte sequences appear in serialized Java objects?
Background
I'm working on a small self-education project where I would like to serialize Java objects and write them to a stream, where they are then read and deserialized. Since I will need to identify the borders between serialized objects, and I can't be sure that the current object is not the last one, is there a terminating character that is always there that I can use as my identifier?
I noticed that there is a magic number (ACED) that allows me to identify the start, so how do I identify the end?
EDIT:
If there is no terminating character, is there any safe terminating character or sequence that I can insert to identify the end of the object?
In theory you should always be able to find the end of an object; in practice you cannot. As I understand it, the problem is that customised writeObject implementations that don't call defaultWriteObject or writeFields produce a non-standard representation.
I've played about with serialisation in the past. Including creating streams for use when I've been doing unusual things to the ObjectInputStream. It's not pleasant(!).
You can read the details in the spec, and the source is worth a read.
There are none. AFAIK the only requirement is that the deserialiser knows when to stop reading when given a corresponding serialisation. Subject to that, the serialiser can write whatever it wants, in any position, not just the last.
If you're old-school, dump a 32-bit length field at the beginning and refuse to handle objects bigger than 4 GB.
New-school, you just make sure your read and write logic are consistent and don't care about the length.
You can add a terminating object to your object stream, e.g. null or a special String.
However, I suggest that you instead convert each object stream to a byte[] and write the length of the byte[] followed by its data. That way each object stream is independent and you always know where it finishes.
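A minimal sketch of that length-prefix framing (class and method names are my own):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class FramedObjects {
    // Serializes one object into its own byte[], then writes length + data.
    static void writeFramed(DataOutputStream out, Object obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(obj);
        }
        byte[] payload = bytes.toByteArray();
        out.writeInt(payload.length);  // length prefix marks the boundary
        out.write(payload);
    }

    // Reads exactly one framed object back.
    static Object readFramed(DataInputStream in)
            throws IOException, ClassNotFoundException {
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(payload))) {
            return ois.readObject();
        }
    }
}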
Have you considered applying a record-marking layer similar to HTTP chunked encoding?
Chunked encoding is intended to solve a generalisation of this scenario: identifying the end of a message of indeterminate length that itself contains no identifiable end and is embedded in a longer stream that must not be terminated.
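As a rough sketch of that record-marking idea (a binary variant, not HTTP's textual chunk format): each chunk carries its own length, and a zero-length chunk terminates the message.

import java.io.DataOutputStream;
import java.io.IOException;

public class ChunkedWriter {
    // Writes data as length-prefixed chunks; a zero-length chunk ends the
    // message, so a reader can find the boundary without knowing the total
    // message length up front.
    static void writeChunked(DataOutputStream out, byte[] data, int chunkSize)
            throws IOException {
        for (int off = 0; off < data.length; off += chunkSize) {
            int len = Math.min(chunkSize, data.length - off);
            out.writeInt(len);          // chunk header: this chunk's length
            out.write(data, off, len);
        }
        out.writeInt(0);                // terminating zero-length chunk
    }
}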