Accessing byte elements in "Memory" - java

I am trying to read a binary file into memory and pass the starting address of the memory block to a native function:
Memory image = new Memory(length);
int offset = 0;
int numRead = 0;
try
{
while (offset < image.size() && (numRead = in.read(image.getByteArray(0,(int)image.size()), offset, (int)image.size() - offset)) >= 0)
{
offset += numRead;
}
if (offset < image.size())
{
throw new IOException("Could not completely read file " + fileFileName.getName());
}
in.close();
}
catch(Exception IOException)
{
System.out.println("\nError Occured in try block!!!");
}
byte imageByte = image.getByte(0);
The problem is that the value of imageByte is -60 instead of 127. I checked by reading the file into a plain byte array (instead of Memory), and that showed 127 for array[0] as expected. What can be the problem here?

OK, I resolved the problem: since getByteArray() returns a new byte array, the data was being read into that new array, and the memory region I actually want to use remained uninitialised.
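For reference, here is a minimal sketch of one way to fill the Memory block correctly, assuming JNA's Memory/Pointer API: read the file into an ordinary byte[] first, then copy it into native memory with write(). The variable names are illustrative.
// Sketch (assumes JNA): read into a heap buffer, then copy it into native memory.
Memory image = new Memory(length);
byte[] buffer = new byte[(int) length];
int offset = 0;
while (offset < buffer.length) {
    int numRead = in.read(buffer, offset, buffer.length - offset);
    if (numRead < 0) {
        throw new IOException("Could not completely read file");
    }
    offset += numRead;
}
image.write(0, buffer, 0, buffer.length); // copies the bytes into the native block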

Related

Java: How to count read bytes from InputStream without allocating the full memory before

I have a Java backend that users can upload files to. I want to limit these uploads to a maximum size, check the number of bytes received while the upload happens, and break the transmission as soon as the limit is reached.
Currently I am using InputStream.available() before allocating, to estimate the size, but that seems to be unreliable.
Any suggestions?
You can use Guava's CountingInputStream or Apache Commons IO's CountingInputStream when you want to know how many bytes have been read.
On the other hand, when you want to stop the upload immediately on reaching some limit, just count while reading chunks of bytes and close the stream when the limit has been exceeded.
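A minimal sketch of the counting approach, assuming Guava's com.google.common.io.CountingInputStream (the maxBytes limit and stream names are illustrative):
// Sketch (assumes Guava): wrap the upload stream and check getCount() while copying.
CountingInputStream counting = new CountingInputStream(upload);
byte[] buffer = new byte[8192];
int n;
while ((n = counting.read(buffer)) > 0) {
    if (counting.getCount() > maxBytes) {
        counting.close();
        throw new IOException("Upload exceeds the limit of " + maxBytes + " bytes");
    }
    out.write(buffer, 0, n);
}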
You don't have to 'allocat[e] the full memory before'. Just use a normally sized buffer, say 8 KB, perform the normal copy loop, and tally the total transferred. If it exceeds the quota, stop and destroy the output file.
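A sketch of that loop, assuming the limit and the target File are named quota and outputFile (both illustrative):
// Sketch: plain copy loop that tallies the total and aborts once the quota is exceeded.
byte[] buffer = new byte[8192];
long total = 0;
int count;
while ((count = in.read(buffer)) > 0) {
    total += count;
    if (total > quota) {
        out.close();
        outputFile.delete(); // destroy the partial output file
        throw new IOException("Quota of " + quota + " bytes exceeded");
    }
    out.write(buffer, 0, count);
}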
int count = 1;
InputStream stream;
if (stream.available() < 3) {
count++;
}
Result:
[0][1][2][3]
1 1 1 1
If you're using a servlet and a multipart request you can do this:
public void doPost( final HttpServletRequest request, final HttpServletResponse response )
throws ServletException, IOException {
String contentLength = request.getHeader("Content-Length");
if (contentLength != null && maxRequestSize > 0 &&
Integer.parseInt(contentLength) > maxRequestSize) {
throw new MyFileUploadException("Multipart request is larger than allowed size");
}
}
My solution looks like this:
public static final byte[] readBytes (InputStream in, int maxBytes)
throws IOException {
byte[] result = new byte[maxBytes];
int bytesRead = in.read (result);
if (bytesRead > maxBytes) {
throw new IOException ("Reached max bytes (" + maxBytes + ")");
}
if (bytesRead < 0) {
result = new byte[0];
}
else {
byte[] tmp = new byte[bytesRead];
System.arraycopy (result, 0, tmp, 0, bytesRead);
result = tmp;
}
return result;
}
EDIT:
New variant
public static final byte[] readBytes (InputStream in, int bufferSize, int maxBytes)
throws IOException {
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] buffer = new byte[bufferSize];
int bytesRead = in.read (buffer);
while (bytesRead >= 0) {
out.write (buffer, 0, bytesRead); // write only the bytes actually read
if (maxBytes > 0 && out.size() > maxBytes) {
String message = "Reached max bytes (" + maxBytes + ")";
log.trace (message);
throw new IOException (message);
}
bytesRead = in.read (buffer);
}
return out.toByteArray();
}
All implementations of read() return the number of bytes actually read, so you can keep a counter and increment it on each read to see how many bytes you have read so far. The available() method only tells you how many bytes can currently be read from the buffer without blocking; it has no relation to the total size of the file. It can still be useful for optimizing reads, since you can request just the chunk that is readily available and avoid blocking. In your case you can also check, before the next read, whether the byte count after that read would exceed your limit, and cancel the transfer before reading the next chunk.

Error while sending large files through socket

I'm trying to send large files via a socket. The program works fine for small files (such as HTML pages or PDFs), but when I send files over 3-4 MB the output is always corrupted (viewing it with a text editor, I noticed that the last few lines are always missing).
Here's the code of the server:
BufferedInputStream in = null;
FileOutputStream fout = null;
try {
server = new ServerSocket(port);
sock = server.accept();
in = new BufferedInputStream(sock.getInputStream());
setPerc(0);
received = 0;
int incByte = -1;
fout = new FileOutputStream(path+name, true);
long size = length;
do{
int buffSize;
if(size >= 4096){
buffSize = 4096;
}else{
buffSize = 1;
}
byte[] o = new byte[buffSize];
incByte = in.read(o, 0, buffSize);
fout.write(o);
received+=buffSize;
setPerc(calcPerc(received, length));
size -= buffSize;
//d("BYTE LETTI => "+incByte);
}while(size > 0);
server.close();
} catch (IOException e) {
e("Errore nella ricezione file: "+e);
}finally{
try {
fout.flush();
fout.close();
in.close();
} catch (IOException e) {
e("ERRORE INCOMINGFILE");
}
}
pr.release(port);
And here's the code of the client:
FileInputStream fin = null;
BufferedOutputStream out = null;
try {
sock = new Socket(host, port);
fin = new FileInputStream(file);
out = new BufferedOutputStream(sock.getOutputStream());
long size = file.length();
int read = -1;
do{
int buffSize = 0;
if(size >= 4096){
buffSize = 4096;
}else{
buffSize = (int)size;
}
byte[] o = new byte[buffSize];
for(int i = 0; i<o.length;i++){
o[i] = (byte)0;
}
read = fin.read(o, 0, buffSize);
out.write(o);
size -= buffSize;
//d("BYTE LETTI DAL FILE => "+read);
}while(size > 0);
} catch (UnknownHostException e) {
} catch (IOException e) {
d("ERRORE NELL'INVIO DEL FILE: "+e);
e.printStackTrace();
}finally{
try {
out.flush();
out.close();
fin.close();
} catch (IOException e) {
d("Errore nella chiusura dei socket invio");
}
}
I think it's something related to the buffer size, but I can't figure out what's wrong here.
This is incorrect:
byte[] o = new byte[buffSize];
incByte = in.read(o, 0, buffSize);
fout.write(o);
You are reading up to buffSize bytes and then writing exactly buffSize bytes.
You are doing the same thing at the other end as well.
You may be able to get away with this when reading from a file1, but when you read from a socket a read is liable to give you a partially filled buffer, especially if the writing end can't always keep ahead of the reading end because you are hammering the network with a large transfer.
The right way to do it is:
incByte = in.read(o, 0, buffSize);
fout.write(o, 0, incByte);
1 - It has been observed that when you read from a local file, a read call will typically give you all of the bytes that you requested (subject to the file size, etc.). So, if you set buffSize to the length of the file, this code would probably work when reading from a local file. But doing this is a bad idea, because you are relying on behaviour that is not guaranteed by either Java or a typical operating system.
You might have a problem e.g. here.
read = fin.read(o, 0, buffSize);
out.write(o);
Here read gives you the count of bytes you've actually just read.
On the next line you should write out only as many bytes as you've read.
In other words, you cannot expect the size of the file you're reading to be a multiple of your buffer size.
Review your server code for the same issue, too.
The correct way to copy streams in Java is as follows:
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
where count is an int, and buffer is a byte[] array of length > 0, typically 8k. You don't need to allocate byte arrays inside the loop, and you don't need a byte array of a specific size. Specifically, it's a complete waste of space to allocate a buffer as large as the file; it only works up to files of Integer.MAX_VALUE bytes, and it doesn't scale.
You do need to save the count returned by 'read()' and use it in the 'write()' method as shown above.
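Applied to the server above, a minimal sketch that copies exactly length bytes from the socket, writing only what each read() actually returned (the method name copyExactly is illustrative):
// Sketch: copy exactly 'length' bytes from the socket to the file.
static void copyExactly(InputStream in, OutputStream out, long length) throws IOException {
    byte[] buffer = new byte[8192];
    long remaining = length;
    while (remaining > 0) {
        int count = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
        if (count < 0) {
            throw new EOFException("Stream ended with " + remaining + " bytes still expected");
        }
        out.write(buffer, 0, count);
        remaining -= count;
    }
}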

I can't get all bytes from website

I'm trying to read all the bytes from a web site, but I think I don't get all of them. I give a high value for the byte array length. I used this method, but it always throws an exception.
Here is the code:
DataInputStream dis = new DataInputStream(s2.getInputStream());
byte[] bytes = new byte[900000];
// Read in the bytes
int offset = 0;
int numRead = 0;
while (offset < bytes.length
&& (numRead=dis.read(bytes, offset, bytes.length-offset)) >= 0) {
offset += numRead;
}
// Ensure all the bytes have been read in
if (offset < bytes.length) {
throw new IOException("Could not completely read website");
}
out.write(bytes);
Edited Version:
ByteArrayOutputStream bais = new ByteArrayOutputStream();
InputStream is = null;
try {
is = s2.getInputStream();
byte[] byteChunk = new byte[4096]; // Or whatever size you want to read in at a time.
int n;
while ( (n = is.read(byteChunk)) > 0 ) {
bais.write(byteChunk, 0, n);
}
}
catch (IOException e) {
System.err.printf ("Failed while reading bytes");
e.printStackTrace ();
// Perform any other exception handling that's appropriate.
}
finally {
if (is != null) { is.close(); }
}
byte[] asd = bais.toByteArray();
out.write(asd);
This is the problem:
if (offset < bytes.length)
You'll only trigger that if the original data is more than 900,000 bytes. If the response is entirely complete in less than that, read() will return -1 correctly to indicate the end of the stream.
You should actually be throwing an exception if offset is equal to bytes.length, as that indicates that you might have truncated data :)
It's not clear where you got the 900,000 value from, mind you...
I would suggest that if you want to stick with the raw stream, you use Guava's ByteStreams.toByteArray method to read all the data. Alternatively, you could keep looping round, reading into a smaller buffer, writing into a ByteArrayOutputStream on each iteration.
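A sketch of the Guava option, assuming com.google.common.io.ByteStreams and the same s2 socket as above:
// Sketch (assumes Guava): read the whole response into a byte[] in one call.
byte[] data = ByteStreams.toByteArray(s2.getInputStream());
out.write(data);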
I realise this doesn't answer your specific question. However I really wouldn't hand-code this sort of thing, when libraries such as HttpClient exist and are debugged/profiled etc.
e.g. here's how to use the fluent interface
Request.Get("http://targethost/homepage").execute().returnContent();
JSoup is an alternative if you're dealing with grabbing and scraping HTML.
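A minimal jsoup sketch (the URL is illustrative):
// Sketch (assumes jsoup): fetch and parse the page, then work with the parsed document.
Document doc = Jsoup.connect("http://targethost/homepage").get();
String title = doc.title();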

InputStream.read(byte[], 0, length) stops early?

I have been writing something to read a request stream (containing gzipped data) from an incoming HttpServletRequest ('request' below), but it appears that the normal InputStream read method doesn't actually read all the content.
My code was:
InputStream requestStream = request.getInputStream();
if ((length = request.getContentLength()) != -1)
{
received = new byte[length];
requestStream.read(received, 0, length);
}
else
{
// create a variable length list of bytes
List<Byte> bytes = new ArrayList<Byte>();
boolean endLoop = false;
while (!endLoop)
{
// try and read the next value from the stream.. if not -1, add it to the list as a byte. if
// it is, we've reached the end.
int currentByte = requestStream.read();
if (currentByte != -1)
bytes.add((byte) currentByte);
else
endLoop = true;
}
// initialize the final byte[] to the right length and add each byte into it in the right order.
received = new byte[bytes.size()];
for (int i = 0; i < bytes.size(); i++)
{
received[i] = bytes.get(i);
}
}
What I found during testing was that the top part (for when a content length is present) would sometimes just stop reading partway through the incoming request stream and leave the remainder of the 'received' byte array blank. If I just make it run the else part of the if statement at all times, it reads fine and all the expected bytes are placed in 'received'.
So it seems like I can just leave my code alone with that change, but does anyone have any idea why the normal read(byte[], int, int) method stopped reading early? The documentation says that it may stop if an end of file is reached. Could it be that the gzipped data just happened to include bytes matching whatever the signature for that looks like?
You need to add a while loop at the top to get all the bytes. The stream will attempt to read as many bytes as it can, but it is not required to return len bytes at once:
An attempt is made to read as many as len bytes, but a smaller number may be read, possibly zero.
if ((length = request.getContentLength()) != -1)
{
received = new byte[length];
int pos = 0;
do {
int read = requestStream.read(received, pos, length-pos);
// check for end of file or error
if (read == -1) {
break;
} else {
pos += read;
}
} while (pos < length);
}
EDIT: fixed while.
You need to see how much of the buffer was filled. It's only guaranteed to give you at least one byte.
Perhaps what you wanted was DataInputStream.readFully();
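A sketch of that approach: readFully() blocks until the array is completely filled, or throws EOFException if the stream ends first.
// Sketch: readFully() either fills 'received' entirely or throws EOFException.
DataInputStream dis = new DataInputStream(request.getInputStream());
byte[] received = new byte[length];
dis.readFully(received);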

Java: Issue with available() method of BufferedInputStream

I'm dealing with the following code that is used to split a large file into a set of smaller files:
FileInputStream input = new FileInputStream(this.fileToSplit);
BufferedInputStream iBuff = new BufferedInputStream(input);
int i = 0;
FileOutputStream output = new FileOutputStream(fileArr[i]);
BufferedOutputStream oBuff = new BufferedOutputStream(output);
int buffSize = 8192;
byte[] buffer = new byte[buffSize];
while (true) {
if (iBuff.available() < buffSize) {
byte[] newBuff = new byte[iBuff.available()];
iBuff.read(newBuff);
oBuff.write(newBuff);
oBuff.flush();
oBuff.close();
break;
}
int r = iBuff.read(buffer);
if (fileArr[i].length() >= this.partSize) {
oBuff.flush();
oBuff.close();
++i;
output = new FileOutputStream(fileArr[i]);
oBuff = new BufferedOutputStream(output);
}
oBuff.write(buffer);
}
} catch (Exception e) {
e.printStackTrace();
}
This is the weird behavior I'm seeing: when I run this code on a 3 GB file, the initial iBuff.available() call returns approximately 2,100,000,000 and the code works fine. When I run it on a 12 GB file, the initial iBuff.available() call only returns about 200,000,000 (which is smaller than the split file size of 500,000,000 and causes the processing to go awry).
I'm thinking this discrepancy in behavior has something to do with the fact that this is on 32-bit Windows. I'm going to run a couple more tests on a 4.5 GB file and a 3.5 GB file. If the 3.5 GB file works and the 4.5 GB one doesn't, that will further support the theory that it's a 32-bit vs 64-bit issue, since 4 GB would then be the threshold.
Well if you read the javadoc it quite clearly states:
Returns the number of bytes that can be read from this input stream without blocking
(emphasis added by me)
So it's quite clear that what you want is not what this method offers. Depending on the underlying InputStream you may run into problems much earlier (e.g. a stream over the network with a server that doesn't report the file size: you'd have to read and buffer the complete file just to return the "correct" available() count, which would take a lot of time; what if you only want to read a header?).
So the correct way to handle this is to change your parsing method to be able to handle the file in pieces. Personally, I don't see much reason to use available() here at all; just calling read() and stopping as soon as read() returns -1 should work fine. It can be made more complicated if you want to ensure that every file really contains blockSize bytes; just add an internal loop if that scenario is important.
int blockSize = XXX;
byte[] buffer = new byte[blockSize];
int i = 0;
int read = in.read(buffer);
while(read != -1) {
out[i++].write(buffer, 0, read);
read = in.read(buffer);
}
There are few correct uses of available(), and this isn't one of them. You don't need all that junk. Memorize this:
int count;
byte[] buffer = new byte[8192]; // or more
while ((count = in.read(buffer)) > 0)
out.write(buffer, 0, count);
That's the canonical way to copy a stream in Java.
You should not use the InputStream.available() function at all. It is only needed in very special circumstances.
You should also not create byte arrays that are larger than 1 MB. It's a waste of memory. The commonly accepted way is to read a small block (4 kB up to 1 MB) from the source file and then store only as many bytes as you have read in the destination file. Do that until you have reached the end of the source file.
available() isn't a measure of how much is still to be read, but rather of how much is guaranteed to be readable before the stream might hit EOF or block waiting for input.
Also, put the close calls in finally blocks:
BufferedInputStream iBuff = new BufferedInputStream(input);
int i = 0;
FileOutputStream output;
BufferedOutputStream oBuff = null;
try{
int buffSize = 8192;
int offset=0;
byte[] buffer = new byte[buffSize];
while(true){
int len = iBuff.read(buffer,offset,buffSize-offset);
if(len==-1){//EOF: write out the last partial chunk, if any
if(offset>0){
try{
output = new FileOutputStream(fileArr[i]);
oBuff = new BufferedOutputStream(output);
oBuff.write(buffer,0,offset);
}finally{
oBuff.close();
}
}
break;
}
offset+=len;
if(offset==buffSize){//buffer full: write it out to the next file
try{
output = new FileOutputStream(fileArr[i]);
oBuff = new BufferedOutputStream(output);
oBuff.write(buffer);
}finally{
oBuff.close();
}
++i;
offset=0;
}
}//while
}finally{
iBuff.close();
}
Here is some code that splits a file. If performance is critical to you, you can experiment with the buffer size.
package so6164853;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Formatter;
public class FileSplitter {
private static String printf(String fmt, Object... args) {
Formatter formatter = new Formatter();
formatter.format(fmt, args);
return formatter.out().toString();
}
/**
* @param outputPattern see {@link Formatter}
*/
public static void splitFile(String inputFilename, long fragmentSize, String outputPattern) throws IOException {
InputStream input = new FileInputStream(inputFilename);
try {
byte[] buffer = new byte[65536];
int outputFileNo = 0;
OutputStream output = null;
long writtenToOutput = 0;
try {
while (true) {
int bytesToRead = buffer.length;
if (bytesToRead > fragmentSize - writtenToOutput) {
bytesToRead = (int) (fragmentSize - writtenToOutput);
}
int bytesRead = input.read(buffer, 0, bytesToRead);
if (bytesRead != -1) {
if (output == null) {
String outputName = printf(outputPattern, outputFileNo);
outputFileNo++;
output = new FileOutputStream(outputName);
writtenToOutput = 0;
}
output.write(buffer, 0, bytesRead);
writtenToOutput += bytesRead;
}
if (output != null && (bytesRead == -1 || writtenToOutput == fragmentSize)) {
output.close();
output = null;
}
if (bytesRead == -1) {
break;
}
}
} finally {
if (output != null) {
output.close();
}
}
} finally {
input.close();
}
}
public static void main(String[] args) throws IOException {
splitFile("d:/backup.zip", 1440 << 10, "d:/backup.zip.part%04d");
}
}
Some remarks:
Only those bytes that have actually been read from the input file are written to one of the output files.
I left out the BufferedInputStream and BufferedOutputStream since their buffer size is only 8192 bytes, which is less than the buffer I use in the code.
As soon as I open a file, I make sure that it will be closed at the end, no matter what happens. (The finally blocks.)
The code contains only one call to input.read and only one call to output.write. This makes it easier to check for correctness.
The code for splitting a file does not catch the IOException, since it doesn't know what to do in such a case. It is just passed to the caller; maybe the caller knows how to handle it.
Both @ratchet and @Voo are correct.
As for what is happening:
int's max value is 2,147,483,647 (http://download.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html).
12 gigabytes is 12,884,901,888 bytes, which clearly doesn't fit in an int.
Note that according to the API Javadoc (http://download.oracle.com/javase/6/docs/api/java/io/BufferedInputStream.html#available%28%29), and as stated by @Voo, this doesn't break the method contract at all (it just isn't what you are looking for).
