I am currently writing an application whose result is a file composed of blocks of bytes. The data is processed in blocks: the goal is to process one block, convert it into bytes, and write (append) that block of bytes to the file, then process the next block, and so on, until all of the bytes of all of the blocks are stored in the file. I have been trying the following piece of code:
try (ObjectOutputStream oos = new ObjectOutputStream(file)) {
    oos.writeObject(bytestobwritten);
    oos.flush();
    stat = 1;
} catch (FileNotFoundException ex) {
    Logger.getLogger(Filer.class.getName()).log(Level.WARNING, "Error by writing block of bytes", ex);
}
The above code is inside the while loop that processes the blocks; the variable bytestobwritten contains the bytes of the current block.
The issue is that it is not appending all of the bytes; only the last block of bytes remains. I need all of them concatenated to make up the resulting bytes of the file.
Do you have any idea how to deal with this situation in Java? I will appreciate any help, thanks in advance.
I'm not sure I understand the problem you're trying to solve, but first, ditch ObjectOutputStream; it writes Java serialization data, not raw bytes. Use OutputStream (or DataOutputStream) to write bytes.
When you talk about writing blocks of bytes to a file, you have to answer one question: are all blocks the same size? That's really important, because if you write blocks of varying lengths you won't be able to read them back; you won't know where a block begins and ends, so you need to know the size of the next block before you read it. A fixed block size changes the code, but it has the limitation that no single block can be bigger than the block size.
public void saveBlocks(List<Block> blocks) throws IOException {
    DataOutputStream stream = new DataOutputStream(new FileOutputStream("someFile.txt"));
    try {
        for (int i = 0; i < blocks.size(); i++) {
            byte[] buffer = createBuffer(blocks.get(i));
            // save the block size to the stream if we have varying block sizes
            stream.writeInt(buffer.length);
            // save the block; assumes the buffer is the exact size of the block
            stream.write(buffer, 0, buffer.length);
        }
        stream.flush();
    } finally {
        stream.close();
    }
}
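For the read side, a minimal sketch under the same varying-block-size assumption (Block and createBuffer above are hypothetical names; this reader just inverts what saveBlocks wrote):

public List<byte[]> loadBlocks(File file) throws IOException {
    List<byte[]> blocks = new ArrayList<byte[]>();
    DataInputStream stream = new DataInputStream(
            new BufferedInputStream(new FileInputStream(file)));
    try {
        while (true) {
            int length;
            try {
                length = stream.readInt(); // the size prefix written by saveBlocks
            } catch (EOFException eof) {
                break;                     // clean end of file: no more blocks
            }
            byte[] buffer = new byte[length];
            stream.readFully(buffer);      // read exactly one block
            blocks.add(buffer);
        }
    } finally {
        stream.close();
    }
    return blocks;
}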
After reading part of your question, I wonder if you are just copying bytes between two streams, which makes this simpler; you don't really have to worry about blocks per se.
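And if the original problem is really just that each loop iteration reopens the file and overwrites it, a minimal sketch of appending instead ("someFile.bin" is a placeholder, and bytestobwritten is the byte[] for the current block; better still, open the stream once before the loop):

// true = append: each write adds to the end instead of truncating the file
FileOutputStream out = new FileOutputStream("someFile.bin", true);
try {
    out.write(bytestobwritten);
} finally {
    out.close();
}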
I tried to send an image from one device to another using Bluetooth. I took the Android Bluetooth chat application source code, and it works fine when I send a String, but if I send an image as a byte array, the while loop does not break; EOF is never reached when reading from the InputStream.
Model:1
It receives the image properly, but here I need to pass the resultByteArray length, and I don't know it. How do I know the length of the byte array in the InputStream? inputstream.available() returns 0.
while (true) {
    byte[] resultByteArray = new byte[150827];
    DataInputStream dataInputStream = new DataInputStream(mmInStream);
    dataInputStream.readFully(resultByteArray);
    mHandler.obtainMessage(AppConstants.MESSAGE_READ, dataInputStream.available(), -1, resultByteArray).sendToTarget();
}
Model:2
In this code, the while loop does not break:
ByteArrayOutputStream bao = new ByteArrayOutputStream();
byte[] resultByteArray = new byte[1024];
int bytesRead;
while ((bytesRead = mmInStream.read(resultByteArray)) != -1) {
    Log.i("BTTest1", "bytesRead=>" + bytesRead);
    bao.write(resultByteArray, 0, bytesRead);
}
final byte[] data = bao.toByteArray();
I also tried byte[] resultByteArray = IOUtils.toByteArray(mmInStream);, but that does not work either. I followed the Bluetooth chat sample.
How can I solve this issue?
As noted in the comment, the server needs to put the length of the image in front of the actual image data, and that length field should be a fixed size, such as 4 bytes.
Then, in the while loop, read those 4 bytes first to figure out the length of the image. After that, read exactly that many bytes from the input stream; that is the actual image.
The while loop doesn't need to break while the connection is alive; it should keep waiting for the next image in the same loop. InputStream.read() is a blocking call, and the thread will sleep until enough data arrives on the input stream.
You can then expect another 4 bytes right after the previous image data, as the start of the next image.
DataInputStream dis = new DataInputStream(mmInStream);
while (true) {
    try {
        // Get the length first. readFully() blocks until all 4 bytes arrive;
        // a plain read() may return fewer bytes than requested.
        byte[] bytesLengthOfImage = new byte[4];
        dis.readFully(bytesLengthOfImage);
        ByteBuffer buffer = ByteBuffer.wrap(bytesLengthOfImage);
        buffer.order(ByteOrder.BIG_ENDIAN); // Assume network byte order.
        int lengthOfImage = buffer.getInt();

        byte[] actualImage = new byte[lengthOfImage]; // Mind the memory allocation.
        dis.readFully(actualImage); // Again, block until the whole image has arrived.
        mHandler.obtainMessage(AppConstants.MESSAGE_READ, lengthOfImage, -1, actualImage).sendToTarget();
    } catch (IOException e) {
        // The connection was closed or broken; leave the loop.
        break;
    }
}
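For reference, a sketch of the matching sender, which writes the 4-byte big-endian length before the image (mmOutStream and imageBytes are assumed names for the Bluetooth socket's OutputStream and the encoded image):

DataOutputStream out = new DataOutputStream(mmOutStream);
out.writeInt(imageBytes.length); // 4-byte length prefix, big-endian (network byte order)
out.write(imageBytes);           // the actual image data
out.flush();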
This is a kind of simplified communication protocol. There is an open-source framework for easy protocol implementation, called NFCommunicator:
https://github.com/Neofect/NFCommunicator
It might be over-specification for a simple project, but it is worth a look.
I have a socket listener program running (in Eclipse) on a Mac machine, and an iOS client app is sending images to it as bytes. An image is normally 40 KB or more.
I am facing a strange issue while reading the image bytes from the socket. I have checked many links, and they suggest code like the snippet below for reading all the bytes. The issue is that it reads all the bytes and then does NOT come out of the while loop; after reading everything, it just gets stuck inside the loop. I don't know what to do. Could someone please help me solve this issue?
InputStream input = socket.getInputStream();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] bufferr = new byte[1024];
int read = 0;
long numWritten = 0;
try {
    // Tried both of the while conditions below; both give the same issue
    // while ((read = input.read(bufferr, 0, bufferr.length)) != -1)
    while ((read = input.read(bufferr)) > 0) {
        baos.write(bufferr, 0, read);
        numWritten += read;
        System.out.println("numWritten: " + numWritten);
    }
} catch (IOException e1) {
    e1.printStackTrace();
}
try {
    baos.flush();
} catch (IOException e1) {
    e1.printStackTrace();
}
byte[] data = baos.toByteArray();
Below is my iOS code. I am closing the stream, but the issue remains.
- (void)shareImage
{
    AppDelegate *appDelegate = [UIApplication sharedApplication].delegate;
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *data = UIImagePNGRepresentation(image);
    //[data writeToFile:@"screenshot.png" atomically:YES];
    NSLog(@"[data length] %i: ", [data length]);
    self.sentPing = YES;
    int num = [self.outputStream write:[data bytes] maxLength:[data length]];
    if (-1 == num) {
        NSLog(@"Error writing to stream %@: %@", self.outputStream, [self.outputStream streamError]);
    } else {
        NSLog(@"Wrote %i bytes to stream %@.", num, self.outputStream);
        [self.outputStream close];
        //NSTimer *myRegularTime = [NSTimer scheduledTimerWithTimeInterval:5.0 target:self selector:@selector(ShareNextScreen:) userInfo:nil repeats:NO];
    }
}
input.read(buffer) will block until data is received. If the stream is closed, it will return -1, which is what you are testing for. But since the stream is still open and waiting for data to arrive, it blocks.
Since you did update your question, I will update my answer. Closing a stream is not the same as terminating a TCP session.
Closing a stream will put the connection into FIN_WAIT_1 or FIN_WAIT_2, and it needs to finish and reset before it is fully closed. You need to tell your server that you're shutting down the client and then shut down, or tell the client you're shutting down the server and then close. Basically, both sides need to close when they wish to terminate the connection. Depending on your environment, closing may not even do anything except release references.
In most implementations of low-level socket APIs, you have shutdown(2), which actually sends the FIN TCP packet for a mutual shutdown initiation.
Basically, both parties need to close, or the connection will be stuck in a waiting state. This is defined behavior in various RFCs; the post I linked includes an explanation and a state diagram you can review.
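In Java, the half-close that sends the FIN without giving up the ability to read is Socket.shutdownOutput(); a minimal sketch (socket setup omitted, outputStream assumed to be the socket's output stream):

// After sending the last bytes:
outputStream.flush();
socket.shutdownOutput(); // sends FIN; the peer's read() will now return -1 at end of stream
// The socket can still receive a response from the peer here if the protocol has one.
socket.close();          // finally release the connection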
You are reading to the end of the stream, but the peer hasn't closed the connection, so you block.
Is there a way to ask a DataInputStream if it has content to read? .readByte() will just hang it, waiting for a byte to be read :( Or do I always have to send a dummy byte to make sure it always sees something?
dis.available();
Returns:
an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking.
Is this what you are looking for?
Also check the answers here; you might get even more information: "available" of DataInputStream from Socket
Look at
public int available() throws IOException
According to the docs, it "Returns an estimate of the number of bytes that can be read", so you should call dis.available().
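A minimal sketch of using available() as a non-blocking check (keeping in mind the returned value is only an estimate, and 0 does not mean the stream is closed):

if (dis.available() > 0) {
    byte b = dis.readByte(); // won't block: at least one byte is buffered
    // process b...
} else {
    // nothing to read right now; do other work and poll again later
}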
When reading past the end of the file, an EOFException is thrown, so you can tell there's no more data to read. For example:
DataInputStream inputStream = new DataInputStream(new FileInputStream(file));
int data = 0;
try {
    while (true) {
        data += inputStream.readInt();
    }
} catch (EOFException e) {
    System.out.println("All data was read");
    System.out.println(data);
}
I made a small program to download data and write it to a file.
Here is the code:
public void run()
{
    byte[] bytes = new byte[1024];
    int bytes_read;
    URLConnection urlc = null;
    RandomAccessFile raf = null;
    InputStream i = null;
    try
    {
        raf = new RandomAccessFile("file1", "rw");
    }
    catch (Exception e)
    {
        e.printStackTrace();
        return;
    }
    try
    {
        urlc = new URL(link).openConnection();
        i = urlc.getInputStream();
    }
    catch (Exception e)
    {
        e.printStackTrace();
        return;
    }
    while (canDownload())
    {
        try
        {
            bytes_read = i.read(bytes);
        }
        catch (Exception e)
        {
            e.printStackTrace();
            return;
        }
        if (bytes_read != -1)
        {
            try
            {
                raf.write(bytes, 0, bytes_read);
            }
            catch (Exception e)
            {
                e.printStackTrace();
                return;
            }
        }
        else
        {
            try
            {
                i.close();
                raf.close();
                return;
            }
            catch (Exception e)
            {
                e.printStackTrace();
                return;
            }
        }
    }
}
The problem is that when I download big files, a few bytes are missing at the end of the file.
I tried changing the byte array size to 2K, and the problem was solved. But when I downloaded a bigger file (500 MB), a few bytes were missing again.
I said, "OK, let's try 4K," and changed the byte array size to 4K. It worked!
Nice, but then I downloaded a 4 GB file, and bytes were missing at the end again!
I said, "Cool, let's try 8K," and changed the byte array size to 8K. It worked.
My first question is: why does this happen? (When I change the buffer size, the file doesn't get corrupted.)
OK, in theory the corruption problem can be solved by changing the byte array size to bigger values.
But there's another problem: how can I measure the download speed (over a one-second interval) with big byte array sizes?
For example, let's say my download speed is 2 KB/s and the byte array size is 4K.
My second question is: how can I measure the speed (over a one-second interval) if the thread has to wait for the byte array to be full? My answer would be: change the byte array size to a smaller value. But then the file gets corrupted xD.
After trying to solve the problem by myself, I spent two days searching the internet for a solution, and found nothing.
Please, can you guys answer my two questions? Thanks =D
Edit
Code for canDownload():
synchronized private boolean canDownload()
{
    return can_download;
}
My advice is to use a proven library such as Apache Commons IO instead of trying to roll your own code. For your particular problem, take a look at the copyURLToFile(URL, File) method.
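Usage is a one-liner; a sketch with placeholder URL and file (the two int arguments, connection and read timeouts in milliseconds, exist from Commons IO 2.0 onwards):

FileUtils.copyURLToFile(
        new URL("http://example.com/some.file"), // placeholder URL
        new File("some.file"),                   // placeholder target
        10000,                                   // connection timeout (ms)
        10000);                                  // read timeout (ms)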
I would:
Change the RandomAccessFile to a FileOutputStream.
Get rid of canDownload(), whatever it's for, and set a read timeout on the connection instead.
Simplify the copy loop to this:
while ((bytes_read = i.read(bytes)) > 0)
{
    out.write(bytes, 0, bytes_read);
}
out.close();
i.close();
with all the exception handling outside this loop.
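On the second question (measuring speed): read() returns as soon as some data is available rather than waiting for the buffer to fill, so a big buffer does not prevent per-second measurement. A sketch of sampling throughput inside the copy loop above, reusing i, bytes and out:

long windowStart = System.currentTimeMillis();
long windowBytes = 0;
int bytes_read;
while ((bytes_read = i.read(bytes)) > 0)
{
    out.write(bytes, 0, bytes_read);
    windowBytes += bytes_read;
    long now = System.currentTimeMillis();
    if (now - windowStart >= 1000)
    {
        // bytes per second over the elapsed window
        System.out.println("speed: " + (windowBytes * 1000 / (now - windowStart)) + " B/s");
        windowStart = now;
        windowBytes = 0;
    }
}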
I think you will find the problem is that you closed the underlying InputStream while the RandomAccessFile still had data in its write buffers. This will be why you are occasionally missing the last few bytes of data.
The race condition is between the JVM flushing the final write, and your call to i.close().
Removing the i.close() should fix the problem; it isn't necessary as the raf.close() closes the underlying stream anyway, but this way you give the RAF a chance to flush any outstanding buffers before it does so.
Here is how I compressed the string into a file:
public static void compressRawText(File outFile, String src) {
    FileOutputStream fo = null;
    GZIPOutputStream gz = null;
    try {
        fo = new FileOutputStream(outFile);
        gz = new GZIPOutputStream(fo);
        gz.write(src.getBytes());
        gz.flush();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            gz.close();
            fo.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Here is how I decompressed it:
static int BUFFER_SIZE = 8 * 1024;
static int STRING_SIZE = 2 * 1024 * 1024;

public static String decompressRawText(File inFile) {
    InputStream in = null;
    InputStreamReader isr = null;
    StringBuilder sb = new StringBuilder(STRING_SIZE); // constant resizing is costly, so set STRING_SIZE
    try {
        in = new FileInputStream(inFile);
        in = new BufferedInputStream(in, BUFFER_SIZE);
        in = new GZIPInputStream(in, BUFFER_SIZE);
        isr = new InputStreamReader(in);
        char[] cbuf = new char[BUFFER_SIZE];
        int length = 0;
        while ((length = isr.read(cbuf)) != -1) {
            sb.append(cbuf, 0, length);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            in.close();
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    }
    return sb.toString();
}
The decompression seems to take forever. I have a feeling I am doing too many redundant steps in the decompression part. Any idea how I could speed it up?
EDIT: I have modified the code above based on the recommendations given.
1. I changed the pattern to simplify my code a bit, but if I can't use IOUtils, is it still OK to use this pattern?
2. I set the StringBuilder buffer to 2M, as suggested by entonio. Should I set it a little higher? Memory is still OK; I still have around 10M available according to the heap monitor in Eclipse.
3. I cut the BufferedReader and added a BufferedInputStream, but I am still not sure about the BUFFER_SIZE. Any suggestions?
The above modifications have improved the time taken to loop over all my 30 2M files from almost 30 seconds to around 14, but I need to reduce it to under 10. Is that even possible on Android? Basically, I need to process 60M of text in total; I have divided it into 30 files of 2M each, and before processing each string, I timed how long it takes just to loop over all the files and load each file's String into memory. Since I don't have much experience, would it be better to use 60 files of 1M instead? Are there any other improvements I should adopt? Thanks.
ALSO: Since physical IO is quite time-consuming, and since my compressed files are all quite small (around 2K from 2M of text), is it still possible to do the above, but on a file that is already mapped to memory, possibly using Java NIO? Thanks.
The BufferedReader's only purpose is the readLine() method, which you don't use, so why not just read from the InputStreamReader? Also, decreasing the buffer size may be helpful. Also, you should probably specify the encoding when both reading and writing, though that shouldn't have an impact on performance.
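For example, the reader could name the charset explicitly (assuming the text was written as UTF-8; the write side would then use src.getBytes("UTF-8") to match):

isr = new InputStreamReader(in, "UTF-8"); // explicit encoding instead of the platform default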
edit: more data
If you know the size of the string ahead of time, you should add a length parameter to decompressRawText and use it to initialise the StringBuilder. Otherwise it will be constantly resized to accommodate the result, and that's costly.
edit: clarification
2MB implies a lot of resizes. There is no harm if you specify a capacity higher than the length you end up with after reading (other than temporarily using more memory, of course).
You should wrap the FileInputStream with a BufferedInputStream before wrapping with a GZipInputStream, rather than using a BufferedReader.
The reason is that, depending on the implementation, any of the various input classes in your decoration hierarchy could decide to read on a byte-by-byte basis (and I'd say the InputStreamReader is the most likely to do so). That would translate into many read(2) calls once it gets down to the FileInputStream.
Of course, this may just be superstition on my part. But, if you're running on Linux, you can always test with strace.
Edit: one nice pattern to follow when building up a bunch of stream delegates is to use a single InputStream variable. Then you only have one thing to close in your finally block (and can use Jakarta Commons IOUtils to avoid lots of nested try-catch-finally blocks).
InputStream in = null;
try
{
    in = new FileInputStream("foo");
    in = new BufferedInputStream(in);
    in = new GZIPInputStream(in);
    // do something with the stream
}
finally
{
    IOUtils.closeQuietly(in);
}
Add a BufferedInputStream between the FileInputStream and the GZIPInputStream.
Similarly when writing.
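A sketch of the write path with the buffer in the same position, mirroring compressRawText above (same single-variable pattern; outFile, src and BUFFER_SIZE as defined earlier):

OutputStream out = null;
try
{
    out = new FileOutputStream(outFile);
    out = new BufferedOutputStream(out, BUFFER_SIZE); // buffer sits below the gzip layer
    out = new GZIPOutputStream(out);
    out.write(src.getBytes("UTF-8"));
    out.close(); // finishes the gzip trailer and flushes the buffer
    out = null;
}
finally
{
    IOUtils.closeQuietly(out); // only non-null if something failed above
}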