How to get file size of http file path - java

I'm using the below code to download files from a remote location via HTTP. Some assets are not downloading fully and end up corrupt; it happens roughly 5% of the time. I'm thinking it would be good to verify each download by getting the file size in advance and comparing it to what I've downloaded, to be sure nothing was missed.
Through some Google searches and looking at the objects I'm already working with, I don't see an obvious way to obtain this file size. Can someone point me in the right direction?
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setInstanceFollowRedirects(true);
InputStream is = con.getInputStream();
file = new File(destinationPath + "." + remoteFile.getExtension());
BufferedInputStream bis = new BufferedInputStream(is);
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(file.getAbsolutePath()));
int i;
while ((i = bis.read()) != -1) {
    bos.write(i);
}
bos.flush();
bis.close();
bos.close();

con.getContentLength() may give you what you want, but only if the server provided it as a response header. If the server used "chunked" encoding instead of providing a Content-Length header, then the total length is not available up front.
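For illustration, here is a minimal sketch of that check, reusing the url and file variables from the question; it assumes Java 7+ (for getContentLengthLong and try-with-resources) and is only one way to do it:

HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setInstanceFollowRedirects(true);
long expected = con.getContentLengthLong(); // -1 if the header was not sent
long copied = 0;
try (InputStream in = new BufferedInputStream(con.getInputStream());
     OutputStream out = new BufferedOutputStream(new FileOutputStream(file))) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
        copied += n;
    }
}
if (expected != -1 && copied != expected) {
    // the download was truncated: delete the file and retry, or report an error
}

As a side benefit, copying through an 8 KB buffer is also much faster than the single-byte read loop in the question.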

Check out the getContentLength() method on HttpURLConnection, which inherits it from URLConnection.

You can use the InputStream#available() method. From the Javadoc:
It returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream.
The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes.
FileInputStream fis = new FileInputStream(destinationPath+"."+remoteFile.getExtension());
System.out.println("Total file size to read (in bytes) : "+ fis.available());

Related

Missing one byte when transferring an image over Socket Java

I have a problem transferring a file over a socket.
I wrote a simple client/server app in which the client takes a screenshot and sends it to the server.
The problem is the file is never complete, whatever I do; it's always missing the first byte from the array, which leaves the photo damaged.
When I open the photo in a hex editor and compare the original photo with the one the client sent, I can see the missing byte; if I add it back, the photo opens without any problem. The sent file is missing exactly one byte!
Here are screenshots of the problem, comparing the original photo and the sent photo (images not reproduced here).
Here is the code :
Server ( Receiver ) :
byte[] buf;
InputStream inp;
try (BufferedOutputStream out1 = new BufferedOutputStream(new FileOutputStream(new File("final.jpeg")))) {
    buf = new byte[s.getReceiveBufferSize()];
    inp = new DataInputStream(s.getInputStream());
    Thread.sleep(200);
    int len = 0;
    while ((len = inp.read(buf)) > 0) {
        out1.write(buf, 0, len);
    }
    out1.flush();
    inp.close();
    out1.close();
}
Client ( Sender ):
BufferedImage screenshot = new Robot().createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
ByteArrayOutputStream os = new ByteArrayOutputStream();
ImageIO.write(screenshot, "jpeg", os);
ImageIO.write(screenshot, "jpeg", new File("test.jpeg"));
OutputStream out = new BufferedOutputStream( connection.getOutputStream());
out.write(os.toByteArray());
out.close();
I have tried sending the array the same way I receive it, but no luck. I have tried with and without buffering, I have tried flushing on both sides, I tried turning off the NOD antivirus, and I tried sleeping after sending the length.
I have tried almost everything without success.
I have tried on both my PC and a Windows 7 virtual machine!
Any help will be appreciated.
Edit: hex dumps of the first 10 bytes of the original file and of the sent file were attached here (images not reproduced).
The code you posted does not lose data. Somewhere prior to executing the server code you posted, you have executed a single InputStream.read() of one byte, possibly in a misguided attempt to test for end of stream.
The sleep is literally just a waste of time; remove it. You don't need the DataInput/OutputStreams either.
Please keep in mind that DataInputStream signals end of stream by returning -1 from read(); therefore your server reading loop should look like this:
while ((len = inp.read(buf)) != -1) {
    out1.write(buf, 0, len);
}
Perhaps this helps.
The client code looks fine. It must be the server. You only posted the part where "some" input stream is written to a file. What happens before that? Is anyone doing a read() on the input stream?
Sorry for writing this in the "answer" section. Apparently, I cannot comment yet.
OK, it was my fault! I was looking for something wrong on the server side, but the fault was on the client side! I had opened a DataInputStream to read the order coming from the server without closing it, and that was the problem.
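To illustrate the failure mode diagnosed above (a hypothetical sketch, not the poster's actual code, reusing the s and out1 names from the question): a single stray read() before the copy loop silently consumes the first byte of the payload.

InputStream inp = s.getInputStream();
int stray = inp.read(); // consumes the first byte of the image and discards it
byte[] buf = new byte[8192];
int len;
while ((len = inp.read(buf)) != -1) {
    out1.write(buf, 0, len); // writes everything from the second byte onward
}
// result: the written file is exactly one byte shorter than the original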

JAVA: Stream any file to browser correctly

So I have created my own personal HTTP server in Java from scratch.
So far it is working fine, but with one major flaw.
When I try to pass big files to the browser I get a Java heap space error. I know I can work around this error by raising the JVM's heap size, but I am looking for the long-term solution.
// declare an integer for the byte length of the file
int length = (int) f.length();
// start the file input stream
FileInputStream fis = new FileInputStream(f);
// byte array with the length of the file
byte[] bytes = new byte[length];
// write the file until the stream is exhausted
while ((length = fis.read(bytes)) != -1) {
    write(bytes, 0, length);
}
flush();
// close the file input stream
fis.close();
This way sends the file to the browser successfully and streams it perfectly, but because I am creating a byte array with the length of the file, I get the heap space error when the file is very big.
I have eliminated this issue by using a buffer as shown below, and I don't get heap space errors anymore. BUT the way shown below does not stream the files to the browser correctly. It's as if the file's bytes are being shuffled and sent to the browser all together.
final int bufferSize = 4096;
byte[] buffer = new byte[bufferSize];
FileInputStream fis = new FileInputStream(f);
BufferedInputStream bis = new BufferedInputStream(fis);
while (true) {
    int length = bis.read(buffer, 0, bufferSize);
    if (length < 0) break;
    write(buffer, 0, length);
}
flush();
bis.close();
fis.close();
Note 1:
All the correct response headers are being sent to the browser.
Note 2:
Both ways work perfectly in a desktop browser, but only the first way works in a smartphone's browser (and sometimes it gives me the heap space error).
If someone knows how to send files to a browser and stream them correctly, I would be a very, very happy man.
Thank you in advance! :)
When reading from a BufferedInputStream you can let its buffer handle the buffering; there is no reason to read everything into a byte[] (and certainly not a byte[] the size of the entire File). Read one byte at a time and rely on the internal buffer of the stream. Something like:
FileInputStream fis = new FileInputStream(f);
BufferedInputStream bis = new BufferedInputStream(fis);
int abyte;
while ((abyte = bis.read()) != -1) {
    write(abyte);
}
Hmm... As I see it, you are already trying to send the file in chunks in your code.
As I recall, even the Apache HttpClient + FileUpload solution has a file size limit of about 2.1 GB or so (correct me if I am wrong), so this is a genuinely hard problem...
I haven't tried this solution yet, but as a test you could use java.io.RandomAccessFile in combination with File(Input/Output)Stream on the client and the server, reading and writing not the whole file at once but a sequence of blocks of, say, <= 30 MB, to avoid the annoying out-of-memory errors; a sketch of this idea follows below. An example of using RandomAccessFile can be found here: https://examples.javacodegeeks.com/core-java/io/randomaccessfile/java-randomaccessfile-example/
But you have given very few details :( I mean, is your client supposed to be an ordinary Java application or not?
If you have some additional information, please let me know.
Good luck :)
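Here is the sketch mentioned above, assuming f is the file being served and write(...) is the poster's existing output method from the question; the 30 MB block size is only the example figure from this answer, not a recommendation:

// Stream the file in fixed-size blocks so the whole file never sits in the heap.
try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
    byte[] block = new byte[30 * 1024 * 1024]; // 30 MB per block
    long remaining = raf.length();
    while (remaining > 0) {
        int toRead = (int) Math.min(block.length, remaining);
        raf.readFully(block, 0, toRead); // fills block[0..toRead)
        write(block, 0, toRead);
        remaining -= toRead;
    }
}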

Get the total number of bytes loaded into the BufferedReader before finishing reading from it

I'm reading a large XML file using HttpURLConnection in Java as follows.
StringBuilder responseBuilder = new StringBuilder(1024);
char[] buffer = new char[4096];
BufferedReader br = new BufferedReader(new InputStreamReader(
        new GZIPInputStream(connection.getInputStream()), "UTF-8"));
int n = 0;
while (n >= 0) {
    n = br.read(buffer, 0, buffer.length);
    if (n > 0) responseBuilder.append(buffer, 0, n);
}
Is there any way to get the total number of bytes loaded into the BufferedReader before finishing reading it char by char / line by line / char block by char block?
It sounds like you're trying to find out the size of the BufferedReader without consuming it.
You could try using the HttpURLConnection's getContentLength() method. This may or may not work. What it certainly wouldn't do is give you the uncompressed size of the stream. If it's the latter that you're after, you're almost certainly out of luck.
If I have misunderstood your question, please clarify what it is exactly that you're after.
If the Content-Length header has been set, then you can access it through the connection. But if the content has been compressed, the header might not be set, or it might give the compressed size, whereas I assume you are looking for the uncompressed size.
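As a small illustration of both points, assuming connection is the HttpURLConnection from the question (getContentLengthLong requires Java 7+):

// Size on the wire, i.e. the compressed size when the response is gzipped;
// -1 when the server did not send a Content-Length header (e.g. chunked encoding).
long compressedLength = connection.getContentLengthLong();

// The uncompressed size is only known after decompressing the whole stream:
long uncompressedLength = 0;
try (InputStream gz = new GZIPInputStream(connection.getInputStream())) {
    byte[] buf = new byte[4096];
    int n;
    while ((n = gz.read(buf)) != -1) {
        uncompressedLength += n;
    }
}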

getResourceAsStream returns an HttpInputStream, not the entire file

I have a web application with an applet which copies a file packaged with the applet to the client machine.
When I deploy it to a web server and use: InputStream in = getClass().getResourceAsStream("filename");
in.available() always returns a size of 8192 bytes for every file I tried, which means the file is corrupted when it is copied to the client computer.
The InputStream is of type HttpInputStream (sun.net.protocol.http.HttpUrlConnection$httpInputStream). But when I test the applet in the applet viewer, the files are copied fine, and the InputStream returned is of type BufferedInputStream, whose available() matches the file's size in bytes. I guess that getResourceAsStream uses a BufferedInputStream for the file system and an HttpInputStream over the http protocol.
How can I copy the file completely? Is there a size limit for HttpInputStream?
Thanks a lot.
in.available() tells you how many bytes you can read without blocking, not the total number of bytes you can read from a stream.
Here's an example of copying an InputStream to an OutputStream from org.apache.commons.io.IOUtils:
public static long copyLarge(InputStream input, OutputStream output)
        throws IOException {
    // DEFAULT_BUFFER_SIZE is 1024 * 4 in IOUtils
    byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
    long count = 0;
    int n = 0;
    while (-1 != (n = input.read(buffer))) {
        output.write(buffer, 0, n);
        count += n;
    }
    return count;
}
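A hypothetical usage sketch of the loop above, counting the copied bytes so the caller can sanity-check the transfer (the destination filename is made up for the example):

try (InputStream in = getClass().getResourceAsStream("filename");
     OutputStream out = new FileOutputStream("copy-of-file")) {
    long copied = copyLarge(in, out);
    System.out.println("Copied " + copied + " bytes");
}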
in.available() always returns a size of 8192 bytes for every file I tried, which means the file is corrupted when it is copied to the client computer.
It does not mean that at all!
The in.available() method returns the number of bytes that can be read without blocking. It is not the length of the stream. In general, there is no way to determine the length of an InputStream apart from reading (or skipping) all the bytes in the stream.
(You may have observed that new FileInputStream("someFile").available() usually gives you the file size. But that behaviour is not guaranteed by the spec, and is certainly untrue for some kinds of file, and possibly for some kinds of file system as well. A better way to get the size of a file is new File("someFile").length(), but even that doesn't work in some cases.)
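For completeness, a short sketch of those two approaches side by side (java.nio.file.Files.size is a later-added alternative, mentioned here as an aside rather than as part of the original answer):

File f = new File("someFile");
long len = f.length(); // returns 0 if the file does not exist
long size = java.nio.file.Files.size(f.toPath()); // throws IOException instead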
See #tdavies answer for example code for copying an entire stream's contents. There are also third party libraries that can do this kind of thing; e.g. org.apache.commons.net.io.Util.

Java: Reading a pdf file from URL into Byte array/ByteBuffer in an applet

I'm trying to figure out why this particular snippet of code isn't working for me. I've got an applet which is supposed to read a .pdf and display it with a pdf-renderer library, but for some reason when I read in the .pdf files which sit on my server, they end up corrupt. I've tested this by writing the files back out again.
I've tried viewing the applet in both IE and Firefox, and the corrupt files occur in both. Funny thing is, when I try viewing the applet in Safari (for Windows), the file is actually fine! I understand the JVMs might be different, but I am still lost. I've compiled in Java 1.5. The JVMs are 1.6. The snippet which reads the file is below.
public static ByteBuffer getAsByteArray(URL url) throws IOException {
    ByteArrayOutputStream tmpOut = new ByteArrayOutputStream();
    URLConnection connection = url.openConnection();
    int contentLength = connection.getContentLength();
    InputStream in = url.openStream();
    byte[] buf = new byte[512];
    int len;
    while (true) {
        len = in.read(buf);
        if (len == -1) {
            break;
        }
        tmpOut.write(buf, 0, len);
    }
    tmpOut.close();
    ByteBuffer bb = ByteBuffer.wrap(tmpOut.toByteArray(), 0, tmpOut.size());
    // Lines below used to test if file is corrupt
    //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf");
    //fos.write(tmpOut.toByteArray());
    return bb;
}
I must be missing something, and I've been banging my head trying to figure it out. Any help is greatly appreciated. Thanks.
Edit:
To further clarify my situation: the difference between the files before I read them with the snippet and after is that the ones I output after reading are significantly smaller than they originally were. When opening them, they are not recognized as .pdf files. There are no exceptions being thrown that I'm ignoring, and I have tried flushing to no avail.
This snippet works in Safari, meaning the files are read in their entirety, with no difference in size, and can be opened with any .pdf reader. In IE and Firefox, the files always end up corrupted, consistently at the same smaller size.
I monitored the len variable (when reading a 59 KB file), hoping to see how many bytes got read on each loop iteration. In IE and Firefox, at 18 KB, in.read(buf) returns -1 as if the file had ended. Safari does not do this.
I'll keep at it, and I appreciate all the suggestions so far.
Just in case these small changes make a difference, try this:
public static ByteBuffer getAsByteArray(URL url) throws IOException {
    URLConnection connection = url.openConnection();
    // Since you get a URLConnection, use it to get the InputStream
    InputStream in = connection.getInputStream();
    // Now that the InputStream is open, get the content length
    int contentLength = connection.getContentLength();
    // To avoid having to resize the array over and over and over as
    // bytes are written to the array, provide an accurate estimate of
    // the ultimate size of the byte array
    ByteArrayOutputStream tmpOut;
    if (contentLength != -1) {
        tmpOut = new ByteArrayOutputStream(contentLength);
    } else {
        tmpOut = new ByteArrayOutputStream(16384); // Pick some appropriate size
    }
    byte[] buf = new byte[512];
    while (true) {
        int len = in.read(buf);
        if (len == -1) {
            break;
        }
        tmpOut.write(buf, 0, len);
    }
    in.close();
    tmpOut.close(); // No effect, but good to do anyway to keep the metaphor alive
    byte[] array = tmpOut.toByteArray();
    // Lines below used to test if file is corrupt
    //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf");
    //fos.write(array);
    //fos.close();
    return ByteBuffer.wrap(array);
}
You forgot to close fos, which may result in that file being shorter if your application is still running or is abruptly terminated. Also, I added code to create the ByteArrayOutputStream with an appropriate initial size. (Otherwise Java will have to repeatedly allocate a new array and copy, allocate a new array and copy, which is expensive.) Replace the value 16384 with a more appropriate one; 16 KB is probably small for a PDF, but I don't know what "average" size you expect to download.
Since you use toByteArray() twice (even though one use is in diagnostic code), I assigned its result to a variable. Finally, although it shouldn't make any difference, when you are wrapping the entire array in a ByteBuffer you only need to supply the byte array itself; supplying the offset 0 and the length is redundant.
Note that if you are downloading large PDF files this way, then ensure that your JVM is running with a large enough heap that you have enough room for several times the largest file size you expect to read. The method you're using keeps the whole file in memory, which is OK as long as you can afford that memory. :)
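(For reference, the heap limit for a standalone JVM is raised with the -Xmx flag; the 512 MB figure and the class name below are only examples:)
java -Xmx512m YourMainClass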
I thought I had the same problem as you, but it turned out my problem was that I assumed you always get a full buffer until you reach the end of the stream. Your code does not make that assumption, though.
The examples on the net (e.g. java2s/tutorial) use a BufferedInputStream, but that did not make any difference for me.
You could check whether you actually get the full file in your loop. Then the problem would be in the ByteArrayOutputStream.
Have you tried a flush() before you close the tmpOut stream, to ensure all bytes are written out?
