We have a JSP which is supposed to fetch a PDF from an internal URL and pass it on to the client (like a proxy).
The resulting download is corrupted: after about 18,400 bytes we only get 0x00 bytes until the end. Interestingly, the download is exactly the right size in bytes.
// Get the download
URL url = new URL(urlString); // urlString holds the internal PDF URL (placeholder name)
HttpURLConnection req = (HttpURLConnection)url.openConnection();
req.setDoOutput(true);
req.setRequestMethod("GET");
// Get Binary Response
int contentLength = req.getContentLength();
byte ba[] = new byte[contentLength];
req.getInputStream().read(ba);
ByteArrayInputStream in = new ByteArrayInputStream(ba);
// Prepare Response Headers
response.setContentType(req.getContentType());
response.setContentLength(req.getContentLength());
response.setHeader("Content-Disposition", "attachment; filename=download.pdf");
// Stream to Response
OutputStream output = response.getOutputStream();
//OutputStream output = new FileOutputStream("c:\\temp\\op.pdf");
int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0) output.write(buffer, 0, count);
in.close();
output.close();
req.disconnect();
UPDATE 1: I'm not the only one seeing Java stop streaming at 4379 bytes (link).
UPDATE 2: If I call output.flush() after every write I get more data, 14,599 bytes, and then the nulls. It must have something to do with Tomcat's output buffer limit.
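For reference, the response buffer that UPDATE 2 speculates about can be inspected and enlarged through the standard Servlet API; a minimal sketch (the answer below points at the read() handling as the actual problem, so treat this only as a way to test the buffer hypothesis):
// Sketch only: check and enlarge the response buffer.
int bufferSize = response.getBufferSize();   // the container's default buffer size
response.setBufferSize(64 * 1024);           // must be called before the first byte is written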
The code in the question is all nonsense. You are ignoring the result of the first read(), and you are also wasting both time and space with the ByteArrayInputStream. All you need is this:
int contentLength = req.getContentLength();
// Prepare Response Headers
response.setContentType(req.getContentType());
response.setHeader("Content-Disposition", "attachment; filename=download.pdf");
// Stream to Response
InputStream in = req.getInputStream();
OutputStream output = response.getOutputStream();
int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0) output.write(buffer, 0, count);
Note that the Content-Length is already set for you.
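As a side note, on Java 7+ the same copy can be written with try-with-resources, and on Java 9+ InputStream.transferTo() replaces the manual loop; a minimal sketch along the lines of the code above:
// Sketch only: same proxy logic, with the streams closed automatically and
// the copy loop replaced by InputStream.transferTo() (Java 9+).
response.setContentType(req.getContentType());
response.setHeader("Content-Disposition", "attachment; filename=download.pdf");
try (InputStream in = req.getInputStream();
     OutputStream output = response.getOutputStream()) {
    in.transferTo(output);   // copies until EOF, writing only the bytes actually read
} finally {
    req.disconnect();
}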
Related
public String downloadfile(@RequestParam long id){
Report report = reportService.getReportDetails(String.valueOf(id));
ResponseEntity<FileResponse> fileResponse = null;
String clusterKey = clusterService.getCLusterFromId(report.getId());
File file = new File(report.getFilePath());
fileResponse = restClient.getFileData(report.getFilePath(), clusterKey);
response.setContentType("application/vnd.ms-excel ");
response.setHeader("Content-disposition", "inline; filename=" + file.getName());
try (InputStream is = new ByteArrayInputStream(fileResponse.getBody().getResponseFile());
OutputStream out = response.getOutputStream();)
{
byte[] buffer = new byte[1024];
int bytesRead = 0;
while ((bytesRead = is.read(buffer)) != -1)
{
out.write(buffer, 0, bytesRead);
}
I am getting a Cross-Site Scripting: Persistent issue in the Fortify tool for the above piece of code, on the out.write(buffer, 0, bytesRead) line. The suggestion is that we need to validate the data before sending it back to the browser. I am not sure how to validate a byte array. After doing some searching I found that we can use the getValidFileContent() method of the ESAPI Validator, but I am not sure how to use it.
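For what it's worth, here is a rough sketch of how that validation is usually wired in, assuming ESAPI 2.x, where the Validator interface exposes getValidFileContent(context, input, maxBytes, allowNull); the context label and size limit below are placeholders:
// Sketch only (ESAPI 2.x assumed): validate the bytes before echoing them to the browser.
// Imports assumed: org.owasp.esapi.ESAPI, org.owasp.esapi.errors.ValidationException
byte[] raw = fileResponse.getBody().getResponseFile();
try {
    byte[] safe = ESAPI.validator().getValidFileContent(
            "reportDownload",   // context label used in ESAPI's error messages (placeholder)
            raw,                // the bytes to validate
            10 * 1024 * 1024,   // maximum allowed size in bytes (placeholder)
            false);             // do not allow null content
    out.write(safe, 0, safe.length);
} catch (ValidationException ve) {
    // refuse to send unvalidated content back to the client
    response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Invalid file content");
}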
I have data coming into an input stream in a servlet, and I want to send the received data to the client as a file download.
This is the code I have come up with so far, but with no success. Any help would be great! Thanks!
final InputStream inStream = new BufferedInputStream(fetchFile.getObjectContent());
byte[] bytes = IOUtils.toByteArray(inStream);
ServletOutputStream out = response.getOutputStream();
InputStream in = new ByteArrayInputStream(bytes);
response.setContentType("application/octet-stream");
response.setContentLength(bytes.length);
response.setHeader("Content-Disposition","attachment;filename=\"" + filename + "\"");
byte[] outputByte = new byte[bytes.length];
while(in.read(outputByte, 0, bytes.length) != -1)
{
out.write(outputByte, 0, bytes.length);
}
in.close();
out.flush();
out.close();
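For reference, the usual fix for this kind of loop is to write only the number of bytes each read() call actually returned; a minimal sketch using the same variables as the snippet above (bytes and filename assumed to be defined as shown there):
// Sketch only: stream the buffered bytes to the client, writing exactly the
// number of bytes returned by each read() call instead of the whole buffer.
response.setContentType("application/octet-stream");
response.setContentLength(bytes.length);
response.setHeader("Content-Disposition","attachment;filename=\"" + filename + "\"");
try (InputStream in = new ByteArrayInputStream(bytes);
     OutputStream out = response.getOutputStream()) {
    byte[] buffer = new byte[8192];
    int count;
    while ((count = in.read(buffer)) != -1) {
        out.write(buffer, 0, count);
    }
}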
I have the following problem: I have an HttpServlet that creates a file and has to return it to the user as a download.
byte[] byteArray = allegato.getFile();
InputStream is = new ByteArrayInputStream(byteArray);
Base64InputStream base64InputStream = new Base64InputStream(is);
int chunk = 1024;
byte[] buffer = new byte[chunk];
int bytesRead = -1;
OutputStream out = new ByteArrayOutputStream();
while ((bytesRead = base64InputStream.read(buffer)) != -1) {
out.write(buffer, 0, bytesRead);
}
As you can see, I have a byteArray object that is an array of bytes (byte[] byteArray) and I convert it into a file this way:
First I convert it into an InputStream object.
Then I convert the InputStream object into a Base64InputStream.
Finally I write this Base64InputStream to a ByteArrayOutputStream object (the OutputStream out object).
I think that up to here it should be OK (is it OK, or am I missing something in the file creation?).
Now my servlet has to return this file as a download (so the user receives the download in the browser).
So what do I have to do to obtain this behavior? I think that I have to put this OutputStream object into the servlet response, something like:
ServletOutputStream stream = res.getOutputStream();
But I have no idea how exactly to do it. Do I also have to set a specific MIME type for the file?
It's pretty easy to do.
byte[] byteArray = //your byte array
response.setContentType("YOUR CONTENT TYPE HERE");
response.setHeader("Content-Disposition", "filename=\"THE FILE NAME\"");
response.setContentLength(byteArray.length);
OutputStream os = response.getOutputStream();
try {
os.write(byteArray , 0, byteArray.length);
} catch (Exception excp) {
//handle error
} finally {
os.close();
}
EDIT:
I've noticed that you are first decoding your data from Base64, so you should do the following:
OutputStream os = response.getOutputStream();
byte[] buffer = new byte[chunk];
int bytesRead = -1;
while ((bytesRead = base64InputStream.read(buffer)) != -1) {
os.write(buffer, 0, bytesRead);
}
You do not need the intermediate ByteArrayOutputStream
With org.apache.commons.compress.utils.IOUtils you can just "copy" from one file or stream (e.g. your base64InputStream) to the output stream:
response.setContentType([your file mime type]);
IOUtils.copy(base64InputStream, response.getOutputStream());
response.setStatus(HttpServletResponse.SC_OK);
You'll find that class here https://mvnrepository.com/artifact/org.apache.commons/commons-compress
A similar class (also named IOUtils) is also in Apache Commons IO (https://mvnrepository.com/artifact/commons-io/commons-io).
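For comparison, a sketch of the same copy using the Commons IO variant of IOUtils (assuming commons-io is on the classpath):
// Sketch only: identical idea with org.apache.commons.io.IOUtils instead of commons-compress.
response.setContentType("application/pdf");   // placeholder; use the file's real MIME type
org.apache.commons.io.IOUtils.copy(base64InputStream, response.getOutputStream());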
I'm currently trying to read in an image file from the server, but I'm either getting incomplete data or an Exception in thread "main" java.lang.NegativeArraySizeException.
Does this have something to do with the buffer size? I have tried using a static size instead of the content length. Please kindly advise.
URL myURL = new URL(url);
HttpURLConnection connection = (HttpURLConnection)myURL.openConnection();
connection.setRequestMethod("GET");
status = connection.getResponseCode();
if (status == 200)
{
int size = connection.getContentLength() + 1024;
byte[] bytes = new byte[size];
InputStream input = new ByteArrayInputStream(bytes);
FileOutputStream out = new FileOutputStream(file);
input = connection.getInputStream();
int data = input.read(bytes);
while(data != -1){
out.write(bytes);
data = input.read(bytes);
}
out.close();
input.close();
Let's examine the code:
int size = connection.getContentLength() + 1024;
byte[] bytes = new byte[size];
Why do you add 1024 bytes to the size? What's the point? The buffer size should be something large enough to avoid too many reads, but small enough to avoid consuming too much memory. Set it to 4096, for example.
InputStream input = new ByteArrayInputStream(bytes);
FileOutputStream out = new FileOutputStream(file);
input = connection.getInputStream();
Why do you create a ByteArrayInputStream, and then forget about it completely? You don't need a ByteArrayInputStream, since you don't read from a byte array, but from the connection's input stream.
int data = input.read(bytes);
This reads bytes from the input. The max number of bytes read is the length of the byte array. The actual number of bytes read is returned and stored in data.
while (data != -1) {
out.write(bytes);
data = input.read(bytes);
}
So you have read data bytes, but you don't write only the first data bytes of the array: you write the whole array of bytes. That is wrong. Suppose your array is of size 4096 and data is 400; instead of writing the 400 bytes that have been read, you write those 400 bytes plus the remaining 3696 bytes of the array, which could be 0, or could have values coming from a previous read. It should be
out.write(bytes, 0, data);
Finally:
out.close();
input.close();
If any exception occurs before that, those two streams will never be closed. Do that a few times, and your whole OS won't have file descriptors available anymore. Use the try-with-resources statement to be sure your streams are closed, no matter what happens.
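A minimal sketch of that download loop written with try-with-resources, reusing the connection and file variables from the question:
// Sketch only: both streams are closed automatically, even if read() or write() throws.
try (InputStream input = connection.getInputStream();
     FileOutputStream out = new FileOutputStream(file)) {
    byte[] buffer = new byte[4096];
    int data;
    while ((data = input.read(buffer)) != -1) {
        out.write(buffer, 0, data);   // write only the bytes actually read
    }
}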
This code can help you
input = connection.getInputStream();
byte[] buffer = new byte[4096];
int n = - 1;
OutputStream output = new FileOutputStream( file );
while ( (n = input.read(buffer)) != -1)
{
if (n > 0)
{
output.write(buffer, 0, n);
}
}
output.close();
I use the following code to download a file from a URL:
while(status==Status.DOWNLOADING){
HttpURLConnection conn=(HttpURLConnection)url.openConnection();
conn.connect();
int size=conn.getContentLength();
BufferedInputStream bin=new BufferedInputStream(conn.getInputStream());
byte[] buffer=new byte[1024];
int read=bin.read(buffer);
if(read==-1)
break;
downloaded+=read;
}
For some URLs the read() method returns -1 before reading up to the size (content length) of the download.
Can anybody suggest what's happening with this code? Any suggestions are appreciated.
It's not guaranteed that a web server provides a content length in the HTTP header, therefore you should not rely on it. Just read the stream like this:
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] buf = new byte[1024];
int len;
while ((len = bin.read(buf)) > 0) {
bos.write(buf, 0, len);
}
byte[] data = bos.toByteArray();