Cross-Site Scripting: Persistent issue while writing byte array to outputstream - java

public String downloadfile(@RequestParam long id) {
    Report report = reportService.getReportDetails(String.valueOf(id));
    String clusterKey = clusterService.getClusterFromId(report.getId());
    File file = new File(report.getFilePath());
    ResponseEntity<FileResponse> fileResponse =
            restClient.getFileData(report.getFilePath(), clusterKey);
    response.setContentType("application/vnd.ms-excel");
    response.setHeader("Content-disposition", "inline; filename=" + file.getName());
    try (InputStream is = new ByteArrayInputStream(fileResponse.getBody().getResponseFile());
         OutputStream out = response.getOutputStream()) {
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = is.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
        }
    }
    // ... (rest of the method omitted in the question)
}
Fortify reports a Cross-Site Scripting: Persistent issue for the code above, on the out.write(buffer, 0, bytesRead) line. The suggestion is that the data must be validated before it is sent back to the browser, but I am not sure how to validate a byte array. After some searching I found that ESAPI's Validator.getValidFileContent() method can be used, but I am not sure how to use it.
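For reference, a minimal sketch of how getValidFileContent(...) is typically called, assuming ESAPI is on the classpath and configured via ESAPI.properties; the "reportDownload" context label and the size limit below are illustrative values, not anything from the question:

import org.owasp.esapi.ESAPI;
import org.owasp.esapi.errors.ValidationException;

byte[] raw = fileResponse.getBody().getResponseFile();
try {
    // Signature: getValidFileContent(context, input, maxBytes, allowNull)
    byte[] safe = ESAPI.validator().getValidFileContent(
            "reportDownload", raw, 10_000_000, false);
    out.write(safe);
} catch (ValidationException e) {
    // Refuse the response rather than stream unvalidated bytes
    response.sendError(HttpServletResponse.SC_BAD_REQUEST);
}

Note also that the filename in the Content-disposition header is derived from request data, so sanitizing or encoding file.getName() there is often what actually clears this class of finding.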

Related

Java Proxy Output Stream corrupted with null bytes

We have a JSP which is supposed to fetch a PDF from an internal URL and pass this PDF on to the client (like a proxy).
The resulting download is corrupted. After about 18'400 bytes we only get 00 bytes till the end. Interestingly the download is exactly the right size in bytes.
// Get the download
URL url = new URL(urlString); // urlString holds the internal PDF URL
HttpURLConnection req = (HttpURLConnection) url.openConnection();
req.setDoOutput(true);
req.setRequestMethod("GET");
// Get Binary Response
int contentLength = req.getContentLength();
byte ba[] = new byte[contentLength];
req.getInputStream().read(ba);
ByteArrayInputStream in = new ByteArrayInputStream(ba);
// Prepare Response Headers
response.setContentType(req.getContentType());
response.setContentLength(req.getContentLength());
response.setHeader("Content-Disposition", "attachment; filename=download.pdf");
// Stream to Response
OutputStream output = response.getOutputStream();
//OutputStream output = new FileOutputStream("c:\\temp\\op.pdf");
int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0) output.write(buffer, 0, count);
in.close();
output.close();
req.disconnect();
UPDATE 1: I'm not the only one seeing Java stop streaming at 4379 bytes (link).
UPDATE 2: If I call output.flush() after every write I get more data, 14,599 bytes, and then the nulls. It must have something to do with Tomcat's output buffer limit.
int contentLength = req.getContentLength();
byte ba[] = new byte[contentLength];
req.getInputStream().read(ba);
ByteArrayInputStream in = new ByteArrayInputStream(ba);
// Prepare Response Headers
response.setContentType(req.getContentType());
response.setContentLength(req.getContentLength());
response.setHeader("Content-Disposition", "attachment; filename=download.pdf");
// Stream to Response
OutputStream output = response.getOutputStream();
//OutputStream output = new FileOutputStream("c:\\temp\\op.pdf");
int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0) output.write(buffer, 0, count);
This code is all nonsense. You are ignoring the result of the first read() and you are also wasting both time and space with the ByteArrayInputStream. All you need is this:
int contentLength = req.getContentLength();
// Prepare Response Headers
response.setContentType(req.getContentType());
response.setHeader("Content-Disposition", "attachment; filename=download.pdf");
// Stream to Response
InputStream in = req.getInputStream();
OutputStream output = response.getOutputStream();
int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0) output.write(buffer, 0, count);
Note that the Content-Length is already set for you.
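As an aside, on Java 9 and later the hand-written copy loop can be replaced with InputStream.transferTo, which performs the same buffered copy internally:

InputStream in = req.getInputStream();
OutputStream output = response.getOutputStream();
// Java 9+: streams everything from in to output using an internal buffer
in.transferTo(output);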

How can I put a downloadable file into the HttpServletResponse?

I have the following problem: I have an HttpServlet that creates a file and returns it to the user, who has to receive it as a download.
byte[] byteArray = allegato.getFile();
InputStream is = new ByteArrayInputStream(byteArray);
Base64InputStream base64InputStream = new Base64InputStream(is);
int chunk = 1024;
byte[] buffer = new byte[chunk];
int bytesRead = -1;
OutputStream out = new ByteArrayOutputStream();
while ((bytesRead = base64InputStream.read(buffer)) != -1) {
    out.write(buffer, 0, bytesRead);
}
As you can see, I have a byteArray object that is an array of bytes (byte[] byteArray), and I convert it into a file this way:
First I convert it into an InputStream object.
Then I convert the InputStream object into a Base64InputStream.
Finally I write this Base64InputStream to a ByteArrayOutputStream object (the OutputStream out object).
I think that up to here it should be OK (is it, or am I missing something in the file creation?).
Now my servlet has to return this file as a download (so the user receives the download in the browser).
So what do I have to do to obtain this behavior? I think I have to put this OutputStream object into the servlet response, something like:
ServletOutputStream stream = res.getOutputStream();
But I have no idea how exactly to do it. Do I also have to set a specific MIME type for the file?
It's pretty easy to do.
byte[] byteArray = //your byte array
response.setContentType("YOUR CONTENT TYPE HERE");
response.setHeader("Content-Disposition", "attachment; filename=\"THE FILE NAME\"");
response.setContentLength(byteArray.length);
OutputStream os = response.getOutputStream();
try {
    os.write(byteArray, 0, byteArray.length);
} catch (Exception excp) {
    //handle error
} finally {
    os.close();
}
EDIT:
I've noticed that you are first decoding your data from base64, so you should do the following:
OutputStream os = response.getOutputStream();
byte[] buffer = new byte[chunk];
int bytesRead = -1;
while ((bytesRead = base64InputStream.read(buffer)) != -1) {
    os.write(buffer, 0, bytesRead);
}
You do not need the intermediate ByteArrayOutputStream.
With org.apache.commons.compress.utils.IOUtils you can just "copy" from one file or stream (e.g. your base64InputStream) to the output stream:
response.setContentType([your file mime type]);
IOUtils.copy(base64InputStream, response.getOutputStream());
response.setStatus(HttpServletResponse.SC_OK);
You'll find that class here https://mvnrepository.com/artifact/org.apache.commons/commons-compress
A similar class (also named IOUtils) is also in Apache Commons IO (https://mvnrepository.com/artifact/commons-io/commons-io).
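Putting the pieces together, here is a minimal sketch of a complete doGet, assuming allegato.getFile() returns the Base64-encoded bytes and the file is a PDF (both carried over from the question as assumptions; adjust the MIME type and filename to your case):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.codec.binary.Base64InputStream;
import org.apache.commons.io.IOUtils;

protected void doGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
    byte[] byteArray = allegato.getFile();   // Base64-encoded content (assumption)
    res.setContentType("application/pdf");   // assumed MIME type
    res.setHeader("Content-Disposition", "attachment; filename=\"report.pdf\"");
    try (Base64InputStream in =
             new Base64InputStream(new ByteArrayInputStream(byteArray))) {
        // Decode on the fly and stream straight into the response
        IOUtils.copy(in, res.getOutputStream());
    }
}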

Network related information about the file transfer program

I am relatively new to Java and would like some information.
I am using the following code to transfer a file from one location to another:
InputStream in = new FileInputStream(sourceLocation);
OutputStream out = new FileOutputStream(targetLocation);
// Copy the bits from instream to outstream
byte[] buf = new byte[1024];
int len;
while ((len = in.read(buf)) > 0) {
    out.write(buf, 0, len);
}
in.close();
out.close();
I wanted to know what channel it uses for the transfer, and any other network-related information about the program.
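For what it's worth, FileInputStream and FileOutputStream perform purely local disk I/O, so no network channel is involved in this code; the same copy can also be written with NIO in a single call:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Local file-to-file copy; the JDK handles buffering and closing internally
Files.copy(Paths.get(sourceLocation), Paths.get(targetLocation),
        StandardCopyOption.REPLACE_EXISTING);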

EOFException: Unexpected end of ZLIB input stream

I am facing this issue while unzipping a file and writing it to another file. Here is the code. Can anyone please let me know what changes are required?
I get this exception on the line with while ((len = zis.read(buffer)) > 0).
private FileItem readZippedFileRequest(HttpServletRequest request, Part part, String fileName) throws IOException {
    FileItem fileItem = null;
    byte[] buffer = new byte[1024];
    InputStream inputStream = part.getInputStream();
    ZipInputStream zis = new ZipInputStream(inputStream);
    ZipEntry entry;
    while ((entry = zis.getNextEntry()) != null) {
        ByteArrayOutputStream fos = new ByteArrayOutputStream();
        int len;
        while ((len = zis.read(buffer)) > 0) {
            fos.write(buffer, 0, len);
        }
        InputStream myByteArray = new ByteArrayInputStream(fos.toByteArray());
        fileItem = createCSVFile(myByteArray, fileName, ImportExportConstant.FILE_TYPE_EXCEL);
    }
    return fileItem;
}
There's nothing wrong with your code. There is something wrong with the file, as the message says. Are you sure it is zipped, and not GZipped for example? It would be more usual for a part to be GZipped. Try a GZIPInputStream.
NB there's no need for the ByteArrayInputStream. It's a complete waste of time and space. Just pass the zip/gzip input stream directly to your createCSVFile() method.
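A minimal sketch of that suggestion, with the intermediate ByteArrayOutputStream/ByteArrayInputStream removed (createCSVFile and ImportExportConstant are from the question's own code):

private FileItem readZippedFileRequest(HttpServletRequest request, Part part, String fileName) throws IOException {
    FileItem fileItem = null;
    try (ZipInputStream zis = new ZipInputStream(part.getInputStream())) {
        ZipEntry entry;
        while ((entry = zis.getNextEntry()) != null) {
            // zis.read() returns -1 at the end of the current entry,
            // so the entry's bytes can be consumed directly
            fileItem = createCSVFile(zis, fileName, ImportExportConstant.FILE_TYPE_EXCEL);
        }
    }
    return fileItem;
}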
I had this error too and searched a little... I read that there must be a zis.closeEntry() before len = zis.read(buffer), but when I tried it the error just appeared at zis.closeEntry() instead.
I asked Google, and here is the answer:
I tried and wrote a little, then I moved the code from a throws IOException declaration into a try/catch block, and now everything works.
The exception is a well-known bug. You have to put everything in a try/catch block and do nothing in the catch.
private FileItem readZippedFileRequest(HttpServletRequest request, Part part, String fileName) {
    FileItem fileItem = null;
    byte[] buffer = new byte[1024];
    try {
        InputStream inputStream = part.getInputStream();
        ZipInputStream zis = new ZipInputStream(inputStream);
        ZipEntry entry;
        while ((entry = zis.getNextEntry()) != null) {
            ByteArrayOutputStream fos = new ByteArrayOutputStream();
            int len;
            while ((len = zis.read(buffer)) > 0) {
                fos.write(buffer, 0, len);
            }
            InputStream myByteArray = new ByteArrayInputStream(fos.toByteArray());
            fileItem = createCSVFile(myByteArray, fileName, ImportExportConstant.FILE_TYPE_EXCEL);
        }
    } catch (IOException ex) {
        // Do nothing here
    }
    return fileItem;
}

how to download large files without memory issues in java

When I try to download a large file (260 MB) from the server, I get this error: java.lang.OutOfMemoryError: Java heap space. I am sure my heap size is less than 252 MB. Is there any way I can download large files without increasing the heap size?
How can I download large files without hitting this issue? My code is given below:
String path = "C:/temp.zip";
response.addHeader("Content-Disposition", "attachment; filename=\"test.zip\"");
byte[] buf = new byte[1024];
try {
    File file = new File(path);
    long length = file.length();
    BufferedInputStream in = new BufferedInputStream(new FileInputStream(file));
    ServletOutputStream out = response.getOutputStream();
    while ((in != null) && ((length = in.read(buf)) != -1)) {
        out.write(buf, 0, (int) length);
    }
    in.close();
    out.close();
} // ... (catch block omitted in the question)
There are two places where I can see you could potentially be building up memory usage:
1. In the buffer reading your input file.
2. In the buffer writing to your output stream (HTTPOutputStream?).
For #1 I would suggest reading directly from the file via FileInputStream, without the BufferedInputStream. Try this first and see if it resolves your issue, i.e.:
FileInputStream in = new FileInputStream(file);
instead of:
BufferedInputStream in = new BufferedInputStream(new FileInputStream(file));
If #1 does not resolve the issue, you could try periodically flushing the output stream after so much data is written (decrease the chunk size if necessary), i.e.:
try {
    FileInputStream fileInputStream = new FileInputStream(file);
    byte[] buf = new byte[8192];
    int bytesread = 0, bytesBuffered = 0;
    while ((bytesread = fileInputStream.read(buf)) > -1) {
        out.write(buf, 0, bytesread);
        bytesBuffered += bytesread;
        if (bytesBuffered > 1024 * 1024) { // flush after 1MB
            bytesBuffered = 0;
            out.flush();
        }
    }
} finally {
    if (out != null) {
        out.flush();
    }
}
Unfortunately you have not mentioned what type out is. If you have memory issues, I guess it is a ByteArrayOutputStream. If so, replace it with a FileOutputStream and write the bytes you are downloading directly to a file.
BTW, do not use the read() method that reads byte by byte; use read(byte[] arr) instead. It is much faster.
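To illustrate the difference that tip is pointing at (both loops copy the same data; only the number of I/O calls changes):

// Slow: one I/O call per byte
int b;
while ((b = in.read()) != -1) {
    out.write(b);
}

// Fast: one I/O call per 8 KB block
byte[] arr = new byte[8192];
int n;
while ((n = in.read(arr)) != -1) {
    out.write(arr, 0, n);
}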
First, you can remove the (in != null) from your while statement; it's unnecessary. Second, try removing the BufferedInputStream and just do:
FileInputStream in = new FileInputStream(file);
There's nothing wrong (in regard to memory usage) with the code you've shown. Either the servlet container is configured to buffer the entire response (look at the web.xml configuration), or the memory is being leaked elsewhere.
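If container buffering does turn out to be the culprit, a minimal sketch of the usual mitigation (assuming Servlet 3.1+) is to declare the length up front and keep the response buffer small, so the container can stream instead of accumulating the whole body:

// Both calls must happen before any content is written
response.setContentLengthLong(file.length()); // lets the container skip buffering to compute the length
response.setBufferSize(8192);                 // keep the in-memory response buffer small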
