I have a servlet which is used to display an image. This servlet is actually called by:
<img src="/displaySessionImage?widgetName=something"/>
My doGet and doPost both redirect to this method:
protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    HttpSession session = request.getSession();
    String widgetName = request.getParameter("widgetName");
    try {
        // this is my file manager which was stored earlier
        StorageFile file = (StorageFile) session.getAttribute(widgetName);
        response.setContentType(file.getContentType());
        // the file manager can retrieve an input stream
        InputStream in = file.getInputStream();
        OutputStream outImage = response.getOutputStream();
        byte[] buf = new byte[1024];
        int count = 0;
        while ((count = in.read(buf)) >= 0) {
            outImage.write(buf, 0, count);
        }
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
But this code does not work; the image is not displayed. I think it fails because I stored the file manager that contains the input stream in a session. The same method works for another image file that is retrieved from the database and not stored in the session. I have actually printed out the input stream, and it contains the same data as the database file.
Is there something wrong with the code?
Or can I actually not store a file manager that contains an input stream in a session?
Or am I using the input stream in the wrong way?
You are really not clear about what is actually happening, which is perhaps just ignorance. But storing and passing an InputStream around in the session is already not a good sign. Firstly, it is not serializable. Secondly, you're fully detaching the input stream from the context where it was created (so it might implicitly have been closed/released once the initial context is finished). Thirdly, an input stream can often be read only once (so once it's read, it cannot be read again; you'd have to create a new one).
The normal approach is to read the InputStream into a byte[] directly after its creation and then store that byte[] in the session instead.
InputStream input = uploadedFile.getInputStream();
ByteArrayOutputStream output = new ByteArrayOutputStream();
// Copy bytes from input to output the usual way.
byte[] buffer = new byte[1024];
int length;
while ((length = input.read(buffer)) > 0) {
    output.write(buffer, 0, length);
}
byte[] content = output.toByteArray();
// Now store it in the session.
session.setAttribute(widgetName, content);
And then in the image servlet, just do
// ...
response.getOutputStream().write(content);
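Filled out a bit, a minimal sketch of such an image servlet could look like the following (assuming, as in the question, that the byte[] was stored under the widgetName parameter, and that the content type is known or stored alongside it):
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
    // Retrieve the previously stored bytes from the session.
    byte[] content = (byte[]) request.getSession().getAttribute(request.getParameter("widgetName"));
    response.setContentType("image/jpeg"); // assumption: look up the real content type the same way
    response.getOutputStream().write(content);
}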
You only need to be aware that each byte of a byte[] eats one byte of the JVM's memory. Be sure that you don't go overboard. Remove the attribute from the session as soon as you don't need it anymore, and make use of temp file storage if necessary, certainly if you have to deal with large files.
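As an illustration of the temp file route, a sketch (assuming the same uploadedFile as above; the file path, being a plain String, is safe to keep in the session):
// Write the upload to a temp file instead of holding it in memory.
File tempFile = File.createTempFile("upload-", ".tmp");
tempFile.deleteOnExit();
try (InputStream input = uploadedFile.getInputStream();
     OutputStream output = new FileOutputStream(tempFile)) {
    byte[] buffer = new byte[1024];
    int length;
    while ((length = input.read(buffer)) > 0) {
        output.write(buffer, 0, length);
    }
}
// Store only the path in the session; the image servlet can then stream from the file.
session.setAttribute(widgetName, tempFile.getAbsolutePath());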
Update: as per your comment on the question:
I am using Firebug; the response tab is empty. In the header tab, the response header contains: content-type: image/jpeg, content-length: 0, the server and the date.
A content length of 0 confirms that the input stream had already been read (or its source had implicitly been released). This only confirms my initial guesses. No, manually setting the content length header won't solve the problem. The servlet container already automatically takes care of it when the response body fits fully in the default response buffer; it would otherwise switch to chunked encoding anyway.
Your code is not working because your img syntax is wrong, which is why you are not able to display the uploaded image.
Try this:
<img src="${pageContext.request.contextPath}/submit/Java/2.jpg">
Here, /submit is the folder created in the project folder, i.e. the folder where all uploaded images are saved.
Up till early this year the US Treasury web site posted monthly US Receipts and Outlays data in txt format. It was easy to write a program to read and store the info. All I used were:
URL url = new URL("https://www.fiscal.treasury.gov/fsreports/rpt/mthTreasStmt/mts1214.txt");
URLConnection connection = url.openConnection();
InputStream is = connection.getInputStream();
Then I just read the InputStream into a local file.
Now when I try the same code, for May, I get an InputStream with nothing in it.
Just clicking on "https://www.fiscal.treasury.gov/fsreports/rpt/mthTreasStmt/mts0415.xlsx" opens an excel worksheet (the download path has since changed).
Which is great if you don't mind clicking on each link separately ... saving the file somewhere ... opening it manually to enable editing ... then saving it again as a real .xlsx file (because they really hand you an .xls file.)
But when I create a URL from that link and use it to get an InputStream, the stream is empty. I also tried url.openStream() directly. No difference.
Can anyone see a way I can resume using a program to read the new format?
In case it's of interest, I now use this code to write the stream to the file bit by bit... but there are no bits, so I don't know if it works.
static void copyInputStreamToFile(InputStream in, File file) {
    try {
        OutputStream out = new FileOutputStream(file);
        byte[] buf = new byte[1024];
        System.out.println("reading: " + in.read(buf));
        // This is what tells me it is empty, i.e. the loop below is ignored.
        int len;
        while ((len = in.read(buf)) > 0) {
            out.write(buf, 0, len);
        }
        out.close();
        in.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Any help is appreciated.
I'm trying to develop something like Dropbox (a very basic one). Downloading one file is really easy: just use a ServletOutputStream. What I want is: when the client asks for multiple files, I zip the files on the server side and then send them to the user. But if the files are big, it takes too long to zip them and send them to the user.
Is there any way to send the files while they are being compressed?
Thanks for your help.
Part of the Java API for ZIP files is actually designed to provide "on the fly" compression. It all fits nicely both with the java.io API and the servlet API, which means this is even... kind of easy (no multithreading required - even for performance reasons, because usually your CPU will be faster at ZIPping than your network will be at sending the contents).
The part you'll be interacting with is ZipOutputStream. It is a FilterOutputStream (which means it is designed to wrap an output stream that already exists - in your case, that would be the response's OutputStream), and will compress every byte you send it, using ZIP compression.
So, say you have a GET request:
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Your code to handle the request
    List<YourFileObject> responseFiles = ... // Whatever you need to do

    // We declare that the response will contain raw bytes
    response.setContentType("application/octet-stream");

    // We open a ZIP output stream
    try (ZipOutputStream zipStream = new ZipOutputStream(response.getOutputStream())) { // This is Java 7, but not that different from Java 6
        // We need to loop over each file you want to send
        for (YourFileObject fileToSend : responseFiles) {
            // We give a name to the file
            zipStream.putNextEntry(new ZipEntry(fileToSend.getName()));
            // and we copy its content
            copy(fileToSend, zipStream);
        }
    }
}
Of course, you should do proper exception handling. A couple of quick notes though:
The ZIP file format mandates that each file has a name, so you must create a new ZipEntry each time you start a new file (you'll probably get an IllegalStateException if you do not, anyway)
Proper use of the API would be that you close each entry once you are done writing to it (at the end of the file). BUT: the Java implementation does that for you: each time you call putNextEntry it closes the previous one (if need be) all by itself
Likewise, you must not forget to close the ZIP stream, because this will properly close the last entry AND flush everything that is needed to create a proper ZIP file. Failure to do so will result in a corrupt file. Here, the try-with-resources statement does this: it closes the ZipOutputStream once everything is written to it.
The copy method here is just what you would use to transfer all the bytes from the original file to the output stream; there is nothing ZIP-specific about it. Just call outputStream.write(byte[] bytes).
EDIT: to clarify...
For example, given a YourFileType that has the following methods:
public interface YourFileType {
    public byte[] getContent();
    public InputStream getContentAsStream();
}
Then the copy method could look like this (this is all very basic Java IO; you could maybe use a library such as Commons IO to avoid reinventing the wheel...):
public void copy(YourFileType file, OutputStream os) throws IOException {
    os.write(file.getContent());
}
Or, for a full streaming implementation:
public void copy(YourFileType file, OutputStream os) throws IOException {
    try (InputStream fileContent = file.getContentAsStream()) {
        byte[] buffer = new byte[4096]; // 4096 is kind of a magic number
        int readBytesCount = 0;
        while ((readBytesCount = fileContent.read(buffer)) >= 0) {
            os.write(buffer, 0, readBytesCount);
        }
    }
}
Using this kind of implementation, your client will start receiving a response almost as soon as you start writing to the ZipOutputStream (the only delay would be that of internal buffers), meaning it should not time out (unless you spend too long building the content to send - but that would not be the ZIPping part's fault).
I'm working on a HTTP server in Java, which for testing purposes is running under Windows 8.1.
The way it's coded, when a certain parameter is set it changes the HTTP header and sends the file through the socket with something that works kind of like:
socket.outputStream.write(filter.read());
Assume that the communication works fine, since I have tested it with various other filters and it works perfectly.
One of the filters is supposed to grab the HTML file, zip it and then send it to the client, without creating the file on the server machine. This is the header:
"HTTP/1.1 200 OK\nContent-Type: application/zip\nContent-Disposition: filename=\"" + request + ".zip\"\n";
Afterwards, I set my filter to a class I created (which is copied below) and send the file. My problem is that even though the server is definitely sending data, the client only downloads an empty zip file, with nothing inside.
I've been stuck on this issue for a few days; I can't seem to figure out what's wrong. I think there's something wrong with how I create the entry or maybe how I close the outputs. I can't be sure.
I'd really appreciate any advice that could be given to me on this issue. Thanks for your attention.
class ZipFilterInputStream extends FilterInputStream
{
    protected ZipFilterInputStream(InputStream inputToFilter) throws IOException
    {
        super(inputToFilter);

        // Get the stuff ready for compression
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ZipOutputStream zout = new ZipOutputStream(out);
        zout.putNextEntry(new ZipEntry("file.html"));

        // Compress the stream
        int data = in.read();
        while (data != -1)
        {
            zout.write(data);
            data = in.read();
        }
        zout.closeEntry();
        zout.finish();

        // Get the stream ready for reading.
        in = new ByteArrayInputStream(out.toByteArray());
        out.close();
    }

    public int read() throws IOException
    {
        return in.read();
    }
}
I'm currently working on a project that is done in Java, on Google App Engine.
App Engine does not allow files to be stored, so any on-disk representation objects cannot be used; this includes the File class.
I want to write data and export it to a few csv files, and then zip it up, and allow the user to download it.
How may I do this without using any File classes? I'm not very experienced in file handling, so I hope you guys can advise me.
Thanks.
You can create a zip file and add to it while the user is downloading it. If you are using a servlet, this is straightforward:
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    // ..... process request
    // ..... then respond
    response.setContentType("application/zip");
    response.setStatus(HttpServletResponse.SC_OK);
    // note: intentionally no content-length set; automatic chunked transfer if stream is larger than the internal buffer of the response
    ZipOutputStream zipOut = new ZipOutputStream(response.getOutputStream());
    byte[] buffer = new byte[1024 * 32];
    try {
        // case 1: already have an input stream, typically a ByteArrayInputStream from a byte[] full of previously prepared csv data
        InputStream in = new BufferedInputStream(getMyFirstInputStream());
        try {
            zipOut.putNextEntry(new ZipEntry("FirstName"));
            int length;
            while ((length = in.read(buffer)) != -1) {
                zipOut.write(buffer, 0, length);
            }
            zipOut.closeEntry();
        } finally {
            in.close();
        }
        // case 2: write directly to the output stream, i.e. you have your raw data but need to create the csv representation
        zipOut.putNextEntry(new ZipEntry("SecondName"));
        // example setup; the key is to use the zipOut output stream's write methods
        MySerializer mySerializer = new MySerializer(); // i.e. csv-writer
        Object myData = getMyData(); // the data to be processed by the serializer in order to make a csv file
        mySerializer.setOutput(zipOut);
        // write whatever you have to the zipOut
        mySerializer.write(myData);
        zipOut.closeEntry();
        // repeat for the next file... or make a for-loop
    } finally {
        zipOut.close();
    }
}
There is no reason to store your data in files unless you have memory constraints. Files give you InputStream and OutputStream, both of which have in-memory equivalents.
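For instance, a minimal sketch of those in-memory equivalents:
// An in-memory "file": write bytes to a ByteArrayOutputStream,
// then read them back through a ByteArrayInputStream - no disk involved.
ByteArrayOutputStream out = new ByteArrayOutputStream();
out.write("a,b,c\n1,2,3\n".getBytes("UTF-8"));
InputStream in = new ByteArrayInputStream(out.toByteArray());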
Note that creating a csv writer usually means doing something like this: take a piece of data (an array, list or map, whatever you have) and turn it into byte[] parts. Append the byte[] parts to an OutputStream using a tool like DataOutputStream (make your own if you like) or OutputStreamWriter.
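A sketch of that idea (hypothetical names; rows stands in for whatever data structure you have), writing CSV lines straight onto an OutputStream such as the zipOut above:
public static void writeCsv(List<String[]> rows, OutputStream out) throws IOException {
    // OutputStreamWriter turns the String rows into bytes on the target stream.
    Writer writer = new OutputStreamWriter(out, "UTF-8");
    for (String[] row : rows) {
        StringBuilder line = new StringBuilder();
        for (int i = 0; i < row.length; i++) {
            if (i > 0) line.append(',');
            line.append(row[i]);
        }
        writer.write(line.append('\n').toString());
    }
    // Flush, but do not close: closing the writer would close the underlying zip stream.
    writer.flush();
}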
If your data is not huge, meaning it can stay in memory, then exporting to CSV, zipping it up and streaming it for downloading can all be done on-the-fly. Caching can be done at any of these steps, which greatly depends on your application's business logic.
I'm trying to figure out why this particular snippet of code isn't working for me. I've got an applet which is supposed to read a .pdf and display it with a pdf-renderer library, but for some reason when I read in the .pdf files which sit on my server, they end up corrupt. I've tested this by writing the files back out again.
I've tried viewing the applet in both IE and Firefox, and the corrupt files occur in both. Funny thing is, when I try viewing the applet in Safari (for Windows), the file is actually fine! I understand the JVM might be different, but I am still lost. I've compiled in Java 1.5. The JVMs are 1.6. The snippet which reads the file is below.
public static ByteBuffer getAsByteArray(URL url) throws IOException {
    ByteArrayOutputStream tmpOut = new ByteArrayOutputStream();
    URLConnection connection = url.openConnection();
    int contentLength = connection.getContentLength();
    InputStream in = url.openStream();
    byte[] buf = new byte[512];
    int len;
    while (true) {
        len = in.read(buf);
        if (len == -1) {
            break;
        }
        tmpOut.write(buf, 0, len);
    }
    tmpOut.close();
    ByteBuffer bb = ByteBuffer.wrap(tmpOut.toByteArray(), 0, tmpOut.size());
    // Lines below used to test if file is corrupt
    //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf");
    //fos.write(tmpOut.toByteArray());
    return bb;
}
I must be missing something, and I've been banging my head trying to figure it out. Any help is greatly appreciated. Thanks.
Edit:
To further clarify my situation: the difference between the files before I read them with the snippet and after is that the ones I output after reading are significantly smaller than the originals. When opening them, they are not recognized as .pdf files. There are no exceptions being thrown that I ignore, and I have tried flushing, to no avail.
This snippet works in Safari, meaning the files are read in their entirety, with no difference in size, and can be opened with any .pdf reader. In IE and Firefox, the files always end up corrupted, consistently the same smaller size.
I monitored the len variable (when reading a 59kb file), hoping to see how many bytes got read in at each loop. In IE and Firefox, at 18kb, in.read(buf) returns -1 as if the file had ended. Safari does not do this.
I'll keep at it, and I appreciate all the suggestions so far.
Just in case these small changes make a difference, try this:
public static ByteBuffer getAsByteArray(URL url) throws IOException {
    URLConnection connection = url.openConnection();
    // Since you get a URLConnection, use it to get the InputStream
    InputStream in = connection.getInputStream();
    // Now that the InputStream is open, get the content length
    int contentLength = connection.getContentLength();

    // To avoid having to resize the array over and over and over as
    // bytes are written to the array, provide an accurate estimate of
    // the ultimate size of the byte array
    ByteArrayOutputStream tmpOut;
    if (contentLength != -1) {
        tmpOut = new ByteArrayOutputStream(contentLength);
    } else {
        tmpOut = new ByteArrayOutputStream(16384); // Pick some appropriate size
    }

    byte[] buf = new byte[512];
    while (true) {
        int len = in.read(buf);
        if (len == -1) {
            break;
        }
        tmpOut.write(buf, 0, len);
    }
    in.close();
    tmpOut.close(); // No effect, but good to do anyway to keep the metaphor alive

    byte[] array = tmpOut.toByteArray();

    // Lines below used to test if file is corrupt
    //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf");
    //fos.write(array);
    //fos.close();

    return ByteBuffer.wrap(array);
}
You forgot to close fos, which may result in that file being shorter if your application is still running or is abruptly terminated. Also, I added creating the ByteArrayOutputStream with an appropriate initial size. (Otherwise Java will have to repeatedly allocate a new array and copy, allocate a new array and copy, which is expensive.) Replace the value 16384 with a more appropriate value. 16k is probably small for a PDF, but I don't know what the "average" size you expect to download is.
Since you use toByteArray() twice (even though one is in diagnostic code), I assigned that to a variable. Finally, although it shouldn't make any difference, when you are wrapping the entire array in a ByteBuffer, you only need to supply the byte array itself. Supplying the offset 0 and the length is redundant.
Note that if you are downloading large PDF files this way, then ensure that your JVM is running with a large enough heap that you have enough room for several times the largest file size you expect to read. The method you're using keeps the whole file in memory, which is OK as long as you can afford that memory. :)
I thought I had the same problem as you, but it turned out my problem was that I assumed each read would return a full buffer until there was nothing left. But your code does not make that assumption.
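To illustrate the pitfall (a sketch, reusing buf and tmpOut from the question's code):
// Wrong: treats the first partial read as end-of-stream,
// but read() may return fewer bytes than the buffer size at any time.
while (in.read(buf) == buf.length) {
    tmpOut.write(buf, 0, buf.length);
}

// Right: read until -1, writing only the bytes actually received.
int len;
while ((len = in.read(buf)) != -1) {
    tmpOut.write(buf, 0, len);
}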
The examples on the net (e.g. java2s/tutorial) use a BufferedInputStream, but that did not make any difference for me.
You could check whether you actually get the full file in your loop. Then the problem would be in the ByteArrayOutputStream.
Have you tried a flush() before you close the tmpOut stream, to ensure all bytes are written out?