I have a small Java application running inside IBM Integration Bus, which is installed on an AIX server with the character encoding set to ISO-8859-1.
My application creates a ZIP file from filenames received as a parameter. There is a file called "Websërvícès Guide.pdf" in the filesystem which I want to zip, but I am unable to.
This is my code:
String zipFilePath = "/tmp/EventAttachments_2018.01.25.11.39.34.zip";
// Streams buffer
int BUFFER = 2048;
// Open I/O Buffered Streams
BufferedInputStream origin = null;
FileOutputStream dest = new FileOutputStream(zipFilePath);
ZipOutputStream out = new ZipOutputStream(new BufferedOutputStream(dest));
byte[] data = new byte[BUFFER];
// Open a file stream to my file
Path currentFilePath = Paths.get("/tmp/Websërvícès Guide.pdf");
InputStream fi = Files.newInputStream(currentFilePath, StandardOpenOption.READ);
origin = new BufferedInputStream(fi, BUFFER);
ZipEntry entry = new ZipEntry("Websërvícès Guide.pdf");
out.putNextEntry(entry);
int count;
while ((count = origin.read(data, 0, BUFFER)) != -1) {
out.write(data, 0, count);
}
origin.close();
out.close();
This code throws a "File Not Found" exception on the Files.newInputStream line.
I have read that Java does not behave well when checking whether files with special characters exist, and so on. I am not able to change the JVM parameters, as the code is executed inside an IBM JVM.
Any idea on how to solve this issue and pack the file properly in the ZIP?
Thank you
Can you try passing the following flag when running your Java program:
-Dsun.jnu.encoding=UTF-8
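If you cannot pass JVM arguments, you can at least confirm at runtime which encodings the IBM JVM is actually using. A minimal diagnostic sketch (these property names are the common ones and may not all be present on every JVM vendor's runtime):
// Print the encodings the running JVM uses; "sun.jnu.encoding" typically
// controls how file names are encoded/decoded at the native boundary.
System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
System.out.println("default charset  = " + java.nio.charset.Charset.defaultCharset());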
First: in your code, you are not taking care of any exceptions that could be thrown. I would suggest handling the exceptions inside the method, or making the method throw them and handling them at a higher level. But somewhere you need to handle the exception.
Maybe that's already the problem. (see https://stackoverflow.com/a/155655/8896833)
Second: according to ISO-8859-1, all the characters used in your filename should be covered. Are you really sure about the path your program is working in at the moment you try to access the file?
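One quick way to verify this is to ask the filesystem directly before opening the stream. A small diagnostic sketch, assuming the /tmp path from your question and the java.nio.file imports (run it inside a method that declares IOException):
// Check whether the JVM sees the file under the expected name,
// and list what is actually in the directory.
Path dir = Paths.get("/tmp");
Path expected = dir.resolve("Websërvícès Guide.pdf");
System.out.println(Files.exists(expected) + " -> " + expected.toAbsolutePath());
try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
    for (Path p : entries) {
        System.out.println("on disk: " + p.getFileName());
    }
}
If the name shown by the directory listing differs from the literal in your source file, the mismatch is in how the file name bytes are decoded rather than in the ZIP code itself.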
Try using the URLDecoder class's decode(String s, String enc) method.
For example:
String path = URLDecoder.decode("Websërvícès Guide.pdf", "UTF-8");
I'm attempting to copy / duplicate a DocumentFile in an Android application, but on inspecting the created duplicate it does not appear to be exactly the same as the original. This is a problem because I need to do an MD5 check on both files the next time a copy is called, so as to avoid overwriting the same files.
The process is as follows:
User selects a file via an ACTION_OPEN_DOCUMENT_TREE
Source file's type is obtained
New DocumentFile in target location is initialised
Contents of the first file are duplicated into the second file
The initial stages are done with the following code:
// Get the source file's type
String sourceFileType = MimeTypeMap.getSingleton().getExtensionFromMimeType(contextRef.getContentResolver().getType(file.getUri()));
// Create the new (empty) file
DocumentFile newFile = targetLocation.createFile(sourceFileType, file.getName());
// Copy the file
CopyBufferedFile(new BufferedInputStream(contextRef.getContentResolver().openInputStream(file.getUri())), new BufferedOutputStream(contextRef.getContentResolver().openOutputStream(newFile.getUri())));
The main copy process is done using the following snippet:
void CopyBufferedFile(BufferedInputStream bufferedInputStream, BufferedOutputStream bufferedOutputStream)
{
// Duplicate the contents of the temporary local File to the DocumentFile
try
{
byte[] buf = new byte[1024];
bufferedInputStream.read(buf);
do
{
bufferedOutputStream.write(buf);
}
while(bufferedInputStream.read(buf) != -1);
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
try
{
if (bufferedInputStream != null) bufferedInputStream.close();
if (bufferedOutputStream != null) bufferedOutputStream.close();
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
The problem I'm facing is that although the file copies successfully and is usable (it's a picture of a cat, and it's still a picture of a cat in the destination), it is slightly different:
The file size has changed from 2261840 to 2262016 (+176)
The MD5 hash has changed completely
Is there something wrong with my copying code that is causing the file to change slightly?
Thanks in advance.
Your copying code is incorrect. It is assuming (incorrectly) that each call to read will either return buffer.length bytes or return -1.
What you should do is capture the number of bytes read in a variable each time, and then write exactly that number of bytes. Your code for closing the streams is verbose and (in theory¹) buggy as well.
Here is a rewrite that addresses both of those issues, and some others as well.
void copyBufferedFile(BufferedInputStream bufferedInputStream,
BufferedOutputStream bufferedOutputStream)
throws IOException
{
try (BufferedInputStream in = bufferedInputStream;
BufferedOutputStream out = bufferedOutputStream)
{
byte[] buf = new byte[1024];
int nosRead;
while ((nosRead = in.read(buf)) != -1) // read this carefully ...
{
out.write(buf, 0, nosRead);
}
}
}
As you can see, I have gotten rid of the bogus "catch and squash exception" handlers, and fixed the resource leak using Java 7+ try-with-resources.
There are still a couple of issues:
It is better for the copy function to take file name strings (or File or Path objects) as parameters and be responsible for opening the streams.
Given that you are doing block reads and writes, there is little value in using buffered streams. (Indeed, it might conceivably be making the I/O slower.) It would be better to use plain streams and make the buffer the same size as the default buffer size used by the Buffered* classes ... or larger.
If you are really concerned about performance, try using transferFrom as described here:
https://www.journaldev.com/861/java-copy-file
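For illustration, a channel-based copy along those lines might look like the sketch below. This is my own sketch, not code from the question, and it assumes you are copying between ordinary files rather than content:// documents (for those you would still need the ContentResolver streams).
// Sketch: a copy that takes Paths as parameters and opens its own channels
// (requires java.nio.channels.FileChannel, java.nio.file.Path and
// java.nio.file.StandardOpenOption).
static void copyFile(Path source, Path target) throws IOException {
    try (FileChannel in = FileChannel.open(source, StandardOpenOption.READ);
         FileChannel out = FileChannel.open(target,
                 StandardOpenOption.CREATE,
                 StandardOpenOption.WRITE,
                 StandardOpenOption.TRUNCATE_EXISTING)) {
        long size = in.size();
        long position = 0;
        while (position < size) {
            // transferFrom may transfer fewer bytes than requested, so loop
            position += out.transferFrom(in, position, size - position);
        }
    }
}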
¹ In theory, if the bufferedInputStream.close() throws an exception, the bufferedOutputStream.close() call will be skipped. In practice, it is unlikely that closing an input stream will throw an exception. But either way, the try-with-resources approach deals with this correctly, and far more concisely.
I have the following file, which contains a binary representation of an .MSG file :
binaryMessage.txt
I put it in my Eclipse workspace, in the folder src/main/resources/test.
I want to use the string contained in this text file within the following JUnit code, so I tried the following:
request.setContent("src/main/resources/test/binaryMessage");
mockMvc.perform(post(EmailController.PATH__METADATA_EXTRACTION_OPERATION)
.contentType(MediaType.APPLICATION_JSON)
.content(json(request)))
.andExpect(status().is2xxSuccessful());
}
But this doesn't work. Is there a way I can pass the file's contents as the string directly, without using I/O code?
You can't read a file without using IO code (or libraries that use IO code). That said, it's not that difficult to read the file into memory so you can send it.
To read a binary file into a byte[] you can use this method:
private byte[] readToByteArray(InputStream is) throws IOException {
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
int len;
while ((len = is.read(buffer)) != -1) {
baos.write(buffer, 0, len);
}
return baos.toByteArray();
} finally {
if (is != null) {
is.close();
}
}
}
Then you can do
request.setContent(readToByteArray(getClass().getResourceAsStream("test/binaryMessage")));
In addition to my comment on Samuel's answer, I just noticed that you depend on your concrete execution directory. I personally don't like that and normally use the class loader's functions to find resources.
Thus, to be independent of your working directory, you can use
getClass().getResource("/test/binaryMessage")
Convert this to URI and Path, then use Files.readAllBytes to fetch the contents:
Path resourcePath = Paths.get(getClass().getResource("/test/binaryMessage").toURI());
byte[] content = Files.readAllBytes(resourcePath);
... or even roll that into a single expression.
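That single expression would look roughly like this (toURI() adds a checked URISyntaxException that the test method has to declare or handle):
// One-liner: resolve the classpath resource and read it fully
byte[] content = Files.readAllBytes(
        Paths.get(getClass().getResource("/test/binaryMessage").toURI()));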
But to get back to your original question: no, this is I/O code, and you need it. But since the dawn of Java 7 (in 2011!) this does not need to be painful anymore.
I have a piece of code which uses the deflate algorithm to compress a file:
public static File compressOld(File rawFile) throws IOException
{
File compressed = new File(rawFile.getCanonicalPath().split("\\.")[0]
+ "_compressed." + rawFile.getName().split("\\.")[1]);
InputStream inputStream = new FileInputStream(rawFile);
OutputStream compressedWriter = new DeflaterOutputStream(new FileOutputStream(compressed));
byte[] buffer = new byte[1000];
int length;
while ((length = inputStream.read(buffer)) > 0)
{
compressedWriter.write(buffer, 0, length);
}
inputStream.close();
compressedWriter.close();
return compressed;
}
However, I'm not happy with the OutputStream copying loop since it's the "outdated" way of writing to streams. Instead, I want to use a Java 7 API method such as Files.copy:
public static File compressNew(File rawFile) throws IOException
{
File compressed = new File(rawFile.getCanonicalPath().split("\\.")[0]
+ "_compressed." + rawFile.getName().split("\\.")[1]);
OutputStream compressedWriter = new DeflaterOutputStream(new FileOutputStream(compressed));
Files.copy(compressed.toPath(), compressedWriter);
compressedWriter.close();
return compressed;
}
The latter method, however, does not work correctly: the compressed file is messed up and only a few bytes are copied. How come?
I see mainly two problems.
You copy from the target instead of the source. I think the copying has to be changed to Files.copy(rawFile.toPath(), compressedWriter);.
The Javadoc of copy says: "Note that if the given output stream is Flushable then its flush method may need to invoked after this method completes so as to flush any buffered output." So, you have to call the flush-method of the OutputStream after copy.
Additionally there is one more point. The Javadoc of copy says:
It is strongly recommended that the output stream be promptly closed if an I/O error occurs.
You can close the OutputStream in a finally block to make sure this happens even in case of an error. Another possibility is to use try-with-resources, which was introduced in Java 7.
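Putting both points together, a corrected version of the method from the question might look roughly like this sketch (using try-with-resources; closing the DeflaterOutputStream also finishes and flushes the compressed data):
public static File compressNew(File rawFile) throws IOException
{
    File compressed = new File(rawFile.getCanonicalPath().split("\\.")[0]
            + "_compressed." + rawFile.getName().split("\\.")[1]);
    // try-with-resources guarantees the stream is closed even on error
    try (OutputStream compressedWriter =
                 new DeflaterOutputStream(new FileOutputStream(compressed)))
    {
        // copy from the source file, not from the (still empty) target
        Files.copy(rawFile.toPath(), compressedWriter);
    }
    return compressed;
}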
I am working on file upload/download functionality in Java using the Struts2 framework, where we upload to and download from a remote server path. Everything seems to work fine when I check the functionality on my local machine, with a local path as the destination from which I download and to which I upload files of any format. The development environment uses a JBoss server.
But when I run the same thing in the production environment, where the application is deployed on a WebLogic server, files of type .txt, .csv, and .html (basically text-format files) have my JSP source code appended to the file content.
Below is the code that I have used for downloading:
BufferedOutputStream bout=null;
FileInputStream inStream = null;
byte[] buffer = null;
try {
inStream = new FileInputStream(path+File.separator+filename);
buffer = new byte[8192];
String extension = "";
int pos = filename.lastIndexOf(".");
if (pos > 0)
extension = filename.substring(pos+1);
int bytesRead = 0, bytesBuffered = 0;
response.setContentType("application/octet-stream");
response.setHeader("content-disposition", "attachment; filename="+ filename);
bout = new BufferedOutputStream(response.getOutputStream());
while((bytesRead = inStream.read(buffer)) > -1){
bout.write(buffer, 0, bytesRead);
bytesBuffered += bytesRead;
if(bytesBuffered > 1048576){
bytesBuffered = 0;
bout.flush();
}
}
} catch (IOException e) {
log.error(Logger.getStackTrace(e));
} finally {
if(bout!=null){
bout.flush();
bout.close();
}
if(inStream!=null)
inStream.close();
}
I have tried using different response content types with respect to the extension, but it was of no help.
It seems as if the output stream has the JSP source code in it even before anything is written from the input stream.
Can anyone please suggest a solution and explain why this is happening?
It is happening because you are writing directly to the output stream and then returning a Struts result, which is your JSP. You are using an action as if it were a servlet, which it is not.
In Struts2, to achieve your goal, you need to use the Stream result type, as described in the following answers:
https://stackoverflow.com/a/16300376/1654265
https://stackoverflow.com/a/16900840/1654265
Otherwise, if you want to bypass the framework's mechanisms and write to the OutputStream yourself (there are very rare cases in which this is useful, such as downloading a dynamically created ZIP), then you must return the NONE result.
Returning ActionSupport.NONE (or null) from an Action class method causes the results processing to be skipped. This is useful if the action fully handles the result processing such as writing directly to the HttpServletResponse OutputStream.
But I strongly suggest you go with the Stream result, which is the standard way.
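For reference, a rough sketch of what that could look like; the field names, the path handling, and the struts.xml parameters here are illustrative assumptions, not taken from your code:
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import com.opensymphony.xwork2.ActionSupport;

// Sketch of a Struts2 action wired to a Stream result. The matching
// struts.xml result declaration would be something like:
//
//   <result name="success" type="stream">
//     <param name="contentType">application/octet-stream</param>
//     <param name="inputName">fileInputStream</param>
//     <param name="contentDisposition">attachment;filename="${filename}"</param>
//     <param name="bufferSize">8192</param>
//   </result>
public class DownloadAction extends ActionSupport {
    private String path;                 // directory the files live in (set from configuration)
    private String filename;             // requested file name
    private InputStream fileInputStream; // read by the stream result via its getter

    public String execute() throws Exception {
        // Open the stream; the framework reads it and writes the HTTP response.
        fileInputStream = new FileInputStream(new File(path, filename));
        return SUCCESS;
    }

    public InputStream getFileInputStream() { return fileInputStream; }
    public String getFilename() { return filename; }
    public void setFilename(String filename) { this.filename = filename; }
}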
I have some working code in python that I need to convert to Java.
I have read quite a few threads on this forum but could not find an answer. I am reading in a JPG image and converting it into a byte array. I then write this buffer to a different file. When I compare the written files from the Java and Python code, the bytes at the end do not match. Please let me know if you have a suggestion. I need to use the byte array to pack the image into a message that needs to be sent to a remote server.
Java code (Running on Android)
Reading the file:
File queryImg = new File(ImagePath);
int imageLen = (int)queryImg.length();
byte [] imgData = new byte[imageLen];
FileInputStream fis = new FileInputStream(queryImg);
fis.read(imgData);
Writing the file:
FileOutputStream f = new FileOutputStream(new File("/sdcard/output.raw"));
f.write(imgData);
f.flush();
f.close();
Thanks!
InputStream.read is not guaranteed to read any particular number of bytes and may read fewer than you asked it to. It returns the actual number read, so you can have a loop that keeps track of progress:
public void pump(InputStream in, OutputStream out, int size) throws IOException {
byte[] buffer = new byte[4096]; // Or whatever constant you feel like using
int done = 0;
while (done < size) {
int read = in.read(buffer);
if (read == -1) {
throw new IOException("Something went horribly wrong");
}
out.write(buffer, 0, read);
done += read;
}
// Maybe put cleanup code in here if you like, e.g. in.close, out.flush, out.close
}
I believe Apache Commons IO has classes for doing this kind of stuff so you don't need to write it yourself.
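For example, with Commons IO (an assumed commons-io dependency) the whole read/write pair collapses to something like:
// Sketch using org.apache.commons.io.FileUtils from Commons IO
byte[] imgData = FileUtils.readFileToByteArray(queryImg);
FileUtils.writeByteArrayToFile(new File("/sdcard/output.raw"), imgData);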
Your file length might be more than an int can hold, and then you end up with the wrong array length and hence do not read the entire file into the buffer.
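If that is the case here, a simple length guard makes the failure explicit instead of silent; a minimal sketch reusing queryImg from your question:
// Fail fast instead of silently truncating the length to an int
long len = queryImg.length();
if (len > Integer.MAX_VALUE) {
    throw new IOException("File too large to buffer in memory: " + len + " bytes");
}
byte[] imgData = new byte[(int) len];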