I have a zipped XML string that was retrieved from a web response given by our production server.
The string itself is good: the same methodology in .NET gives a valid result. (The data is a list of car makes.)
On Android, however, the ZipInputStream read produces a buffer of the right size (4895), but it contains null data from position 2661 onwards. The last car make that decompresses correctly is 'MG'.
The method does not throw an error.
Can anybody see what is wrong?
Thanks.
private byte[] decompressZip(String zipText) throws IOException {
    try {
        byte[] zipBytes = MakeBytes(zipText);
        byte[] zipData;
        ByteArrayInputStream b = new ByteArrayInputStream(zipBytes);
        BufferedInputStream buf = new BufferedInputStream(b);
        //ZipInputStream zin = new ZipInputStream(b); doesn't matter, both constructors have the fault.
        ZipInputStream zin = new ZipInputStream(buf);
        ZipEntry entry;
        if ((entry = zin.getNextEntry()) != null) {
            int entrySize = (int) entry.getSize();
            zipData = new byte[entrySize];
            zin.read(zipData, 0, entrySize);
            return zipData;
        }
    } catch (Exception e) {
        // TODO Auto-generated catch block
        String sError = e.getMessage();
    }
    return null;
}
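A likely culprit, judging from the symptom alone: InputStream.read(byte[], int, int) is not required to fill the buffer in one call, and an inflating stream often returns short reads at internal block boundaries, so a single zin.read(zipData, 0, entrySize) can stop partway through the entry, leaving the tail of the array as zeros. A minimal sketch of a draining loop to replace the single read:

int entrySize = (int) entry.getSize();
byte[] zipData = new byte[entrySize];
int offset = 0;
// read() may return fewer bytes than requested, so keep reading
// until the entry is fully consumed or the stream ends early.
while (offset < entrySize) {
    int n = zin.read(zipData, offset, entrySize - offset);
    if (n == -1) {
        break; // stream ended before the declared entry size was reached
    }
    offset += n;
}
return zipData;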
I have a password-protected zip file (in the form of base64-encoded data plus the name of the zip file) which contains a single XML document. I want to parse that XML without writing anything to disk. What is the way to do this in Zip4j? Here is what I tried:
String docTitle = request.getDocTitle();
byte[] decodedFileData = Base64.getDecoder().decode(request.getBase64Data());
InputStream inputStream = new ByteArrayInputStream(decodedFileData);
try (ZipInputStream zipInputStream = new ZipInputStream(inputStream, password)) {
    LocalFileHeader localFileHeader;
    while ((localFileHeader = zipInputStream.getNextEntry()) != null) {
        String fileTitle = localFileHeader.getFileName();
        File extractedFile = new File(fileTitle);
        try (InputStream individualFileInputStream = org.apache.commons.io.FileUtils.openInputStream(extractedFile)) {
            // Call parser
            parser.parse(localFileHeader.getFileName(), individualFileInputStream);
        } catch (IOException e) {
            // Handle IOException
        }
    }
} catch (IOException e) {
    // Handle IOException
}
This throws java.io.FileNotFoundException: File 'xyz.xml' does not exist at the FileUtils.openInputStream(extractedFile) line. Can you please suggest the right way to do this?
ZipInputStream streams the whole content of a zip file. Each call to zipInputStream.getNextEntry() delivers the next entry (file) and moves the "pointer" forward; you read that entry's content with ZipInputStream.read before moving to the next entry. Your code never reads from the zip stream at all; it takes the entry's file name and tries to open a file with that name on disk, which is why FileUtils.openInputStream fails.
Your case:
byte[] decodedFileData = Base64.getDecoder().decode(request.getBase64Data());
InputStream inputStream = new ByteArrayInputStream(decodedFileData);
try (ZipInputStream zipInputStream = new ZipInputStream(inputStream, password)) {
    LocalFileHeader localFileHeader;
    while ((localFileHeader = zipInputStream.getNextEntry()) != null) {
        // Read the current entry's bytes straight from the zip stream;
        // nothing is written to disk.
        byte[] fileContent = IOUtils.toByteArray(zipInputStream);
        parser.parse(localFileHeader.getFileName(), new ByteArrayInputStream(fileContent));
    }
} catch (Exception e) {
    // Handle Exception
}
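If Commons IO is not on the classpath, InputStream.readAllBytes() (Java 9+) should do the same job here, on the assumption that Zip4j's stream reports end-of-entry as end-of-stream, which is what the call above relies on:

// Java 9+ alternative to IOUtils.toByteArray for draining the current entry:
byte[] fileContent = zipInputStream.readAllBytes();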
I have a method like
public void put(@Nonnull final InputStream inputStream, @Nonnull final String uniqueId) throws PersistenceException {
    // a.) create gzip of inputStream
    final GZIPInputStream zipInputStream;
    try {
        zipInputStream = new GZIPInputStream(inputStream);
    } catch (IOException e) {
        e.printStackTrace();
        throw new PersistenceException("Persistence Service could not receive input stream to persist for " + uniqueId);
    }
}
I want to convert the inputStream into a gzip-compressed stream; what is the way to do that?
The method above is incorrect and throws an exception saying it is "Not a Zip Format".
Converting Java streams is really confusing to me, and I never get them right.
The GZIPInputStream is to be used to decompress an incoming InputStream. To compress an incoming InputStream using GZIP, you basically need to write it to a GZIPOutputStream.
You can get a new InputStream out of it if you use ByteArrayOutputStream to write gzipped content to a byte[] and ByteArrayInputStream to turn a byte[] into an InputStream.
So, basically:
public void put(@Nonnull final InputStream inputStream, @Nonnull final String uniqueId) throws PersistenceException {
    final InputStream zipInputStream;
    try {
        ByteArrayOutputStream bytesOutput = new ByteArrayOutputStream();
        GZIPOutputStream gzipOutput = new GZIPOutputStream(bytesOutput);
        try {
            byte[] buffer = new byte[10240];
            for (int length = 0; (length = inputStream.read(buffer)) != -1;) {
                gzipOutput.write(buffer, 0, length);
            }
        } finally {
            try { inputStream.close(); } catch (IOException ignore) {}
            try { gzipOutput.close(); } catch (IOException ignore) {}
        }
        zipInputStream = new ByteArrayInputStream(bytesOutput.toByteArray());
    } catch (IOException e) {
        e.printStackTrace();
        throw new PersistenceException("Persistence Service could not receive input stream to persist for " + uniqueId);
    }
    // ...
}
You can, if necessary, replace the ByteArrayOutputStream/ByteArrayInputStream with a FileOutputStream/FileInputStream on a temporary file as created by File#createTempFile(), especially if the streams can contain large data that might exhaust the machine's available memory when used concurrently.
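A minimal sketch of that temp-file variant (Java 7+ try-with-resources; the file name prefix is illustrative):

// Spool the gzipped bytes to a temp file instead of holding them in memory.
File temp = File.createTempFile("gzip-", ".tmp");
try (OutputStream gzipOutput = new GZIPOutputStream(new FileOutputStream(temp))) {
    byte[] buffer = new byte[10240];
    for (int length; (length = inputStream.read(buffer)) != -1;) {
        gzipOutput.write(buffer, 0, length);
    }
}
InputStream zipInputStream = new FileInputStream(temp);
// Remember to delete the temp file once the stream has been consumed.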
GZIPInputStream is for reading gzip-encoded content.
If your goal is to take a regular input stream and compress it in the GZIP format, then you need to write those bytes to a GZIPOutputStream.
See also this answer to a related question.
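On Java 9 and later, the manual copy loop can also be collapsed into a single InputStream.transferTo call; a sketch of the same compress-then-re-expose step:

// Compress inputStream into an in-memory buffer, then wrap the result as an InputStream.
ByteArrayOutputStream bytesOutput = new ByteArrayOutputStream();
try (GZIPOutputStream gzipOutput = new GZIPOutputStream(bytesOutput)) {
    inputStream.transferTo(gzipOutput); // copies until EOF
}
InputStream zipInputStream = new ByteArrayInputStream(bytesOutput.toByteArray());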
I have a text file containing a sequence of 4194304 letters in the range A-D, all on one line (4 MB).
How would I randomly pick a position in that file and replace the following 100 characters with the contents of another file that is 100 characters long, then write the result out to a file?
I can currently do this, but it feels really inefficient when I iterate it several times.
Here's how I'm currently achieving this:
Random rnum = new Random();
FileInputStream fin = null;
FileOutputStream fout = null;
int count = 10000;
FileInputStream fin1 = null;
File file1 = new File("fileWithSet100C.txt");
int randChar = 0;
while (count > 0) {
    try {
        int c = 4194304 - 100;
        randChar = rnum.nextInt(c);
        File file = new File("file.txt");
        // seems inefficient to initiate these guys over and over
        fin = new FileInputStream(file);
        fin1 = new FileInputStream(file1);
        // would like to remove this and have it just replace the original
        fout = new FileOutputStream("newfile.txt");
        int byte_read;
        int byte_read2;
        byte[] buffer = new byte[randChar];
        byte[] buffer2 = new byte[(int) file1.length()]; // 100 bytes
        byte_read = fin.read(buffer);    // prefix up to the random position
        byte_read2 = fin1.read(buffer2); // the 100 replacement characters
        fout.write(buffer, 0, byte_read);
        fout.write(buffer2, 0, byte_read2);
        byte_read = fin.read(buffer2);   // skip the 100 characters being replaced
        buffer = new byte[4096]; // 4 KB copy buffer
        while ((byte_read = fin.read(buffer)) != -1) {
            fout.write(buffer, 0, byte_read);
        }
        count--;
    }
    catch (...) {
        ...
    }
    finally {
        ...
    }
    try {
        File file = new File("newfile.txt");
        fin = new FileInputStream(file);
        fout = new FileOutputStream("file.txt");
        int byte_read;
        byte[] buffer = new byte[4096]; // 4 KB copy buffer
        while ((byte_read = fin.read(buffer)) != -1) {
            fout.write(buffer, 0, byte_read);
        }
    }
    catch (...) {
        ...
    }
    finally {
        ...
    }
}
Thanks for reading!
EDIT:
For those curious, here's the code I used to solve the aforementioned problem:
String stringToInsert = "insertSTringHERE";
byte[] answerByteArray = stringToInsert.getBytes();
ByteBuffer byteBuffer = ByteBuffer.wrap(answerByteArray);
Random rnum = new Random();
int randChar = rnum.nextInt(4194002); // random offset within the 4 MB file
File fi = new File("file.txt");
RandomAccessFile raf = null;
try {
    raf = new RandomAccessFile(fi, "rw");
} catch (FileNotFoundException e1) {
    // TODO error handling and logging
}
FileChannel fo = raf.getChannel();
// Move to the random position and write out the contents
// of the byteBuffer.
try {
    fo.position(randChar);
    while (byteBuffer.hasRemaining()) {
        fo.write(byteBuffer);
    }
} catch (IOException e) {
    // TODO error handling and logging
}
try {
    fo.close();
} catch (IOException e) {
    // TODO error handling and logging
}
try {
    raf.close();
} catch (IOException e) {
    // TODO error handling and logging
}
You probably want to use Java's random-access file features. Sun/Oracle has a Random Access Files tutorial that will probably be useful to you.
If you can't use Java 7, then look at RandomAccessFile which also has seek functionality and has existed since Java 1.0.
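For instance, a minimal overwrite-in-place sketch with RandomAccessFile (file names follow the question; the replacement bytes would come from the 100-character file):

// Seek to a random offset and overwrite 100 bytes in place,
// without rewriting the rest of the 4 MB file.
RandomAccessFile raf = new RandomAccessFile("file.txt", "rw");
try {
    byte[] replacement = new byte[100]; // fill this from fileWithSet100C.txt
    int offset = new Random().nextInt(4194304 - 100);
    raf.seek(offset);
    raf.write(replacement);
} finally {
    raf.close();
}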
First off, you could make the File objects global variables. This would allow you to use the files whenever you need them without reading them again. Also note that if you keep creating new files, you will lose the data that you have already acquired.
For example:
public class Foo {
    // Global vars
    File file;

    public Foo(String location) {
        // Do something
        file = new File(location);
    }

    public void add() {
        // Add
    }
}
Answering your question: I would first read both files and then make all the changes you want in memory. After you have made all the changes, write them out to the file.
However, if the files are very large, I would make the changes one by one on disk instead. It will be slower, but you will not run out of memory that way. For what you are doing, I doubt a buffer would do much to offset the slowdown.
My overall suggestion would be to use arrays. For example, I would do the following...
public char[] addCharsToString(String str, char[] newChars, int index) {
    char[] string = str.toCharArray();
    char[] tmp = new char[string.length + newChars.length];
    // Copy the prefix, then the new chars, then the rest of the original.
    System.arraycopy(string, 0, tmp, 0, index);
    System.arraycopy(newChars, 0, tmp, index, newChars.length);
    System.arraycopy(string, index, tmp, index + newChars.length, string.length - index);
    return tmp;
}
Hope this helps!
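For example, with illustrative values:

// Insert "xy" into "ABCDEF" at index 2; prints "ABxyCDEF".
char[] result = addCharsToString("ABCDEF", new char[] {'x', 'y'}, 2);
System.out.println(new String(result));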
I am having a problem reading a file from Flex. The file contains a base64-encoded string. When I read the file, I get a length of 47856 and a decoded base64 byte array length of 34157.
When I read the same file from Java, I get 48068 and 35733 respectively.
What is the problem?
private function init():void {
    var file:File = File.desktopDirectory.resolvePath("Files/sample.txt");
    stream = new FileStream();
    stream.open(file, FileMode.READ);
    var str:String = stream.readUTFBytes(stream.bytesAvailable);
    stream.close();
    str = str.replace(File.lineEnding, "\n");
    contents.text = str;
    fileName.text = file.name;
}
public function playSound(contents:String):void {
    try {
        var byteData:ByteArray = new ByteArray();
        byteData.writeUTFBytes(contents);
        var dec:Base64Decoder = new Base64Decoder();
        dec.decode(contents);
        byteData = dec.toByteArray();
        Alert.show("byte Array " + byteData.toString().length + " :: " + contents.length);
    } catch (e:Error) {
        // ...
    }
}
And this is my Java code for reading the file. Whatever result I am expecting is achieved on the Java side.
private static String readFile(String path) throws IOException {
    FileInputStream stream = new FileInputStream(new File(path));
    try {
        FileChannel fc = stream.getChannel();
        MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
        return Charset.defaultCharset().decode(bb).toString();
    } finally {
        stream.close();
    }
}
Java code where I am printing the lengths:
byte[] decodedBase64 = new byte[byteLength];
String speexData = null;
try {
    speexData = readFile(userDir + "//" + xmlFileName);
} catch (IOException e1) {
    // TODO Auto-generated catch block
    e1.printStackTrace();
}
// System.out.println("sa " + sa);
try {
    decodedBase64 = Base64.decodeToByteArray(speexData);
    System.out.println("decodedBase64 length " + decodedBase64.length + " :: " + speexData.length());
} catch (Exception e) {
}
You would have to post your Java code as well, to show what you're doing there.
However, without knowing more, my guess is that when you replace the line endings you remove a byte each time (if they were \r\n and you're turning them into \n, for example), which would account for the shorter lengths on the Flex side.
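One way to rule out line-ending and whitespace effects on the Java side is to read the raw bytes and decode with the lenient MIME decoder (java.util.Base64, Java 8+), which ignores \r, \n, and anything else outside the base64 alphabet; a sketch:

// Read the file as raw bytes and decode leniently; line endings are skipped.
byte[] raw = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get(path));
byte[] decoded = java.util.Base64.getMimeDecoder().decode(raw);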
Here is a weird thing that has already taken me a whole day:
If I write a simple String like "1" to a file and read it back immediately, the fetched string equals the original.
But if the String is generated by a hash function, the fetched string is no longer the same.
The following code prints true false, and I want to know the trick behind the scenes.
Thank you very much.
public static void main(String[] args) {
    try {
        String s1 = "1";
        File f1 = new File("f1");
        write(s1, f1);
        System.out.println(read(f1).equals(s1));

        MessageDigest md = MessageDigest.getInstance("SHA-512");
        String s2 = foo(new File("1.jpg"), md);
        File f2 = new File("f2");
        write(s2, f2);
        System.out.println(read(f2).equals(s2));
    } catch (NoSuchAlgorithmException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}

// Hash f using md
static String foo(File f, MessageDigest md) throws IOException {
    FileInputStream fis = new FileInputStream(f);
    DigestInputStream dis = new DigestInputStream(fis, md);
    byte[] b = new byte[1024];
    while (dis.read(b, 0, 1024) != -1) {
    }
    md = dis.getMessageDigest();
    String s = new String(md.digest());
    dis.close();
    fis.close();
    return s;
}

static void write(String s, File f) throws IOException {
    FileWriter fw = new FileWriter(f);
    BufferedWriter bw = new BufferedWriter(fw);
    bw.write(s);
    bw.newLine();
    bw.close();
    fw.close();
}

static String read(File f) throws IOException {
    FileReader fr = new FileReader(f);
    BufferedReader bf = new BufferedReader(fr);
    String s = bf.readLine();
    bf.close();
    fr.close();
    return s;
}
This is your first problem:
String s = new String(md.digest());
You're creating a string with arbitrary binary data in the platform default encoding. It may well not be valid text data in the platform default encoding. In other words, you're losing data. Encode it with base-64 instead - that way you'll always have a string with ASCII characters, and can get back to the original binary data reliably.
Your second general problem is using FileReader and FileWriter. These always use the default platform encoding, which is a terrible API decision as it makes them almost useless in my view. You should almost always be specifying an encoding - I tend to use UTF-8. Use FileInputStream/FileOutputStream and InputStreamReader/InputStreamWriter to read/write text with files. (Or use the Guava helper routines.)
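For example, with java.util.Base64 (Java 8+; older code often used Apache Commons Codec for the same purpose):

// Encode the raw digest bytes as ASCII-safe text before writing,
// and decode after reading to recover the exact original bytes.
String s = Base64.getEncoder().encodeToString(md.digest());
byte[] original = Base64.getDecoder().decode(s);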
The digest value of the hashed file most likely contains a newline or carriage return byte (0x0A or 0x0D), which breaks the way you read the string back using BufferedReader.readLine().