I wrote some JavaScript code to compress a string with deflate and then encode the result with Base64:
function base64 (str) {
return new Buffer(str).toString("base64");
}
function deflate (str) {
return RawDeflate.deflate(str);
}
function encode (str) {
return base64(deflate(str));
}
var str = "hello, world";
console.log("Test Encode");
console.log(encode(str));
I converted "hello, world" to 2f8d48710d6e4229b032397b2492f0c2
and I want to decompress this string (2f8d48710d6e4229b032397b2492f0c2) in Java.
I put the encoded string in a file, then:
public static String decompress1951(final String theFilePath) {
byte[] buffer = null;
try {
String ret = "";
System.out.println("can come to ret");
InputStream in = new InflaterInputStream(new Base64InputStream(new FileInputStream(theFilePath)), new Inflater(true));
System.out.println("can come to in");
while (in.available() != 0) {
buffer = new byte[20480];
int len = in.read(buffer, 0, 20480); // line 64: this is where the exception happens
if (len <=0) {
break;
}
ret = ret + new String(buffer, 0, len);
}
in.close();
return ret;
} catch (IOException e) {
System.out.println("Has IOException");
System.out.println(e.getMessage());
e.printStackTrace();
}
return "";
}
But I have an exception:
java.util.zip.ZipException: invalid stored block lengths
at java.util.zip.InflaterInputStream.read(Unknown Source)
at com.cnzz.mobile.datacollector.DecompressDeflate.decompress1951(DecompressDeflate.java:64)
at com.cnzz.mobile.datacollector.DecompressDeflate.main(DecompressDeflate.java:128)
The Java code up there works perfectly. As noted in the comments, you somehow got the encoded value wrong. The encoded value I got using the JavaScript code is y0jNycnXUSjPL8pJAQA=
When you copy this value to the file and call decompress1951, you do in fact get back hello, world as required. I don't know what to say about the JavaScript part, as the code you use seems to match the examples on the distribution web pages. I notice there is the original library and a fork, so maybe there is some confusion there? Anyhow, there is this jsfiddle which I think can be seen as a working version, if you want to take a look at it.
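For anyone who wants to verify that value locally, here is a minimal, self-contained sketch. It uses java.util.Base64 and a raw Inflater rather than the commons-codec stream from the question, and the class name and buffer size are my own choices:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Base64;
    import java.util.zip.Inflater;
    import java.util.zip.InflaterInputStream;

    public class DecodeCheck {
        public static void main(String[] args) throws IOException {
            // the value reported in the answer above
            String encoded = "y0jNycnXUSjPL8pJAQA=";
            byte[] raw = Base64.getDecoder().decode(encoded);
            // Inflater(true) expects a raw DEFLATE stream (RFC 1951), matching RawDeflate's output
            try (InputStream in = new InflaterInputStream(new ByteArrayInputStream(raw), new Inflater(true));
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buffer = new byte[4096];
                int len;
                while ((len = in.read(buffer)) != -1) {
                    out.write(buffer, 0, len);
                }
                System.out.println(out.toString("UTF-8")); // expected: hello, world
            }
        }
    }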
I am developing a file encryption program. I was using the function below to encrypt files
until I realized that it is not suitable for big files, because it reads the entire file content into memory. Now I need a function that can read and write the file content in chunks. How can I do this?
private fun encryptFile(file: File) {
val originalData = file.readBytes()
val encryptData = encrypt(originalData)
encryptData?.run {
file.writeBytes(this)
}
}
Your encrypt function obviously can't stay that way. It'll have to become a thing that wraps an InputStream or OutputStream, and then it's fairly trivial.
Note that handrolling encryption is a near 100% guarantee you'll mess it up, and crypto streams already exist. Any reason you're reinventing a wheel and signing up to mess up security by reinventing things you shouldn't?
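For what it's worth, here is a minimal Java sketch of the "wrap a stream" idea using the JDK's javax.crypto.CipherOutputStream (the question is Kotlin, but the same classes are available there). The key handling, transformation, and chunk size below are illustrative assumptions, not the asker's setup:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.CipherOutputStream;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public class StreamEncryptSketch {
        // Encrypts 'input' into 'output' in 8 KB chunks; a random IV is written as a plain prefix.
        static void encryptFile(File input, File output, byte[] keyBytes) throws Exception {
            byte[] iv = new byte[16];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new IvParameterSpec(iv));
            try (InputStream in = new FileInputStream(input);
                 OutputStream fileOut = new FileOutputStream(output);
                 OutputStream out = new CipherOutputStream(fileOut, cipher)) {
                fileOut.write(iv); // written unencrypted so decryption can rebuild the cipher
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read); // only one chunk is held in memory at a time
                }
            }
        }
    }

Decryption is the mirror image: read the IV prefix first, then wrap the rest of the file in a CipherInputStream initialized in DECRYPT_MODE.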
Have a look at the code below, OP:
// ...
BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream)); // wrap the raw stream so lines can be read
StringBuilder sb = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
sb.append(line);
// if enough content is read, extract the chunk
while (sb.length() >= chunkSize) {
String c = sb.substring(0, chunkSize);
// do something with the string
// add the remaining content to the next chunk
sb = new StringBuilder(sb.substring(chunkSize));
}
}
// that's the last chunk
String c = sb.toString();
// do something with the string
EDIT: What about using the Chilkat library? (link to download the Chilkat lib)
Code example for encrypting a file in chunks:
import com.chilkatsoft.*;
public class ChilkatExample {
static {
try {
System.loadLibrary("chilkat");
} catch (UnsatisfiedLinkError e) {
System.err.println("Native code library failed to load.\n" + e);
System.exit(1);
}
}
public static void main(String argv[])
{
CkCrypt2 crypt = new CkCrypt2();
crypt.put_CryptAlgorithm("aes");
crypt.put_CipherMode("cbc");
crypt.put_KeyLength(128);
crypt.SetEncodedKey("000102030405060708090A0B0C0D0E0F","hex");
crypt.SetEncodedIV("000102030405060708090A0B0C0D0E0F","hex");
String fileToEncrypt = "qa_data/hamlet.xml";
CkFileAccess facIn = new CkFileAccess();
boolean success = facIn.OpenForRead(fileToEncrypt);
if (success != true) {
System.out.println("Failed to open file that is to be encrytped.");
return;
}
String outputEncryptedFile = "qa_output/hamlet.enc";
CkFileAccess facOutEnc = new CkFileAccess();
success = facOutEnc.OpenForWrite(outputEncryptedFile);
if (success != true) {
System.out.println("Failed to encrypted output file.");
return;
}
// Let's encrypt in 10000 byte chunks.
int chunkSize = 10000;
int numChunks = facIn.GetNumBlocks(chunkSize);
crypt.put_FirstChunk(true);
crypt.put_LastChunk(false);
CkBinData bd = new CkBinData();
int i = 0;
while (i < numChunks) {
i = i+1;
if (i == numChunks) {
crypt.put_LastChunk(true);
}
// Read the next chunk from the file.
// The last chunk will be whatever amount remains in the file.
bd.Clear();
facIn.FileReadBd(chunkSize,bd);
// Encrypt.
crypt.EncryptBd(bd);
// Write the encrypted chunk to the output file.
facOutEnc.FileWriteBd(bd,0,0);
crypt.put_FirstChunk(false);
}
// Make sure both FirstChunk and LastChunk are restored to true after
// encrypting or decrypting in chunks. Otherwise subsequent encryptions/decryptions
// will produce unexpected results.
crypt.put_FirstChunk(true);
crypt.put_LastChunk(true);
facIn.FileClose();
facOutEnc.FileClose();
// Decrypt the encrypted output file in a single call using CBC mode:
String decryptedFile = "qa_output/hamlet_dec.xml";
success = crypt.CkDecryptFile(outputEncryptedFile,decryptedFile);
// Assume success for the example.
// Compare the contents of the decrypted file with the original file:
boolean bSame = facIn.FileContentsEqual(fileToEncrypt,decryptedFile);
System.out.println("bSame = " + bSame);
}
}
I'm trying to decompress a large byte array of around 1,500,000 bytes.
The data is retrieved from a web service as a Base64 string, which I first decode to a byte array.
The byte array contains a compressed JSON string from which I want to read an object model via Gson.
My app is using around ~12 MB of RAM before I start my decompression method below:
public static String decompress(byte[] arr) {
if(arr == null || arr.length == 0)
return null;
byte[] buffer = new byte[1024];
try {
ByteArrayOutputStream out = new ByteArrayOutputStream(arr.length);
GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(arr));
int len;
while ((len = gzip.read(buffer)) > 0) {
out.write(buffer, 0 ,len);
}
gzip.close();
out.close();
System.gc();
return out.toString("UTF-8");
} catch (IOException e) {
e.printStackTrace();
return null;
} catch (OutOfMemoryError e) {
e.printStackTrace();
return null;
}
}
While writing to the output stream, the OutOfMemoryError occurs.
I have already checked many SO questions, but the only hint I got was that coding it as a streaming process would help. This wasn't explained at all, and I didn't find any leads on the net. So if somebody knows whether it is possible to do the decompression memory-efficiently, please explain it or link an article that covers the topic.
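One way to make it a streaming process is to skip the intermediate String entirely and hand Gson a Reader over the GZIPInputStream. This is only a sketch; MyModel is a hypothetical stand-in for the real object model:

    import com.google.gson.Gson;
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;

    public class StreamingDecompress {
        // Hypothetical model class standing in for the real object model
        static class MyModel { }

        static MyModel parse(byte[] compressed) throws IOException {
            try (Reader reader = new InputStreamReader(
                    new GZIPInputStream(new ByteArrayInputStream(compressed)),
                    StandardCharsets.UTF_8)) {
                // Gson pulls tokens straight off the gzip stream, so the decompressed
                // JSON text is never materialized as one big String in memory.
                return new Gson().fromJson(reader, MyModel.class);
            }
        }
    }

The Base64 step can be streamed the same way by wrapping the network InputStream in a decoding stream before the GZIPInputStream, so the encoded bytes never have to sit fully in memory either.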
I have a Base64 encoded Image String residing in a File Server. The encoded String has a prefix (ex: "data:image/png;base64,") for support in popular modern browsers (it's obtained via JavaScript's Canvas.toDataURL() method). The client sends a request for the image to my server which verifies them and returns a stream of the Base64 encoded String.
If the client is a web client, the image can be displayed as is within an <img> tag by setting the src to the Base64 encoded String. However, if the client is an Android client, the String needs to be decoded into a Bitmap without the prefix. Though, this can be done fairly easily.
The Problem:
In order to simplify my code and not reinvent the wheel, I'm using an Image Library for the Android client to handle loading, displaying, and caching the images (Facebook's Fresco Library to be exact). However, no library seems to support Base64 decoding (I want my cake and to eat it too). A solution I came up with is to decode the Base64 String on the server as it is being streamed to the client.
The Attempt:
S3Object obj = s3Client.getObject(new GetObjectRequest(bucketName, keyName));
Base64.Decoder decoder = Base64.getDecoder();
//decodes the stream as it is being read
InputStream stream = decoder.wrap(obj.getObjectContent());
try{
return new StreamingOutput(){
@Override
public void write(OutputStream output) throws IOException, WebApplicationException{
int nextByte = 0;
while((nextByte = stream.read()) != -1){
output.write(nextByte);
}
output.flush();
output.close();
stream.close();
}
};
}catch(Exception e){
e.printStackTrace();
}
Unfortunately, the Fresco library still has a problem displaying the image (with no stack traces!). As there doesn't seem to be an issue on my server when decoding the stream (no stack traces either), it leads me to believe that it must be an issue with the prefix. Which leaves me with a dilemma.
The Question: How do I remove the Base64 prefix from a Stream being sent to the client without storing and editing the entire Stream on the server? Is this possible?
Fresco does support decoding data URIs, just as the web client does.
The demo app has an example of this.
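For example, something along these lines should work (a sketch only; the layout id and method name are assumptions, not code from the demo app):

    import android.app.Activity;
    import android.net.Uri;
    import com.facebook.drawee.view.SimpleDraweeView;

    public class DataUriSketch extends Activity {
        // Assumes a SimpleDraweeView with id "image" in the layout, and that dataUri is the
        // full "data:image/png;base64,..." string received from the server.
        void showDataUri(String dataUri) {
            SimpleDraweeView view = (SimpleDraweeView) findViewById(R.id.image);
            view.setImageURI(Uri.parse(dataUri));
        }
    }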
How do I remove the Base64 prefix from a Stream being sent to the client without storing and editing the entire Stream on the server?
Removing the prefix while sending the stream to the client turns out to be a pretty complex task. If you don't mind storing the whole String on the server you could simply do:
BufferedReader br = null;
StringBuilder sb = new StringBuilder();
String line;
try {
br = new BufferedReader(new InputStreamReader(stream));
while ((line = br.readLine()) != null) {
sb.append(line);
}
String result = sb.toString();
//comma is the character which separates the prefix and the Base64 String
int i = result.indexOf(",");
result = result.substring(i + 1);
//Now, that we have just the Base64 encoded String, we can decode it
Base64.Decoder decoder = Base64.getDecoder();
byte[] decoded = decoder.decode(result);
//Now, just write each byte from the byte array to the output stream
} catch (IOException e) {
e.printStackTrace();
} finally {
if (br != null) {
try {
br.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
But being more efficient and not storing the entire Stream on the server is a much more complicated task. We could use the Base64.Decoder.wrap() method, but the problem is that it throws an IOException if it reaches a value that cannot be decoded (wouldn't it be nice if they provided a method that just left the bytes as-is if they can't be decoded?). And unfortunately, the Base64 prefix can't be decoded because it's not Base64 encoded, so it would throw an IOException.
To get around this problem, we have to use an InputStreamReader to read the InputStream with the appropriate Charset. Then we cast the ints returned by the read() calls to chars. Once we have read enough chars, we compare them with the start of the Base64 prefix ("data"). If it matches, we know the Stream contains the prefix, so we keep reading until we reach the character that ends the prefix (the comma: ","). Finally, we can begin streaming out the bytes after the prefix. Example:
S3Object obj = s3Client.getObject(new GetObjectRequest(bucketName, keyName));
Base64.Decoder decoder = Base64.getDecoder();
InputStream stream = obj.getObjectContent();
InputStreamReader reader = new InputStreamReader(stream);
try{
return new StreamingOutput(){
@Override
public void write(OutputStream output) throws IOException, WebApplicationException{
//for checking if string has base64 prefix
char[] pre = new char[4]; //"data" is four characters
boolean containsPre = false;
int count = 0;
int nextByte = 0;
while((nextByte = reader.read()) != -1){ // read chars through the InputStreamReader
if(count < pre.length){
pre[count] = (char) nextByte;
count++;
}else if(count == pre.length){
//determine whether has prefix or not and act accordingly
count++;
containsPre = new String(pre).toLowerCase().equals("data"); // Arrays.toString(pre) would yield "[d, a, t, a]", not "data"
if(!containsPre){
//doesn't have Base64 prefix so write all the bytes until this point
for(int i = 0; i < pre.length; i++){
output.write((int) pre[i]);
}
output.write(nextByte);
}
}else if(containsPre && count < 25){
//the comma character (,) is considered the end of the Base64 prefix
//so look for the comma, but be realistic, if we don't find it at about 25 characters
//we can assume the String is not encoded correctly
containsPre = (Character.toString((char) nextByte).equals(",")) ? false : true;
count++;
}else{
output.write(nextByte);
}
}
output.flush();
output.close();
reader.close(); // closes the underlying stream as well
}
};
}catch(Exception e){
e.printStackTrace();
return null;
}
This seems like a rather hefty task to do on the server, so I think decoding on the client side is a better choice. Unfortunately, most Android client-side libraries don't have support for Base64 decoding (especially with the prefix). However, as @tyronen pointed out, Fresco does support it if the String is already obtained. Though, this removes one of the key reasons to use an image loading library.
Android Client Side Decoding
Decoding in the client-side application is pretty easy. First, obtain the String from the InputStream:
BufferedReader br = null;
StringBuilder sb = new StringBuilder();
String line;
try {
br = new BufferedReader(new InputStreamReader(stream));
while ((line = br.readLine()) != null) {
sb.append(line);
}
return sb.toString();
} catch (IOException e) {
e.printStackTrace();
} finally {
if (br != null) {
try {
br.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
Then decode the String using Android's Base64 class:
int i = result.indexOf(",");
result = result.substring(i + 1);
byte[] decodedString = Base64.decode(result, Base64.DEFAULT);
Bitmap bitMap = BitmapFactory.decodeByteArray(decodedString, 0, decodedString.length);
The Fresco library seems hard to update due to them using a lot of delegation. So, I moved on to using the Picasso image loading library and created my own fork of it with the Base64 decoding ability.
I am trying to read data from a file. I have the following code.
public void ReadFile()
{
File sdcard = android.os.Environment.getExternalStorageDirectory();
File directory = new File(sdcard.getAbsolutePath()+ "/MyDirectory");
File file = new File(directory,"textfile1.txt");
try (FileInputStream fis = new FileInputStream(file)) {
char stringComma = new Character(',');
System.out.println("Total file size to read (in bytes) : "+ fis.available());
int content;
while ((content = fis.read()) != -1) {
// convert to char and display it
Log.d(TAG, "reading a file");
System.out.print((char) content);
}
} catch (IOException e) {
e.printStackTrace();
}
}
I have file format as follows [textfile1.txt]
[12],84359768069 //some numbers
[34],56845745740
[44],36344679992
[99],46378467467
When I read this file, each character is read one at a time. I want to split each line and store the parts in different strings, like
str1 = [12]
str2 = 84359768069
How can I achieve this?
You're currently reading a byte at a time, because you're using InputStream. That's the first thing to fix - you should be using a Reader for text data. The best approach is to wrap your InputStream in an InputStreamReader.
Next, it sounds like you want to read a line at a time rather than just a character at a time. The easiest way of doing that is to use a BufferedReader wrapping an InputStreamReader.
(If you were using Java 7+, all of this could be achieved very nicely using Files.newBufferedReader - you just need to supply the Path and the Charset. Until Android supports that, you'll need to just do the wrapping manually. It's not too painful though.)
Once you're reading a line at a time, you then need to split the line by comma - look at using String.split for this. I would then suggest you create a class to store these two separate values. So each line will be transformed into an instance of your class.
Finally, create a List<YourCustomClass> and add to it as you read the file.
That gives an overview of how to achieve each step - hopefully enough detail to enable you to get going, but not spoon-feeding you enough to hamper you actually learning from the experience.
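For readers who just want a reference, a minimal sketch of those steps might look like this (the Entry class and its field names are my own illustration, not part of the answer above):

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.List;

    public class LineReaderSketch {
        // Hypothetical holder for the two comma-separated fields
        static class Entry {
            final String key;    // e.g. "[12]"
            final String value;  // e.g. "84359768069"
            Entry(String key, String value) { this.key = key; this.value = value; }
        }

        static List<Entry> readEntries(String path) throws IOException {
            List<Entry> entries = new ArrayList<>();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(new FileInputStream(path), StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split(",", 2); // split on the first comma only
                    if (parts.length == 2) {
                        entries.add(new Entry(parts[0], parts[1]));
                    }
                }
            }
            return entries;
        }
    }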
A simple solution would be to parse the characters as they are read:
public void ReadFile()
{
File sdcard = android.os.Environment.getExternalStorageDirectory();
File directory = new File(sdcard.getAbsolutePath()+ "/MyDirectory");
File file = new File(directory,"textfile1.txt");
try (FileInputStream fis = new FileInputStream(file)) {
char stringComma = new Character(',');
System.out.println("Total file size to read (in bytes) : "+ fis.available());
int content;
String str1="";
String str2 = "";
boolean commaFound=false;
while ((content = fis.read()) != -1) {
// convert to char and display it
Log.d(TAG, "reading a file");
if ((char)content==',')
{
commaFound = true;
}
else if ((char)content=='\n') // compare against the char '\n', not the String "\n"
{
System.out.println("str1="+str1+"\nstr2="+str2);
commaFound = false;
str1 = "";
str2 = "";
}
else
{
if (commaFound)
{
str2 += (char)content;
}
else
{
str1 += (char)content;
}
}
System.out.print((char) content);
}
} catch (IOException e) {
e.printStackTrace();
}
}
I work with a proprietary client/server message format that restricts what I can send over the wire. I can't send a serialized object; I have to store the data in the message as a String. The data I am sending is large comma-separated values, and I want to compress the data before I pack it into the message as a String.
I attempted to use Deflater/Inflater to achieve this, but somewhere along the line I am getting stuck.
I am using the two methods below to deflate/inflate. However, passing the result of the compressString() method to decompressString() returns a null result.
public String compressString(String data) {
Deflater deflater = new Deflater();
byte[] target = new byte[100];
try {
deflater.setInput(data.getBytes(UTF8_CHARSET));
deflater.finish();
int deflateLength = deflater.deflate(target);
return new String(target);
} catch (UnsupportedEncodingException e) {
//TODO
}
return data;
}
public String decompressString(String data) {
String result = null;
try {
byte[] input = data.getBytes();
Inflater inflater = new Inflater();
int inputLength = input.length;
inflater.setInput(input, 0, inputLength);
byte[] output = new byte[100];
int resultLength = inflater.inflate(output);
inflater.end();
result = new String(output, 0, resultLength, UTF8_CHARSET);
} catch (DataFormatException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (UnsupportedEncodingException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return result;
}
From what I can tell, your current approach is:
1. Convert String to byte array using getBytes("UTF-8").
2. Compress byte array
3. Convert compressed byte array to String using new String(bytes, ..., "UTF-8").
4. Transmit compressed string
5. Receive compressed string
6. Convert compressed string to byte array using getBytes("UTF-8").
7. Decompress byte array
8. Convert decompressed byte array to String using new String(bytes, ..., "UTF-8").
The problem with this approach is in step 3. When you compress the byte array, you create a sequence of bytes which is generally no longer valid UTF-8. Converting those bytes to a String in step 3 then silently corrupts the data (invalid sequences are replaced), so the round trip fails.
The solution is to use a "bytes to characters" encoding scheme like Base64 to turn the compressed bytes into a transmissible string. In other words, replace step 3 with a call to a Base64 encode function, and step 6 with a call to a Base64 decode function.
Notes:
For small strings, compressing and encoding is likely to actually increase the size of the transmitted string.
If the compacted String is going to be incorporated into a URL, you may want to pick a different encoding to Base64 that avoids characters that need to be URL escaped.
Depending on the nature of the data you are transmitting, you may find that a domain specific compression works better than a generic one. Consider compressing the data before creating the comma-separated string. Consider alternatives to comma-separated strings.
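For concreteness, here is a minimal round-trip sketch of that approach, assuming Java 8's java.util.Base64 is available; the buffer sizes are illustrative and too small for large inputs:

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import java.util.Base64;
    import java.util.zip.DataFormatException;
    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    public class CompressToString {
        // Step 3 replacement: compress, then Base64-encode into a transmissible String.
        static String compress(String data) {
            Deflater deflater = new Deflater();
            deflater.setInput(data.getBytes(StandardCharsets.UTF_8));
            deflater.finish();
            byte[] buffer = new byte[data.length() + 64]; // rough upper bound for short inputs
            int length = deflater.deflate(buffer);
            deflater.end();
            return Base64.getEncoder().encodeToString(Arrays.copyOf(buffer, length));
        }

        // Step 6 replacement: Base64-decode, then decompress back to the original String.
        static String decompress(String encoded) throws DataFormatException {
            byte[] compressed = Base64.getDecoder().decode(encoded);
            Inflater inflater = new Inflater();
            inflater.setInput(compressed);
            byte[] buffer = new byte[8192];
            int length = inflater.inflate(buffer);
            inflater.end();
            return new String(buffer, 0, length, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) throws DataFormatException {
            String original = "a,b,c,d,e,f,g";
            String wire = compress(original);
            System.out.println(original.equals(decompress(wire))); // expected: true
        }
    }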
The problem is that you convert compressed bytes to a string, which breaks the data. Your compressString and decompressString should work on byte[].
EDIT: Here is a revised version. It works.
EDIT2: And about Base64: you're sending bytes, not strings, so you don't need Base64.
public static void main(String[] args) {
String input = "Test input";
byte[] data = new byte[100];
int len = compressString(input, data, data.length);
String output = decompressString(data, len);
if (!input.equals(output)) {
System.out.println("Test failed");
}
System.out.println(input + " " + output);
}
public static int compressString(String data, byte[] output, int len) {
Deflater deflater = new Deflater();
deflater.setInput(data.getBytes(Charset.forName("utf-8")));
deflater.finish();
return deflater.deflate(output, 0, len);
}
public static String decompressString(byte[] input, int len) {
String result = null;
try {
Inflater inflater = new Inflater();
inflater.setInput(input, 0, len);
byte[] output = new byte[100]; // TODO: may overflow, find a better solution
int resultLength = inflater.inflate(output);
inflater.end();
result = new String(output, 0, resultLength, Charset.forName("utf-8"));
} catch (DataFormatException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return result;
}
To me: writing a compression algorithm myself is difficult, but converting binary data to a string is not. So if I were you, I would serialize the object normally, zip it with compression (as provided by ZipFile), then convert it to a string using something like Base64 encode/decode.
I actually have Base64 encode/decode functions. If you want, I can post them here.
If you have a piece of code which seems to be silently failing, perhaps you shouldn't catch and swallow Exceptions:
catch (UnsupportedEncodingException e) {
//TODO
}
But the real reason why decompress returns null is that your exception handling doesn't specify what to do with result when you catch an exception - result is left as null. Are you checking the output to see if any Exceptions are occurring?
If I run your decompress() on a badly formatted String, Inflater throws me this DataFormatException:
java.util.zip.DataFormatException: incorrect header check
at java.util.zip.Inflater.inflateBytes(Native Method)
at java.util.zip.Inflater.inflate(Inflater.java:223)
at java.util.zip.Inflater.inflate(Inflater.java:240)
Inflater/Deflater is not the solution for compressing strings.
I think GZIPInputStream and GZIPOutputStream are the proper tools to compress the string.
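If you go that route, a minimal sketch looks like the following; note the result is still a byte[], so something like Base64 is still needed if the wire format has to be a String:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class GzipStringSketch {
        static byte[] gzip(String text) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(bytes)) {
                gzip.write(text.getBytes(StandardCharsets.UTF_8));
            }
            return bytes.toByteArray();
        }

        static String gunzip(byte[] compressed) throws IOException {
            try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed));
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                byte[] buffer = new byte[4096];
                int len;
                while ((len = gzip.read(buffer)) != -1) {
                    out.write(buffer, 0, len);
                }
                return out.toString("UTF-8");
            }
        }
    }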
I was facing a similar issue, which was resolved by Base64-decoding the input.
That is, instead of
data.getBytes(UTF8_CHARSET)
I tried
Base64.decodeBase64(data)
and it worked.