I am trying to decrypt a file in Java which was encrypted using the Microsoft CryptoAPI's CryptEncrypt function. I have read that "the encryption block buffer returned is in little-endian byte order (compared to big-endian for Java and .NET above)."
I am using the ByteBuffer and ByteOrder classes in Java, and I am fairly sure I am doing it wrong, because System.out.println prints the same output for beforebytes and afterbytes no matter what I try.
byte [] beforebytes = null;
// code to extract bytes from file here
ByteBuffer bb = ByteBuffer.wrap(beforebytes);
bb.order( ByteOrder.LITTLE_ENDIAN); // BIG_ENDIAN doesn't work either?
ByteBuffer slice = bb.slice();
// 'slice' now refers to the same data, but is supposed to be BIG ENDIAN
byte[] afterbytes = new byte[bb.capacity()];
// transfer bytes from this buffer into the given destination array
slice.get(afterbytes, 0, afterbytes.length);
Any help will be greatly appreciated!
Thank you,
Bertrand
I resolved this in C! Java now decrypts correctly what was encrypted by the CryptoAPI.
I started out from the CryptoAPI example at:
http://blogs.msdn.com/b/alejacma/archive/2008/01/28/how-to-generate-key-pairs-encrypt-and-decrypt-data-with-cryptoapi.aspx
Then, just before writing the encrypted text to file, I added a block of code from the reference "CryptoAPI C++ interop with Java using AES":
// reverse bytes of pbData for Java
for (unsigned i = 0; i < dwEncryptedLen / 2; i++)
{
    BYTE temp = pbData[i];
    pbData[i] = pbData[dwEncryptedLen - i - 1];
    pbData[dwEncryptedLen - i - 1] = temp;
}
The reference was for AES, but I encrypted with RSA. For decryption I used the BouncyCastle provider with the algorithm "RSA/NONE/PKCS1Padding". To install the BouncyCastle provider on Windows 7, follow: http://sce.uhcl.edu/yang/teaching/JDK_JCE_environment_Configuration.htm and reboot!
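A minimal sketch of that Java decryption side (the method name, file path, and privateKey parameter here are just placeholders, and it assumes the BouncyCastle provider "BC" is installed):

// Sketch only: the byte reversal already happened on the C side before the file was written,
// so here we just read and decrypt.
static byte[] decryptFromCryptoApi(PrivateKey privateKey, Path encryptedFile) throws Exception {
    byte[] encrypted = Files.readAllBytes(encryptedFile);
    Cipher rsa = Cipher.getInstance("RSA/NONE/PKCS1Padding", "BC"); // BouncyCastle provider
    rsa.init(Cipher.DECRYPT_MODE, privateKey);
    return rsa.doFinal(encrypted);
}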
Hope this will help someone.
The byte order doesn't matter if you get individual bytes (or a byte array) out of the buffer. It only matters if you are getting for example 16-bit short values or 32-bit integer values out of the buffer; in that case, the bytes from the buffer will be swapped appropriately according to the byte order.
For example:
ByteBuffer buf1 = ByteBuffer.wrap(new byte[]{0x01, 0x02, 0x03, 0x04});
buf1.order(ByteOrder.LITTLE_ENDIAN);
int n1 = buf1.getInt();
System.out.println(n1 == 0x04030201);
ByteBuffer buf2 = ByteBuffer.wrap(new byte[]{0x01, 0x02, 0x03, 0x04});
buf2.order(ByteOrder.BIG_ENDIAN);
int n2 = buf2.getInt();
System.out.println(n2 == 0x01020304);
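By contrast, reading the raw bytes back out (individually or into an array) gives the same result under either order, which is why beforebytes and afterbytes always print the same:

ByteBuffer buf3 = ByteBuffer.wrap(new byte[]{0x01, 0x02, 0x03, 0x04});
buf3.order(ByteOrder.LITTLE_ENDIAN);   // has no effect on get(byte[])
byte[] copy = new byte[4];
buf3.get(copy);                        // copy is {0x01, 0x02, 0x03, 0x04} either way
System.out.println(Arrays.equals(copy, new byte[]{0x01, 0x02, 0x03, 0x04})); // java.util.Arrays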
I am using AES to encrypt files. The problem first came up when I tried to encrypt a large file, so I did some reading online and figured that I need to use a buffer and only encrypt a few bytes of data at a time.
I divided my plaintext into chunks of 8192 bytes and then applied the encryption operation to each of these chunks, but I am still getting the out-of-memory error.
public static File encrypt(File f, byte[] key) throws Exception
{
    System.out.println("Starting Encryption");
    byte[] plainText = fileToByte(f);
    SecretKeySpec secretKey = new SecretKeySpec(key, ALGORITHM);
    Cipher cipher = Cipher.getInstance(ALGORITHM);
    cipher.init(Cipher.ENCRYPT_MODE, secretKey);
    System.out.println(plainText.length);

    List<byte[]> bufferedFile = divideArray(plainText, 8192);
    System.out.println(bufferedFile.size());

    List<byte[]> resultByteList = new ArrayList<>();
    for (int i = 0; i < bufferedFile.size(); i++)
    {
        resultByteList.add(cipher.doFinal(bufferedFile.get(i)));
    }

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    for (byte[] b : resultByteList)
        baos.write(b);

    byte[] cipherText = baos.toByteArray();
    File temp = byteToFile(cipherText, "D:\\temp");
    return temp;
}
fileToByte() takes a file as input and returns a byte array; divideArray() takes a byte array as input and divides it into an ArrayList of smaller byte arrays. divideArray() is shown below.
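For completeness, fileToByte() is essentially the following (a sketch; the exact implementation is assumed, it just reads the whole file into memory):

public static byte[] fileToByte(File f) throws IOException {
    // reads the entire file into a single byte array (which is itself part of the memory problem)
    return Files.readAllBytes(f.toPath());
}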
public static List<byte[]> divideArray(byte[] source, int chunkSize) {
    List<byte[]> result = new ArrayList<byte[]>();
    int start = 0;
    while (start < source.length) {
        int end = Math.min(source.length, start + chunkSize);
        result.add(Arrays.copyOfRange(source, start, end));
        start += chunkSize;
    }
    return result;
}
Here is the error I get
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at java.io.OutputStream.write(OutputStream.java:75)
at MajorProjectTest.encrypt(MajorProjectTest.java:61)
at MajorProjectTest.main(MajorProjectTest.java:30)
I am not getting this error if I use a file of a smaller size, but then again, the sole purpose of using buffers was to eliminate the out of memory problem.
Thanks in advance. Any help is appreciated.
One problem is holding the arrays and copies of the arrays in memory.
Read and write in blocks instead.
Also, doFinal should not be called repeatedly; use update for the intermediate blocks. Many examples just use a single doFinal, which is misleading when the data arrives in more than one block.
So:
public static File encrypt(File f, byte[] key) throws Exception
{
    System.out.println("Starting Encryption");
    SecretKeySpec secretKey = new SecretKeySpec(key, ALGORITHM);
    Cipher cipher = Cipher.getInstance(ALGORITHM);
    cipher.init(Cipher.ENCRYPT_MODE, secretKey);

    Path outPath = Paths.get("D:/Temp");
    byte[] plainBuf = new byte[8192];
    try (InputStream in = Files.newInputStream(f.toPath());
            OutputStream out = Files.newOutputStream(outPath)) {
        int nread;
        while ((nread = in.read(plainBuf)) > 0) {
            byte[] enc = cipher.update(plainBuf, 0, nread); // encrypt this block
            out.write(enc);
        }
        byte[] enc = cipher.doFinal(); // flush the final (padded) block
        out.write(enc);
    }
    return outPath.toFile();
}
Explanation
Encrypting a sequence of byte blocks goes as follows:
Cipher.init
Cipher.update block[0]
Cipher.update block[1]
Cipher.update block[2]
...
Cipher.doFinal(block[n-1])
Or instead of the last doFinal:
Cipher.update(block[n-1])
Cipher.doFinal()
Every update or doFinal call yields a portion of the encrypted data.
doFinal also "flushes" the final encrypted block.
If one has only a single block of bytes, it suffices to call
byte[] encryptedBlock = cipher.doFinal(plainBlock);
Then no calls to cipher.update are needed.
For the rest, I used the try-with-resources syntax, which automatically closes the input and output streams, even if a return happens or an exception is thrown.
Instead of File, the newer Path is a bit more versatile, and in combination with Paths.get("...") and the very useful utility class Files it enables powerful, concise code, such as Files.readAllBytes(path) and much more.
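For instance, reading and writing a small file becomes a one-liner each (the paths here are just examples, and this is only sensible for files that fit comfortably in memory):

Path path = Paths.get("D:/Temp/input.bin");
byte[] all = Files.readAllBytes(path);              // whole file into memory
Files.write(Paths.get("D:/Temp/copy.bin"), all);    // and back out again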
Look at these four variables: plainText, bufferedFile, resultByteList, cipherText. All of them contain your entire file in a slightly different format, which means that each of them is 1.2GB big. Two of them are Lists which means they are likely to be even bigger, because you didn't set the initial size of ArrayLists and they resize automatically when needed. So we are talking about over 5GB of memory needed.
Actually, you also add the chunks into ByteArrayOutputStream baos, which has to store them internally before you call toByteArray() on it. So that's 5 copies of your data, meaning 6GB+. The ByteArrayOutputStream internally uses an array, so it grows similarly to an ArrayList and will use more memory than strictly needed (see the stack trace - it failed while trying to resize).
All these variables are in the same scope and none of them is ever assigned null, which means none of them can be garbage collected.
You can increase the maximum heap limit (see Increase heap size in Java), but this will be a serious limitation on your program.
Your program throws the out-of-memory error when writing to the ByteArrayOutputStream. This is the 4th time you copy all your data, which means that 3.6GB is already allocated. From this I deduce that your heap is set to 4GB (which is the maximum you can set on a 32-bit operating system).
What you should do is have a loop: read part of the file, encrypt it, and write it to another file. This avoids loading the entire file into memory. Lines like List<byte[]> bufferedFile = divideArray(plainText, 8192); or resultByteList.add(...) are something you shouldn't have in your code - they end up storing the entire file in memory. The only thing you need to keep track of is a cursor (i.e. a position saying which bytes you have already processed), which is O(1) memory. Then you only need as much memory as the chunk you are encrypting - far smaller than the entire file.
As you're iterating over the file, keep a counter to track the number of bytes:
int encryptedBytesSize = 0;
for (int i = 0; i < bufferedFile.size(); i++) {
    resultByteList.add(cipher.doFinal(bufferedFile.get(i)));
    encryptedBytesSize += resultByteList.get(resultByteList.size() - 1).length;
}
Then use the constructor which takes a size parameter to create the output buffer:
ByteArrayOutputStream baos = new ByteArrayOutputStream(encryptedBytesSize);
This prevents the internal buffer from having to grow. Growth can be non-linear, so as more bytes are added, even more space is allocated the next time it grows.
But this still might not work, depending on the file size. Another approach would be to:
Read a small chunk of the unencrypted file
Encrypt the chunk
Write the result to the encrypted file
This avoids having either the whole plain file or the whole encrypted file in memory at any one time; a sketch of this loop follows.
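A sketch of that loop, using CipherOutputStream so each chunk is encrypted as it is written (ALGORITHM and the key handling are assumed to match your existing code; the output File parameter is a placeholder):

public static File encrypt(File in, File out, byte[] key) throws Exception {
    Cipher cipher = Cipher.getInstance(ALGORITHM);
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, ALGORITHM));
    try (InputStream is = new FileInputStream(in);
         OutputStream os = new CipherOutputStream(new FileOutputStream(out), cipher)) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = is.read(buf)) > 0) {
            os.write(buf, 0, n);    // the stream encrypts each chunk as it is written
        }
    }   // closing the CipherOutputStream calls doFinal() and flushes the last block
    return out;
}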
I am making a byte array with predefined size as shown below:
private byte[] newPayload() {
    byte[] payload = new byte[100];
    Arrays.fill(payload, (byte) 1);
    return payload;
}
Now I want to put 8 bytes of the current timestamp at the front of the same byte array.
long time = System.currentTimeMillis();
So the first eight bytes will be the current timestamp and the remaining 92 bytes will stay the same as what I am doing right now.
You can use ByteBuffer to convert a long to a byte[]. You can also use System.arraycopy to copy this byte[] into the main array. Please refer to the code below.
ByteBuffer buffer = ByteBuffer.allocate(Long.SIZE / Byte.SIZE);
buffer.putLong(time);
byte[] timeBytes = buffer.array();
System.arraycopy(timeBytes, 0, payload, 0, timeBytes.length);
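Putting it together with your newPayload(), the combined array could be built like this (a sketch; it keeps your 100-byte size and filler value):

private byte[] newPayload() {
    byte[] payload = new byte[100];
    Arrays.fill(payload, (byte) 1);                         // fill everything with 1s first
    ByteBuffer buffer = ByteBuffer.allocate(Long.SIZE / Byte.SIZE);
    buffer.putLong(System.currentTimeMillis());             // 8-byte current timestamp
    byte[] timeBytes = buffer.array();
    System.arraycopy(timeBytes, 0, payload, 0, timeBytes.length); // overwrite the first 8 bytes
    return payload;
}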
byte[] demande = new byte[2];
Let's suppose that demande is a data frame which will be sent to a socket.
What should demande[0] and demande[1] be if I want to send 200? I tried demande[0] = 1 and demande[1] = -56 (1*256 - 56 = 200), but it doesn't work. How can I do this?
I assume that the number 200 is a decimal value.
As 200 is no greater than 255, it fits into one byte; the hexadecimal value of 200 is 0xC8.
So in your case you have two options. Which one is correct depends on the protocol you are using.
Either

byte[] demande = { 0x00, (byte) 0xC8 }; // big endian

or

byte[] demande = { (byte) 0xC8, 0x00 }; // little endian

Or, if you prefer:

byte[] demande = new byte[2];
demande[0] = 0x00;
demande[1] = (byte) 0xC8;

(big endian; the cast is needed because 0xC8 is larger than Byte.MAX_VALUE)
You can use the ByteBuffer class to create the byte array. If you want to convert the value 200 into a two-byte array:
ByteBuffer b = ByteBuffer.allocate(2);
b.putShort((short) 200);     // 200 = 0x00C8, written in big-endian order by default
byte[] result = b.array();   // { 0x00, (byte) 0xC8 }
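To verify the result (or to decode it on the receiving side), the two bytes can be recombined like this; the & 0xFF masks are what avoid the sign problem you ran into with -56:

byte[] demande = { 0x00, (byte) 0xC8 };                        // 200, big endian
int value = ((demande[0] & 0xFF) << 8) | (demande[1] & 0xFF);  // treat both bytes as unsigned
System.out.println(value);                                     // prints 200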
I am trying to encrypt/decrypt data between iOS and Java (iOS to Java and Java to iOS), but data encrypted in Java is not properly decrypted in iOS, and data encrypted in iOS is not properly decrypted in Java.
- (NSData *)encrypt:(NSData *)dataToEncrypt symmetricKey:(NSData *)symmetricKey context:(CCOperation)encryptOrDecrypt {
    NSUInteger data_length = [dataToEncrypt length];
    uint8_t input_raw_data[data_length];
    // [dataToEncrypt length] gives the number of chars present in the string. So say there are 10 chars.
    // Now, getBytes needs to get the raw bytes from this, i.e. binary NSData. If the encoding were a
    // full 16-bit encoding, the number of bytes needed would have been double (20). But as we are using
    // NSUTF8StringEncoding, the number of bytes needed is 1 per char (even originally-unicode chars are
    // compressed into an 8-bit UTF8 encoding).
    [dataToEncrypt getBytes:&input_raw_data length:data_length];
    // [dataToEncrypt getBytes:&input_raw_data maxLength:data_length usedLength:NULL encoding:NSUTF8StringEncoding options:0 range:NSMakeRange(0, data_length) remainingRange:NULL];

    // According to the doc: for block ciphers, the output size will always be less than or
    // equal to the input size plus the size of one block.
    // That's why we need to add the size of one block here.
    size_t buffer_size = data_length + kCCBlockSizeAES128;
    void *buffer = malloc(buffer_size);

    size_t num_bytes_encrypted = 0;
    CCCryptorStatus crypt_status = CCCrypt(encryptOrDecrypt, kCCAlgorithmAES128, 0x0000,
                                           [symmetricKey bytes], kCCKeySizeAES128,
                                           NULL,
                                           input_raw_data, data_length,
                                           buffer, buffer_size,
                                           &num_bytes_encrypted);
    // NSLog(@"~~num bytes encrypted: %d", num_bytes_encrypted);
    if (crypt_status == kCCSuccess) {
        NSLog(@"~~Data encoded successfully...");
        return [NSData dataWithBytesNoCopy:buffer length:num_bytes_encrypted];
    }
    free(buffer); // free the buffer
    return nil;
}
I have used this
Java Code -
Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
String keyString = "keykeykeykeykeykeykeykey";
byte[] keyBytes = keyString.getBytes("UTF-8");
cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new IvParameterSpec(new byte[16]));
byte[] resultBytes = cipher.doFinal("Hallo Welt!".getBytes("UTF8"));
FileOutputStream out = new FileOutputStream(new File("encryptedFileJava"));
out.write(resultBytes); out.close();
and this is encrypted text - “Se áJbë|8”R ,
key - BW3dKDf2bkDC4Bq9xTdr1g==
Please help me or suggest me any solution.
Thank you.
You have at least two problems:
The Objective-C code is using ECB mode, while the Java code is using CBC mode. Use a byte array of zeroes instead of NULL for the IV in the CCCrypt invocation to get CBC mode with a zero IV like the Java code.
Since keyBytes is 24 bytes long, Java will use AES-192. CCCrypt will just ignore the extra bytes. Either specify AES-192 to CCCrypt or use a 128-bit key ("keykeykeykeykeyk" should work).
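For example, a Java side consistent with the points above might look like this (a sketch; the 16-character key string and the padded plaintext are only illustrations, and with NoPadding the input length must be a multiple of 16 bytes):

byte[] keyBytes = "keykeykeykeykeyk".getBytes("UTF-8");            // exactly 16 bytes -> AES-128
Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"),
        new IvParameterSpec(new byte[16]));                        // explicit zero IV
byte[] resultBytes = cipher.doFinal("Hallo Welt!12345".getBytes("UTF-8")); // 16-byte input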
For secure communication between iOS and Java devices, symmetric key encryption can be used.
In such cases, where the platforms are different, it is advisable that the generated key is a plain-text key rather than an API-generated key.
AES 128-bit encryption can be used here. iOS devices are capable of generating a symmetric key and encrypting the text using AES.
The link below provides the Java code to encrypt and decrypt using a plain-text symmetric key:
http://www.java-redefined.com/2015/06/symmetric-key-encryption-ios-java.html
We are trying to duplicate the SHA-1 hashing done on our Java 1.6 server with the iOS/iPhone CommonCrypto libraries.
A basic question I have is: why does Java produce a fixed output of 40 bytes while iOS produces a fixed output of 20 bytes from the SHA-1 algorithm?
I have found this link, which shows how to generate the hash in both environments, but the outputs would be of different lengths, correct?
How to SHA1 hash a string in Android?
The SHA-1 algorithm always returns 160 bits (or 20 bytes).
I suspect your Java code is turning the byte array into a hexadecimal string, i.e. where each byte would show as two characters.
To compare this with CommonCrypto you can either:
convert the Java output to a byte array; or
convert the CommonCrypto byte array to a hexadecimal string (this is what the link in your question is doing)
before comparing the values.
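A sketch of that Java-side conversion, so you can see where the 40 characters come from (the input string here is a placeholder):

byte[] digest = MessageDigest.getInstance("SHA-1").digest(input.getBytes("UTF-8")); // 20 bytes
StringBuilder hex = new StringBuilder();
for (byte b : digest) {
    hex.append(String.format("%02x", b));   // two hex characters per byte
}
String hexString = hex.toString();          // 40 characters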
Yesterday I was facing exactly this problem: the SHA-1 implementation I was using was not compatible with the Android one. After more or less an hour searching through Android and iOS implementations, I realized it was only a problem of string formatting (change X to x).
I'm sharing the snippet of what we are using for an iOS/Android-compatible SHA-1 implementation. Hope this helps, as a concrete version of @poupou's answer :-)
public static String sha1(String s) {
    MessageDigest digest = null;
    try {
        digest = MessageDigest.getInstance("SHA-1");
    } catch (NoSuchAlgorithmException e) {
        e.printStackTrace();
    }
    digest.reset();
    byte[] data = digest.digest(s.getBytes());
    return String.format("%0" + (data.length * 2) + "X", new BigInteger(1, data));
}
- (NSString *)sha1:(NSString *)input
{
    const char *cstr = [input cStringUsingEncoding:NSUTF8StringEncoding];
    NSData *data = [NSData dataWithBytes:cstr length:input.length];
    uint8_t digest[CC_SHA1_DIGEST_LENGTH];
    CC_SHA1(data.bytes, data.length, digest);
    NSMutableString *output = [NSMutableString stringWithCapacity:CC_SHA1_DIGEST_LENGTH * 2];
    for (int i = 0; i < CC_SHA1_DIGEST_LENGTH; i++)
        [output appendFormat:@"%02X", digest[i]];
    return output;
}
On Android or iOS, the SHA-1 digest has an expected length of 20 bytes.
But there is a difference in what the SHA-1 functions return:
iOS simply does not terminate the result with a null character.
So the point is not to use the string length of the SHA output to size the output data, but the CC_SHA1_DIGEST_LENGTH constant, which is 20.
uint8_t digest[CC_SHA1_DIGEST_LENGTH];
NSData* data = [stringToHash dataUsingEncoding:NSUTF8StringEncoding];
char* sha = CC_SHA1(data.bytes, data.length, digest);
NSData *hashedData = [NSData dataWithBytes:sha length:CC_SHA1_DIGEST_LENGTH];
If you terminate the digest yourself, then the sha output is correct:
uint8_t digest[CC_SHA1_DIGEST_LENGTH+1];
memset(digest,0,CC_SHA1_DIGEST_LENGTH+1);
NSData* data = [stringToHash dataUsingEncoding:NSUTF8StringEncoding];
char* sha = CC_SHA1(data.bytes, data.length, digest);
NSData *hashedData = [NSData dataWithBytes:sha length:strlen(sha)];
Hope it helps, cheers :)