Shrink some blocks of ciphertext in AES - java

I need to decrypt an AES ciphertext, for which I have the key. The problem is that on decryption in Java, an error occurs:
javax.crypto.BadPaddingException: Given final block not properly padded
I suspect there was a problem while persisting the data to the database and that some part of the data is corrupt (since there were no problems before, it can't be the key). The length of the ciphertext is a multiple of 16.
Two questions:
If I deleted the last 16-byte block, would it still be possible to decrypt the data?
Do you have any other suggestions?

You can omit padding by specifying NoPadding as the padding scheme when encrypting, provided you can guarantee that your message length will always be a multiple of the AES block size, which is 16 bytes. You can also omit padding if you use AES in a mode that doesn't require it (e.g. CTR mode).
Also, you can always attempt to decrypt a padded message with NoPadding, but then you'll have to deal with the padding bytes in the plaintext yourself at some point.
Overall you're probably better off trying to figure out why your message is not decrypting properly instead of trying workarounds. Workarounds when dealing with crypto are generally not a good idea.
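To illustrate the second point, here is a minimal sketch of decrypting with NoPadding and stripping the PKCS#5 padding by hand. The hard-coded key and zero IV are demo values only, and production code should not handle padding this way at all:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class NoPaddingDemo {
    // Decrypts with NoPadding, then strips the PKCS#5 padding manually.
    static byte[] decryptNoPadding(byte[] key, byte[] iv, byte[] ct) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] padded = c.doFinal(ct);
        int pad = padded[padded.length - 1] & 0xFF; // last byte = pad length (1..16)
        if (pad < 1 || pad > 16) return padded;     // padding looks corrupt: keep raw output
        return Arrays.copyOfRange(padded, 0, padded.length - pad);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // 16-byte demo key
        byte[] iv  = new byte[16];                                        // zero IV, demo only
        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] ct = enc.doFinal("hello world".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decryptNoPadding(key, iv, ct), StandardCharsets.UTF_8));
        // prints: hello world
    }
}
```

Because NoPadding never throws BadPaddingException, decrypting this way at least lets you inspect the (possibly garbled) plaintext and see where the corruption is.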

Related

AES share only the key (no salt or IV)

I have a requirement to encrypt a string with AES (symmetric) and then share the encrypted string with a client.
They know the key (we have communicated over the phone) and they should be able to decipher the encrypted string.
However, the Java implementations I have found all require sharing the salt (or IV) along with the encrypted document. This defeats the purpose of sharing only the cipher text and a symmetric key (beforehand) if I somehow have to send the salt every time.
Am I understanding something wrong? Is there a way to share only the cipher text and the symmetric key?
The purpose of the IV in encryption is randomization. If you use the ECB mode of operation instead, equal plaintext blocks encrypt to equal ciphertext blocks under the same key, which leaks information about the messages. See the famous ECB penguin on the Wikipedia page for block cipher modes of operation.
E(k,m) = E(k,m') iff m=m'
Modern modes of operation use an IV, such as AES-GCM, which appears in the TLS 1.3 cipher suites.
You should tell the big company about the danger. I'm pretty sure they can adapt to your case easily.
Note: ECB mode can only be safe if
your data is always different (no patterns), or
you generate a new key for every encryption with a key agreement protocol such as the Diffie-Hellman key exchange, and this is not your case.
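The equation above is easy to demonstrate. A quick sketch (hard-coded demo key, illustration only) showing that under ECB two identical plaintext blocks produce identical ciphertext blocks:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class EcbLeakDemo {
    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // demo key
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        // Two identical 16-byte plaintext blocks...
        byte[] ct = c.doFinal("SAME_BLOCK_16_B!SAME_BLOCK_16_B!".getBytes(StandardCharsets.UTF_8));
        // ...produce two identical ciphertext blocks: the repetition leaks.
        System.out.println(Arrays.equals(Arrays.copyOfRange(ct, 0, 16),
                                         Arrays.copyOfRange(ct, 16, 32))); // prints: true
    }
}
```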
Usually the IV is shared by appending it to the cipher text. So, eventually you are sending a single Base64 encoded string.
So, if you are worried about breaking a contract by sending two fields (one IV and one cipher text) instead of sending just one field, let me assure you that you're going to send a single field only. And the decryption logic knows how to extract the IV from the received string and use it in the decryption process.
Note that there are some key distinctions between IV and key:
The key is a secret; the IV is not.
Many messages can be encrypted with the same key, but the IV is different for every new message. The key and IV combination has to be unique for every message, and the IV also has to be random.
Therefore, you do not share the IV the same way as the key. Since the IV changes for every message, it is appended to the cipher text to form a single string, which is then sent as the encrypted output. So the decryption logic takes as input only the key and your encrypted output; it knows how to extract the IV and the cipher text from that output.
These days, if anyone needs to encrypt something using AES, the usual choice is an authenticated encryption mode like GCM, which provides not only confidentiality but also integrity.
Unless the recipient (in your case) rigidly specifies a particular mode for AES, the default choice would be AES with GCM. And even if the recipient proposes a mode that is not an authenticated encryption mode, you may consider explaining the benefits of authenticated encryption to them.
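As a sketch of the prepend-the-IV approach with AES-GCM (the hard-coded key and message here are demo values; real code would use a properly generated key):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

public class IvPrependDemo {
    static final int IV_LEN = 12;    // recommended IV size for GCM
    static final int TAG_BITS = 128; // authentication tag length

    // Encrypts and returns one Base64 string: IV || ciphertext+tag.
    static String encrypt(byte[] key, String plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv); // fresh IV per message
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
               new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(out);
    }

    // Splits the IV back off the front; only the key needs to be pre-shared.
    static String decrypt(byte[] key, String encoded) throws Exception {
        byte[] in = Base64.getDecoder().decode(encoded);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
               new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(in, 0, IV_LEN)));
        return new String(c.doFinal(in, IV_LEN, in.length - IV_LEN), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // 16-byte demo key
        String wire = encrypt(key, "secret message"); // the single field you send
        System.out.println(decrypt(key, wire));       // prints: secret message
    }
}
```

The client only ever receives one Base64 string, exactly as described above; the IV travels inside it.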
You'll find a complete java implementation with detailed explanation here.
You may also want to read this along with the comments to understand it better.

Can you ever get partially decrypted data from a decryption algorithm using WSS4J library?

I'm decrypting some data using Java and Apache's most recent WSS4J library with 128-bit AES decryption.
I set up the cipher, which appears to be correct, with the right padding, decryption algorithm, and cipher block mode.
I then make a call to doFinal() on the encrypted data bytes and it successfully returns a value.
My question is would it ever return a value that is only partially decrypted?
For example, let's say the first 16 bytes are still jumbled up after decryption, but the remainder of the data returned has been successfully decrypted, and is human-readable with the expected data there.
Would this mean that there could be an issue with my decryption process? Or would it not even be able to return a value from the doFinal() step if something was even slightly off with the decryption setup?
If I get a value returned from doFinal() would that mean that 100% the data returned is the original data before it was encrypted?
I'm decrypting data from a web service call and the owners of the web service are claiming that I must be doing something wrong during my decryption process and that they are sending the data correctly.
Yes, that is possible. A prime example is trying to decrypt something in CBC mode with the wrong initialization vector (IV): the first block of the decrypted output will be invalid, while the rest decrypts correctly.
This is because, in CBC decryption, the IV is XORed only into the first block of plaintext; every subsequent block is XORed with the previous ciphertext block instead.
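A small demo of this behavior (hard-coded key and IVs, illustration only): decrypting a two-block CBC ciphertext with the wrong IV garbles only the first block:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class WrongIvDemo {
    // CBC with no padding, so block boundaries are easy to inspect.
    static byte[] cbcNoPad(int mode, byte[] key, byte[] iv, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/NoPadding");
        c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // demo key
        byte[] iv = new byte[16];
        Arrays.fill(iv, (byte) 1); // the IV used for encryption
        byte[] pt = "0123456789ABCDEF0123456789ABCDEF".getBytes(StandardCharsets.UTF_8); // two blocks
        byte[] ct = cbcNoPad(Cipher.ENCRYPT_MODE, key, iv, pt);

        byte[] wrongIv = new byte[16]; // all zeros: NOT the IV used above
        byte[] out = cbcNoPad(Cipher.DECRYPT_MODE, key, wrongIv, ct);

        // First block differs; second block survives intact.
        System.out.println(Arrays.equals(Arrays.copyOfRange(out, 0, 16),
                                         Arrays.copyOfRange(pt, 0, 16)));  // prints: false
        System.out.println(Arrays.equals(Arrays.copyOfRange(out, 16, 32),
                                         Arrays.copyOfRange(pt, 16, 32))); // prints: true
    }
}
```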

Using blowfish in CBC mode for encryption and decryption but how to proceed with the IV?

I have a UI to add/edit passwords. These passwords are encrypted using Blowfish in CBC mode, and that worked fine, but decryption required an IV (it threw a missing-parameter exception).
I initialized the cipher with the Cipher class, so I assumed this would have taken care of the IV while encrypting.
So my doubt is,
Should the IV be the same for both encryption and decryption? I read on some pages that if we use an incorrect IV during decryption, the first block will be incorrect but the remaining blocks will be correct. Can you explain this?
If the IV has to be saved (in the case of encryption and decryption using the same IV), should it be saved as a plain object or encrypted along with the password using some delimiter? Which is safer?
Thanks in advance.
Yes, the IV must be the same for encryption and decryption. Note, though, that in CBC decryption a wrong IV corrupts only the first plaintext block: each subsequent block is XORed with the previous ciphertext block rather than the IV, so the rest of the message decrypts correctly, just as you read.
The IV can be stored in plaintext. If you try and store it encrypted, you'll end up needing to store the IV used to encrypt the IV...
However, it is generally considered bad practice to store passwords in an encrypted form. If someone were to retrieve your database, they'd only need to find one key to recover all the passwords.
The recommended way to store passwords is to use a PBKDF (password-based key derivation function), which applies a plain hash or an HMAC many times. See the OWASP password storage cheat sheet.
There are primitives for this in Java; see the example on this page. (Search for "Use a Password Hashing Algorithm" and scroll down to the Java implementation.)
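A minimal Java sketch of such a PBKDF using the built-in PBKDF2 primitive. The iteration count is an assumption roughly in line with current OWASP guidance for PBKDF2-HMAC-SHA512; check the cheat sheet for up-to-date numbers:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class PasswordHashDemo {
    static final int ITERATIONS = 210_000; // assumed count; consult OWASP guidance

    // Derives a 256-bit hash from the password and a per-user salt.
    static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512")
                               .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // random per-user salt, stored in plaintext
        byte[] stored = hash("hunter2".toCharArray(), salt);
        // Verification: recompute with the stored salt and compare.
        System.out.println(Arrays.equals(stored, hash("hunter2".toCharArray(), salt))); // prints: true
        System.out.println(Arrays.equals(stored, hash("wrong".toCharArray(), salt)));   // prints: false
    }
}
```

Note that, exactly as with the IV above, the salt is not a secret and is stored alongside the hash.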

How to encrypt a big text file using AES algorithm, Hadoop and Java?

I have a big text file (100 MB or more) and I want to use the AES algorithm to encrypt its content using Hadoop and Java (Map/Reduce functions), but as I am new to Hadoop, I am not really sure how to start. I found JCE (a Java library) where AES is already implemented, but I have to provide 16 bytes of text along with a key to generate 16 bytes of ciphertext (encrypted output). My question is how to use this JCE/AES method for my purpose. How should I split my big input text file, and what should I pass to the map method of the Mapper class? What should be the key and value? What should be passed to the Reduce method? Any kind of starting point or code example would be greatly appreciated. (P.S. I am new to Hadoop; I have just run the wordcount example on my machine, that's it.)
EDIT 1:
Actually, I have to do the following things:
split the input file into 16-byte chunks.
for each chunk, apply the AES algorithm to get a 16-byte ciphertext and write it to the output file.
continue the process until the input file ends.
My question now is, how to parallelize it using Hadoop's Map and Reduce methods? what should be the key and how to accumulate the output cipher texts in the output file?
Encrypting a large stream with a block cipher requires you to resolve a fundamental issue, completely independent of how you actually split the work (M/R or whatever): cipher-block chaining. Because each block depends on the output of the previous block, you cannot encrypt (or decrypt) block N without first encrypting (or decrypting) block N-1. This implies that you can only encrypt the file one block at a time, starting with block 1, then block 2, then 3 and so on.
To work around the problem, all encryption solutions do the same thing: they split the stream into chunks of adequate size (the right size is always a trade-off) and use some out-of-band storage where they associate each chunk with a startup nonce (initialization vector). This way, chunks can be encrypted and decrypted independently.
HDFS has a natural chunk (the block), and the access patterns on blocks are single-threaded and sequential, making it the natural choice for the encryption chunk. Adding the extra metadata on the namenode for each block's nonce is relatively straightforward. If you are doing this for your own education, it is a fun project to tackle. Key management is a separate issue; as with any encryption scheme, key management is the genuinely hard part, while implementing the cipher is the easy part.
If you are considering this for real-world use, stop right now and use an off-the-shelf encryption solution for Hadoop, of which there are several.
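For education, the per-chunk nonce idea can be sketched like this (toy chunk size and demo key; in a real Hadoop job each chunk would be handled by a mapper and the nonces stored as block metadata). CTR mode is used here rather than CBC because it needs no padding and accepts any chunk length:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkedEncryptDemo {
    static final int CHUNK = 32; // toy chunk size; an HDFS block would be megabytes

    // Each chunk carries its own nonce, so chunks encrypt/decrypt independently.
    static class EncryptedChunk {
        final byte[] iv, ct;
        EncryptedChunk(byte[] iv, byte[] ct) { this.iv = iv; this.ct = ct; }
    }

    static EncryptedChunk encryptChunk(byte[] key, byte[] chunk) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv); // fresh nonce per chunk: the "out-of-band" metadata
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return new EncryptedChunk(iv, c.doFinal(chunk));
    }

    static byte[] decryptChunk(byte[] key, EncryptedChunk ec) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(ec.iv));
        return c.doFinal(ec.ct);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // demo key
        byte[] data = "a large file that we pretend arrives as independent splits"
                          .getBytes(StandardCharsets.UTF_8);

        // "Map" phase: each chunk could be encrypted by a separate mapper.
        List<EncryptedChunk> chunks = new ArrayList<>();
        for (int off = 0; off < data.length; off += CHUNK)
            chunks.add(encryptChunk(key,
                Arrays.copyOfRange(data, off, Math.min(off + CHUNK, data.length))));

        // Reassembly: decrypt each chunk independently, in order.
        StringBuilder sb = new StringBuilder();
        for (EncryptedChunk ec : chunks)
            sb.append(new String(decryptChunk(key, ec), StandardCharsets.UTF_8));
        System.out.println(sb);
    }
}
```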

javax.crypto AES encryption - Do I only need to call doFinal?

I want to do AES CBC encryption in Java. I'm using javax.crypto. After I have the Cipher initialized, do I only need to call doFinal on the clear bytes to properly encrypt it? Or do I need to do something with update?
Documentation says update:
Continues a multiple-part encryption or decryption operation
and doFinal:
Encrypts or decrypts data in a single-part operation, or finishes a multiple-part operation
What exactly do they mean by a multiple-part encryption?
doFinal adds the PKCS#7 padding in the last block. So you can call update zero or more times, but the last call must be a doFinal. A multiple-part operation is used when the data is not contiguous in memory, a typical example being buffers received from a socket. You set up the cipher and then keep calling update to encrypt or decrypt the data block by block, building up the output by appending the blocks returned by each update. For the last input block you call doFinal, and the returned block is the last one to append to the output. On encrypting, doFinal adds the padding; on decrypting, doFinal validates and removes it.
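A small sketch of the multiple-part pattern (hard-coded demo key and zero IV, not for production): the message is fed to update in arbitrary slices and finished with doFinal:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class UpdateDoFinalDemo {
    // Encrypts msg by feeding it to update() in slices; doFinal() flushes the last block + padding.
    static byte[] encryptInSlices(byte[] key, byte[] iv, byte[] msg, int slice) throws Exception {
        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        ByteArrayOutputStream ct = new ByteArrayOutputStream();
        for (int off = 0; off < msg.length; off += slice) {
            byte[] part = enc.update(msg, off, Math.min(slice, msg.length - off));
            if (part != null) ct.write(part); // update may buffer and return null or fewer bytes
        }
        ct.write(enc.doFinal()); // last block plus PKCS#5/#7 padding
        return ct.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // 16-byte demo key
        byte[] iv  = new byte[16];                                        // zero IV, demo only
        byte[] msg = "a message arriving in several network buffers".getBytes(StandardCharsets.UTF_8);
        byte[] ct = encryptInSlices(key, iv, msg, 10); // 10-byte slices, as if from a socket

        // Decrypting as a single-part operation: one doFinal over the whole ciphertext.
        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        System.out.println(new String(dec.doFinal(ct), StandardCharsets.UTF_8));
        // prints: a message arriving in several network buffers
    }
}
```

So to answer the question: yes, if your clear bytes are already in one contiguous array, a single doFinal is all you need.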
