Will every encrypted password from Jasypt contain "=" at the end? - java

While using Jasypt, the encrypted passwords contain = (the equals character) at the end. Is it guaranteed that the encrypted passwords will always have = at the end?
How, if at all, can we control this behavior?
For example: test is encrypted to Nv4nMcuVwsvWVuYD7Av44Q==

It looks like the =s come from padding the Base64 representation of the encryption / hash output.
In that case, the answer is generally no, it won't necessarily end with "=".
However, if the algorithm you're using produces constant-length output (e.g. if it uses hashing along the way), it might by chance end up producing those "="s every time - but there's no way of knowing that for sure unless you fully understand every step the algorithm you're using performs.
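As a quick illustration (using the JDK's own java.util.Base64 rather than anything Jasypt-specific), whether the trailing "=" appears depends only on the number of bytes being encoded, modulo 3:

import java.util.Base64;

public class PaddingDemo {
    public static void main(String[] args) {
        // 16 bytes (e.g. one 128-bit cipher block): 16 % 3 == 1, so two '=' pad chars
        System.out.println(Base64.getEncoder().encodeToString(new byte[16]));
        // 15 bytes: 15 % 3 == 0, so no padding at all
        System.out.println(Base64.getEncoder().encodeToString(new byte[15]));
        // 17 bytes: 17 % 3 == 2, so a single '='
        System.out.println(Base64.getEncoder().encodeToString(new byte[17]));
    }
}

A 16-byte output base64-encodes to 24 characters ending in "==", which is exactly the shape of Nv4nMcuVwsvWVuYD7Av44Q== in the question; an algorithm whose output length isn't stuck at a multiple of 16 bytes can just as easily produce one "=" or none.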

Related

PHP Bcrypt Salt as of 7.0

I am working on an application in which I have to compare two hashed passwords in a database. One password is generated in PHP with $Password = password_hash($RawPassword, PASSWORD_BCRYPT);
while the other password, which is sent to the database to compare against the PHP-hashed one, is generated in Java with String hashedPassword = BCrypt.hashpw(password);
As of PHP 7.0 the salt is generated automatically, so how can I know what salt is being applied in PHP so I can apply it in my Java code? Or is there still a way to specify the salt, which is no longer in the documentation for PHP hashing?
The standard idea behind the vast majority of bcrypt impls is that the thing that is in the database looks like $2y$10$AB where A is 22 characters and B is 31 characters, for a grand total of 60. A is: left(base64(salt + 00 + 00), 22) and B is: left(base64(bcryptraw(salt + pass)), 31). (2y identifies the hash algorithm version. EDIT: 2y and 2a are more or less interchangeable; most bcrypt impls treat them the same, and it is unlikely to matter which one is there. The 10 is the cost factor, i.e. the work grows as 2^10 rounds; 10 is common and usually what you want.)
where:
base64(X) = apply base64 conversion, using bcrypt's own alphabet (which uses . and / instead of + and /).
+ is concatenate, i.e. salt (a 16-byte byte array) gets 2 zero bytes added.
left(chars, size) means: Take the first size chars and discard the rest.
salt is the salt in bytes and pass is the password, converted to bytes via UTF-8. (If not converting via UTF-8, it's generally $2a$ instead, and you should upgrade: folks with non-ASCII chars in their password get pretty bad hashes in the older $2a$ mode!)
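To make that fixed-width layout concrete, here is a minimal sketch that slices such a string into its parts by position (the hash string below is made up purely for illustration, not a real password hash):

public class BcryptLayout {
    public static void main(String[] args) {
        // Hypothetical example string, only to show the $version$cost$ + 22 + 31 layout.
        String hash = "$2y$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy";

        String[] parts = hash.split("\\$");            // ["", "2y", "10", salt+hash]
        String version = parts[1];                     // "2y"
        String cost = parts[2];                        // "10" (cost factor)
        String saltChars = parts[3].substring(0, 22);  // 22 chars of encoded salt
        String hashChars = parts[3].substring(22);     // 31 chars of encoded bcrypt output

        System.out.println(version + " / " + cost + " / " + saltChars + " / " + hashChars);
    }
}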
This one string contains everything that a bcrypt impl needs to check if a given password is correct or not. Thus, all non-idiotic bcrypt library impls have just two methods and no others:
// This is for when the user creates an account or edits their password.
// send the password to this method, then take the string it returns,
// and store this in your database.
hash = crypto.hashPass(password);
// This is for when the user tries to log in. For 'hash', send the thing
// that the hashPass method made for you.
boolean passwordIsCorrect = crypto.checkPass(password, hash);
EDIT: NB: A truly well designed crypto library calls these methods processNewPassword and checkExistingPassword to avoid the kind of confusion that caused you to ask this question, but nobody out there seems to have had the wherewithal to think for 5 seconds about what their names suggest. Unfortunate. Security is hard.
If your BCrypt API doesn't work like this, get rid of it and find a standard implementation that does.
It sounds like you're using the wrong method. To check passwords, don't use hashPass. Use checkPass, or whatever goes for checkPass in your impl (it might be called checkPw or verifyPw or validate, etcetera; it takes 2 strings).
Thus, you should never generate a salt, nor ever extract a salt from such a string. Let the bcrypt lib do it. Those 'hashes' that standard bcrypt libraries generate (the $2y$ string) are interchangeable; your PHP library can make them and your Java library can check them, and vice versa.
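As a concrete sketch, assuming the common jBCrypt library (org.mindrot.jbcrypt), the Java side could look roughly like this; the names gensalt/hashpw/checkpw are jBCrypt's, other libraries differ:

import org.mindrot.jbcrypt.BCrypt;

public class BcryptExample {
    // On account creation / password change: store the returned string as-is.
    static String hashNewPassword(String plaintext) {
        return BCrypt.hashpw(plaintext, BCrypt.gensalt(10)); // 10 = cost factor
    }

    // On login: hand the stored string back to the library, never re-hash yourself.
    static boolean checkExistingPassword(String plaintext, String storedHash) {
        return BCrypt.checkpw(plaintext, storedHash);
    }
}

One caveat: some jBCrypt versions only recognise the $2a$ prefix, so a $2y$ hash coming from PHP may need its prefix rewritten to $2a$ before checkpw will accept it.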
If you MUST extract the salt (but don't):
Take those 22 characters after the $protocol$rounds$ part.
Append 'aa' to this.
Base64-decode the result (using the bcrypt alphabet described above).
This gets you 18 bytes. Toss the last 2 bytes, which contain garbage.
The remaining 16 bytes are the salt.
You should absolutely not write this - your bcrypt library will do this.

Is a non-random initialization vector still bad when encrypting short strings?

We encrypt with AES-256 CBC PKCS5Padding in Java, with the libraries one has to download from Oracle, and Base64-encode the resulting byte arrays. I have read that a static, common initialization vector drastically decreases security because plaintexts that start with the same characters will look the same when encrypted. Is this still true for short strings (12 numeric chars)?
I have encrypted a large set and I cannot find any recurring substrings in the resulting encrypted strings, even when the plaintexts start with the same sequence.
Example (plaintext on the left and resulting encrypted string on the right)
555555555501 -> U0Mkd0PPloB5iLBy5jM6nw==
555555555502 -> NUHWaFs62LMEeyoGA0mGoQ==
555555555503 -> X3/XJNd4TzEsMv7V0bXwqg==
Although separate from the question, to preempt some suggestions: we need to be able to do lookups based on plaintext strings and to be able to decrypt. We could do both hashing and encryption, but prefer to avoid it if it does not improve security significantly, as it adds complexity.
I have read that static common initialization vectors are bad because one can derive the key from the encrypted strings.
I'm curious: where have you read that?
With short (<=16 bytes) plaintexts, a random IV effectively works as a salt, i.e. it causes the ciphertext to differ even if the plaintext is the same. This is an important feature in a lot of applications. But you write:
We need to be able to do look ups based on plaintext strings.
So you want to build some sort of pseudonymization database? If that is a requirement for you, the property that a salt (in your case, a random IV) adds is actually one that you specifically don't want. Depending on your other requirements, you can probably get away with using a static IV here. But for pseudonymization in general, it is recommended to use a dedicated pseudonym. In your case the data seems to be atomic, but in the general case of, for example, address data, you want to hash the name, the zip code, the city and whatever else makes up your pseudonym separately, both to allow more specific queries and to keep access to and information flow from your data under strict control.
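For reference, a minimal sketch of the standard pattern when you do want semantic security: generate a fresh IV per encryption with SecureRandom and ship it alongside the ciphertext (the method name and output layout here are illustrative, not a fixed convention):

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.Base64;

public class CbcWithRandomIv {
    private static final SecureRandom RNG = new SecureRandom();

    static String encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[16];                  // AES block size
        RNG.nextBytes(iv);                         // fresh IV for every call
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ct = c.doFinal(plaintext);
        // Prepend the IV; it is not secret, it only has to be unpredictable.
        byte[] out = ByteBuffer.allocate(iv.length + ct.length).put(iv).put(ct).array();
        return Base64.getEncoder().encodeToString(out);
    }
}

Decryption reads the first 16 bytes back as the IV. The flip side is exactly the trade-off discussed above: with a fresh IV, equal plaintexts no longer produce equal ciphertexts, so an exact-match lookup on the stored value stops working.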

Is it safe (in terms of uniqueness) to use a UUID to generate a unique identifier for a specific string?

String myText;
UUID.nameUUIDFromBytes((myText).getBytes()).toString();
I am using the above code to generate a representative value for specific texts.
For example, 'Moien' should always be represented by "e9cad067-56f3-3ea9-98d2-26e25778c48f"; nothing, not even a project rebuild, should be able to change that UUID.
The reason I'm doing this is that I don't want those specific texts to be readable (understandable) by a human.
Note: I don't need the ability to recover the original text (e.g. "Moien") after hashing.
I have an alternative approach too:
MessageDigest digest = MessageDigest.getInstance("SHA-256");
byte[] hash = digest.digest((matcher.group(1)).getBytes("UTF-8"));
String a = Base64.encode(hash);
Which do you think is better for my problem?
UUID.nameUUIDFromBytes appears to basically just be MD5 hashing, with the result being represented as a UUID.
It feels clearer to me to use a base64-encoded hash explicitly, partly as you can then control which hash gets used - which could be relevant if collisions pose any sort of security risk. (SHA-256 is likely a better option than MD5 for exactly that reason.) The string will be longer from SHA-256 of course, but hopefully that's not a problem.
Note that in either case, I'd convert the string to bytes using a fixed encoding via StandardCharsets. Don't use the platform default (as in your first snippet), and prefer StandardCharsets over magic string values (as in your second snippet).
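Putting those two points together, a minimal sketch of the SHA-256 approach with an explicit charset and the JDK's own java.util.Base64 (Java 8+); the class and method names are just placeholders:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class StableId {
    static String idFor(String text) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(text.getBytes(StandardCharsets.UTF_8));
        // The URL-safe variant avoids '/' and '+' in case the id ever ends up in a URL.
        return Base64.getUrlEncoder().withoutPadding().encodeToString(hash);
    }
}

Since SHA-256 is deterministic, idFor("Moien") yields the same 43-character string on every run and every rebuild.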

Avoiding line breaks in encrypted and encoded URL string

I am trying to implement a simple string encoder to obfuscate some parts of a URL string (to prevent them from getting mucked with by a user). I'm using code nearly identical to the sample in the JCA guide, except:
using DES (assuming it's a little faster than AES, and requires a smaller key) and
Base64 en/decoding the string to make sure it stays safe for a URL.
For reasons I can't understand, the output string ends up with line breaks, which I presume won't work. I can't figure out what's causing this. Suggestions on something similar that's easier, or pointers to some other resources to read? I'm finding all the cryptography references a bit over my head (and overkill), but a simple ROT13 implementation won't work since I want to deal with a larger character set (and don't want to waste time implementing something likely to have issues with obscure characters I didn't think of).
Sample input (no line break):
http://maps.google.com/maps?q=kansas&hl=en&sll=42.358431,-71.059773&sspn=0.415552,0.718918&hnear=Kansas&t=m&z=7
Sample Output (line breaks as shown below):
GstikIiULcJSGEU2NWNTpyucSWUFENptYk4m5lD8RJl8l1CuspiuXiE9a07fUEAGM/tC7h0Vzus+
jAH6cT4Wtz2RUlBdGf8WtQxVDKZVOzKwi84eQh2kZT9T3KomlnPOu2owJ/2RAEvG+QuGem5UGw==
my encode snippet:
final Key key = new SecretKeySpec(seed.getBytes(), "DES");
final Cipher c = Cipher.getInstance("DES");
c.init(Cipher.ENCRYPT_MODE, key);
final byte[] encVal = c.doFinal(s.getBytes());
return new BASE64Encoder().encode(encVal);
Simply perform base64Str = base64Str.replaceAll("(?:\\r\\n|\\n\\r|\\n|\\r)", "")
on the encoded string.
It works fine when you try to decode it back to bytes. I tested it several times with randomly generated byte arrays. Apparently the decoding process just ignores the newlines whether they are present or not.
I confirmed this works using com.sun.org.apache.xml.internal.security.utils.Base64.
Other encoders were not tested.
Base64 encoders usually impose some maximum line (chunk) length and add newlines when necessary. You can normally configure that, but it depends on the particular encoder implementation.
For example, the Base64 class from Apache Commons has a line-length attribute; setting it to zero (or a negative value) disables the line separation.
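With the JDK's java.util.Base64 (Java 8+) the choice is explicit: the basic and URL-safe encoders never wrap, only the MIME encoder does. A small demonstration:

import java.util.Base64;

public class WrapDemo {
    public static void main(String[] args) {
        byte[] data = new byte[120];

        // No line breaks, ever:
        String plain = Base64.getEncoder().encodeToString(data);
        // URL-safe alphabet (- and _ instead of + and /), also unwrapped,
        // handy since the goal here is to embed the value in a URL:
        String urlSafe = Base64.getUrlEncoder().encodeToString(data);
        // The MIME encoder wraps at 76 characters with \r\n, which is the
        // kind of wrapping seen in the question:
        String mime = Base64.getMimeEncoder().encodeToString(data);

        System.out.println(plain.contains("\n"));   // false
        System.out.println(urlSafe.contains("\n")); // false
        System.out.println(mime.contains("\r\n"));  // true
    }
}

The getUrlEncoder() variant is probably the better fit here anyway, since standard base64's + and / are not URL-safe.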
BTW: I agree with the other answer that DES is hardly advisable today. Further, are you just "obfuscating" or really encrypting? Who has the key? The whole thing does not smell right to me.
import android.util.Base64;
...
return Base64.encodeToString(encVal, Base64.NO_WRAP);
Though it's unrelated to your actual question, DES is generally slower than AES (at least in software), so unless you really need to keep the key small, AES is almost certainly a better choice.
Second, it's perfectly normal for the raw output of encryption (DES or AES) to contain bytes that happen to be new-line characters. Producing text output without them is entirely up to the base-64 encoder, so that's where you clearly need to look.
It's not particularly surprising to see a base-64 encoder insert new-line characters at regular intervals in its output, though. The most common use for base-64 encoding is putting raw data into something like the body of an email, where a really long line would cause a problem. To prevent that, the data is broken up into pieces, typically no more than 80 columns (and usually a bit less). The new-lines should be ignored on decoding, however, so you should be able to just delete them.

Could anyone verify the correctness of getting an MD5 hash using this method?

MessageDigest m=MessageDigest.getInstance("MD5");
StringBuffer sb = new StringBuffer();
if(nodeName!=null) sb.append(nodeName);
if(nodeParentName!=null) sb.append(nodeParentName);
if(nodeParentFieldName!=null) sb.append(nodeParentFieldName);
if(nodeRelationName!=null) sb.append(nodeRelationName);
if(nodeViewName!=null) sb.append(nodeViewName);
if(treeName!=null) sb.append(treeName);
if(nodeValue!=null && nodeValue.trim().length()>0) sb.append(nodeValue);
if(considerParentHash) sb.append(parentHash);
m.update(sb.toString().getBytes("UTF-8"),0,sb.toString().length());
BigInteger i = new BigInteger(1,m.digest());
hash = String.format("%1$032X", i);
The idea behind these lines of code is that we append all the values of a class/model into a StringBuffer and then return the padded hash of that (the Java implementation returns MD5 hashes that are length 30 or 31, so the last line formats the hash to be padded with 0s).
I can verify that this works, but I have a feeling it fails at one point (our application fails and I believe this to be the probable cause).
Can anyone see a reason why this wouldn't work? Are there any workarounds to make this code less prone to errors (e.g. removing the need for the strings to be UTF-8)?
There are a few weird things in your code.
UTF-8 encoding of a character may use more than one byte, so you should not use the string length as the final parameter of the update() call, but the length of the byte array that getBytes() actually returned. As suggested by Paŭlo, use the update() method which takes a single byte[] parameter.
The output of MD5 is a sequence of 16 bytes with quite arbitrary values. If you interpret it as an integer (that's what you do with your call to BigInteger()), then you will get a numerical value which will be smaller than 2^128, possibly much smaller. When converted back to hexadecimal digits, you may get 32, 31, 30... or fewer than 30 characters. Your use of the "%032X" format string left-pads with enough zeros, so your code works, but it is kind of indirect (the output of MD5 was never an integer to begin with).
You assemble the hash input elements by raw concatenation. This can cause issues. For instance, if nodeName is "foo" and nodeParentName is "barqux", then the MD5 input will begin with (the UTF-8 encoding of) "foobarqux". If nodeName is "foobar" and nodeParentName is "qux", then the MD5 input will also begin with "foobarqux". You do not say why you want to use a hash function, but usually, when one uses a hash function, it is to have a unique trace of some piece of data; two distinct data elements should yield distinct hash inputs, so insert an unambiguous separator between the fields.
When handling nodeValue, you call trim(), which means that this string could begin and/or end with whitespace, and you do not want to include that whitespace into the hash input -- but you do include it, since you append nodeValue and not nodeValue.trim().
If what you are trying to do has any relation to security then you should not use MD5, which is cryptographically broken. Use SHA-256 instead.
Hashing an XML element is normally done through canonicalization (which handles whitespace, attribute order, text representation, and so on). See this question on the topic of canonicalizing XML data with Java.
One possible problem is here:
m.update(sb.toString().getBytes("UTF-8"),0,sb.toString().length());
As said by Robin Green, the UTF-8 encoding can produce a byte[] which is longer than your original string (it will do this exactly when the String contains non-ASCII characters). In that case, you are only hashing the start of your String.
Better write it like this:
m.update(sb.toString().getBytes("UTF-8"));
Of course, this would not cause an exception; it would simply produce a different hash than otherwise if you have non-ASCII characters in your string. You should try to boil your failure down to an SSCCE, as lesmana recommended.
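Pulling both answers together, a hedged sketch of what the hashing block could look like; the method wrapper, the field array and the unit-separator character are illustrative choices, not part of the original code:

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class NodeHasher {
    // Illustrative rewrite of the question's block: hash the full UTF-8 byte
    // array (no length argument) and separate fields so that, e.g.,
    // "foo" + "barqux" can no longer collide with "foobar" + "qux".
    static String nodeHash(String[] fields, boolean considerParentHash, String parentHash)
            throws Exception {
        MessageDigest m = MessageDigest.getInstance("MD5"); // use "SHA-256" if this is security-relevant
        StringBuilder sb = new StringBuilder();
        for (String field : fields) {            // nodeName, nodeParentName, ... in a fixed order
            if (field != null) sb.append(field.trim());
            sb.append('\u001F');                 // separator, appended even for null/empty fields
        }
        if (considerParentHash) sb.append(parentHash);
        m.update(sb.toString().getBytes(StandardCharsets.UTF_8)); // the whole byte array
        return String.format("%032X", new BigInteger(1, m.digest()));
    }
}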
