I am trying to implement a simple string encoder that obfuscates some parts of a URL string (to prevent them from getting mucked with by a user). I'm using code nearly identical to the sample in the JCA guide, except:
using DES (assuming it's a little faster than AES, and requires a smaller key) and
Base64 en/decoding the string to make sure it stays safe for a URL.
For reasons I can't understand, the output string ends up with line breaks, which I presume won't work in a URL. I can't figure out what's causing this. Any suggestions for something similar that's easier, or pointers to other resources to read? I'm finding all the cryptography references a bit over my head (and overkill), but a simple ROT13 implementation won't work since I want to deal with a larger character set (and don't want to waste time implementing something likely to have issues with obscure characters I didn't think of).
Sample input (no line break):
http://maps.google.com/maps?q=kansas&hl=en&sll=42.358431,-71.059773&sspn=0.415552,0.718918&hnear=Kansas&t=m&z=7
Sample Output (line breaks as shown below):
GstikIiULcJSGEU2NWNTpyucSWUFENptYk4m5lD8RJl8l1CuspiuXiE9a07fUEAGM/tC7h0Vzus+
jAH6cT4Wtz2RUlBdGf8WtQxVDKZVOzKwi84eQh2kZT9T3KomlnPOu2owJ/2RAEvG+QuGem5UGw==
my encode snippet:
import java.security.Key;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import sun.misc.BASE64Encoder;
...
final Key key = new SecretKeySpec(seed.getBytes(), "DES");
final Cipher c = Cipher.getInstance("DES");
c.init(Cipher.ENCRYPT_MODE, key);
final byte[] encVal = c.doFinal(s.getBytes());
return new BASE64Encoder().encode(encVal);
Simply perform base64Str = base64Str.replaceAll("(?:\\r\\n|\\n\\r|\\n|\\r)", "")
on the encoded string.
It works fine when you try to decode it back to bytes. I tested it several times with randomly generated byte arrays. The decoding process simply ignores the newlines, whether they are present or not.
I tested this ("confirmed working") using com.sun.org.apache.xml.internal.security.utils.Base64; other encoders were not tested.
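Whether a decoder tolerates embedded newlines does vary by implementation, though. For illustration (assuming Java 8+; encodedWithLineBreaks is a placeholder for the wrapped string), java.util.Base64's basic decoder rejects line breaks, while its MIME decoder skips them:
import java.util.Base64;
...
// getDecoder() throws IllegalArgumentException on line breaks;
// the MIME decoder silently skips them.
byte[] raw = Base64.getMimeDecoder().decode(encodedWithLineBreaks);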
Base64 encoders usually impose some maximum line (chunk) length and add newlines when necessary. You can normally configure that, but it depends on the particular encoder implementation.
For example, the Base64 class from Apache Commons Codec has a lineLength attribute; setting it to zero (or a negative value) disables the line separation.
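A minimal sketch of that (assuming Commons Codec 1.4+ on the classpath, and reusing encVal from the question):
import org.apache.commons.codec.binary.Base64;
...
// A line length of 0 (or negative) disables chunking, so no newlines are inserted.
Base64 codec = new Base64(0);
String encoded = codec.encodeToString(encVal);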
BTW: I agree with the other answer that DES is hardly advisable today. Further, are you just "obfuscating", or really encrypting? Who has the key? The whole thing does not smell right to me.
import android.util.Base64;
...
return Base64.encodeToString(encVal, Base64.NO_WRAP);
Though it's unrelated to your actual question, DES is generally slower than AES (at least in software), so unless you really need to keep the key small, AES is almost certainly a better choice.
Second, it's perfectly normal for the raw output of encryption (DES or AES) to contain bytes that match the new-line characters; it is arbitrary binary data. Keeping new-lines out of the final string is entirely the job of the base-64 encoder, so that's clearly where you need to look.
It's not particularly surprising to see a base-64 encoder insert new-line characters at regular intervals in its output, though. The most common use for base-64 encoding is putting raw data into something like the body of an email, where a really long line would cause a problem. To prevent that, the data is broken up into pieces, typically no more than 80 columns (and usually a bit less, e.g. 76 for MIME). Decoders should ignore these new-lines, however, so you should be able to just delete them.
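If you're on Java 8 or later, the simplest fix may be to sidestep the issue entirely: java.util.Base64 never inserts line breaks, and its URL-safe variant also avoids + and /, which matters for the URL use case here. A sketch, reusing encVal from the question:
import java.util.Base64;
...
// No line breaks, and the URL-safe alphabet replaces '+' and '/' with '-' and '_'.
String encoded = Base64.getUrlEncoder().withoutPadding().encodeToString(encVal);
byte[] decoded = Base64.getUrlDecoder().decode(encoded);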
Related
While using Jasypt, the encrypted passwords contain = (the equals character) at the end. Is it guaranteed that the encrypted passwords will always have = at the end?
How/Can we control this behavior?
For example: test is encrypted to Nv4nMcuVwsvWVuYD7Av44Q==
It looks like the =s come from padding the Base64 representation of the encryption / hash output.
In that case, the answer is generally no, it won't necessarily end with "=".
However, if the algorithm you're using produces constant-length output (e.g. if it uses hashing along the way), it will end up producing those "="s every time: the padding depends only on the output's byte length modulo 3, so a constant length means constant padding. There's no way of knowing that for sure unless you understand all the steps the algorithm performs.
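To illustrate the rule (a single 16-byte AES block, for instance, always ends in "=="):
import java.util.Base64;
...
// 15 bytes (15 % 3 == 0): 20 chars, no padding
// 16 bytes (16 % 3 == 1): 24 chars, ending in "=="
// 17 bytes (17 % 3 == 2): 24 chars, ending in "="
System.out.println(Base64.getEncoder().encodeToString(new byte[15]));
System.out.println(Base64.getEncoder().encodeToString(new byte[16]));
System.out.println(Base64.getEncoder().encodeToString(new byte[17]));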
We encrypt with AES-256 CBC with PKCS5 padding in Java (with the unlimited-strength policy files one has to download from Oracle), Base64-encoding the resulting byte arrays. I have read that a static, common initialization vector drastically decreases security, as plaintexts that start with the same characters will look the same when encrypted. Is this still true for short strings (12 numeric characters)?
I have encrypted a large set and I cannot find any recurring substrings in the resulting encrypted strings, even when the plaintexts start with the same sequence.
Example (plaintext on the left and resulting encrypted string on the right)
555555555501 -> U0Mkd0PPloB5iLBy5jM6nw==
555555555502 -> NUHWaFs62LMEeyoGA0mGoQ==
555555555503 -> X3/XJNd4TzEsMv7V0bXwqg==
Separate from the question, but to preempt some suggestions: we need to be able to do look-ups based on plaintext strings and to be able to decrypt. We could do both hashing and encryption, but we would prefer to avoid that if it does not significantly improve security, since it adds complexity.
I have read that static, common initialization vectors are bad, as one can derive the key from encrypted strings.
I'm curious: where have you read that?
With short (<= 16 bytes) plaintext, a random IV effectively works as a salt, i.e. it causes the ciphertext to differ even if the plaintext is the same. This is an important feature in a lot of applications. (It also explains your observation: with a static IV in CBC mode, two ciphertexts only share blocks if the plaintexts are identical over the whole first block, and your 12-character samples differ within that block.) But you write:
We need to be able to do look ups based on plaintext strings.
So you want to build some sort of pseudonymization database? If that is a requirement for you, then the feature that a salt (and, in your case, a random IV) adds is exactly the one you specifically don't want. Depending on your other requirements, you can probably get away with using a static IV here. For pseudonymization in general, though, it is recommended to use a dedicated pseudonym. In your case the data seems to be atomic, but in the general case of, say, address data, you would want to hash the name, the zip code, the city, and whatever else makes up your pseudonym separately, both to allow more specific queries and to keep access to, and information flow from, your data under strict control.
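To make the trade-off concrete, here is a minimal sketch of the deterministic (static IV) variant; the key and IV bytes are placeholders, and the determinism that enables equality look-ups is exactly what leaks plaintext equality:
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
...
SecretKeySpec key = new SecretKeySpec(new byte[32], "AES");   // placeholder 256-bit key
IvParameterSpec staticIv = new IvParameterSpec(new byte[16]); // fixed IV: same input -> same output
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.ENCRYPT_MODE, key, staticIv);
byte[] ct = cipher.doFinal("555555555501".getBytes(StandardCharsets.UTF_8));
String stored = Base64.getEncoder().encodeToString(ct);       // usable as a look-up key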
String myText;
UUID.nameUUIDFromBytes((myText).getBytes()).toString();
I am using the code above to generate a stable representation of specific texts.
For example, 'Moien' should always be represented by "e9cad067-56f3-3ea9-98d2-26e25778c48f"; nothing, such as a project rebuild, should be able to change that UUID.
The reason I'm doing this is that I don't want those specific texts to be readable (understandable) by humans.
Note: I don't need to be able to recover the original text (e.g. "Moien") after hashing.
I have an alternative approach too:
MessageDigest digest = MessageDigest.getInstance("SHA-256");
byte[] hash = digest.digest((matcher.group(1)).getBytes("UTF-8"));
String a = Base64.encode(hash);
Which do you think is better for my problem?
UUID.nameUUIDFromBytes appears to basically just be MD5 hashing, with the result being represented as a UUID.
It feels clearer to me to use a base64-encoded hash explicitly, partly as you can then control which hash gets used - which could be relevant if collisions pose any sort of security risk. (SHA-256 is likely a better option than MD5 for exactly that reason.) The string will be longer from SHA-256 of course, but hopefully that's not a problem.
Note that in either case, I'd convert the string to bytes using a fixed encoding via StandardCharsets. Don't use the platform default (as in your first snippet), and prefer the StandardCharsets constants over magic string values (as in your second snippet).
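Putting that advice together, a sketch of the hash variant (reusing myText from the question):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
...
MessageDigest digest = MessageDigest.getInstance("SHA-256");
byte[] hash = digest.digest(myText.getBytes(StandardCharsets.UTF_8)); // explicit, fixed encoding
String token = Base64.getEncoder().encodeToString(hash);              // 44 chars, vs 36 for a UUID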
Given the following example:
String f="FF00000000000000";
byte[] bytes = DatatypeConverter.parseHexBinary(f);
String f2= new String (bytes);
I want the output to be FF00000000000000 but it's not working with this method.
You're currently trying to interpret the bytes as if they were text encoded using the platform default encoding (UTF-8, ISO-8859-1 or whatever). That's not what you actually want to do at all - you want to convert it back to hex.
For that, just look at the converter you're using for the parsing step, and look for similar methods which work in the opposite direction. In this case, you want printHexBinary:
String f2 = DatatypeConverter.printHexBinary(bytes);
The approach of "look for reverse operations near the original operation" is a useful one in general... but be aware that sometimes you need to look at a parallel type, e.g. DataInputStream / DataOutputStream. When you find yourself using completely different types for inverse operations, that's usually a bit of a warning sign. (It's not always wrong, it's just worth investigating other options.)
MessageDigest m=MessageDigest.getInstance("MD5");
StringBuffer sb = new StringBuffer();
if(nodeName!=null) sb.append(nodeName);
if(nodeParentName!=null) sb.append(nodeParentName);
if(nodeParentFieldName!=null) sb.append(nodeParentFieldName);
if(nodeRelationName!=null) sb.append(nodeRelationName);
if(nodeViewName!=null) sb.append(nodeViewName);
if(treeName!=null) sb.append(treeName);
if(nodeValue!=null && nodeValue.trim().length()>0) sb.append(nodeValue);
if(considerParentHash) sb.append(parentHash);
m.update(sb.toString().getBytes("UTF-8"),0,sb.toString().length());
BigInteger i = new BigInteger(1,m.digest());
hash = String.format("%1$032X", i);
The idea behind these lines of code is that we append all the values of a class/model into a StringBuffer and then return the padded hash of that (the hex representation of an MD5 hash can come out at 30 or 31 digits, so the last line pads it with leading 0s to a fixed width of 32).
I can verify that this works, but I have a feeling it fails at some point (our application fails, and I believe this to be the probable cause).
Can anyone see a reason why this wouldn't work? Are there any workarounds to make this code less error-prone (e.g. removing the need for the strings to be UTF-8)?
There are a few weird things in your code.
UTF-8 encoding of a character may use more than one byte, so you should not use the string length as the final parameter to the update() call, but rather the length of the byte array that getBytes() actually returned. Better yet, as suggested by Paŭlo, use the update() overload which takes a single byte[] parameter.
The output of MD5 is a sequence of 16 bytes with quite arbitrary values. If you interpret it as an integer (that's what you do with your call to BigInteger()), you will get a numerical value smaller than 2^128, possibly much smaller. When converted back to hexadecimal digits, you may get 32, 31, 30... or fewer characters. Your use of the "%032X" format string left-pads with enough zeros, so your code works, but it is somewhat indirect (the output of MD5 was never an integer to begin with).
You assemble the hash input elements by raw concatenation, which can cause trouble. For instance, if nodeName is "foo" and nodeParentName is "barqux", then the MD5 input will begin with (the UTF-8 encoding of) "foobarqux". If nodeName is "foobar" and nodeParentName is "qux", the MD5 input will also begin with "foobarqux". You do not say why you want a hash function, but usually one uses a hash function to obtain a unique trace of some piece of data; two distinct data elements should yield distinct hash inputs.
When handling nodeValue, you call trim(), which means that this string could begin and/or end with whitespace, and you do not want to include that whitespace into the hash input -- but you do include it, since you append nodeValue and not nodeValue.trim().
If what you are trying to do has any relation to security then you should not use MD5, which is cryptographically broken. Use SHA-256 instead.
Hashing an XML element is normally done through canonicalization (which handles whitespace, attribute order, text representation, and so on). See this question on the topic of canonicalizing XML data with Java.
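A sketch of the concatenation fix (not the original code; it length-prefixes each field so "foo"+"barqux" and "foobar"+"qux" can no longer collide, and uses SHA-256 per the point above):
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
...
MessageDigest md = MessageDigest.getInstance("SHA-256");
for (String field : new String[] { nodeName, nodeParentName, nodeParentFieldName,
                                   nodeRelationName, nodeViewName, treeName, nodeValue }) {
    byte[] b = (field == null ? "" : field).getBytes(StandardCharsets.UTF_8);
    md.update(ByteBuffer.allocate(4).putInt(b.length).array()); // 4-byte length prefix
    md.update(b);                                               // then the field bytes
}
byte[] hash = md.digest();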
One possible problem is here:
m.update(sb.toString().getBytes("UTF-8"),0,sb.toString().length());
As said by Robin Green, the UTF-8 encoding can produce a byte[] which is longer than the string's character count (it will do this exactly when the String contains non-ASCII characters). In that case, you are only hashing the start of your String.
Better write it like this:
m.update(sb.toString().getBytes("UTF-8"));
Of course, this would not cause an exception; it would simply produce a different hash than otherwise whenever your string contains non-ASCII characters. You should try to boil your failure down to an SSCCE, as lesmana recommended.