I am writing a Java program that converts a Bitcoin private key to WIF format.
Unfortunately, I get the wrong SHA-256 hashes.
My code is based on this tutorial.
When I hash a value like:
800C28FCA386C7A227600B2FE50B7CAE11EC86D3BF1FBE471BE89827E19D72AA1D
I get something like this as result:
e2e4146a36e9c455cf95a4f259f162c353cd419cc3fd0e69ae36d7d1b6cd2c09
instead of:
8147786C4D15106333BF278D71DADAF1079EF2D2440A4DDE37D747DED5403592
This is my piece of code:
public String getSHA(String value) {
    String hash = DigestUtils.sha256Hex(value.getBytes());
    System.out.println(hash);
    return hash;
}
I used this library: import org.apache.commons.codec.digest.DigestUtils;
Of course I searched this problem on the web and I found this site.
On that website, there are two textboxes - String hash and Binary Hash.
Using a String hash, I got the same incorrect result as in my Java program.
But using the Binary hash, I got the right result.
My question is:
What is the difference between Binary and String hashes?
How do I implement the Binary hash in my Java method?
In your case 800C28... is a text representation of a byte[] using hex encoding. To convert it back to a byte[] you can take a look at this answer; one way to do it is:
public static byte[] hexStringToByteArray(String hex) {
    int l = hex.length();
    byte[] data = new byte[l / 2];
    for (int i = 0; i < l; i += 2) {
        // each pair of hex characters becomes one byte
        data[i / 2] = (byte) ((Character.digit(hex.charAt(i), 16) << 4)
                + Character.digit(hex.charAt(i + 1), 16));
    }
    return data;
}
String.getBytes() will return the character values; e.g. the character 8 has a value of 56, as per the ASCII table.
System.out.println(Arrays.toString("8".getBytes())); // [56]
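Putting the two pieces together (a quick sketch using the hex value from your question; I have not run it against the tutorial's data), you decode the hex text into raw bytes first and hash those bytes:

byte[] raw = hexStringToByteArray("800C28FCA386C7A227600B2FE50B7CAE11EC86D3BF1FBE471BE89827E19D72AA1D");
String hash = DigestUtils.sha256Hex(raw); // hash the decoded bytes, not the hex characters
System.out.println(hash);                 // should now match the "Binary hash" result from the site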
Related
I have been trying to "improve" the simple Caesar cipher by encrypting in CBC mode.
As I understand it, the first character has to be XORed with an initialization vector and then with the key; the output is then the first character of the encrypted text. This will then be XORed with the second character, then again with the key, … and so forth.
I don't quite understand how the XORing should work.
Let us have the translation table given (only space and A-Z):
space: 0, A: 1, B: 2, …, Z: 26,
key: 1,
Init.vector: 5
Using the simple Caesar, "HELLO" -> {8,5,12,12,15} -> {9,6,13,13,16} -> "IFMMP"
But how would I get to encrypt using CBC?
It'd be especially helpful if you could show me how to implement it in Java. Thanks!
Hmm, I read your question as saying you think you XOR with the key in your cipher; this is wrong:
You XOR with the previous output of the cipher. Something like this:
// Untested code
// The code below needs to be adjusted for it to print meaningful
// characters in the encrypted string, as the XOR produces
// integers outside the range of standard ASCII characters
private void cbcCaesar() {
    int key = 1;
    String message = "java";
    int initialisationVector = 5; // the IV does not necessarily need to be hidden from an attacker, but it should differ for each message
    StringBuilder encryptedMessageBuilder = new StringBuilder();
    char[] charArray = message.toCharArray();
    int encryptedLetter = initialisationVector;
    for (int letter : charArray) {
        int xorApplied = letter ^ encryptedLetter; // chain with the previous ciphertext value (the IV for the first letter)
        encryptedLetter = applyCaesarCipher(xorApplied, key);
        encryptedMessageBuilder.append((char) encryptedLetter);
    }
    System.out.println(encryptedMessageBuilder.toString());
}

private int applyCaesarCipher(int xorApplied, int key) {
    return (xorApplied + key) % 26;
}
The easiest way to turn the above snippet into something usable would be to map the letters to the numbers 0-26 and use that instead of the char's ASCII encoding; see the sketch after the link below.
I found this resource to be pretty good https://www.youtube.com/watch?v=0D7OwYp6ZEc
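A rough sketch of that mapping (my own illustration over the space/A-Z alphabet from the question, not code from the linked video; like the snippet above it is meant to show the chaining, not to be a cleanly reversible cipher, since the XOR can step outside 0-26):

static final String ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"; // space = 0, A = 1, ..., Z = 26

static String cbcCaesarEncrypt(String message, int key, int iv) {
    StringBuilder out = new StringBuilder();
    int previous = iv;                             // the first letter is chained with the IV
    for (char c : message.toCharArray()) {
        int value = ALPHABET.indexOf(c);           // map the character to 0-26
        int chained = value ^ previous;            // XOR with the previous ciphertext value
        previous = (chained + key) % 27;           // Caesar shift, wrapped to the 27-symbol alphabet
        out.append(ALPHABET.charAt(previous));     // map back to a printable character
    }
    return out.toString();
}

// e.g. cbcCaesarEncrypt("HELLO", 1, 5)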
I have employee data, and each employee has address information. I need to generate a unique 9-digit (numeric or alphanumeric) value from the postal code (5 chars) and address line 1 (35 chars), which is a unique value representing a location. It is also called a "Wrap number".
As shown in the picture below, when the address of two employees is the same, the Wrap Number should be the same; otherwise a new value should be assigned.
Which algorithm is best suited to generate a unique 9-digit value?
P.S. I need to program it in Java.
What you're asking is impossible. No, really, impossible.
You have a 5-digit ZIP code, which can be encoded in 17 bits. Then you have 35 characters of text. Let's say you limit it to upper and lower case letters, plus digits and special characters. Figure 96 possible characters, or approximately 6.5 bits each. So:
35 * 6.5 = 227.5 ~ 228 bits
So you have up to 245 bits of information and you want to create a "unique" 9-character code. Your 9-character code only occupies 72 bits. You can't pack 245 bits of information into 72 bits without duplication. See Pigeonhole principle.
A better solution would be to assign a sequential number to each employee. If you want to make those 9-character codes, then use a technique to obfuscate the numbers and encode them using base-36 (numbers and upper-case letters) or something similar. I explain how to do that in my blog post, How to generate unique "random-looking" keys.
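As a rough sketch of that approach (my own illustration, not the technique from the blog post; the XOR mask is an arbitrary constant, and this only works while the base-36 form fits in 9 characters):

// Sketch: lightly obfuscate a sequential ID and render it in base 36.
static String wrapNumberFromSequence(long sequentialId) {
    long obfuscated = sequentialId ^ 0x5DEECE66DL;              // cheap, reversible scrambling
    String code = Long.toString(obfuscated, 36).toUpperCase();  // digits and upper-case letters
    return String.format("%9s", code).replace(' ', '0');        // left-pad to 9 characters
}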
The simple idea is to use one of the well-known hash algorithms, which are already implemented in Java.
private static long generateIdentifier(final String adrLine, final String postCode) {
    final String resultInput = adrLine + postCode;
    // do not forget about the charset you want to work with
    final byte[] inputBytes = resultInput.getBytes(Charset.defaultCharset());
    byte[] outputBytes = null;
    try {
        // feel free to choose the digest algorithm, e.g. MD5, SHA-1, SHA-256
        final MessageDigest digest = MessageDigest.getInstance("SHA-256");
        outputBytes = digest.digest(inputBytes);
    } catch (NoSuchAlgorithmException e) {
        // do whatever you want; better to throw some exception with an error message
    }
    long digitResult = -1;
    if (outputBytes != null) {
        digitResult = Long.parseLong(convertByteArrayToHexString(outputBytes).substring(0, 7), 16);
    }
    return digitResult;
}
// this method also may be useful for you if you decide to use the full result
// or you need the appropriate hex representation
private static String convertByteArrayToHexString(byte[] arrayBytes) {
    final StringBuilder stringBuffer = new StringBuilder();
    for (byte arrByte : arrayBytes) {
        stringBuffer.append(Integer.toString((arrByte & 0xff) + 0x100, 16)
                .substring(1));
    }
    return stringBuffer.toString();
}
I suggest you not use MD5 or SHA-1 because of the known collision attacks against those hash functions.
My idea would be this:
String str = addressLine + postalCode;
UUID uid = UUID.nameUUIDFromBytes(str.getBytes());
return makeItNineDigits(uid);
Where makeItNineDigits is some reduction of the UUID string representation to your liking. :)
This could be uid.toString().substring(0, 9). Or you could take the two long values from getLeastSignificantBits and getMostSignificantBits and create a 9-digit value from them.
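A possible makeItNineDigits along those lines (my own sketch; the XOR-and-modulo reduction is just one arbitrary choice):

// Sketch: reduce the UUID to a 9-digit decimal string.
static String makeItNineDigits(UUID uid) {
    long bits = uid.getMostSignificantBits() ^ uid.getLeastSignificantBits();
    long nineDigits = Math.floorMod(bits, 1_000_000_000L); // non-negative and below 10^9
    return String.format("%09d", nineDigits);              // left-pad with zeros to 9 digits
}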
A simple option might be to just take advantage of the hashing built into Java....
String generateIdentifier(String postCode, String addressLine) {
    long hash = ((postCode.hashCode() & 0xffffffffL) << 14L)
            ^ (addressLine.hashCode() & 0xffffffffL);
    return Long.toString(hash, 36);
}
I'm writing a Simplified DES algorithm to encrypt and subsequently decrypt a string. Suppose I have the initial character (, which has the binary value 00101000, which I get using the following algorithm:
public void getBinary() throws UnsupportedEncodingException {
    byte[] plaintextBinary = text.getBytes("UTF-8");
    for (byte b : plaintextBinary) {
        int val = b;
        int[] tempBinRep = new int[8];
        for (int i = 0; i < 8; i++) {
            tempBinRep[i] = (val & 128) == 0 ? 0 : 1; // record the bits, most significant first
            val <<= 1;
        }
        binaryRepresentations.add(tempBinRep);
    }
}
After I perform the various permutations and shifts, ( and its binary equivalent are transformed into 10001010, whose ASCII equivalent is Š. When I come around to decryption and pass the same character through the getBinary() method, I now get the binary string 11000010 and another binary string 10001010, which translates into ASCII as x(.
Where is this rogue x coming from?
Edit: The full class can be found here.
You haven't supplied the decrypting code, so we can't know for sure, but I would guess you missed the encoding when populating your String. Java Strings are encoded in UTF-16 by default. Since you're forcing UTF-8 when encrypting, I'm assuming you're doing the same when decrypting. The problem is, when you convert your encrypted bytes to a String for storage, if you let it default to UTF-16, you're probably ending up with a two-byte character, because 10001010 is 138, which is beyond the 127 range for ASCII characters that get represented with a single byte.
So the "x" you're getting is the byte for the code page, followed by the actual character's byte. As suggested in the comments, you'd do better to just store the encrypted bytes as bytes, and not convert them to Strings until they're decrypted.
I'm looking for a way to encrypt a four-digit password and, as a result, get a 16-character string.
So far I've got a 64-character String using this:
public static String digestHex(String text) {
    StringBuilder stringBuffer = new StringBuilder();
    try {
        MessageDigest digest = MessageDigest.getInstance("SHA-256"); // SHA-256
        digest.reset();
        for (byte b : digest.digest(text.getBytes("UTF-8"))) {
            stringBuffer.append(Integer.toHexString((int) (b & 0xff)));
        }
    } catch (NoSuchAlgorithmException | UnsupportedEncodingException e) {
        e.printStackTrace();
    }
    return stringBuffer.toString();
}
With text = 1234,
the resulting String is 3ac674216f3e15c761ee1a5e255f067953623c8b388b4459e13f978d7c846f4. Using Java, btw :D
Any "encryption" scheme where you are encrypting a 4 digit number without an additional key is effectively a lookup scheme. Since there are only 10,000 unique "inputs" to the lookup scheme, it will be relatively easy to crack your encryption ... by trying all of the inputs.
In other words, the security of your encrypted PIN numbers is an illusion ... unless you do something like "seeding" the input before you encrypt it.
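To make that concrete (a sketch of my own; capturedHash stands for a hypothetical stored hash, and digestHex is the method from the question):

// Sketch: recover a 4-digit PIN from its hash by trying all 10,000 possible inputs.
static String crackPin(String capturedHash) {
    for (int pin = 0; pin <= 9999; pin++) {
        String candidate = String.format("%04d", pin);
        if (digestHex(candidate).equals(capturedHash)) {
            return candidate; // found the PIN after at most 10,000 hash computations
        }
    }
    return null;
}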
The security of your scheme aside, there are easier ways to do this:
// Your original - with the horrible exception hiding removed.
public static String digestHex(String text) throws NoSuchAlgorithmException, UnsupportedEncodingException {
    StringBuilder stringBuffer = new StringBuilder();
    MessageDigest digest = MessageDigest.getInstance("SHA-256"); // SHA-256
    digest.reset();
    for (byte b : digest.digest(text.getBytes("UTF-8"))) {
        stringBuffer.append(Integer.toHexString((int) (b & 0xff)));
    }
    return stringBuffer.toString();
}

// Uses BigInteger.
public static String digest(String text, int base) throws NoSuchAlgorithmException, UnsupportedEncodingException {
    MessageDigest digest = MessageDigest.getInstance("SHA-256"); // SHA-256
    digest.reset();
    BigInteger b = new BigInteger(digest.digest(text.getBytes("UTF-8")));
    return b.toString(base);
}

public void test() throws NoSuchAlgorithmException, UnsupportedEncodingException {
    System.out.println("Hex:" + digestHex("1234"));
    System.out.println("Hex:" + digest("1234", 16));
    System.out.println("36:" + digest("1234", 36));
    System.out.println("Max:" + digest("1234", Character.MAX_RADIX));
}
This allows you to generate the string in a higher base, thus shortening the number, but sadly you still do not get down to 16 characters.
I would suggest you use one of the simple CRC algorithms if you are really insistent on 16 characters. Alternatively you could try base 62 or base 64 - there are many implementations out there.
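A rough sketch of the CRC idea (my own illustration; note that CRC-32 only gives 8 hex characters, and the JDK does not ship a 64-bit CRC, so this alone does not reach 16 either):

// Sketch: a short, fixed-length checksum using java.util.zip.CRC32.
static String crcHex(String text) {
    java.util.zip.CRC32 crc = new java.util.zip.CRC32();
    crc.update(text.getBytes(java.nio.charset.StandardCharsets.UTF_8));
    return String.format("%08x", crc.getValue()); // 8 zero-padded hex characters
}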
You are using SHA-256. This algorithm generates a 32-byte digest (256 bits, more details here).
This is why you obtain a 64-character hex string as output: Integer.toHexString((int) (b & 0xff)) converts each single byte b of the MessageDigest into a 2-character hex String representation (strictly, up to 2 characters, since toHexString does not zero-pad).
To obtain a 16-character String, you can either use MD5 (16 bytes of output, 32 characters once converted to hex) and derive that string, or use a completely different approach such as actual encryption (using javax.crypto.Cipher).
I'd need to know what you would like to do in order to elaborate further, knowing that using MessageDigest is actually hashing, not encryption, while in the first line of your post you are speaking of encryption. One of the differences is that hash codes are not designed to be reversed but compared, unlike encryption, which is reversible. See this interesting SO post on this.
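One simple derivation along those lines (a sketch of my own, trading collision resistance for length) is to hex-encode the digest with proper zero-padding and keep only the first 8 bytes, i.e. 16 characters:

// Sketch: derive a fixed 16-character hex string from a SHA-256 digest.
static String digestHex16(String text) throws NoSuchAlgorithmException {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    byte[] hash = digest.digest(text.getBytes(java.nio.charset.StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 8; i++) {
        sb.append(String.format("%02x", hash[i])); // zero-padded, 2 characters per byte
    }
    return sb.toString(); // first 8 bytes of the digest = exactly 16 hex characters
}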
I am new to Java, but I am very fluent in C++ and C#, especially C#. I know how to do XOR encryption in both C# and C++. The problem is that the algorithm I wrote in Java to implement XOR encryption seems to be producing wrong results. The results are usually a bunch of spaces, and I am sure that is wrong. Here is the class below:
public final class Encrypter {
    public static String EncryptString(String input, String key)
    {
        int length;
        int index = 0, index2 = 0;
        byte[] ibytes = input.getBytes();
        byte[] kbytes = key.getBytes();
        length = kbytes.length;
        char[] output = new char[ibytes.length];
        for (byte b : ibytes)
        {
            if (index == length)
            {
                index = 0;
            }
            int val = (b ^ kbytes[index]);
            output[index2] = (char) val;
            index++;
            index2++;
        }
        return new String(output);
    }

    public static String DecryptString(String input, String key)
    {
        int length;
        int index = 0, index2 = 0;
        byte[] ibytes = input.getBytes();
        byte[] kbytes = key.getBytes();
        length = kbytes.length;
        char[] output = new char[ibytes.length];
        for (byte b : ibytes)
        {
            if (index == length)
            {
                index = 0;
            }
            int val = (b ^ kbytes[index]);
            output[index2] = (char) val;
            index++;
            index2++;
        }
        return new String(output);
    }
}
Strings in Java are Unicode - and Unicode strings are not general holders for bytes like ASCII strings can be.
You're taking a string and converting it to bytes without specifying what character encoding you want, so you're getting the platform default encoding - probably US-ASCII, UTF-8 or one of the Windows code pages.
Then you're performing arithmetic/logic operations on these bytes. (I haven't looked at what you're doing here - you say you know the algorithm.)
Finally, you're taking these transformed bytes and trying to turn them back into a string - that is, back into characters. Again, you haven't specified the character encoding (but you'll get the same as you got converting characters to bytes, so that's OK), but, most importantly...
Unless your platform default encoding uses a single byte per character (e.g. US-ASCII), then not all of the byte sequences you will generate represent valid characters.
So, two pieces of advice come from this:
Don't use strings as general holders for bytes
Always specify a character encoding when converting between bytes and characters.
In this case, you might have more success if you specifically give US-ASCII as the encoding. EDIT: This last sentence is not true (see comments below). Refer back to point 1 above! Use bytes, not characters, when you want bytes.
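A sketch following that advice (my own illustration, not the poster's code): keep the ciphertext as a byte[], and Base64-encode it only when a printable form is needed:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class XorExample {
    // XOR the input bytes with the key bytes, repeating the key as needed.
    static byte[] xor(byte[] input, byte[] key) {
        byte[] out = new byte[input.length];
        for (int i = 0; i < input.length; i++) {
            out[i] = (byte) (input[i] ^ key[i % key.length]);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] key = "secret".getBytes(StandardCharsets.UTF_8);
        byte[] cipher = xor("hello world".getBytes(StandardCharsets.UTF_8), key);
        String storable = Base64.getEncoder().encodeToString(cipher);      // printable, lossless
        byte[] decrypted = xor(Base64.getDecoder().decode(storable), key);
        System.out.println(new String(decrypted, StandardCharsets.UTF_8)); // hello world
    }
}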
If you use non-ASCII strings as keys you'll get pretty strange results. The bytes in the kbytes array will be negative. Sign extension then means that val will come out negative. The cast to char will then produce a character in the FF80-FFFF range.
These characters will certainly not be printable, and depending on what you use to check the output you may be shown "box" or some other replacement characters.
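A tiny demonstration of that sign-extension effect (my own sketch; the key byte 0xE9, 'é' in ISO-8859-1, is an arbitrary non-ASCII example):

byte keyByte = (byte) 0xE9;                           // a negative byte in Java
int val = 'a' ^ keyByte;                              // keyByte is sign-extended to 0xFFFFFFE9 before the XOR
System.out.println(Integer.toHexString(val));         // ffffff88
System.out.println(Integer.toHexString((char) val));  // ff88 - in the FF80-FFFF range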