I'm using the following code to create a BigInteger from a hexadecimal string and print it to the output.
package javaapplication2;

import java.math.BigInteger;
import javax.xml.bind.DatatypeConverter;

public class JavaApplication2 {

    public static void main(String[] args) {
        // Number in hexadecimal form
        String HexString = "e04fd020ea3a6910a2d808002b30309d";
        // Conversion from string to byte array
        byte[] ByteArray = toByteArray(HexString);
        // Creation of BigInteger from byte array
        BigInteger BigNumber = new BigInteger(ByteArray);
        // Print result
        System.out.print(BigNumber + "\n");
    }

    public static String toHexString(byte[] array) {
        return DatatypeConverter.printHexBinary(array);
    }

    public static byte[] toByteArray(String s) {
        return DatatypeConverter.parseHexBinary(s);
    }
}
After executing this code I get the following result:
-42120883064304190395265794005525319523
But I expected to see this result:
298161483856634273068108813426242891933
What am I doing wrong?
You're passing in a byte array where the first byte has a top bit that is set - making it negative. From the constructor documentation:
Translates a byte array containing the two's-complement binary representation of a BigInteger into a BigInteger. The input array is assumed to be in big-endian byte-order: the most significant byte is in the zeroth element.
A two's-complement binary representation with a leading set bit is negative.
To get the result you want, you can do any of:
Prefix the hex string with "00" so that you'll always get a top byte of 0
Pass the hex string straight into the BigInteger(String, int) constructor, where the sign is inferred from the presence or absence of "-" at the start of the string. (Obviously you'd pass in 16 as the base.)
Use the BigInteger(int, byte[]) constructor, passing 1 as the signum value
If your real context is that you've already got the byte array, and you were only parsing it from a hex string for test purposes, I'd use the third option. If you've genuinely got a hex string as input, I'd use the second option.
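As a minimal sketch of the second and third options (the class name here is just for illustration, and the expected value is the one from the question):

import java.math.BigInteger;
import javax.xml.bind.DatatypeConverter;

public class PositiveBigIntegerDemo {

    public static void main(String[] args) {
        String hex = "e04fd020ea3a6910a2d808002b30309d";

        // Option 2: parse the hex string directly; the sign comes from a leading '-', not from the top bit.
        BigInteger fromString = new BigInteger(hex, 16);

        // Option 3: keep the byte array, but force a positive sign via the signum constructor.
        byte[] bytes = DatatypeConverter.parseHexBinary(hex);
        BigInteger fromBytes = new BigInteger(1, bytes);

        System.out.println(fromString); // 298161483856634273068108813426242891933
        System.out.println(fromBytes);  // same value
    }
}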
Try:

    BigInteger bigInt = new BigInteger(HexString, 16);
I have to encrypt a string using repeating-key XOR with the key "ICE".
I think my algorithm is correct, but the expected solution is 5 bytes shorter than my calculated hex string. Why? Up to those extra 5 bytes the strings are equal.
Did I miss something about how to do repeating XOR?
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public class ES5 {

    public static void main(String[] args) throws UnsupportedEncodingException {
        String str1 = "Burning 'em, if you ain't quick and nimble";
        String str2 = "I go crazy when I hear a cymbal";
        String correct1 = "0b3637272a2b2e63622c2e69692a23693a2a3c6324202d623d63343c2a2622632427276527";
        byte[] cr = Encript(str1.getBytes(StandardCharsets.UTF_8), "ICE");
        String cr22 = HexFormat.of().formatHex(cr);
        System.out.println(cr22);
        System.out.println(correct1);
    }

    private static byte doXOR(byte b, byte b1) {
        return (byte) (b ^ b1);
    }

    // XOR each plaintext byte with the key characters of "ICE", repeating every 3 bytes.
    private static byte[] Encript(byte[] bt1, String ice) {
        int x = 0;
        byte[] rt = new byte[bt1.length];
        for (int i = 0; i < bt1.length; i++) {
            rt[i] = doXOR(bt1[i], (byte) (ice.charAt(x) & 0x00FF));
            x++;
            if (x == 3) x = 0;
        }
        return rt;
    }
}
Hmmm. The String contains characters, and XOR works on bytes.
That's why the first step is to call String.getBytes() to get a byte array.
Depending on the characters and their encoding, the number of bytes can be larger than the number of characters, so you may want to print and compare those counts first.
Then you XOR the bytes, which can produce values that no longer correspond to printable characters at all, so you cannot rely on new String(byte[]). Instead you have to build a hex string representation of the byte[].
Finally, compare this hex string with the value in correct1. That value already looks like a hex representation, so do not hex-encode it again.
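A rough sketch of that pipeline (string → bytes → repeating XOR → hex string), using the same Java 17 HexFormat the question already uses; names are illustrative:

import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public class XorHexDemo {

    public static void main(String[] args) {
        byte[] plain = "Burning 'em, if you ain't quick and nimble".getBytes(StandardCharsets.UTF_8);
        byte[] key = "ICE".getBytes(StandardCharsets.UTF_8);

        // XOR every plaintext byte with the key byte at position i % key.length.
        byte[] cipher = new byte[plain.length];
        for (int i = 0; i < plain.length; i++) {
            cipher[i] = (byte) (plain[i] ^ key[i % key.length]);
        }

        // Compare hex representations rather than new String(cipher): the XORed
        // bytes are usually not valid text in any charset.
        System.out.println(HexFormat.of().formatHex(cipher));
    }
}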
I have some Java code that converts a hexadecimal string into binary. It seems to work fine for very short hexadecimal strings but throws an error if I use a long string, and I can't figure out why. I'm new to Java and programming in general, so feel free to point out any other areas I could improve.
Here is my code:
public class Hextobinary {

    static String hexToBinary(String hex) {
        int i = Integer.parseInt(hex, 16);
        String bin = Integer.toBinaryString(i);
        return bin;
    }

    public static void main(String[] args) {
        String h = "5F";
        String x = hexToBinary(h);
        System.out.println(x);
    }
}
Many Thanks
There is a built-in for this using DatatypeConverter, so you may not have to do it yourself.
import javax.xml.bind.DatatypeConverter;

public class HexUtils {

    public String toHex(final byte[] arr) {
        return DatatypeConverter.printHexBinary(arr);
    }

    public byte[] fromHex(final String str) {
        return DatatypeConverter.parseHexBinary(str);
    }
}
You are parsing your string to an int. That will work for short hex strings, but not for longer ones. An int is 32 bits, or 8 hex characters. Any string longer than that will not fit into an int.
If you do write your own method, split the hex string into two-character chunks, convert each pair of characters into a byte, and store the bytes in a byte array. That will let you deal with much longer hex strings.
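A minimal sketch of that chunk-by-chunk approach (class and method names are just for illustration):

public class HexChunks {

    // Convert a hex string of even length into a byte array, two characters at a time.
    static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            // Each pair of hex digits fits comfortably in an int, then in a byte.
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] bytes = hexToBytes("5F00FFABCD1234");
        for (byte b : bytes) {
            System.out.printf("%02X", b & 0xFF);
        }
        System.out.println();
    }
}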
If you are using huge strings, the int type of the variable i cannot store the value contained in the string hex. An int can only hold values ranging from -80000000 to +7FFFFFFF (hexadecimal), so any longer string makes Integer.parseInt throw a NumberFormatException.
One quick fix is to use the type long (and Long.parseLong) instead of int. A long can hold values ranging from -8000000000000000 to +7FFFFFFFFFFFFFFF (hexadecimal). But if you need to convert even longer strings, this is not going to work either.
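For example, a hedged sketch of that variant applied to the hexToBinary method from the question (class name is illustrative); it only raises the limit to 64 bits:

public class HextobinaryLong {

    // long is 64 bits, so this handles up to 15 hex digits
    // (16 only if the value stays below 0x8000000000000000).
    static String hexToBinary(String hex) {
        return Long.toBinaryString(Long.parseLong(hex, 16));
    }

    public static void main(String[] args) {
        System.out.println(hexToBinary("5F1234ABCD")); // fits in a long, would fail with Integer.parseInt
    }
}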
I'm converting BigIntegers to binary, radix-16, and radix-64 encodings and seeing mysterious zero padding at the most significant byte. Is this a BigInteger problem that I can work around by stripping the zero padding, or should I be doing something else?
My test code:
String s;
System.out.printf( "%s length %d\n", s = "123456789A", (new BigInteger( s, 16 )).toByteArray().length );
System.out.printf( "%s length %d\n", s = "F23456789A", (new BigInteger( s, 16 )).toByteArray().length );
Produces output:
123456789A length 5
F23456789A length 6
The longer of the two arrays has a zero byte padded at the front. Inspecting BigInteger.toByteArray(), I see:
public byte[] toByteArray() {
int byteLen = bitLength()/8 + 1;
byte[] byteArray = new byte[byteLen];
Now, I can find private int bitLength;, but I can't quite find where bitLength() is defined, so I can't figure out exactly why this class does this. Is it connected to sign extension, perhaps?
Yes, this is the documented behaviour:
The byte array will be in big-endian byte-order: the most significant byte is in the zeroth element. The array will contain the minimum number of bytes required to represent this BigInteger, including at least one sign bit, which is (ceil((this.bitLength() + 1)/8)).
bitLength() is documented as:
Returns the number of bits in the minimal two's-complement representation of this BigInteger, excluding a sign bit.
So in other words, two values with the same magnitude will always have the same bit length, regardless of sign. Think of a BigInteger as being an unsigned integer and a sign bit - and toByteArray() returns all the data from both parts, which is "the number of bits required for the unsigned integer, and one bit for the sign".
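A quick check along those lines, using the value from the question (class name is illustrative):

import java.math.BigInteger;

public class BitLengthDemo {

    public static void main(String[] args) {
        BigInteger pos = new BigInteger("F23456789A", 16);
        BigInteger neg = pos.negate();

        // Same magnitude, same bitLength(): the sign bit is not counted.
        System.out.println(pos.bitLength()); // 40
        System.out.println(neg.bitLength()); // 40

        // 40 magnitude bits + 1 sign bit = 41 bits -> ceil(41/8) = 6 bytes.
        System.out.println(pos.toByteArray().length); // 6
    }
}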
Thanks Jon Skeet for your answer. Here's some code I'm using to convert; it can very likely be optimized.
import java.math.BigInteger;
import java.util.Arrays;

public class UnsignedBigInteger {

    public static byte[] toUnsignedByteArray(BigInteger value) {
        if (value.signum() < 0) {
            throw new IllegalArgumentException("value must be a positive BigInteger");
        }
        byte[] signedValue = value.toByteArray();
        // toByteArray() only prepends a 0x00 sign byte when the top bit of the
        // first magnitude byte is set; strip it only if it is actually there.
        if (signedValue[0] == 0x00 && signedValue.length > 1) {
            return Arrays.copyOfRange(signedValue, 1, signedValue.length);
        }
        return signedValue;
    }

    public static BigInteger fromUnsignedByteArray(byte[] value) {
        // Prepend a 0x00 sign byte so the value is always interpreted as positive.
        byte[] signedValue = new byte[value.length + 1];
        System.arraycopy(value, 0, signedValue, 1, value.length);
        return new BigInteger(signedValue);
    }
}
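For reference, a small usage sketch of those helpers (the demo class name is just for illustration):

import java.math.BigInteger;

public class UnsignedBigIntegerDemo {

    public static void main(String[] args) {
        BigInteger value = new BigInteger("F23456789A", 16);

        // 5 bytes without the sign byte, instead of the 6 returned by toByteArray().
        byte[] unsigned = UnsignedBigInteger.toUnsignedByteArray(value);
        System.out.println(unsigned.length); // 5

        // Round-trips back to the original positive value.
        System.out.println(UnsignedBigInteger.fromUnsignedByteArray(unsigned).toString(16)); // f23456789a
    }
}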
I tried the following code:
import java.math.BigInteger;
import org.apache.commons.codec.binary.Base32;
import org.junit.Test;

public class Sandbox {

    @Test
    public void testSomething() {
        String sInput = "GIYTINZUHAZTMNBX";
        BigInteger bb = new BigInteger(new Base32().decode(sInput));
        System.out.println("number = " + bb);
    }
}
And here's the output:
number = 237025977136523702055991
Using this website to convert between base-32 values, I get a different result than the actual output. Here's the result I expect to see, based on what I got from the website:
expected output = 2147483647
Any idea why this is happening?
Edit:
Forgive me for making it confusing by purposefully attempting to convert 2^31-1.
Using the conversion website I linked to earlier, I changed the input:
String sInput = "GE4DE===";
Expected output:
number = 182
Actual output:
number = 3225650
What you're doing is correct... assuming that the Base32 string comes from Base32-encoding a byte array you get from calling BigInteger.toByteArray().
BigInteger(byte[] val) does not take arbitrary bytes in an arbitrary format: it expects the two's-complement byte[] representation of a BigInteger, and it assumes the most-significant byte is in val[0].
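A small sketch of both points, assuming the Apache Commons Codec Base32 class from the question (the demo class name is illustrative):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import org.apache.commons.codec.binary.Base32;

public class Base32RoundTrip {

    public static void main(String[] args) {
        Base32 base32 = new Base32();

        // Round trip: toByteArray() -> Base32 -> decode -> BigInteger(byte[]) recovers the value.
        BigInteger original = BigInteger.valueOf(2147483647L); // 2^31 - 1
        String encoded = base32.encodeAsString(original.toByteArray());
        System.out.println(new BigInteger(base32.decode(encoded))); // 2147483647

        // The question's input decodes to the ASCII text "2147483647", which suggests
        // the website encoded the decimal digits, not the number's byte array.
        byte[] raw = base32.decode("GIYTINZUHAZTMNBX");
        System.out.println(new String(raw, StandardCharsets.US_ASCII)); // 2147483647
    }
}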
If it's a base-32 number, the X, Y, and Z shouldn't be there (base-32 digits only run from 0-9 and A-V). Are you sure it isn't base-36?
I'm trying to compare two byte arrays.
Byte array 1 contains the last 3 bytes of a SHA-1 hash:
private static byte[] sha1SsidGetBytes(byte[] sha1)
{
return new byte[] {sha1[17], sha1[18], sha1[19]};
}
Byte array 2 is an array that I fill with 3 bytes coming from a hexadecimal string:
private static byte[] ssidGetBytes(String ssid)
{
BigInteger ssidBigInt = new BigInteger(ssid, 16);
return ssidBigInt.toByteArray();
}
How is it possible that this comparison:
if (Arrays.equals(ssidBytes, sha1SsidGetBytes(snSha1)))
{
}
works most of the time but sometimes doesn't? Byte order?
E.g. for "6451E6" (hex string) it works fine, but for "ABED74" it does not...
The problem is pretty obvious if you try this:
BigInteger b1 = new BigInteger("6451E6", 16);
BigInteger b2 = new BigInteger("ABED74", 16);
System.out.println(b1.toByteArray().length);
System.out.println(b2.toByteArray().length);
Specifically, ABED74 creates a BigInteger whose byte array is 4 bytes long, so of course it's not going to be equal to any three-byte array.
The straightforward fix is to change the return statement in ssidGetBytes from
return ssidBigInt.toByteArray();
to
byte[] ba = ssidBigInt.toByteArray();
return new byte[] { ba[ba.length - 3], ba[ba.length - 2], ba[ba.length - 1] };
Your approach of parsing a hex string via BigInteger is flawed, basically. For example, new BigInteger("ABED74").toByteArray() returns an array of 4 bytes, not three. While you could hack around this, you're fundamentally not trying to do anything involving BigInteger values... you're just trying to parse hex.
I suggest you use the Apache Codec library to do the parsing:
byte[] array = (byte[]) new Hex().decode(text);
(The API for Apache Codec leaves something to be desired, but it does work.)
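For example, a hedged sketch of ssidGetBytes rewritten that way, using commons-codec's static Hex.decodeHex (the wrapper class is just for illustration):

import org.apache.commons.codec.DecoderException;
import org.apache.commons.codec.binary.Hex;

public class SsidBytes {

    // "ABED74" -> {(byte) 0xAB, (byte) 0xED, 0x74}: always 3 bytes for a 6-digit string.
    private static byte[] ssidGetBytes(String ssid) throws DecoderException {
        return Hex.decodeHex(ssid.toCharArray());
    }

    public static void main(String[] args) throws DecoderException {
        System.out.println(ssidGetBytes("ABED74").length); // 3
        System.out.println(ssidGetBytes("6451E6").length); // 3
    }
}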
From the javadocs (emphasis mine):
http://download.oracle.com/javase/1.5.0/docs/api/java/math/BigInteger.html#toByteArray%28%29
Returns a byte array containing the two's-complement representation of this BigInteger. The byte array will be in big-endian byte-order: the most significant byte is in the zeroth element. The array will contain the minimum number of bytes required to represent this BigInteger, including at least one sign bit, which is (ceil((this.bitLength() + 1)/8)). (This representation is compatible with the (byte[]) constructor.)
The BigInteger(String, radix) constructor that you are using parses the string's digits into a numeric value; nothing about the string's character encoding is kept. So there is no reason the constructed BigInteger should produce a byte array (via its toByteArray() method) comparable to the result of the String's getBytes() encoding.
The output of toByteArray() is the two's-complement representation of that numeric value, intended (mostly) to be fed back into the (byte[]) constructor of BigInteger. It makes no guarantees for uses other than that.
Look at it this way: toByteArray() encodes the number the BigInteger represents, while String.getBytes() encodes the characters of the text. Those are different byte sequences even when both describe the same value.
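To see the difference concretely, here is a small check using the "6451E6" value from the question (the class name is illustrative):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RepresentationCheck {

    public static void main(String[] args) {
        String hex = "6451E6";

        // Character bytes of the text "6451E6" (six ASCII bytes).
        System.out.println(Arrays.toString(hex.getBytes(StandardCharsets.US_ASCII)));
        // [54, 52, 53, 49, 69, 54]

        // Two's-complement bytes of the number 0x6451E6 (three bytes).
        System.out.println(Arrays.toString(new BigInteger(hex, 16).toByteArray()));
        // [100, 81, -26]
    }
}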