I have to encrypt a string using repeating-key XOR with the key "ICE".
I think my algorithm is correct, but the expected solution is 5 bytes shorter than the hex string I calculate. Up to those extra 5 bytes the two strings are equal.
Did I miss something about how repeating-key XOR works?
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public class ES5 {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String str1 = "Burning 'em, if you ain't quick and nimble";
        String str2 = "I go crazy when I hear a cymbal";
        String correct1 = "0b3637272a2b2e63622c2e69692a23693a2a3c6324202d623d63343c2a2622632427276527";

        byte[] cr = Encript(str1.getBytes(StandardCharsets.UTF_8), "ICE");
        String cr22 = HexFormat.of().formatHex(cr);
        System.out.println(cr22);
        System.out.println(correct1);
    }

    private static byte doXOR(byte b, byte b1) {
        return (byte) (b ^ b1);
    }

    // XOR every plaintext byte with the key characters, repeating the key every 3 positions.
    private static byte[] Encript(byte[] bt1, String ice) {
        int x = 0;
        byte[] rt = new byte[bt1.length];
        for (int i = 0; i < bt1.length; i++) {
            rt[i] = doXOR(bt1[i], (byte) (ice.charAt(x) & 0x00FF));
            x++;
            if (x == 3) x = 0;
        }
        return rt;
    }
}
Hmmm. A String contains characters, while XOR works on bytes.
That's why the first step is to call String.getBytes() to get a byte array.
Depending on the characters and their encoding, the number of bytes can be larger than the number of characters, so you may want to print and compare those lengths first.
Then you perform XOR on the bytes, which may land you in a completely different range of characters, so you cannot rely on new String(byte[]) at all. Instead you have to build a hex string representation of the byte[].
Finally, compare that hex string with the value in correct1. That value already looks like a hex representation, so do not hex-encode it again.
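A quick length check along these lines makes the size mismatch visible before you compare any hex output (just a sketch, reusing the names from the question):

byte[] plain = str1.getBytes(StandardCharsets.UTF_8);
System.out.println("characters:     " + str1.length());          // 42 for this input
System.out.println("plain bytes:    " + plain.length);           // also 42, since the input is pure ASCII
System.out.println("expected bytes: " + correct1.length() / 2);  // two hex digits per byte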
Related
I have a byte array read over a network connection that I need to transform into a String without any encoding, that is, simply by treating each byte as the low end of a character and leaving the high end zero. I also need to do the converse where I know that the high end of the character will always be zero.
Searching the web yields several similar questions that have all got responses indicating that the original data source must be changed. This is not an option so please don't suggest it.
This is trivial in C but Java appears to require me to write a conversion routine of my own that is likely to be very inefficient. Is there an easy way that I have missed?
No, you aren't missing anything. There is no easy way to do that because String and char are for text. You apparently don't want to handle your data as text—which would make complete sense if it isn't text. You could do it the hard way that you propose.
An alternative is to assume a character encoding that allows arbitrary sequences of arbitrary byte values (0-255). ISO-8859-1 or IBM437 both qualify. (Windows-1252 only has 251 codepoints. UTF-8 doesn't allow arbitrary sequences.) If you use ISO-8859-1, the resulting string will be the same as your hard way.
As for efficiency, the most efficient way to handle an array of bytes is to keep it as an array of bytes.
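If you go the ISO-8859-1 route, the round trip could look something like this sketch:

// ISO-8859-1 maps every byte value 0-255 to the character with the same code point,
// so nothing is lost in either direction.
byte[] raw = { 0x48, 0x65, (byte) 0xFF, 0x00, 0x10 };
String asText = new String(raw, java.nio.charset.StandardCharsets.ISO_8859_1);
byte[] back = asText.getBytes(java.nio.charset.StandardCharsets.ISO_8859_1);
// back now holds exactly the same bytes as raw, including 0xFF and 0x00.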
This will convert a byte array to a String, filling only the lower 8 bits of each char and leaving the upper 8 bits zero.
public static String stringFromBytes(byte[] byteData) {
    char[] charData = new char[byteData.length];
    for (int i = 0; i < charData.length; i++) {
        // Mask with 0xFF so a negative byte does not sign-extend into the upper bits of the char.
        charData[i] = (char) (((int) byteData[i]) & 0xFF);
    }
    return new String(charData);
}
The efficiency should be quite good. Like Ben Thurley said, if performance is really such an issue don't convert to a String in the first place but work with the byte array instead.
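For the converse direction the question asks about (char back to byte, assuming the high 8 bits of every char are zero), something like this sketch would do; the method name is just a placeholder:

public static byte[] bytesFromString(String stringData) {
    byte[] byteData = new byte[stringData.length()];
    for (int i = 0; i < byteData.length; i++) {
        // Keep only the low 8 bits of each char; anything above 0xFF is silently dropped.
        byteData[i] = (byte) (stringData.charAt(i) & 0xFF);
    }
    return byteData;
}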
Here is sample code that converts a String to a byte array and back to a String without using a character encoding.
public class Test
{
    public static void main(String[] args)
    {
        Test t = new Test();
        t.Test();
    }

    public void Test()
    {
        String input = "Hèllo world";
        byte[] inputBytes = GetBytes(input);
        String output = GetString(inputBytes);
        System.out.println(output);
    }

    // Writes each char as two bytes, high byte first (a UTF-16BE layout without a BOM).
    public byte[] GetBytes(String str)
    {
        char[] chars = str.toCharArray();
        byte[] bytes = new byte[chars.length * 2];
        for (int i = 0; i < chars.length; i++)
        {
            bytes[i * 2] = (byte) (chars[i] >> 8);
            bytes[i * 2 + 1] = (byte) chars[i];
        }
        return bytes;
    }

    // Reassembles each pair of bytes into one char.
    public String GetString(byte[] bytes)
    {
        char[] chars = new char[bytes.length / 2];
        for (int i = 0; i < chars.length; i++)
            chars[i] = (char) ((bytes[i * 2] << 8) + (bytes[i * 2 + 1] & 0xFF));
        return new String(chars);
    }
}
Using the deprecated constructor String(byte[] ascii, int hibyte):
String string = new String(byteArray, 0);
A String is already stored as Unicode/UTF-16 internally. UTF-16 means that it can take up to two char values to make one displayable character. What you really want to use is:
byte[] bytes = myString.getBytes(StandardCharsets.UTF_16BE);
to convert a String to an array of bytes. This produces the same layout as the manual GetBytes loop above, but lets the JDK do the work. If you would like to cut the transmitted data roughly in half, I would recommend converting to UTF-8 instead (ASCII is a subset of UTF-8), which is the encoding most of the web uses:
byte[] bytes = myString.getBytes(StandardCharsets.UTF_8);
To convert back to a String, use the matching charset:
String myString = new String(bytes, StandardCharsets.UTF_16BE);
or
String myString = new String(bytes, StandardCharsets.UTF_8);
I have some Java code that converts a hexadecimal string into bytes. It seems to work fine for very short hexadecimal strings but flags an error if I use a long string, and I can't figure out why. I'm new to Java and programming in general. Feel free to point out any other areas where I could improve.
Here is my code:
public class Hextobinary {

    static String hexToBinary(String hex) {
        int i = Integer.parseInt(hex, 16);
        String bin = Integer.toBinaryString(i);
        return bin;
    }

    public static void main(String[] args) {
        String h = "5F";
        String x = hexToBinary(h);
        System.out.println(x);
    }
}
Many Thanks
There is a built-in for this in DatatypeConverter, so you may not have to write it yourself.
import javax.xml.bind.DatatypeConverter;

public class HexUtils {
    public String toHex(final byte[] arr) {
        return DatatypeConverter.printHexBinary(arr);
    }

    public byte[] fromHex(final String str) {
        return DatatypeConverter.parseHexBinary(str);
    }
}
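Usage is straightforward; note that javax.xml.bind left the JDK in Java 11, so on newer versions you need the JAXB API as an external dependency:

HexUtils hex = new HexUtils();
byte[] bytes = hex.fromHex("5F3A");    // {0x5F, 0x3A}
System.out.println(hex.toHex(bytes));  // prints 5F3A (printHexBinary uses upper case)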
You are parsing your string to an int. That will work for short hex strings, but not for longer ones. An int is 32 bits, or 8 hex characters. Any string longer than that will not fit into an int.
If you do write your own method, then split the hex string up into two character chunks, and process each pair of characters separately into a byte, and store the bytes in a byte array. That will allow you to deal with longer hex strings.
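If you do write it yourself, the pair-by-pair version could look roughly like this (a sketch; no validation of odd lengths or non-hex characters):

static byte[] hexToBytes(String hex) {
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        // Parse each two-character chunk as one unsigned byte value (0-255).
        int hi = Character.digit(hex.charAt(2 * i), 16);
        int lo = Character.digit(hex.charAt(2 * i + 1), 16);
        out[i] = (byte) ((hi << 4) | lo);
    }
    return out;
}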
If you use longer strings, the int variable i cannot store the value contained in the string hex. An int can only hold values ranging from -80000000 (hexadecimal) to +7FFFFFFF. Any longer string will make parseInt throw a NumberFormatException.
One quick solution is to use the type long (and the function Long.parseLong) instead of Integer. A long can hold values ranging from -8000000000000000 (hexadecimal) to +7FFFFFFFFFFFFFFF. But if you need to convert even longer strings, this is not going to work anymore.
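For example (this still only works while the value fits in a signed 64-bit long):

long i = Long.parseLong("1F4A9C", 16);
System.out.println(Long.toBinaryString(i));   // 111110100101010011100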
I have this operation I need to perform where I need to append a byte such as 0x10 to some String in Java. I was wondering how I could go about doing this?
For example:
String someString = "HELLO WORLD";
byte someByte = 0x10;
In this example, how would I go about appending someByte to someString?
The reason I am asking is that the application I am developing has to send commands to a server. The server accepts commands (base64 encoded), decodes them, and parses out bytes that are not necessarily compatible with any ASCII encoding standard, in order to perform some special function.
If you want to concatenate the actual value of a byte to a String use the Byte wrapper and its toString() method, like this:
String someString = "STRING";
byte someByte = 0x10;
someString += Byte.toString(someByte);
If you want to append the byte as its ASCII character, then try this:
public static void main(String[] args) {
String a = "bla";
byte x = 0x21; // Ascii code for '!'
a += (char)x;
System.out.println(a); // Will print out 'bla!'
}
If you want to convert the byte value into its hex representation as a String, then take a look at Integer.toHexString.
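For example (masking with 0xFF so a negative byte still prints as two hex digits instead of a sign-extended value):

byte someByte = 0x10;
String someString = "STRING" + Integer.toHexString(someByte & 0xFF);  // "STRING10"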
If you just want to extend a String literal, then use this one:
System.out.println("Hello World\u0010");
otherwise:
String s1 = "Hello World";
String s2 = s1 + '\u0010';
And no - characters are not bytes and vice versa. But here the approximation is close enough :-)
I am new to Java but I am very fluent in C++ and C#, especially C#. I know how to do XOR encryption in both C# and C++. The problem is that the algorithm I wrote in Java to implement XOR encryption seems to produce wrong results: the output is usually a bunch of spaces, and I am sure that is wrong. Here is the class below:
public final class Encrypter {
public static String EncryptString(String input, String key)
{
int length;
int index = 0, index2 = 0;
byte[] ibytes = input.getBytes();
byte[] kbytes = key.getBytes();
length = kbytes.length;
char[] output = new char[ibytes.length];
for(byte b : ibytes)
{
if (index == length)
{
index = 0;
}
int val = (b ^ kbytes[index]);
output[index2] = (char)val;
index++;
index2++;
}
return new String(output);
}
public static String DecryptString(String input, String key)
{
int length;
int index = 0, index2 = 0;
byte[] ibytes = input.getBytes();
byte[] kbytes = key.getBytes();
length = kbytes.length;
char[] output = new char[ibytes.length];
for(byte b : ibytes)
{
if (index == length)
{
index = 0;
}
int val = (b ^ kbytes[index]);
output[index2] = (char)val;
index++;
index2++;
}
return new String(output);
}
}
Strings in Java are Unicode - and Unicode strings are not general holders for bytes like ASCII strings can be.
You're taking a string and converting it to bytes without specifying what character encoding you want, so you're getting the platform default encoding - probably US-ASCII, UTF-8 or one of the Windows code pages.
Then you're performing arithmetic/logic operations on these bytes. (I haven't looked closely at what you're doing here; you say you know the algorithm.)
Finally, you're taking these transformed bytes and trying to turn them back into a string - that is, back into characters. Again, you haven't specified the character encoding (but you'll get the same as you got converting characters to bytes, so that's OK), but, most importantly...
Unless your platform default encoding uses a single byte per character (e.g. US-ASCII), then not all of the byte sequences you will generate represent valid characters.
So, two pieces of advice come from this:
Don't use strings as general holders for bytes
Always specify a character encoding when converting between bytes and characters.
In this case, you might think you'd have more success by specifically giving US-ASCII as the encoding. EDIT: that suggestion does not actually work. Refer back to point 1 above: use bytes, not characters, when you want bytes.
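To make point 1 concrete, here is a rough sketch that keeps the data in byte[] the whole time and only converts to a printable hex form at the edge (class and method names are placeholders; HexFormat needs Java 17+):

import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public final class XorUtil {
    // XOR the input with a repeating key; applying it twice restores the original bytes.
    public static byte[] xor(byte[] input, byte[] key) {
        byte[] out = new byte[input.length];
        for (int i = 0; i < input.length; i++) {
            out[i] = (byte) (input[i] ^ key[i % key.length]);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] key = "secret".getBytes(StandardCharsets.UTF_8);
        byte[] cipher = xor("Hello".getBytes(StandardCharsets.UTF_8), key);
        System.out.println(HexFormat.of().formatHex(cipher));                     // printable form
        System.out.println(new String(xor(cipher, key), StandardCharsets.UTF_8)); // "Hello"
    }
}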
If you use non-ASCII strings as keys you'll get pretty strange results. The bytes in the kbytes array will be negative. Sign extension then means that val will come out negative, and the cast to char will produce a character in the 0xFF80-0xFFFF range.
These characters will almost certainly not be printable, and depending on what you use to check the output you may be shown a "box" or some other replacement character.
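For illustration, masking the XOR result before the cast keeps it in the 0x00-0xFF range instead:

byte keyByte = (byte) 0xE9;     // a "negative" byte from a non-ASCII key
int val = 'A' ^ keyByte;        // sign extension makes this negative
char bad = (char) val;          // lands in the 0xFF80-0xFFFF range
char ok  = (char) (val & 0xFF); // masked: stays within 0x00-0xFF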
I need to compute a reversible numeric representation of a string. For example, if I have the string "US", I would like an algorithm which, when applied to "US", generates a number X (int or long). When another algorithm is applied to X, I want to get "US" back. Each string consists of two characters.
Thanks in advance.
The following does it easily by using DataInputStream and DataOutputStream to read/write to an underlying byte array.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public static void main(String[] args) {
    String original = "US";
    int i = stringToInt(original);
    String copy = intToString(i);
    System.out.println("original: " + original);
    System.out.println("i: " + i);
    System.out.println("copy: " + copy);
}

static int stringToInt(String s) {
    // Uses the platform default charset; fine for ASCII input like "US".
    byte[] bytes = s.getBytes();
    if (bytes.length > 4) {
        throw new IllegalArgumentException("String too large to be" +
                " stored in an int");
    }
    byte[] fourBytes = new byte[4];
    System.arraycopy(bytes, 0, fourBytes, 0, bytes.length);
    try {
        return new DataInputStream(new ByteArrayInputStream(fourBytes))
                .readInt();
    } catch (IOException e) {
        throw new RuntimeException("impossible");
    }
}

static String intToString(int i) {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    try {
        new DataOutputStream(byteArrayOutputStream).writeInt(i);
    } catch (IOException e) {
        throw new RuntimeException("impossible");
    }
    return new String(byteArrayOutputStream.toByteArray());
}
This is in the general sense impossible; there are only 2^64 long values, and there are more than 2^64 64-character strings consisting only of the characters X, Y and Q.
Maybe you want a pair of hash tables A and B and a counter; when you're given a string, check whether it's already in the first hash table: if so, return the value you stored there; if not, set
A[string] = counter; B[counter] = string; counter = counter + 1;
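A rough sketch of that interning scheme (class and method names are just placeholders):

import java.util.HashMap;
import java.util.Map;

class StringInterner {
    private final Map<String, Integer> a = new HashMap<>();  // string -> id
    private final Map<Integer, String> b = new HashMap<>();  // id -> string
    private int counter = 0;

    int toNumber(String s) {
        Integer existing = a.get(s);
        if (existing != null) {
            return existing;
        }
        a.put(s, counter);
        b.put(counter, s);
        return counter++;
    }

    String fromNumber(int id) {
        return b.get(id);
    }
}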
What you're describing is bidirectional encryption. Something like this may help you. Another way to do this, if you specifically want a numerical value, is to store the character codes (ASCII codes) of each letter. However, the resulting number is going to be huge (especially for really long strings) and you probably won't be able to store it in a 32- or 64-bit integer. Even a long won't help you here.
UPDATE
According to your edit, which says that you only need two characters, you can use the ASCII codes by using getBytes() on the String. When you need to convert it back, the first two digits will correspond to the first character, whereas the last two will correspond to the second character.
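A sketch of that digit-packing idea, assuming both characters have two-digit ASCII codes (e.g. the uppercase letters 'A'-'Z'); the method names are placeholders:

// Packs "US" as 8583: 'U' is 85, 'S' is 83.
static int toNumber(String s) {
    byte[] codes = s.getBytes(java.nio.charset.StandardCharsets.US_ASCII);
    return codes[0] * 100 + codes[1];
}

static String fromNumber(int n) {
    byte[] codes = { (byte) (n / 100), (byte) (n % 100) };
    return new String(codes, java.nio.charset.StandardCharsets.US_ASCII);
}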
This could do, assuming your Strings have length 2, i.e. consist of two Java char values:
public int toNumber(String s) {
    // The first char goes into the low 16 bits, the second into the high 16 bits.
    // The parentheses are needed: '+' binds tighter than '<<'.
    return s.charAt(0) | (s.charAt(1) << 16);
}

public String toString(int number) {
    return (char) number + "" + (char) (number >> 16);
}
There are Unicode characters (those with code points of 2^16 and above) that do not fit into a single Java char and are instead represented (by UTF-16) by two consecutive surrogates. This algorithm would work for a single-character string consisting of those two surrogates, but not for longer strings containing more than one such character.
Also, there are int values that do not map back to valid Unicode (or UTF-16) strings (e.g. ones that produce unpaired surrogates instead of valid characters). But every normal string gets converted to an int and back to the same string.
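For completeness, a quick round trip with the two-char packing above, called from the same class that defines the two methods:

int packed = toNumber("US");                      // 'U' (0x55) in the low 16 bits, 'S' (0x53) in the high 16 bits
System.out.println(Integer.toHexString(packed));  // 530055
System.out.println(toString(packed));             // US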