Bit manipulation of chars in Java or C? - java

I am a student trying to implement the DES algorithm.
I have a choice of 2 languages: C & Java.
I did understand the algorithm, but am stuck at the very beginning as to manipulation of the key.
Here's the problem.
In DES, we have a 64-bit key (8 chars in C, 4 in Java, although I can cast each char to a byte to keep only the ASCII part), of which every 8th bit is a parity bit that needs to be stripped to produce a 56-bit key for further processing. I have thought about this for a long time, but cannot find a way to strip every 8th bit and store the result in another char array (in Java as well as in C).
I tried using the java.util.BitSet class, but got confused.
Any suggestions as to how I can remove every 8th bit and concatenate adjacent bytes (Java) or chars (C) to get the 56-bit key?
I am aware of bit operations and shifting, but consider this specific example:
Suppose I have a 16-bit key: 1100 1001 1101 1000.
I need to remove the 8th and 16th bits, making the key: 1100 100 1101 100.
If I declare 2 bytes, how do I truncate the 8th bit and append the 9th bit to it, making the first byte 1100 1001?
So, what I need help with is: how do I replace the 8th bit with the 9th, the 16th with the 17th, and so on, to derive a 56-bit key from the 64-bit key?
If someone can explain it to me, I can probably implement it regardless of language.

Be careful with 16-bit chars in Java. Many methods only convert the lower 8 bits; read the documentation carefully. It is more usual to treat a cryptographic key as a byte[] in Java, thanks to its stronger typing compared to C.
As to the parity bits, check through the DES algorithm carefully and see where they are used. That should give you a hint as to what you need to do with them.
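As a concrete illustration of the hint above: each byte of a well-formed DES key has odd parity, i.e. an odd number of set bits, with the parity bit conventionally being the low-order bit of each byte. A quick check could look like this (class and method names are mine):

```java
public class DesParityCheck {
    // a well-formed DES key byte has an odd number of set bits
    public static boolean hasOddParity(byte b) {
        return Integer.bitCount(b & 0xFF) % 2 == 1;
    }

    public static void main(String[] args) {
        System.out.println(hasOddParity((byte) 0x01)); // one bit set -> true
        System.out.println(hasOddParity((byte) 0x03)); // two bits set -> false
    }
}
```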

In C, you can manipulate bits with the bitwise operators, such as & and |, as well as the bitshift operators << and >>.
For instance, to turn off the high bit of a given byte, you can do this.
char c = 0xBF; // initial value is bit pattern 10111111
c &= 0x7F; // perform AND against the bit pattern 01111111
// final value is bit pattern 00111111 (0x3F)
Does that make sense?
Obviously, you need to be able to convert from a bit pattern to hex, but that's not too hard.
You can use similar masking to extract the bits you want, and put them in an output buffer.
Update:
You have 64 bits (8 bytes) of input, and want 56 bits (7 bytes) of output.
Let's represent your input as the following, where each letter represents a single bit
The 'x' bits are the ones you want to throw away.
xAAAAAAA xBBBBBBB xCCCCCCC xDDDDDDD xEEEEEEE xFFFFFFF xGGGGGGG xHHHHHHH
So you want your final answer to be:
AAAAAAAB BBBBBBCC CCCCCDDD DDDDEEEE EEEFFFFF FFGGGGGG GHHHHHHH
So in C, we might have code like this:
unsigned char data[8] = {/* put data here */};
// chop off the top bit of the first byte
data[0] <<= 1;
// the bottom bit of data[0] needs to come from the top data bit of data[1]
data[0] |= (data[1] >> 6) & 0x01;
// use similar transformations to fill in data[1], data[2], ... data[6]
// At the end, data[7] will be useless
Of course this is not optimized at all, but hopefully you get the idea.
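In Java, the same transformation can be sketched by collecting the 7 data bits of each byte into a long and slicing it back into 8-bit chunks. This follows the diagram above in treating the top bit of each byte as the parity bit to discard (real DES implementations usually fold this step into the PC-1 permutation instead); the class and method names are mine:

```java
public class Des56 {
    // pack the low 7 bits of each of the 8 input bytes into 7 output bytes
    public static byte[] stripParityBits(byte[] key64) {
        long bits = 0;
        for (int i = 0; i < 8; i++) {
            bits = (bits << 7) | (key64[i] & 0x7F); // drop the top (parity) bit
        }
        byte[] key56 = new byte[7];
        for (int i = 0; i < 7; i++) {
            key56[i] = (byte) (bits >>> (48 - 8 * i)); // slice MSB-first
        }
        return key56;
    }

    public static void main(String[] args) {
        for (byte b : stripParityBits("ABCDEFGH".getBytes())) {
            System.out.printf("%02X ", b & 0xFF);
        }
        System.out.println();
    }
}
```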

I can briefly describe one way; I will explain further if required.
Right-shift all 8 chars by 1, i.e. c1 = c1 >> 1, etc.
Shift c1 up so it occupies the topmost byte of a wider (56-bit) value, i.e. pad it below with zeros.
Then do the same for the following chars, padding each with two fewer zero digits than the previous one (c2 one byte lower, c3 another byte lower, and so on).
Now start adding: c1 + c2 + c3 ...
The idea here is to first position each value with zeros so that, when you add the other chars, each one lands in its proper place:
00 00 00 00 00 00 00
34 00 00 00 00 00 00 (c1 = c1 >> 1); here c1 = 0x34, c2 = 0x67
00 67 00 00 00 00 00 (c2 = c2 >> 1)
and so on...
Add the above.
I hope this will help.

@jwd, @jscode
Thanks a lot for your help.
To jwd: I got the idea from your code. It seemed like pretty simple logic after I read it. :-) I wonder why I didn't think of that.
Well, I polished your idea a little and it works fine now in Java.
If anyone has any suggestions, please let me know.
THANKS.
P.S.: The testing part is very primitive. I print the bit values. I worked through a couple of examples manually, used the same ones as input, and it works fine.
=============================================
public static void main(String[] args) {
    final int KEY_LENGTH = 8; // key length in bytes
    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
    System.out.println("Enter an 8 char key: ");
    String input;
    try {
        // get key, key.length() >= 8 chars
        input = br.readLine();
        if (input.length() < 8) {
            System.out.println("Key < 8B. Exiting. . .");
            System.exit(1);
        }
        // a Java char has 16 bits instead of 8 bits as in C, so convert it
        // to an 8-bit value by keeping the low-order byte and discarding
        // the high-order byte
        char[] inputKey = input.toCharArray();
        byte[] key64 = new byte[8];
        byte[] key56 = new byte[7];
        // consider only the first 8 chars even if the input is longer
        for (int counter = 0; counter < 8; counter++)
            key64[counter] = (byte) inputKey[counter];
        System.out.print("\n$$ " + new String(key64) + " $$\n");
        // converting the 64-bit key to a 56-bit key;
        // first clear the low (parity) bit of each byte -- mask with 0xFF
        // before the unsigned shift so sign extension does not leak bits in
        for (int counter = 0; counter < KEY_LENGTH - 1; counter++) {
            key64[counter] = (byte) ((key64[counter] & 0xFF) >>> 1);
            key64[counter] = (byte) (key64[counter] << 1);
        }
        // then pack the 7 data bits of each byte together
        for (int counter = 0; counter < KEY_LENGTH - 1; counter++) {
            key56[counter] = (byte) (key64[counter] << counter);
            key56[counter] = (byte) (key56[counter]
                    | ((key64[counter + 1] & 0xFF) >>> (KEY_LENGTH - 1 - counter)));
        }
        /* Conversion from 64 to 56 bit testing code
        System.out.println(new String(key56));
        System.out.println();
        for (int counter1 = 0; counter1 < 7; counter1++) {
            for (int counter2 = 7; counter2 >= 0; counter2--) {
                System.out.println(key56[counter1] & (1 << counter2));
            }
            System.out.println();
        }*/
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Related

Variable length-encoding of int to 2 bytes

I'm implementing variable-length encoding and reading the Wikipedia article about it. Here is what I found:
0x00000080 0x81 0x00
It means the int 0x80 is encoded as the 2 bytes 0x81 0x00. That is what I cannot understand. Okay, following the algorithm listed there, we have:
Binary 0x80: 00000000 00000000 00000000 10000000
We move the high bit to the next octet and set it to 1 (indicating that we have more octets):
00000000 00000000 00000001 10000000, which is not equal to 0x81 0x00. I tried to write a program for that:
byte[] ba = new byte[]{(byte) 0x81, (byte) 0x00};
int first = (ba[0] & 0xFF) & 0x7F;
int second = ((ba[1] & 0xFF) & 0x7F) << 7;
int result = first | second;
System.out.println(result); //prints 1, not 0x80
ideone
What did I miss?
Let's review the algorithm from the Wikipedia page:
Take the binary representation of the integer
Split it into groups of 7 bits; the group holding the most significant bits may have fewer than 7
Take these seven bits as a byte, setting the MSB (most significant bit) to 1 for all but the last; leave it 0 for the last one
We can implement the algorithm like this:
public static byte[] variableLengthInteger(int input) {
// first find out how many bytes we need to represent the integer
int numBytes = ((32 - Integer.numberOfLeadingZeros(input)) + 6) / 7;
// if the integer is 0, we still need 1 byte
numBytes = numBytes > 0 ? numBytes : 1;
byte[] output = new byte[numBytes];
// for each byte of output ...
for(int i = 0; i < numBytes; i++) {
// ... take the least significant 7 bits of input and set the MSB to 1 ...
output[i] = (byte) ((input & 0b1111111) | 0b10000000);
// ... shift the input right by 7 places, discarding the 7 bits we just used
input >>= 7;
}
// finally clear the MSB on output[0]; note the array holds the least
// significant group first, so emit it in reverse order to match the
// MSB-first wire format
output[0] &= 0b01111111;
return output;
}
You can see it working for the examples from the Wikipedia page here, you can also plug in your own values and try it online.
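For the decoding direction, which is where the question started, note that the wire format in the Wikipedia example is most-significant group first, so each incoming byte shifts the accumulated result left by 7, rather than being OR-ed into ever-higher positions as in the question's attempt. A minimal sketch (names are mine):

```java
public class VlqDecode {
    // decode a most-significant-group-first variable-length integer;
    // the low 7 bits of each byte carry the payload
    public static int decode(byte[] bytes) {
        int result = 0;
        for (byte b : bytes) {
            result = (result << 7) | (b & 0x7F);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(decode(new byte[]{(byte) 0x81, 0x00})); // prints 128
    }
}
```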
Other variable-length encodings of integers exist and are widely used. For example, ASN.1, from 1984, defines the "length" field as:
The encoding of length can take two forms: short or long. The short
form is a single byte, between 0 and 127.
The long form is at least two bytes long, and has bit 8 of the first
byte set to 1. Bits 7-1 of the first byte indicate how many more bytes
are in the length field itself. Then the remaining bytes specify the
length itself, as a multi-byte integer.
This encoding is used, for example, in the DLMS/COSEM protocol and in HTTPS certificates. For simple code, you can have a look at an ASN.1 Java library.
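The short/long form described in the quote can be sketched as follows; this is a simplified illustration of the length field only, not a full DER encoder, and the names are mine:

```java
public class DerLength {
    // encode an ASN.1 length: one byte for 0..127 (short form), otherwise
    // 0x80 | count, followed by the length as a big-endian integer
    public static byte[] encodeLength(int length) {
        if (length < 0x80) {
            return new byte[]{(byte) length};          // short form
        }
        int numBytes = (32 - Integer.numberOfLeadingZeros(length) + 7) / 8;
        byte[] out = new byte[1 + numBytes];
        out[0] = (byte) (0x80 | numBytes);
        for (int i = 0; i < numBytes; i++) {
            out[1 + i] = (byte) (length >>> (8 * (numBytes - 1 - i)));
        }
        return out;
    }

    public static void main(String[] args) {
        for (byte b : encodeLength(300)) {
            System.out.printf("%02X ", b & 0xFF); // prints 82 01 2C
        }
        System.out.println();
    }
}
```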

binary value comes wrong after & with 0x000000FF [duplicate]

I'm reading a file into a byte array in chunks and sending it over the network via a POST request to a webserver. It's not anything complicated, I've done it before using this exact same code. This time, I noticed that my images are looking really odd when they get to the server, so I decided to look at the byte array being sent and the one being received just to make sure it was the same. It's not. On the java sending side the byte array contains negative numbers. On the C# receiving side, there are no negative numbers.
The first 15 bytes on the receiving side (C#)
137
80
78
71
13
10
26
10
0
0
0
13
73
72
68
Those same bytes but on the sending side (java)
-119
80
78
71
13
10
26
10
0
0
0
13
73
72
68
All of the non-negative numbers are the same, and the -119 isn't the only negative number, they are all over. I did notice that -119 and 137 are 256 apart and wondered if that has something to do with it.
The code I'm using to read the image (java)
public static byte[] readPart(String fileName, long offset, int length) throws FileNotFoundException, Exception
{
    byte[] data = new byte[length];
    File file = new File(fileName);
    InputStream is = new FileInputStream(file);
    is.skip(offset);
    is.read(data, 0, data.length);
    is.close();
    return data;
}
The code I'm using to write the data (c#)
private void writeFile(string fileName, Stream contents)
{
    using (FileStream fs = new FileStream(fileName, FileMode.Append, FileAccess.Write, FileShare.ReadWrite))
    {
        int bufferLen = 65000;
        byte[] buffer = new byte[bufferLen];
        int count = 0;
        while ((count = contents.Read(buffer, 0, bufferLen)) > 0)
        {
            fs.Write(buffer, 0, count);
        }
        fs.Close();
    }
    contents.Close();
}
I don't know if that is something that always happens and I just never noticed it before or if it is something that decided to go horribly wrong. What I do know is that this code worked before for something very similar and that it's not working now.
If anyone has any suggestions or an explanation I would really appreciate it.
EDIT:
The reason my images were looking odd is how I was calling the readPart method.
byte[] data = FileUtilities.readPart(fileName,counter,maxFileSize);//counter is the current chunk number
How I should have been calling it
byte[] data = FileUtilities.readPart(fileName,counter*maxFileSize,maxFileSize);//the current chunk * cuhnksize for the offset...
Thanks everyone, I'm significantly less confused now :)
In Java, byte is a signed value (using two's complement to encode negative values), so what you see is correct, if unexpected by most people.
To convert a byte to an unsigned int value, use b & 0xff
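A one-line illustration of that masking:

```java
public class UnsignedByteDemo {
    public static void main(String[] args) {
        byte b = (byte) 137;       // stored as -119 in Java's signed byte
        int unsigned = b & 0xFF;   // masking restores the unsigned value
        System.out.println(b + " -> " + unsigned); // prints -119 -> 137
    }
}
```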
Java doesn't have unsigned bytes; all bytes are treated as signed. That's all.
All that really matters is how you think of the bytes, since you rarely ever actually need to do comparisons on bytes. The only significant difference is that they print out as signed, as you've discovered.
If you like, you can use e.g. Guava's UnsignedBytes utilities to view Java bytes as unsigned, but there's really not much practical difference.
As a further explanation, assume you have 137 as an unsigned byte. That is represented as:
1000 1001
This binary value, when expressed as a signed two's complement number, turns out to be -119. (-128 + 9)
Any unsigned byte values over 128 will be affected by the difference since the left-most bit is used in this way by the two's complement scheme.
Maybe it has something to do with the fact that Java's byte is signed (range -128 to 127) while C#'s is unsigned (0 to 255) :). The information is the same in binary, it's just interpreted differently.
The range of byte is from -128 to 127, so if you try to assign 128 to a byte, it will wrap around and the result will be -128.
System.out.println("Max val = " + Byte.MAX_VALUE); //prints: Max val = 127
System.out.println("Min val = " + Byte.MIN_VALUE); //prints: Min val = -128
System.out.println("(byte)137 = " + (byte)137); //prints: (byte)137 = -119
System.out.println("(byte)128 = " + (byte)128); //prints: (byte)128 = -128
System.out.println("(byte)-129 = " + (byte)-129); //prints: (byte)-129 = 127

How Convert Byte Array To UInt64 In Objective-C

This is so far what I've done to convert the 8 bytes I received to UInt64:
+ (UInt64)convertByteArrayToUInt64:(unsigned char *)bytes ofLength:(NSInteger)length
{
    UInt64 data = 0;
    for (int i = 0; i < length; i++)
    {
        data = data | ((UInt64) (bytes[i] & 0xff) << (24 - i * 8));
    }
    return data;
}
The sender that converts the data to 8 bytes data did it this way:
for (int i = 1; i < 9; i++)
{
    statusdata[i] = (time >> 8 * i & 0xff);
}
The 8 bytes data data that I received is:
01 00 00 00 00 00 00 3b
The output of my method is:
16777216
I tried to convert this "16777216" to bytes using calculator and I got:
01 00 00 00 00 00 00
which means the 3b was not included in conversion.
But I tried this code in Java and it works fine, so I don't know where the problem is.
Please help. Thanks in advance!
A UInt64 is 8 bytes, so if you have an 8-byte buffer, all you need to do is make a UInt64 pointer to it and dereference it (as long as the data is in little-endian format on x86 architectures, but I'll get to that in a second).
So:
unsigned char foo[8] = {0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01};
UInt64 *n = (UInt64 *) foo; // pointer to value 0x0102030405060708
UInt64 nValue = *n;
Be aware though the individual byte values on x86 hardware are little-endian, so the least-significant byte goes first in the buffer. Thus if you think about the buffer above as individual base-16 digits, the number will be:
0x0102030405060708 (most significant byte is last in the buffer).
The trick you're looking for though is to simply cast your 8-byte buffer to a UInt64* and dereference it. If you've never seen big-vs.-little endian storage/byte-order before I recommend you go read about it.
P.S. The sender code is wrong, by the way: the array index (i) needs to run from 0 to 7, i.e. (i = 0; i < 8; ++i), not 1 to 8, or the value of time will not be copied correctly. Incidentally, the sender is attempting to copy time into statusdata in little-endian order (least significant byte first in the buffer), but again, it is being done wrong.
Also, if the sender code is on Java you need to be careful that the value of time isn't actually supposed to be negative. Java doesn't have unsigned integer values so if time is negative and you reconstitute it to a UInt64 it will be a large positive value, which isn't what you'll want.
For the sake of completeness, I'll show you how to reconstitute the data byte-by-byte from the little-endian buffer, but remember, the sender code is wrong and needs to be rewritten to index from zero (and a typecast as shown above would get you there as well):
data = 0;
for (i = 0; i < 8; ++i) {
    data |= ((UInt64) bytes[i]) << (8 * i); // bytes[0] is least significant; bytes is UInt8 *, so no mask is needed
}
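Since the original poster mentions that the Java version of the conversion works, here is what the byte-by-byte little-endian reconstruction looks like in Java, where an extra & 0xFF mask is required because Java bytes are signed (names are mine):

```java
public class LittleEndian {
    // reconstruct a 64-bit value from a little-endian 8-byte buffer
    public static long toUInt64(byte[] bytes) {
        long value = 0;
        for (int i = 0; i < 8; i++) {
            // mask to 0xFF so sign extension does not corrupt the high bits
            value |= (bytes[i] & 0xFFL) << (8 * i);
        }
        return value;
    }

    public static void main(String[] args) {
        byte[] buf = {0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01};
        System.out.printf("0x%016X%n", toUInt64(buf)); // prints 0x0102030405060708
    }
}
```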

Creating 'mpint' value in Java (The Secure Shell (SSH) Protocol Architecture - RFC 4251)

I'm trying to create mpint string using BigInteger as specified in RFC4251:
mpint
Represents multiple precision integers in two's complement format,
stored as a string, 8 bits per byte, MSB first. Negative numbers
have the value 1 as the most significant bit of the first byte of
the data partition. If the most significant bit would be set for
a positive number, the number MUST be preceded by a zero byte.
Unnecessary leading bytes with the value 0 or 255 MUST NOT be
included. The value zero MUST be stored as a string with zero
bytes of data.
By convention, a number that is used in modular computations in
Z_n SHOULD be represented in the range 0 <= x < n.
Examples:
value (hex) representation (hex)
----------- --------------------
0 00 00 00 00
9a378f9b2e332a7 00 00 00 08 09 a3 78 f9 b2 e3 32 a7
80 00 00 00 02 00 80
-1234 00 00 00 02 ed cc
-deadbeef 00 00 00 05 ff 21 52 41 11
Almost everything is clear, but how should I interpret "Unnecessary leading bytes with the value 0 or 255 MUST NOT be included."?
And the second question is about this line: "By convention, a number that is used in modular computations in Z_n SHOULD be represented in the range 0 <= x < n." How should I interpret it?
EDIT:
My first suggestion is:
/**
 * Write 'mpint' to output stream including length.
 *
 * @param dos output stream
 * @param bi the value to be written
 */
public static void writeMPInt(DataOutputStream dos, BigInteger bi) throws IOException {
    byte[] twos = bi.toByteArray();
    dos.writeInt(twos.length);
    dos.write(twos);
}
Is this method valid according to the rules mentioned above?
Unnecessary leading bytes with the value 0 or 255 MUST NOT be included.
Do not pad the front of numbers with extra 00 or ff bytes.
80 stored as 00 00 00 03 00 00 80 has an extra leading 00 byte.
-deadbeef stored as 00 00 00 06 ff ff 21 52 41 11 has an extra leading ff byte.
In both cases, the numbers are technically correct but have unnecessary leading bytes.
By convention, a number that is used in modular computations in Z_n SHOULD be represented in the range 0 <= x < n.
Z_n is an ASCII way of writing the bold math notation for integers modulo n. (see Modular arithmetic)
This means, you should not store a number x greater than the modulus n to be used, or less than zero.
You wish to store the number 123.
You know that number will be modulo 100 for sure. That is, 123 % 100.
You should store 23 instead.
Is this method valid according to the rules mentioned above?
No, write() does not check whether the values in your byte array conform to the above rules.
I do not fully agree with Jay Jun, and because I did not understand it the first time, I will try to give a different (hopefully simpler) answer, in code:
Unnecessary leading bytes with the value 0 or 255 MUST NOT be included.
The following code snippet prints all 8 bytes of a long for the values 0x80 and -0xdeadbeef:
// print each byte of a long, most significant byte first
static void printBytes(String label, long l) {
    StringBuilder sb = new StringBuilder(label + " =");
    for (int shift = 56; shift >= 0; shift -= 8) {
        sb.append(String.format(" %02X", (l >> shift) & 0xFF));
    }
    System.out.println(sb);
}

printBytes("0x80L", 0x80L);
printBytes("-0xdeadbeefL", -0xdeadbeefL);
The output is:
0x80L = 00 00 00 00 00 00 00 80
-0xdeadbeefL = FF FF FF FF 21 52 41 11
As we can see, 0x80L is positive and has 7 leading 0 bytes, but only 6 of them are unnecessary, because the leading bit of 80 (binary 10000000) is 1, and therefore, according to:
If the most significant bit would be set for a positive number, the number MUST be preceded by a zero byte
we have to precede 0x80 with one 0 byte. The result is therefore "00 80".
Vice versa, for -0xdeadbeef we have 4 leading 255 (0xFF) bytes, but only 3 of them are unnecessary, because the most significant bit of "21" is 0; therefore we need one leading 255 byte before "21 52 41 11", and the result is "ff 21 52 41 11".
All of this leading-byte handling is done for you by BigInteger:
byteArray = new byte[]{(byte) 0x80};
bigInteger = new BigInteger(+1, byteArray);
byteArray = bigInteger.toByteArray();
byteArray = new byte[]{(byte) 0xde, (byte) 0xad, (byte) 0xbe, (byte) 0xef};
bigInteger = new BigInteger(-1, byteArray);
byteArray = bigInteger.toByteArray();
So coming back to the question:
Is this method valid according to the rules mentioned above?
I would say yes.
P.S.: the only rule that BigInteger does not handle correctly is:
The value zero MUST be stored as a string with zero bytes of data
byteArray = new byte[]{};
bigInteger = new BigInteger(0, byteArray);
byteArray = bigInteger.toByteArray();
will actually return 1 byte with the value of 0 instead of an empty array.
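Putting the two observations together, a writer that also honours the zero rule could be sketched like this (the class name and zero check are mine):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.math.BigInteger;

public class Mpint {
    // write an mpint: 4-byte length, then minimal two's-complement bytes;
    // zero is written as a zero-length string per RFC 4251
    public static void writeMpint(DataOutputStream dos, BigInteger bi) throws IOException {
        byte[] twos = bi.signum() == 0 ? new byte[0] : bi.toByteArray();
        dos.writeInt(twos.length);
        dos.write(twos);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        writeMpint(new DataOutputStream(bos), new BigInteger("80", 16));
        for (byte b : bos.toByteArray()) {
            System.out.printf("%02x ", b & 0xFF); // prints 00 00 00 02 00 80
        }
        System.out.println();
    }
}
```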

Sending 0xFF and Calculating CRC with signed bytes - WriteSingleCoil & ModBUS & Java & Android -

EDITED & SOLVED (below)
I'm using Java for Android trying to send the byte 255 (0xFF in WriteSingleCoil function) to a ModBUS server device.
The device is not running; I don't know whether it is unable to interpret the signed byte -1 or whether I am calculating the CRC wrong.
I don't know how to calculate the CRC for negative bytes.
Summarizing: I don't know how to send function 05 (Write Single Coil) with the value 0xFF, to switch the coil on, from Java to a ModBUS server.
Can anyone help me please?
SOLUTION:
"iIndex = ucCRCLo ^ b: operations like this must be written as iIndex = (ucCRCLo ^ b) & 0xff, because the operation will promote ucCRCLo, b and the result to int, which is 32 bits while short is 16, so you will have a lot of extra bits set to 1."
This answer helped me. Thanks a lot to TheDayOfcondor.
But my other huge problem was the usual Java problem with signed bytes. My CRC-calculating function does it right for unsigned bytes, but it gives errors if I pass in signed bytes.
The trick for working with bytes for ModBUS communication is to use shorts instead of bytes throughout the whole app, so as to have the range 0-255, even when calculating frames and the CRC, and only in the last step, when sending the frame to the ModBUS server, cast them back to bytes. This works.
I hope it helps someone in the future.
EXPLAINING PROBLEM:
I'm trying to set a coil to ON over ModBUS with function 05; this is the explanation of the function:
Request
I'm trying to set ON the coil at address 1:
This hex: 0A 05 00 01 ff 00 DC 81
This byte array: 10 5 0 1 255 0 220 129
10: The Slave Address (10 = 0A hex)
05: The Function Code (Force Single Coil)
0001: The Data Address of the coil. (coil# 1 = 01 hex)
FF00: The status to write ( FF00 = ON, 0000 = OFF )
DC81: The CRC (cyclic redundancy check) for error checking.
The thing is that Java uses signed bytes, so I can't put 255 in my byte array.
I understand I should put -1, but then I can't calculate the CRC correctly, because I have a couple of precalculated arrays of bytes used to get the CRC, and the function produces a negative index.
So: I don't know whether I'm right to try sending -1, whether I have an alternative for sending 255, or how to calculate the CRC for -1.
This is function for calculate CRC:
public short[] GenerateCRC(byte[] pMsg) {
    short ucCRCHi = 0xFF;
    short ucCRCLo = 0xFF;
    int iIndex;
    for (byte b : pMsg)
    {
        iIndex = ucCRCLo ^ b;
        try {
            ucCRCLo = (short) (ucCRCHi ^ aucCRCHi[iIndex]);
            ucCRCHi = aucCRCLo[iIndex];
        } catch (Exception e) {
            Log.e(LOGTAG, "GenerateCRC: " + e.toString(), e);
            e.printStackTrace();
        }
    }
    short[] result = new short[2];
    result[0] = ucCRCHi;
    result[1] = ucCRCLo;
    return result;
}
The question is not very clear; however, the most common problem when dealing with bytes is the fact that Java does not have unsigned bytes, and bitwise operations are always performed between ints.
The best way to deal with bytes is to use integers, AND-ing the result of every operation with 0xff. Also use >>> for right shifts (it is the unsigned version).
Example:
int b = 255 & 0xff;   // gives you the "unsigned byte" value
b = (b << 2) & 0xff;  // a left shift must be truncated back to 8 bits
If you post your code to calculate the CRC I can have a look into it
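Since the answer offers to look at CRC code, here is a bit-by-bit CRC-16/Modbus sketch that sidesteps both the lookup tables and the signed-byte pitfall by working entirely in int and masking each byte (polynomial 0xA001, initial value 0xFFFF; Modbus transmits the low CRC byte first). The class name is mine:

```java
public class ModbusCrc {
    public static int crc16(byte[] frame) {
        int crc = 0xFFFF;                  // Modbus initial value
        for (byte b : frame) {
            crc ^= (b & 0xFF);             // mask: Java bytes are signed
            for (int i = 0; i < 8; i++) {
                if ((crc & 1) != 0) {
                    crc = (crc >>> 1) ^ 0xA001; // reflected polynomial
                } else {
                    crc >>>= 1;
                }
            }
        }
        return crc; // append low byte first, then high byte
    }

    public static void main(String[] args) {
        // the frame from the question: slave 0A, function 05, coil 1, value FF00
        int crc = crc16(new byte[]{0x0A, 0x05, 0x00, 0x01, (byte) 0xFF, 0x00});
        System.out.printf("%02X %02X%n", crc & 0xFF, crc >>> 8); // prints DC 81
    }
}
```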
The best way to define a byte array without using negative numbers is like this:
byte[] data = { (byte) 0xff, (byte) 0xff, (byte) 0xff };
