Sockets: Passing unsigned char array from C to Java

C side:
unsigned char myBuffer[62];
fread(myBuffer, sizeof(char), 62, myFile);
send(mySocket, myBuffer, 62, 0);
Java side:
bufferedReader.read(tempBuffer, 0, 62);
Now, in the Java program, I receive (over the socket) values less than 0x80 from the C program with no problems, but I receive the value 0xFD for every value equal to or greater than 0x80 in the C program.
I need a reliable solution for this problem.

Don't use a Reader to read bytes, use an InputStream!
A Reader is meant to read characters; it receives a stream of bytes and tries to convert these bytes to characters; you lose the original bytes.
In more detail, a Reader will use a CharsetDecoder; this decoder is configured so that unknown byte sequences are replaced, and the charset used here likely replaces unknown byte sequences with the character 0x00FD, hence your result.
Also, you don't care about signed vs unsigned; 1000 0000 may be 128 as an unsigned char in C and -128 as a byte in Java, but it still remains 1000 0000.
If what you send is really text, then it means the charset you have chosen for decoding is not the right one; you must know the encoding in which the files on your original system were written.
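For illustration, here is a minimal sketch of the Java side using an InputStream; the host, port, and method name are placeholders matching the question, not a definitive implementation:
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

// Minimal sketch: read the 62 raw bytes without any charset decoding.
static byte[] receiveBuffer() throws IOException {
    try (Socket socket = new Socket("host", 1234)) {   // placeholder endpoint
        DataInputStream in = new DataInputStream(socket.getInputStream());
        byte[] tempBuffer = new byte[62];
        in.readFully(tempBuffer);                      // blocks until all 62 bytes arrive
        int firstUnsigned = tempBuffer[0] & 0xFF;      // 0..255 view of the first byte
        System.out.println(firstUnsigned);
        return tempBuffer;
    }
}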

Related

Java 11 Compact Strings magic behind char[] to byte[]

I have been reading about Unicode encoding and Java 9 compact Strings for the last two days, and I am getting along quite well. But there is something I don't understand.
About the byte data type
1) It is 8-bit storage and ranges from -128 to 127.
Questions
1) Why didn't Java implement byte as unsigned, the way char is an unsigned 16-bit type? I mean, it would then have a range of 0 to 255; from 0 to 127 I can only hold an ASCII value, but what happens if I set the value 200? An extended-ASCII value would overflow to -56.
2) Does the negative value mean something? I have tried a simple example using Java 11:
final char value = (char) 200; // in a byte this would overflow
final String stringValue = new String(new char[]{ value });
System.out.println(stringValue); // the same value as in Java 8
I have checked the String.value variable and I see a byte array containing:
System.out.println(value[0]); // -56
The same question as before arises: does the -56 (the negative value) mean something? In other languages this overflow is detected and wraps back to the value 200. How can Java know that the -56 value is the same as 200 in a char?
I have tried harder examples, like code point 128048, and in the String.value variable I see an array of bytes like this:
0 = 61
1 = -40
2 = 48
3 = -36
I know this code point takes 4 bytes, and I get how the char[] is transformed to a byte[], but I don't know how String handles this byte[] data.
Sorry if this question is simple, and sorry for any typos; English is not my native language. Thanks a lot.
Why didn't Java implement byte as unsigned, the way char is an unsigned 16-bit type? I mean, it would then have a range of 0 to 255; from 0 to 127 I can only hold an ASCII value, but what happens if I set the value 200? An extended-ASCII value would overflow to -56.
Java’s primitive data types were settled with Java 1.0 a quarter century ago. Compact strings were introduced in Java 9, less than two years ago. This new feature, which is merely an implementation detail, did not justify fundamental changes to Java’s type system.
Besides that, you are looking at one interpretation of the data stored in a byte. For the sake of representing iso-latin-1 units, it is entirely irrelevant whether interpreting the same data as Java’s built-in signed byte would result in a positive or negative number.
Likewise, Java’s I/O API allows reading a file into a byte[] array and writing byte[] arrays back to files, and these two operations are already sufficient to copy a file losslessly, regardless of its file format, which would only be relevant when interpreting its content.
So the following works since Java 1.1:
byte[] bytes = "È".getBytes("iso-8859-1");
System.out.println(bytes[0]);
System.out.println(bytes[0] & 0xff);
This prints:
-56
200
The two numbers, -56 and 200 are just different interpretations of the bit pattern 11001000 whereas the iso-latin-1 interpretation of a byte containing the bit pattern 11001000 is the character È.
A char value is also just an interpretation of a two byte quantity, i.e. as UTF-16 code unit. Likewise, a char[] array is a sequence of bytes in the computer’s memory with a standard interpretation.
We can also interpret other byte sequences this way.
StringBuilder sb = new StringBuilder().appendCodePoint(128048);
byte[] array = new byte[4];
StandardCharsets.UTF_16LE.newEncoder()
    .encode(CharBuffer.wrap(sb), ByteBuffer.wrap(array), true);
System.out.println(Arrays.toString(array));
will print the value you’ve seen, [61, -40, 48, -36].
The advantage of using a byte[] array inside the String class is that now the interpretation can be chosen: iso-latin-1 when all characters are representable in that encoding, utf-16 otherwise.
The possible numeric interpretations are irrelevant to the string. However, when you ask “How can Java know that -56 value is the same as 200”, you should ask yourself, how does it know that the bit pattern 11001000 of a byte is -56 in the first place?
System.out.println(value[0]);
performs an operation that is actually expensive compared to ordinary computer arithmetic: the conversion of a byte (or an int) to a String. This conversion operation is often overlooked, as it has been defined as the default way of printing a byte, but it is no more natural than a conversion to a String that interprets the value as an unsigned quantity. For further reading, I recommend Two's complement.
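To make those interpretations concrete, a small sketch (the variable name and literal are just for the demo):
import java.nio.charset.StandardCharsets;

byte b = (byte) 0b1100_1000;                          // the bit pattern 11001000
System.out.println(b);                                // -56 (signed interpretation)
System.out.println(Byte.toUnsignedInt(b));            // 200 (unsigned interpretation)
System.out.println(Integer.toBinaryString(b & 0xFF)); // 11001000 (the raw bits)
System.out.println(new String(new byte[]{ b }, StandardCharsets.ISO_8859_1)); // È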
This is because not all bytes in a string are interpreted the same way. It depends on the string's character encoding.
Example:
in a UTF-8 string, ASCII characters take a single byte (other characters take 2 to 4 bytes);
in a UTF-16 string, characters take 16 bits each (two 16-bit units for supplementary characters);
etc...
This means that if the string is decoded as UTF-8, characters in the ASCII range are made by reading 1 byte at a time; if UTF-16, characters are made by reading 2 bytes at a time.
Look at this code: a single byte array data is transformed to string using UTF-8 and UTF-16.
byte[] data = new byte[] {97, 98, 99, 100};
System.out.println(new String(data, StandardCharsets.UTF_8));
System.out.println(new String(data, StandardCharsets.UTF_16));
The output of this code is:
abcd // 4 bytes = 4 chars, 1 byte per char
慢捤 // 4 bytes = 2 chars, 2 bytes per char
Going back to the question, what motivated the developers to do this was reducing the memory footprint of strings: not all strings use all 16 bits a char offers.

How to write to a file in Java after Huffman Coding is done

I have implemented a class for Huffman coding. The class parses an input file, builds a Huffman tree from it, and creates a map whose keys are the distinct characters that appear in the file and whose values are the characters' Huffman codes.
For example, let the string "aravind_is_a_good_boy" be the only line in the file. When we build the Huffman tree and generate the Huffman code for each character, we can see that for the character 'a' the code is '101', for the character 'r' it is '0101', etc.
My intention is to compress the file, so I cannot directly write to the file a string created by replacing each character with its Huffman code: each character would be replaced by at least 3 characters (each '1' and '0' would still be written to the file as a character, not a bit). So I thought I would write the codes as bytes, since there is no way to write individual bits to a file. But then 'a' and 'r' are both written as '5' into the file, which causes problems when trying to decompress it.
This is how I am converting a series of bits to bytes:
public byte[] compressString(String s, CharCodeHashMap map) {
    String byteString = "";
    byte[] byteArr = new byte[s.length()];
    int size = 0;
    for (int i = 0; i < s.length(); i++) {
        byteString += addPaddingZeros(map.getCompressedChar(s.charAt(i)));
        byteArr[size++] = new BigInteger(byteString, 2).toByteArray()[0];
        byteString = "";
    }
    return byteArr;
}
I tried prefixing '1' to each of the codes to fix the problem. But then, when you build a Huffman tree after reading a file, some characters end up with codes longer than 8 bits, and new BigInteger(byteString, 2).toByteArray() returns more than one element in the array. (For example, if 'v' has the code '11010001', new BigInteger(byteString, 2).toByteArray() returns the array [0, -47].)
Can someone please suggest a way to write to a file so that the file is compressed and, at the same time, these problems are taken care of?
The problem is that files in modern operating systems are modeled as indexable sequences of bytes¹.
So what you need is a way to encode the fact that your file is representing a number of bits that may not be a multiple of 8. That means the bit stream size is not necessarily the file size (in bytes) multiplied by 8.
There are a variety of solutions:
Reserve N bytes at the start of the file for the file size in bits. For example, reserving 4 bytes allows you to represent file sizes up to 2³² bits.
Reserve 3 bits at the start of the file to hold the number of bits modulo 8. You can use this to decide how many bits in the last byte of the file to ignore.
Use some kind of encoding to represent the end of stream; e.g. represent it as a character in the text stream that you are encoding.
Is there a way to deal with this without using some bits? AFAIK, No.
¹ And at a lower level, files are represented as sequences of disk blocks consisting of multiple bytes. So, from a physical storage perspective, compressing files that are already small (e.g. smaller than a disk block) doesn't achieve anything. Similarly, saving or not saving (say) 3 bits when the representation is modeled as a byte sequence is at the border of being pointless ... if that was what was concerning you.
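As a sketch of the first option, the bit count can be written as a fixed-size header before the payload; the method names, stream setup, and variables below are assumptions for illustration, not part of the original answer:
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Encoder side: write the payload length in bits, then the packed bytes.
static void writeCompressed(File f, byte[] payload, int totalBits) throws IOException {
    try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
        out.writeInt(totalBits);   // 4-byte header (limited to 2^31-1 bits here)
        out.write(payload);
    }
}

// Decoder side: read the bit count first, then stop once that many bits
// have been consumed, ignoring any filler bits in the last byte.
static int readBitCount(DataInputStream in) throws IOException {
    return in.readInt();
}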
Yes, you can write bits to a file. In fact you are always writing bits to a file. The only thing is that you are writing eight bits at a time.
What you need is a bit buffer, say a 32-bit unsigned variable, into which you accumulate bits. Have another integer that tracks how many bits are in the bit buffer. Use the left-shift and OR (or plus) operators to put more bits into the bit buffer, and the AND and right-shift operators to remove them. Whenever you have eight or more bits in the bit buffer, write those eight bits to the file as a byte. At the end, write the remaining bits (if any) to the file as the last byte.
So, to add bits bits from value to the buffer:
bitBuffer |= value << bitCount;
bitCount += bits;
To write and remove available bytes:
while (bitCount >= 8) {
    writeByte(bitBuffer & 0xff);
    bitBuffer >>>= 8;
    bitCount -= 8;
}
You need to make sure that when decoding, you don't mistake the filler bits in the last byte as another code. You can either send the actual number of bits in the message preceding the message (or the number of bits in the last byte), or you can add a symbol to your alphabet for end-of-stream that gets its own Huffman code, and end the message with that.
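Assembled in one place, a minimal bit-writer sketch might look like this; the class name and the OutputStream target are assumptions, not part of the original answer, and it assumes at most 24 bits per call with no stray high bits in value:
import java.io.IOException;
import java.io.OutputStream;

// Accumulates bits in an int and flushes complete bytes to the stream.
class BitWriter {
    private final OutputStream out;
    private int bitBuffer = 0;    // pending bits, least significant bits first
    private int bitCount  = 0;    // number of pending bits

    BitWriter(OutputStream out) { this.out = out; }

    // Append the low "bits" bits of "value" (bits <= 24 assumed,
    // and value must not have bits set above position bits-1).
    void writeBits(int value, int bits) throws IOException {
        bitBuffer |= value << bitCount;
        bitCount += bits;
        while (bitCount >= 8) {           // flush every complete byte
            out.write(bitBuffer & 0xff);
            bitBuffer >>>= 8;
            bitCount -= 8;
        }
    }

    // Write any remaining bits, zero-padded, as the last byte.
    void finish() throws IOException {
        if (bitCount > 0) {
            out.write(bitBuffer & 0xff);
            bitBuffer = 0;
            bitCount = 0;
        }
    }
}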
The other problem you have is that you will also need to transmit the Huffman code itself to the decoder before the coded symbols in order for the decoder to know how to decode. Look up "canonical Huffman codes" for how to approach that efficiently.

How to send value bigger than 127 in byte Java

I am working on a smart card, where there is this constructor in javax.smartcardio.CommandAPDU:
CommandAPDU(int cla, int ins, int p1, int p2, byte[] data, int ne)
I need to send data as a byte[] (5th argument). Now my problem is that, since Java's primitive data types are signed, the maximum value of a byte cannot exceed 127, and I need to send a value bigger than 127. To be precise, the hex value 0x94, which is equal to 148.
Some solutions suggest that we can cast it to an integer:
byte b = -108;
int i = b & 0xff;
But I can't do that, as the CommandAPDU() constructor doesn't take an int[]. So how do I do it?
Depending on how it is interpreted by the smart card, you could just send the corresponding negative value. If the smart card interprets the value as unsigned, you could, for example, send -1 for 255.
You're calculating the APDU with unsigned bytes, while Java uses signed bytes.
It's just a matter of how the data is interpreted, sending -108 to the smart card will be interpreted in exactly the same way as sending 148 from a platform using unsigned bytes. The bit combination is exactly the same.
Java can even do the conversion itself so that you can write the code using unsigned numbers;
byte data = (byte) 0x94; // stores -108 in "data", which will be interpreted
                         // as 148 on an unsigned platform
For long blocks of data, it is probably best to use a hexadecimal encoder/decoder. But be sure to handle the data as bytes internally (decode directly, and don't convert back to the hex String). The Apache codec library contains a good encoder/decoder, or you can use Bouncy Castle or Guava, or one of the many examples on SO.
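For instance, a data byte of 0x94 can be written with a plain cast; the CLA/INS/P1/P2 header values below are placeholders, not a real command:
import javax.smartcardio.CommandAPDU;

byte[] data = new byte[] { (byte) 0x94 };  // stored as -108, transmitted as 0x94
CommandAPDU apdu = new CommandAPDU(0x00, 0xA4, 0x04, 0x00, data, 256); // placeholder header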

outputStream.write() for int[] array

I'm looking for something that does the same thing as outputStream.write() but accepts an array of ints.
Currently I'm using outputStream.write(), but it only accepts byte, byte[], or int.
I could try a byte[], but the values I want to send are
[255,44,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,255]
so I can't use a byte[], because its range is only from -128 to 127 :/
It's for sending a command on a COM port which accepts only packets of 19 bytes that must start and end with 255.
This is a common misconception about bytes, because of rumors repeated again and again.
Actually, the range of byte is from
00000000 (binary) to 11111111 (binary)
There is no reason to interpret bytes as numbers if you're only interested in the bit patterns. There is, in particular, no reason to interpret bytes as signed numbers just because Java does it that way by default.
Hence, go ahead: as Jon Skeet says, cast your integers to byte and write those bytes.
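A minimal sketch of that cast-and-write loop, using the packet from the question (the method name is just for the demo):
import java.io.IOException;
import java.io.OutputStream;

// Cast each int (expected range 0..255) down to a byte and send the packet in one call.
static void writePacket(OutputStream outputStream, int[] values) throws IOException {
    byte[] packet = new byte[values.length];
    for (int i = 0; i < values.length; i++) {
        packet[i] = (byte) values[i];   // 255 becomes the bit pattern 11111111 (-1 as signed)
    }
    outputStream.write(packet);         // all 19 bytes at once
}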

How to read file created by C++ program in java?

I have a file created by a C++ program in an encrypted format. I want to read it in my Java program. For decrypting the file contents, the decryption algorithm performs operations on bytes (which are unsigned char, i.e. BYTE, in C/C++). I used the same decryption algorithm that I had used in my C/C++ program. This algorithm applies the ^, %, * and - operations to bytes. But Java's byte data type is signed, which causes problems in decryption. How can I read the file, or process the data that is read, one byte at a time, as unsigned?
Thanks in advance.
byte b = <as read from file>;
int i = b & 0xFF;
// perform operations on i as required
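As an illustration of that idiom in a decryption step, a sketch with a made-up transform; the key and the operation are hypothetical, not the asker's algorithm:
// Hypothetical C-style transform done on the unsigned view of a byte.
static byte decryptByte(byte b, int key) {
    int u = b & 0xFF;          // unsigned view, 0..255, matches C's unsigned char
    int v = (u ^ key) & 0xFF;  // operate as C would on an unsigned char
    return (byte) v;           // back to the raw 8-bit pattern for storage
}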
The standard method InputStream.read() reads one byte and fits it into an int, so in practice it is an unsigned byte. There are no unsigned primitive data types in Java, so the only approach is to fit the value into a wider primitive.
That being said, you should have no trouble performing encryption/decryption on data bytes read from the file, since the bytes are the same no matter whether they are interpreted as signed or unsigned (0xFF can be 255 or -1). You say the algorithm contains "^, %, *", etc.; those operate on an interpretation of the raw bytes, taking a character encoding into account (one that fits 8 bits per character, I suppose). You should not perform encryption/decryption operations on anything other than raw bytes.
First, InputStream.read() returns an int but it holds a byte; it uses an int so that -1 can be returned when EOF is reached. If the int is not -1, you can cast it to byte.
Second, there are read() methods that allow storing the bytes directly in a byte[].
And last, if you are going to use the file as a byte[] (and it is not too big), it may be interesting to copy the data from the FileInputStream and write it into a ByteArrayOutputStream. You can get the resulting byte[] from the latter object (note: do not use the .read() method; use .read(byte[], int, int) for performance).
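A sketch of that ByteArrayOutputStream approach; the method name, file path, and buffer size are placeholders:
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Copy the whole file into a byte[] using bulk reads.
static byte[] readAllBytes(String path) throws IOException {
    try (InputStream in = new FileInputStream(path)) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];                 // placeholder buffer size
        int n;
        while ((n = in.read(buffer, 0, buffer.length)) != -1) {
            baos.write(buffer, 0, n);                   // bulk read for performance
        }
        return baos.toByteArray();
    }
}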
Since there is no unsigned primitive type in Java, I think what you can do is convert the signed byte into an integer (which will effectively be unsigned, because the integer will always be positive). You can follow the code here: Can we make unsigned byte in Java for the conversion.
