I have data that I read from a socket and I know it has the following format:
[2bytes][8bytes][2bytes] = 12 bytes
I want to read those parts separately; the values are in hex. I actually captured this data a while ago in PHP and saved it to files, and I can view it properly using od (Unix):
$ od -h myFile
0000000 02eb 02fe fe02 fefe 02fe 02fe 000a
0000015
That has a CR and LF at the end, resulting in 14 bytes. How can I obtain those values in Java, reading from a socket? For instance, how do I get that "02eb" (2 bytes) and convert it to a decimal value?
I am already reading from the socket, last thing I tried was:
mMessage = mBRin.readLine();
byte[] bytes = mMessage.trim().getBytes();
But that gives me 18 bytes in the array.
If it helps, in PHP to get that first part I did:
$input = socket_read($value,13,PHP_BINARY_READ);
$trim_input = trim($input);
$float_head = hexdec(bin2hex(substr($input,0,2)));
I think I am not understanding this, which may be the answer.
I have data that I read from a socket and I know it has the following
format:
[2bytes][8bytes][2bytes] = 12 bytes
If you already have code to read bytes from a socket, you can use a ByteBuffer to convert the bytes to short, int, long etc. values.
Socket s = .....;
InputStream in = s.getInputStream();
byte [] buf = new byte[12];
// read exactly 12 bytes from the socket; a single read() may return fewer,
// so use DataInputStream.readFully to loop until the buffer is full
new DataInputStream(in).readFully(buf);
ByteBuffer bb = ByteBuffer.allocate(buf.length);
bb.order(ByteOrder.LITTLE_ENDIAN);
bb.put(buf);
bb.flip();
short short1 = bb.getShort();
long long1 = bb.getLong();
short short2 = bb.getShort();
Note the call to set the byte buffer to little endian.
When you ran the od command you got output similar to the following (this output comes from a file I created on my system to mimic yours). The od -h command reads the bytes from the file, puts them together as 2-byte shorts in little-endian mode, then prints the short values in hex.
$ od -h binary.dat
0000000 02eb 02fe fe02 fefe 02fe 02fe 000a
0000015
However, if you use -tx1 you see the bytes in the actual order they appear in the file.
$ od -tx1 binary.dat
0000000 eb 02 fe 02 02 fe fe fe fe 02 fe 02 0a
0000015
If you run this on your file I think you will see that it is really 13 bytes, not 14, and is terminated by a single LF, not CRLF. The "extra" byte you saw was a "present" from od -h; it does not actually exist in the file.
Anyhow, the first byte has value 235 (EB in hex) and the second byte is 2. The question is: what is the correct value for that first short? If, according to your socket protocol, the data is serialized in little-endian mode, the value of those two bytes combined into a short is 02EB hex, or 747. If the socket protocol uses big endian, then the value is EB02 hex, or 60162.
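To make the difference concrete, here is a minimal sketch that interprets those same two bytes (EB 02) under both byte orders; the class name is just illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // The first two bytes of the file: EB 02
        byte[] first = {(byte) 0xEB, (byte) 0x02};

        short le = ByteBuffer.wrap(first).order(ByteOrder.LITTLE_ENDIAN).getShort();
        short be = ByteBuffer.wrap(first).order(ByteOrder.BIG_ENDIAN).getShort();

        // & 0xFFFF shows the unsigned value of the signed Java short
        System.out.println(le & 0xFFFF); // 747   (0x02EB)
        System.out.println(be & 0xFFFF); // 60162 (0xEB02)
    }
}
```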
The ByteBuffer approach gives you flexibility and allows you to read/write in either big-endian or little endian. It also allows you to separate the reading of data off the socket (into byte arrays) then later converting the data into numbers. This may make it easier for unit testing since you can create byte arrays for various test cases and make sure your parsing code works as expected.
The DataInputStream approach in sharadendu's answer will also work - but only if the socket protocol is big-endian. DataInputStream is hard-coded to big-endian.
Socket socket = ......;
DataInputStream dis = new DataInputStream(socket.getInputStream());
short f1 = dis.readShort();
long f2 = dis.readLong();
short f3 = dis.readShort();
To print the values in hex, use String.format("%x", f1);
Hope it helps ...
Related
I'm using Java JAXB to unmarshal xml requests via a socket. Before the actual xml
<?xml version="1.0"....
I receive these bytes
00 00 01 F9 EF BB BF
What are they? The size of the xml? A session id?
The sender is using msxml4 to execute requests to my service.
Furthermore, I can see that the sender expects this type of header (it truncates the first 7 bytes if I send the xml response directly).
So, once I have understood what these bytes are: is there any "normal" method using jaxb that can be used to add this header, or do I need to do it manually?
Thanks for any reply
This is a BOM header.
The first 4 bytes indicate the message size: 00 00 01 F9 = (1 × 256) + 249 = 505, including the 3 bytes of the UTF-8 BOM (EF BB BF). Hence the xml length will be 505 − 3 = 502.
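A minimal sketch of parsing that 7-byte prefix on the receiving side, assuming the layout above (4-byte big-endian length including the BOM, then EF BB BF); the class and method names are illustrative:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

class BomHeader {
    // Returns the length of the xml that follows the 7-byte prefix.
    static int readXmlLength(InputStream in) throws IOException {
        DataInputStream dis = new DataInputStream(in);
        int total = dis.readInt();   // 4 bytes big-endian, e.g. 0x000001F9 = 505
        byte[] bom = new byte[3];
        dis.readFully(bom);          // EF BB BF, the UTF-8 BOM
        return total - 3;            // 502 bytes of xml follow
    }
}
```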
For how to handle this stream with JAXB, see:
Byte order mark screws up file reading in Java
why org.apache.xerces.parsers.SAXParser does not skip BOM in utf8 encoded xml?
JAXB unmarshaller BOM handlle
However, I have preferred to handle the stream byte by byte, reading it into a StringBuffer (since I also need it in string format for logging).
My byte-by-byte reading solution waits for the '<' char, i.e. the first char of the xml message.
To add the BOM heading before sending the response I used a similar method:
import java.nio.ByteBuffer;
public byte[] getBOMMessage(int xmlLength) {
    byte[] arr = new byte[7];
    ByteBuffer buf = ByteBuffer.wrap(arr);
    buf.putInt(xmlLength + 3); // 4-byte length includes the 3 BOM bytes
    arr[4] = (byte) 0xef;      // UTF-8 BOM
    arr[5] = (byte) 0xbb;
    arr[6] = (byte) 0xbf;
    return arr;
}
I want to read and write data in SLE4442 smart card
I have an ACR38U-I1 smart card reader.
For writing I use this CommandAPDU:
byte[] cmdApduPutCardUid = new byte[]{(byte)0xFF, (byte)0xD0, (byte)0x40,(byte)0x00, (byte)4,(byte)6,(byte)2,(byte)6,(byte)2};
And for reading data:
byte[] cmdApduGetCardUid = new byte[]{(byte)0xFF,(byte)0xB0,(byte)0x40,(byte)0x00,(byte)0xFF};
Both execute and return SW = 9000, but no data is received in the ResponseAPDU. For example, I write the data 6262 but it is not received back when I read. I also use a Select command before the write and read commands.
The select command is
byte[] cmdApduSlcCardUid = new byte[]{(byte)0xFF,(byte)0xA4,(byte)0x00,(byte)0x00,(byte)0x01,(byte)0x06};
Does anyone have working Java code to read and write an SLE4442 smart card?
APDU commands for working with memory cards can differ between readers and their implemented support. Here is an example for an OmniKey reader.
Take a look at your ACR reader's specification and use its specific pseudo-APDU commands to work with the SLE 4442.
For your question:
4.6.1 SELECT_CARD_TYPE: "FF A4 00 00 01 06", where 0x06 in the data means "Infineon SLE 4432 and SLE 4442".
4.6.2 READ_MEMORY_CARD: "FF B0 00 [Bytes Address] [MEM_L]", where
[Bytes Address]: is the memory address location of memory card
[MEM_L]: Length of data to be read from the memory card
4.6.5 WRITE_MEMORY_CARD: "FF D0 00 [Bytes Address] [MEM_L] [Data]"
[Data]: data to be written to the memory card
You used P1 = 0x40, and this could be the issue.
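A sketch of building the read command with P1 = 0x00, as the pseudo-APDU layout above suggests (the address goes in P2, the read length in Le); the helper name is illustrative, and you would transmit the result over a javax.smartcardio CardChannel:

```java
import javax.smartcardio.CommandAPDU;

class Sle4442 {
    // READ_MEMORY_CARD: FF B0 00 [Bytes Address] [MEM_L]
    static CommandAPDU readMemoryCard(int address, int length) {
        return new CommandAPDU(0xFF, 0xB0, 0x00, address, length);
    }
}
```

Usage would be something like `ResponseAPDU r = channel.transmit(Sle4442.readMemoryCard(0x40, 4));` after the SELECT_CARD_TYPE command, with the data bytes in `r.getData()`.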
I am trying to send a message via TCP sockets from a Java application and read it in Python 2.7
I want the first 4 bytes to specify the message length, so I could do:
header = socket.recv(4)
message_length = struct.unpack(">L", header)[0]  # unpack returns a tuple
message = socket.recv(message_length)
on the Python end.
Java side:
out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(socket.getOutputStream())), true);
byte[] bytes = ByteBuffer.allocate(4).putInt(message_length).array();
String header = new String(bytes, Charset.forName("UTF-8"));
String message_w_header = header.concat(message);
out.print(message_w_header);
This works for some message lengths (10, 102 characters) but fails for others (for example 1017 characters). For a failing value, if I print out the bytes I get:
Java:
Bytes 0 0 3 -7
Length 1017
Hex string 3f9
Python:
Bytes 0 0 3 -17
Length 1007
Hex string \x00\x00\x03\xef
I think this has something to do with signed bytes in Java and unsigned in Python but I can't figure out what should I do to make it work.
The issue is on the Java side -- b'\x03\xf9' is not a valid utf-8 byte sequence:
>>> b'\x03\xf9'.decode('utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf9 in position 1: invalid start byte
It seems new String(bytes, Charset.forName("UTF-8")); uses the 'replace' error handler; b'\xef' is the first of the three bytes of the '\ufffd' Unicode replacement character encoded in utf-8:
>>> b'\x03\xf9'.decode('utf-8', 'replace').encode('utf-8')
b'\x03\xef\xbf\xbd'
that is why you receive b'\x03\xef' instead of b'\x03\xf9' in Python.
To fix it, send raw bytes from Java instead of Unicode text.
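A minimal sketch of byte-level framing, assuming the 4-byte big-endian length prefix described in the question (DataOutputStream.writeInt is big-endian, matching struct.unpack(">L", ...) on the Python side); the class name is illustrative, and in the real program you would wrap socket.getOutputStream() instead of a ByteArrayOutputStream:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class Framer {
    // Builds [4-byte big-endian length][payload] entirely as bytes,
    // never routing the prefix through a String or Writer.
    static byte[] frame(String message) throws IOException {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(payload.length); // length prefix
        out.write(payload);           // raw payload bytes
        out.flush();
        return bos.toByteArray();
    }
}
```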
Unrelated: sock.recv(n) may return fewer than n bytes. If the socket is blocking, you could create a file-like object using file = sock.makefile('rb') and call file.read(n) to read exactly n bytes.
We have a process which communicates with an external system via MQ. The external system runs on a mainframe (IBM z/OS), while our process runs on a CentOS Linux platform. So far we never had any issues.
Recently we started receiving messages from them with non-printable EBCDIC characters embedded in the message. They use these characters as a compressed ID, 8 bytes long. When we receive it, it arrives on our queue encoded in UTF-8 (CCSID 1208).
They need the original 8 bytes back in order to identify our response messages. I'm trying to find a way in Java to convert the ID back from UTF-8 to EBCDIC before sending the response.
I've been playing around with the JTOpen library, using the AS400Text class to do the conversion. Also, the counterparty has sent us a snapshot of the ID in bytes. However, when I compare the bytes after conversion, they are different from the original message.
Has anyone ever encountered this issue? Maybe I'm using the wrong code page?
Thanks for any input you may have.
Bytes from counterparty(Positions [5,14]):
00000 F0 40 D9 F0 F3 F0 CB 56--EF 80 04 C9 10 2E C4 D4 |0 R030.....I..DM|
Program output:
UTF String: [R030ôîÕ؜IDMDHP1027W 0510]
EBCDIC String: [R030ôîÃÃÂIDMDHP1027W 0510]
NATIVE CHARSET - HEX: [52303330C3B4C3AEC395C398C29C491006444D44485031303237572030353130]
CP500 CHARSET - HEX: [D9F0F3F066BE66AF663F663F623FC9102EC4D4C4C8D7F1F0F2F7E640F0F5F1F0]
Here is some sample code:
private void readAndPrint(MQMessage mqMessage) throws IOException {
mqMessage.seek(150);
byte[] subStringBytes = new byte[32];
mqMessage.readFully(subStringBytes);
String msgId = toHexString(mqMessage.messageId).toUpperCase();
System.out.println("----------------------------------------------------------------");
System.out.println("MESSAGE_ID: " + msgId);
String hexString = toHexString(subStringBytes).toUpperCase();
String subStr = new String(subStringBytes);
System.out.println("NATIVE CHARSET - HEX: [" + hexString + "] [" + subStr + "]");
// Transform to EBCDIC
int codePageNumber = 37;
String codePage = "CP037";
AS400Text converter = new AS400Text(subStr.length(), codePageNumber);
byte[] bytesData = converter.toBytes(subStr);
String resultedEbcdicText = new String(bytesData, codePage);
String hexStringEbcdic = toHexString(bytesData).toUpperCase();
System.out.println("CP500 CHARSET - HEX: [" + hexStringEbcdic + "] [" + resultedEbcdicText + "]");
System.out.println("----------------------------------------------------------------");
}
If an MQ message has varying sub-message fields that require different encodings, then that's how you should handle those messages, i.e., as separate message pieces.
But as you describe this, the entire message needs to be received without conversion. The first eight bytes need to be extracted and held separately. The remainder of the message can then have its encoding converted (unless other sub-fields also need to be extracted as binary, unconverted bytes).
For any return message, the opposite conversion must be done. The text portion of the message can be converted, and then that sub-string can have the original eight bytes prepended to it. The newly reconstructed message then can be sent back through the queue, again without automatic conversion.
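The split described above can be sketched as follows. The offsets (ID in the first 8 bytes) and the code page ("Cp037", i.e. CCSID 37) are assumptions taken from the discussion, and the helper name is illustrative:

```java
import java.nio.charset.Charset;
import java.util.Arrays;

class ReplyBuilder {
    // Keeps the opaque 8-byte ID untouched and converts only the text
    // portion of the reply to EBCDIC before prepending the ID.
    static byte[] buildReply(byte[] rawRequest, String replyText) {
        byte[] id = Arrays.copyOfRange(rawRequest, 0, 8);           // never converted
        byte[] text = replyText.getBytes(Charset.forName("Cp037")); // EBCDIC body
        byte[] out = new byte[id.length + text.length];
        System.arraycopy(id, 0, out, 0, id.length);
        System.arraycopy(text, 0, out, id.length, text.length);
        return out;
    }
}
```

The reconstructed message would then be sent back through the queue without automatic conversion.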
Your partner on the other end is not using the messaging product correctly. (Of course, you probably shouldn't say that out loud.) There should be no part of such a message that cannot automatically survive intact across both directions. Instead of an 8-byte binary field, it should be represented as something more like a 16-byte hex representation of the 8-byte value for one example method. In hex, there'd be no conversion problem either way across the route.
It seems to me that the special 8 bytes are not actually EBCDIC characters but simply 8 bytes of binary data. If that is the case, then I believe, as mentioned in another answer, that you should handle those 8 bytes separately, without letting them be converted to UTF-8 and back to EBCDIC for further processing.
Depending on the EBCDIC variant you are using, it is quite possible that a byte in EBCDIC does not convert to a meaningful UTF-8 character, and hence you will fail to recover the original byte by converting the received UTF-8 character back to EBCDIC.
A brief search on Google gives several EBCDIC tables (e.g. http://www.simotime.com/asc2ebc1.htm#AscEbcTables). You can see there are many values in EBCDIC that have no character assigned. Hence, when they are converted to UTF-8, you cannot assume each of them maps to a distinct Unicode character. Your proposed approach is therefore very dangerous and error-prone.
I've been making an image rescaler that uses the ImageIO library in Java to convert images to a BufferedImage. Unfortunately it doesn't recognise every type of JPEG that I may pass to it, so I need to "convert" these other types. To convert them, I take an existing APP0 tag from a standard JFIF JPEG; at the 3rd byte of the file I want to insert 18 bytes of data (the FFE0 marker and the 16-byte APP0 tag) and then append the rest of the file after that.
So to generalise: what's the most efficient way to add/insert bytes of data midway through a stream/file?
Thanks in advance,
Alexei Blue.
This question is linked to a previous question of mine and so I'd like to thank onemasse for the answer given there.
Java JPEG Converter for Odd Image Types
If you are reading your images from a stream, you could make a proxy which acts like an InputStream and wraps the original stream. Override the read method so it returns the extra missing bytes at the right position.
A proxy can be made by extending FilterInputStream http://download.oracle.com/javase/6/docs/api/java/io/FilterInputStream.html
If it is a file, the recommended way to do this is to copy the existing file to a new one, inserting, changing or removing bytes at the appropriate points. Then rename the new file to the old one.
In theory you could try to use RandomAccessFile (or equivalent) perform an in-place update of an existing file. However, it is a bit tricky, not as efficient as you might imagine and ... most important ... it is unsafe. (If your application or the system dies at an inopportune moment, you are left with a broken file, and no way to recover it.)
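The copy-to-a-new-file approach can be sketched like this; the class and method names are illustrative, and error handling is kept minimal:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class ByteInserter {
    // Streams src into a temp file, writing `insert` at byte offset `pos`,
    // then replaces the original file with the new one.
    static void insertBytes(Path src, long pos, byte[] insert) throws IOException {
        Path tmp = Files.createTempFile("insert", ".tmp");
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(tmp)) {
            byte[] buf = new byte[8192];
            long copied = 0;
            while (copied < pos) {                 // copy the first `pos` bytes
                int n = in.read(buf, 0, (int) Math.min(buf.length, pos - copied));
                if (n == -1) break;
                out.write(buf, 0, n);
                copied += n;
            }
            out.write(insert);                     // the inserted bytes
            int n;
            while ((n = in.read(buf)) != -1) {     // then the rest of the file
                out.write(buf, 0, n);
            }
        }
        Files.move(tmp, src, StandardCopyOption.REPLACE_EXISTING);
    }
}
```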
A PushbackInputStream might be what you need.
Thanks for the suggestions, guys. I used a FilterInputStream at first but then saw there was no need to. I used the following piece of code to insert my APP0 tag (as a hex string):
private static final String APP0Marker = "FF E0 00 10 4A 46 49 46 00 01 01 01 00 8B 00 8B 00 00";
And in the desired converter method:
if (isJPEG(path))
{
    APP0 = hexStringToByteArray(APP0Marker.replaceAll(" ", ""));
    fis = new FileInputStream(path);
    // the output is the original file plus the 18 inserted bytes
    bytes = new byte[(int) (new File(path).length()) + APP0.length];
    for (int index = 0; index < bytes.length; index++)
    {
        if (index >= 2 && index <= (2 + APP0.length - 1))
        {
            b = APP0[index - 2];   // insert the APP0 segment after the SOI marker
        }
        else
        {
            b = (byte) fis.read(); // copy the original file bytes around it
        }//if-else
        bytes[index] = b;
    }//for
    fis.close();
    //Write new image file
    out = new FileOutputStream(path);
    out.write(bytes);
    out.flush();
    out.close();
}//if
Hope this helps anyone having a similar problem :)