eVRC smart cards - java

Does anyone have experience with reading eVRC (Electronic Vehicle Registration Cards) and APDU commands in Java?
Any example will be useful.
Thanks in advance.

I would strongly suggest you go with the javax.smartcardio libraries. Note that there are some availability issues, such as 64-bit support and access conditions for 32-bit environments in the later Java runtime environments. That said, the APDU and CardTerminal interfaces are pretty neat compared to many other APIs dealing with APDUs.
[UPDATE] About the commands: this seems to be a simple file-based card that does not perform any encryption, and it employs a proprietary file structure within the specified DF. So the basic operation is: retrieve the ATR, then SELECT by AID, and now you are in the DF (the root of the application). Then select each file using SELECT by File ID, followed by a number of READ BINARY commands.
E.g.
send "00A4040C 0X <AID>" // SELECT DF aid was not given in document, so find this out, probably JRC
send "00A40200 02 D001 00" // SELECT EF.Registration_A (and hopefully parse the response to get the file length)
send "00B00000 00" // READ BINARY return up to 256 bytes or
send "00B00005 XX" // READ BINARY return xx bytes, the number of bytes left, from offset 05
In Java that would be (off the top of my head):
CommandAPDU command = new CommandAPDU(0x00, 0xA4, 0x02, 0x00, new byte[] { (byte) 0xD0, (byte) 0x01 }, 256);
ResponseAPDU response = channel.transmit(command);
Note that you might need to parse the first few bytes of the READ BINARY response to find out the file length in a compatible way. Make sure you don't read past the actual number of bytes left, as you might get almost any error. When looping, only count the number of bytes actually returned, not the (maximum) number requested.
If you are using the smartcard IO libs, you only have to specify the first 4 bytes as the header, then the data (the length of the command data will be calculated for you) and then Ne, the maximum number of bytes you want returned (if applicable).
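Putting the pieces above together, here is a minimal, untested sketch using javax.smartcardio. The AID is a placeholder (it is not given in the document), the file ID D0 01 comes from the commands above, and the end-of-file handling is deliberately simplistic; real code should derive the file length from the file header as noted.

import java.io.ByteArrayOutputStream;
import java.util.List;
import javax.smartcardio.*;

public class EvrcReadSketch {
    public static void main(String[] args) throws Exception {
        // Use the first reader that has a card present
        TerminalFactory factory = TerminalFactory.getDefault();
        List<CardTerminal> terminals = factory.terminals().list();
        Card card = terminals.get(0).connect("*"); // T=0 or T=1, whatever the card offers
        System.out.println("ATR: " + toHex(card.getATR().getBytes()));
        CardChannel channel = card.getBasicChannel();

        // SELECT the application DF by AID -- PLACEHOLDER value, substitute the real eVRC AID
        byte[] aid = { (byte) 0xA0, 0x00, 0x00, 0x00, 0x00, 0x00 };
        check(channel.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x0C, aid)));

        // SELECT EF.Registration_A by file ID D0 01 (from the example above)
        check(channel.transmit(new CommandAPDU(0x00, 0xA4, 0x02, 0x00, new byte[] { (byte) 0xD0, 0x01 }, 256)));

        // READ BINARY in chunks; only count the bytes actually returned
        ByteArrayOutputStream file = new ByteArrayOutputStream();
        int offset = 0;
        while (true) {
            ResponseAPDU r = channel.transmit(new CommandAPDU(0x00, 0xB0, (offset >> 8) & 0x7F, offset & 0xFF, 256));
            byte[] data = r.getData();
            file.write(data, 0, data.length);
            offset += data.length;
            if (r.getSW() != 0x9000 || data.length == 0) {
                break; // 6B00/6282 etc. mean we ran past the end of the file
            }
        }
        System.out.println("Read " + offset + " bytes");
        card.disconnect(false);
    }

    private static void check(ResponseAPDU r) {
        if (r.getSW() != 0x9000) {
            throw new IllegalStateException("SW=" + Integer.toHexString(r.getSW()));
        }
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02X", b));
        return sb.toString();
    }
}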
The main pain is parsing the underlying BER structure and verifying the signature of course, but I consider that out of scope.

You may like https://github.com/grakic/jevrc
JEvrc is a reusable open source Java library for reading public data from the Serbian/EU eVRC card. It includes a simplified TLV parser for parsing card data. It supports the Serbian eVRC card profile, but it should be possible to generalize it with a patch or two.

Related

How to inflate in Python some data that was deflated by PeopleSoft (Java)?

DISCLAIMER: PeopleSoft knowledge is not mandatory in order to help me with this one!
How could I extract the data from that PeopleSoft table, from the PUBDATALONG column?
The description of the table is here:
http://www.go-faster.co.uk/peopletools/psiblogdata.htm
Currently I am using a program written in Java, and below is a piece of the code:
Inflater inflater = new Inflater();
byte[] result = new byte[rs.getInt("UNCOMPDATALEN")];
inflater.setInput(rs.getBytes("PUBDATALONG"));
int length = inflater.inflate(result);
System.out.println(new String(result, 0, length, "UTF-8"));
System.out.println();
System.out.println("-----");
System.out.println();
How could I rewrite this using Python?
It is a question that has appeared in other forms on Stack Overflow but had no real answer.
I have a basic understanding of what the code does in Java, but I don't know of any library in Python I could work with to achieve the same thing.
Some recommended trying zlib, as it is compatible with the algorithm used by the Java Inflater class, but I did not succeed in doing that.
Considering the facts below from the PeopleSoft manual:
When the message is received by the PeopleSoft database, the XML data
is converted to UTF-8 to prevent any UCS2 byte order issues. It is
also compressed using the deflate algorithm prior to storage in the
database.
I tried something like this:
import zlib
import base64
UNCOMPDATALEN = 362 #this value is taken from the DB and is the dimension of the data after decompression.
PUBDATALONG = '789CB3B1AFC8CD51284B2D2ACECCCFB35532D43350B2B7E3E5B2F130F40C8977770D8977F4710D0A890F0E710C090D8EF70F0D09080DB183C8BAF938BAC707FBBBFB783ADA19DAE86388D904B90687FAC0F4DAD940CD70F67771B533B0D147E6DAE8A3A9D5C76B3F00E2F4355C=='
print zlib.decompress(base64.b64decode(PUBDATALONG), 0, 362)
and I get this:
zlib.error: Error -3 while decompressing data: incorrect header check
For sure I am doing something wrong, but I am not smart enough to figure it out by myself.
That string is not Base-64 encoded. It is simply hexadecimal. (I have no idea why it ends in ==, which makes it look a little like a Base-64 string.) You should be able to see by inspection that there are no lower case letters, or for that matter upper case letters after F as there would be in a typical Base-64 encoded string of compressed, i.e. random-appearing data.
Remove the equal signs at the end and use .decode("hex") in Python 2, or bytes.fromhex() in Python 3.

Connect Direct: File sending from Mainframe to Unix

When I am sending a variable-length file from the mainframe via Connect Direct to a UNIX box, the file on UNIX has some extra bytes at the beginning compared to the mainframe file. I tried using different SYSOPTS options but I am still getting those initial bytes. Any idea?
You should look at getting the file copied to a fixed-length record (RECFM=FB) file on the mainframe before doing the transfer. There are a number of mainframe utilities that can do this (e.g. SORT).
If you transfer it as a VB file you should also leave it as an EBCDIC file (the BDW/RDW fields are binary fields and should not be translated to ASCII).
As others have said, it would be useful to have an example of the file.
Following on from NealB: a VB file on the mainframe is stored in this format:
<BDW><RDW>Record Data 1
<RDW>Record Data 2
....
<RDW>Record Data n-1
<BDW><RDW>Record Data n
<RDW>Record Data n+1
....
<RDW>Record Data o-1
<BDW><RDW>Record Data o
<RDW>Record Data o+1
....
Where
BDW : Block Descriptor Word, 4 bytes; the first 2 bytes are the block length in big-endian format (this length includes the 4-byte BDW itself); the last 2 bytes will be hex zeros for disk files (tape files can use these 2 bytes).
RDW : Record Descriptor Word, 4 bytes; the first 2 bytes are the record length in big-endian format (this length includes the 4-byte RDW itself); the last 2 bytes will be hex zeros.
So if a block contained three 80-byte records, each record length would be 84 (0x0054, data plus the RDW) and the block length would be 256 (0x0100, the BDW plus three 84-byte records), and the file would be
---BDW--- ---RDW---
0100 0000 0054 0000 80 bytes of data (record 1)
          0054 0000 80 bytes of data (record 2)
          0054 0000 80 bytes of data (record 3)
There may be a UNIX utility for handling mainframe VB files.
There are some VB options for Connect:Direct (NDM) (see http://pic.dhe.ibm.com/infocenter/sb2bi/v5r2/index.jsp?topic=%2Fcom.ibm.help.cd_interop_sysopts.doc%2FCDP_UNIXSysopts.html).
Looking at the documentation, you cannot combine the VB options with ASCII translation; converting the file to fixed-length records (RECFM=FB) on the mainframe may make a lot of sense.
Note: You could try looking at the file with the Record Editor and using the File Wizard (the button to the left of the layout name). The wizard should pick up that it is a mainframe VB file.
Note: While converting the file to fixed-length records on the mainframe would be the best option, the Java project JRecord can read mainframe VB files if need be.
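If you do end up having to walk the raw VB file on the UNIX side yourself, a sketch along these lines shows how the BDW/RDW lengths described above drive the parsing. It assumes the descriptor words arrived intact, the transfer was binary (no ASCII translation), and the record data is therefore still EBCDIC:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

// Minimal sketch: walk a mainframe VB file (BDW + RDW structure as described above)
// and print each record length.
public class VbFileWalker {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            while (true) {
                int blockLen;
                try {
                    blockLen = in.readUnsignedShort();   // first 2 bytes of the BDW, big endian
                } catch (EOFException eof) {
                    break;                               // clean end of file
                }
                in.skipBytes(2);                         // last 2 bytes of the BDW (zeros on disk)
                int remaining = blockLen - 4;            // block length includes the BDW itself
                while (remaining > 0) {
                    int recLen = in.readUnsignedShort(); // first 2 bytes of the RDW, big endian
                    in.skipBytes(2);                     // last 2 bytes of the RDW
                    byte[] data = new byte[recLen - 4];  // record length includes the RDW
                    in.readFully(data);                  // still EBCDIC at this point
                    System.out.println("record of " + data.length + " bytes");
                    remaining -= recLen;
                }
            }
        }
    }
}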
Some extra bytes... how many is "some"?
If there are always 4 bytes, these may be the RDW (Record Descriptor Word) which carries the record length.
I don't know much about Connect Direct, but from a command-line FTP session on the mainframe you can verify the RDW status using the LOCSTAT command as follows:
Command:
LOCSTAT RDW
Response:
RDW's from VB/VBS files are retained as part of the data.
If you see the above message, you can drop the RDWs using the following command:
LOCSITE NORDW
If you are pulling from the mainframe then you can find out whether RDW's are being stripped or not using FTP command:
QUOTE STAT
You will then see several messages, one of which reports the RDW status:
211-RDWs from variable format datasets are retained as part of the data.
Again, you can fix this with
QUOTE SITE NORDW
after which QUOTE STAT should give you:
211-RDWs from variable format datasets are discarded
Are the extra bytes 0xEF 0xBB 0xBF, 0xFF 0xFE or 0xFE 0xFF? That's a UTF byte order mark (BOM).
If it's UTF-8, ignore it. Strip it, if you like. It's pointless.
If it's UTF-16, then you can use the bytes to determine endianness. If you know the endianness, it's safe to ignore or strip them.
If you control the application generating the files, stop it from saving UTF. Just save the files as ASCII and the BOMs will go away.
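If you cannot change the producer and just want to strip the marker on the receiving side, a small check like this (a sketch; the byte values are the ones listed above) is enough:

import java.util.Arrays;

// Sketch: detect a leading UTF BOM in a byte array and return the payload without it.
public final class BomStripper {
    public static byte[] strip(byte[] b) {
        if (b.length >= 3 && (b[0] & 0xFF) == 0xEF && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF) {
            return Arrays.copyOfRange(b, 3, b.length);   // UTF-8 BOM
        }
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE) {
            return Arrays.copyOfRange(b, 2, b.length);   // UTF-16 little endian
        }
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF) {
            return Arrays.copyOfRange(b, 2, b.length);   // UTF-16 big endian
        }
        return b;                                        // no BOM present
    }
}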

Force gzip to decompress despite CRC error

I think there's a way to do this but I'm not sure how? Basically, I was writing a compression program that resulted in a crc error when I tried to unzip the compressed data. Normally this means that the decompressor actually recognized my data as being in the right format and decompressed it, but when it compared the result to the expected length as indicated by the CRC, they weren't the same.
However, for comparison reasons, I actually do want to see the output to see if it's just a concatenation issue (which should be relatively obvious if the decompressed output isn't gibberish but just in the wrong order).
You said "unzip", but the question says "gzip". Which is it? Those are two different programs that operate on two different formats. I will assume gzip. Also the length is not "indicated by the CRC". The gzip trailer contains a CRC and an uncompressed length (modulo 232), which are two different things.
The gzip command will decompress all valid deflate data and write it out before checking the crc. So if, for example, I take a .gz file and corrupt just the crc (or length) at the end, and do:
gzip -dc < corrupt.gz > result
then result will be the entire, correct uncompressed data stream. There is no need to modify and recompile gzip, nor to write your own ungzipper. gzip will complain about the crc, but all of the data will be written nevertheless.
As far as I'm aware, the CRC check is part of the GZIP wrapper, not part of the actual compressed data in DEFLATE format.
So you should be able to take literally just the bytes that are the compressed data stream, ignoring the GZIP header and CRC at the end, and pass it through an Inflater.
In other words, you need to take just the bytes corresponding to those referred to as "compressed blocks" in the GZIP File format specification and try to decompress using a Java Inflater object. A little bit of work but possibly less than re-compiling the GZIP code as Greg suggests (though his option would also work in principle).
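As a sketch of that approach: construct the Inflater with nowrap set (raw deflate, no zlib header) and feed it everything between the gzip header and the 8-byte trailer. The 10-byte header offset assumed below only holds when none of the optional gzip header fields (FNAME, FEXTRA, FCOMMENT) are present, so treat this as a starting point rather than a robust parser:

import java.io.ByteArrayOutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

// Sketch: inflate the raw deflate stream inside a .gz file, ignoring the CRC/length trailer.
// Assumes a plain 10-byte gzip header (no FNAME/FEXTRA/FCOMMENT fields).
public class ForceInflate {
    public static void main(String[] args) throws Exception {
        byte[] gz = Files.readAllBytes(Paths.get(args[0]));

        Inflater inflater = new Inflater(true);          // true = raw deflate, no zlib wrapper
        inflater.setInput(gz, 10, gz.length - 10 - 8);   // skip the gzip header and 8-byte trailer

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!inflater.finished()) {
            int n;
            try {
                n = inflater.inflate(buf);
            } catch (DataFormatException e) {
                break;                                   // stop at the first genuinely bad byte
            }
            if (n == 0 && inflater.needsInput()) break;  // ran out of input
            out.write(buf, 0, n);
        }
        Files.write(Paths.get(args[0] + ".out"), out.toByteArray());
    }
}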

Is there any difference between Java byte code and .NET byte code? If so, shall I take the hexadecimal of those values?

I would like to know if there is any difference between Java byte code and .NET byte code. If there is a difference, shall I take the hexadecimal values of the Java byte code and .NET byte code? Because hexadecimal is independent of languages and it is universal.
Problem description
We are developing a mobile application in J2ME and Java. Here I am using an external fingerprint reader for reading/verifying fingerprints. We are using a Java API for reading/verifying fingerprints.
I capture the finger template and raw image bytes. I convert the raw image bytes into hex form and store them in a separate text file.
Here we are using a conversion tool (developed in .NET) that converts the hex form into an image. With the help of that tool we are trying to get the image from that text file, but we cannot get the image correctly.
The .NET programmer says the Java byte and the .NET byte differ: a Java byte ranges from -128 to 127, but a .NET byte ranges from 0 to 255. So there is a problem.
But my assumption here is: the hex is independent of Java and .NET. It is common to both. So, instead of storing the byte code in the text file, I plan to convert that byte code into hexadecimal format, so our .NET conversion tool can automatically convert this hexadecimal into an image.
I don't know whether I am on the correct path or not.
Hexadecimal is just a way to represent numbers.
Java is compiled to bytecode and executed by a JVM.
.NET is compiled to bytecode and executed by the CLR.
The two formats are completely incompatible.
I capture the finger template and raw image bytes. I convert the raw image bytes into hex form and store them in a separate text file.
OK; note that storing as binary would have been easier (and more efficient), but that should work.
Here we are using a conversion tool (developed in .NET) that converts the hex form into an image. With the help of that tool we are trying to get the image from that text file, but we cannot get the image correctly.
Rather than worrying about the image, the first thing to do is check where the problem is; there are two obvious scenarios:
you aren't reading the data back into the same bytes
you have the right bytes, but you can't get it to load as an image
First; figure out which of those it is, simply by storing some known data and attempting to read it back at the other end.
The .NET programmer says the Java byte and the .NET byte differ: a Java byte ranges from -128 to 127, but a .NET byte ranges from 0 to 255. So there is a problem.
That shouldn't be a problem for any well-written hex encoder. I would expect a single Java byte to correctly write a single hex value between 00 and FF.
I don't know whether I am on the correct path or not.
Personally, I suspect you are misunderstanding the problem, which makes it likely that the solution is off the mark. If you want to make life easier, store as binary rather than text; but there is no inherent problem exchanging hex around. If I had to pack raw binary data into a text file, personally I'd probably go for base-64 rather than hex (it will be shorter), but either is fine.
As I mentioned above: first figure out whether the problem is in reading the bytes, vs processing the bytes into an image. I'm also making the assumption that the bytes here are an image format that both environments can process, and not (for example) a custom serialization format.
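On the hex-encoding point above: an encoder that masks each byte with 0xFF (a minimal sketch, not from either code base) produces the same 00-FF text regardless of Java's signed byte type, so the .NET side can decode it byte for byte:

// Sketch: hex-encode and decode a byte array; masking with 0xFF makes Java's
// signed byte irrelevant, so the .NET side sees the same 00-FF values.
public final class HexCodec {
    public static String encode(byte[] data) {
        StringBuilder sb = new StringBuilder(data.length * 2);
        for (byte b : data) {
            sb.append(String.format("%02X", b & 0xFF)); // -1 (Java) and 255 (.NET) both become "FF"
        }
        return sb.toString();
    }

    public static byte[] decode(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}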
Yes, Java byte code and .NET’s byte code are two different things that are not interchangeable. As to the second part of your question, I have no idea what you are talking about.
Yes, they are different, although there are tools that can migrate from one to the other.
Search Google for "java bytecode IL comparison".

How to get data out of network packet data in Java

In C, if you have a certain type of packet, what you generally do is define a struct and cast the char * into a pointer to the struct. After this you have direct programmatic access to all data fields in the network packet. Like so:
struct rdp_header {
int version;
char serverId[20];
};
When you get a network packet you can do the following quickly:
char *packet;
// receive packet
struct rdp_header *pckt = (struct rdp_header *) packet;
printf("Servername : %20.20s\n", pckt->serverId);
This technique works really well for UDP-based protocols, and allows for very quick and very efficient packet parsing and sending using very little code, with trivial error handling (just check the length of the packet). Is there an equivalent, just as quick way in Java to do the same? Or are you forced to use stream-based techniques?
Read your packet into a byte array, and then extract the bits and bytes you want from that.
Here's a sample, sans exception handling:
DatagramSocket s = new DatagramSocket(port);
DatagramPacket p;
byte buffer[] = new byte[4096];
while (true) {
    p = new DatagramPacket(buffer, buffer.length);
    s.receive(p);
    // your packet is now in buffer[]
    int version = ((buffer[0] & 0xFF) << 24) | ((buffer[1] & 0xFF) << 16)
                | ((buffer[2] & 0xFF) << 8) | (buffer[3] & 0xFF);
    byte[] serverId = new byte[20];
    System.arraycopy(buffer, 4, serverId, 0, 20);
    // and process the rest
}
In practice you'll probably end up with helper functions to extract data fields in network order from the byte array, or, as Tom points out in the comments, you can use a ByteArrayInputStream, from which you can construct a DataInputStream, which has methods to read structured data from the stream:
...
while (true) {
    p = new DatagramPacket(buffer, buffer.length);
    s.receive(p);
    ByteArrayInputStream bais = new ByteArrayInputStream(buffer);
    DataInput di = new DataInputStream(bais);
    int version = di.readInt();
    byte[] serverId = new byte[20];
    di.readFully(serverId);
    ...
}
I don't believe this technique can be done in Java, short of using JNI and actually writing the protocol handler in C. The other way to do the technique you describe is variant records and unions, which Java doesn't have either.
If you had control of the protocol (it's your server and client) you could use serialized objects (including XML) to get the automagic (but not so runtime-efficient) parsing of the data, but that's about it.
Otherwise you're stuck with parsing Streams or byte arrays (which can be treated as Streams).
Mind you the technique you describe is tremendously error prone and a source of security vulnerabilities for any protocol that is reasonably interesting, so it's not that great a loss.
I wrote something to simplify this kind of work. Like most tasks, it was much easier to write a tool than to try to do everything by hand.
It consisted of two classes. Here's an example of how it was used:
// Resulting byte array is 9 bytes long.
byte[] ba = new ByteArrayBuilder()
.writeInt(0xaaaa5555) // 4 bytes
.writeByte(0x55) // 1 byte
.writeShort(0x5A5A) // 2 bytes
.write( (new BitBuilder()) // 2 bytes---0xBA12
.write(3, 5) // 101 (3 bits value of 5)
.write(2, 3) // 11 (2 bits value of 3)
.write(3, 2) // 010 (...)
.write(2, 0) // 00
.write(2, 1) // 01
.write(4, 2) // 0010 (4 bits value of 2)
).getBytes();
I wrote the ByteArrayBuilder to simply accumulate bytes. I used a method-chaining pattern (just returning "this" from all methods) to make it easier to write a bunch of statements together.
All the methods in the ByteArrayBuilder were trivial, just 1 or 2 lines of code (I just wrote everything to a data output stream).
This is to build a packet, but tearing one apart shouldn't be any harder.
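For illustration, the builder half might look something like the following hypothetical sketch. Only the class name comes from the answer above, and the write(BitBuilder) overload used in the example is omitted because BitBuilder is only partially shown:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of the ByteArrayBuilder described above: every method writes to a
// DataOutputStream and returns "this" so calls can be chained.
public class ByteArrayBuilder {
    private final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    private final DataOutputStream out = new DataOutputStream(bytes);

    public ByteArrayBuilder writeInt(int v) throws IOException   { out.writeInt(v);   return this; } // 4 bytes, big endian
    public ByteArrayBuilder writeShort(int v) throws IOException { out.writeShort(v); return this; } // 2 bytes
    public ByteArrayBuilder writeByte(int v) throws IOException  { out.writeByte(v);  return this; } // 1 byte

    public byte[] getBytes() {
        return bytes.toByteArray();
    }
}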
The only interesting method in BitBuilder is this one:
public BitBuilder write(int bitCount, int value) {
    int bitMask = 0xffffffff;
    bitMask <<= bitCount;             // if bitCount is 4, bitMask is now fffffff0
    bitMask = ~bitMask;               // and now it's 0000000f, a mask for the low bitCount bits
    bitRegister <<= bitCount;         // make room
    bitRegister |= (value & bitMask); // or in the value (masked for safety)
    bitsWritten += bitCount;
    return this;
}
Again, the logic could be inverted very easily to read a packet instead of build one.
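As a rough sketch of that inversion (the class and field names are illustrative, not from the original library), a reader can walk the packet most-significant-bit first and peel off bitCount bits at a time:

// Rough sketch of the reverse operation: pull "bitCount" bits at a time off the front of a packet.
public class BitReader {
    private final byte[] data;
    private int bitPosition = 0;                         // absolute bit offset into data

    public BitReader(byte[] data) { this.data = data; }

    public int read(int bitCount) {
        int value = 0;
        for (int i = 0; i < bitCount; i++) {
            int byteIndex = bitPosition / 8;
            int bitIndex = 7 - (bitPosition % 8);        // most significant bit first
            int bit = (data[byteIndex] >> bitIndex) & 1;
            value = (value << 1) | bit;
            bitPosition++;
        }
        return value;
    }
}

For example, new BitReader(new byte[] { (byte) 0xBA, 0x12 }).read(3) would give back the value 5 written at the start of the BitBuilder example above.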
Edit: I had proposed a different approach in this answer; I'm going to post it as a separate answer because it's completely different.
Look at the Javolution library and its struct classes, they will do just what you are asking for. In fact, the author has this exact example, using the Javolution Struct classes to manipulate UDP packets.
This is an alternate proposal for an answer I left above. I suggest you consider implementing it because it would act pretty much the same as a C solution where you could pick fields out of a packet by name.
You might start it out with an external text file something like this:
OneByte, 1
OneBit, .1
TenBits, .10
AlsoTenBits, 1.2
SignedInt, +4
It could specify the entire structure of a packet, including fields that may repeat. The language could be as simple or complicated as you need.
You'd create an object like this:
PacketReader packetReader = new PacketReader("PacketStructure.txt", packet); // packet is a byte[]
Your constructor would iterate over the PacketStructure.txt file and store each string as the key of a hashtable, with the exact location of its data (both bit offset and size) as the value.
Once you have created an object, passing in the bit structure and a packet, you could randomly access the data with statements as straightforward as:
int x=packetReader.getInt("AlsoTenBits");
Also note, this stuff would be much less efficient than a C struct, but not as much as you might think--it's still probably many times more efficient than you'll need. If done right, the specification file would only be parsed once, so you would only take the minor hit of a single hash lookup and a few binary operations for each value you read from the packet--not bad at all.
The exception is if you are parsing packets from a high-speed continuous stream, and even then I doubt a fast network could flood even a slowish CPU.
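A bare-bones, hypothetical sketch of the idea follows. Everything in it, including the spec handling, is an assumption: it only covers the simple "Name, byteCount" form and ignores the bit-level syntax (".1", "1.2", "+4") and repeating fields from the example file:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Bare-bones, hypothetical sketch of the PacketReader idea: map field names to
// byte offsets once, then read values straight out of the packet by name.
public class PacketReader {
    private static final class Field { final int offset, length; Field(int o, int l) { offset = o; length = l; } }

    private final Map<String, Field> fields = new HashMap<>();
    private final byte[] packet;

    public PacketReader(String specFile, byte[] packet) throws IOException {
        this.packet = packet;
        int offset = 0;
        List<String> lines = Files.readAllLines(Paths.get(specFile));
        for (String line : lines) {
            if (line.trim().isEmpty()) continue;
            String[] parts = line.split(",");
            int length = Integer.parseInt(parts[1].trim());
            fields.put(parts[0].trim(), new Field(offset, length));
            offset += length;                    // fields are laid out back to back
        }
    }

    // Read the named field as a big-endian unsigned integer.
    public int getInt(String name) {
        Field f = fields.get(name);
        int value = 0;
        for (int i = 0; i < f.length; i++) {
            value = (value << 8) | (packet[f.offset + i] & 0xFF);
        }
        return value;
    }
}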
Short answer, no you can't do it that easily.
Longer answer, if you can use Serializable objects, you can hook your InputStream up to an ObjectInputStream and use that to deserialize your objects. However, this requires you have some control over the protocol. It also works easier if you use a TCP Socket. If you use a UDP DatagramSocket, you will need to get the data from the packet and then feed that into a ByteArrayInputStream.
If you don't have control over the protocol, you may be able to still use the above deserialization method, but you're probably going to have to implement the readObject() and writeObject() methods rather than using the default implementation given to you. If you need to use someone else's protocol (say because you need to interop with a native program), this is likely the easiest solution you are going to find.
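For the UDP case, the wiring described above looks roughly like the following sketch (the port number is arbitrary, and both sides must of course agree on the serialized class):

import java.io.ByteArrayInputStream;
import java.io.ObjectInputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Sketch: receive a UDP datagram and deserialize a Java object out of it.
// Assumes the sender wrote the object with ObjectOutputStream into the datagram payload.
public class UdpObjectReceiver {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(4444)) {   // port is arbitrary
            byte[] buffer = new byte[65507];                       // maximum UDP payload
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);

            ByteArrayInputStream bais =
                    new ByteArrayInputStream(packet.getData(), packet.getOffset(), packet.getLength());
            try (ObjectInputStream ois = new ObjectInputStream(bais)) {
                Object message = ois.readObject();                 // cast to your Serializable type
                System.out.println("Received: " + message);
            }
        }
    }
}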
Also, remember that Java uses UTF-16 internally for strings, but I'm not certain that it serializes them that way. Either way, you need to be very careful when passing strings back and forth to non-Java programs.
