Log all APDUs inside a Java Card applet

Is it possible to save all APDU commands sent to a Java Card applet inside that applet?
For instance: terminal sends 00 B2 01 0C 00, I want to save it somewhere inside my applet in order to be able to analyse it later.

Sure, that's possible. You will need to create a persistent buffer of some kind, and there are various ways to do this.
The easiest is to build a linked list, where each node holds a new array into which you copy the command. Simply determine the command size first, then copy everything in. Don't forget to copy in the Le byte(s) for case 2 and case 4 commands.
Probably the best method is to create one huge array and copy each and every command into it. Persistent arrays are simply fields created using new byte[size]. Note that the maximum size of an array is 32 Ki - 1 entries (the maximum positive value of a short). You may want to store the size of each command before the command itself, or in a separate persistent array.
As the amount of on-card persistent storage is usually pretty minimal, you may want to use some kind of cyclic buffer, where you reuse or overwrite the oldest commands. Mind that garbage collection is often not available, and where it exists it usually only runs during startup and may take a long time.
You can copy the header immediately in the process method of the applet. Only copy the rest of the command data once you have received the bytes, e.g. after calling setIncomingAndReceive, and finally use setOutgoing / setOutgoingAndSend to learn the Le byte(s).
Finally you need some command to read the log back out. Note that a command can be 4 + 1 + 255 + 1 = 262 bytes if you include the Le byte, while a command response only holds 256 bytes plus the status word. So you may need to read it out in multiple parts, e.g. using a counter to indicate the specific APDU and an offset.
Extended length APDUs deserve a chapter of their own, so I'll leave them out for now.
I'll also leave the actual implementation as an exercise, if you don't mind. You'd probably have an interface such as:
interface APDULogger {
    short logNewCommand(byte[] commandHeader, short commandHeaderOffset);
    void logNc(short nc);
    void logCommandData(byte[] commandData, short commandDataOffset, short commandDataSize);
    void logNe(short ne);
}
and
interface APDURetriever {
    void retrieveCommand(short history, byte[] commandHeader, short commandHeaderOffset);
    short retrieveNc();
    short retrieveCommandData(byte[] commandData, short commandDataOffset, short maxCommandDataSize);
    short retrieveNe();
}
but mind you, this is just off the top of my head. You may want to keep some state too (calling the logNe(short) method twice in a row is probably an error).
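For illustration, here is a minimal sketch of the logging side, assuming a single flat persistent array that is large enough; bounds checks, cyclic reuse, Ne logging and the read-out command are all omitted, and the class name is mine:

import javacard.framework.*;

public class LoggingApplet extends Applet {
    private final byte[] log = new byte[1024]; // persistent; the size is illustrative
    private short logOffset = 0;

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new LoggingApplet().register();
    }

    public void process(APDU apdu) {
        byte[] buf = apdu.getBuffer();
        // the header (CLA INS P1 P2) is available as soon as process is called
        logOffset = Util.arrayCopy(buf, ISO7816.OFFSET_CLA, log, logOffset, (short) 4);
        short read = apdu.setIncomingAndReceive(); // first chunk of command data
        while (read > 0) {
            logOffset = Util.arrayCopy(buf, ISO7816.OFFSET_CDATA, log, logOffset, read);
            read = apdu.receiveBytes(ISO7816.OFFSET_CDATA);
        }
        // normal command handling would follow here
    }
}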

Related

Efficient way to parse a datagram in Java

Right now I am using a socket and a datagram packet. This program is for a LAN network and sends at least 30 packets a second at 500 bytes maximum.
This is how I am receiving my data:
payload = new String(incomingPacket.getData(), incomingPacket.getOffset(), incomingPacket.getLength(), "UTF-8");
Currently I use no offset and parse through each character one by one. Right now I use the first 2 characters to determine what type of message it is (though that is subject to change), then I break out the variables, separating the data with an exclamation mark to tell me where the next variable begins. At the end I parse it and apply it to my program. Is there a faster way to break down and interpret datagram packets? Will there be a performance difference if I put the length of the variables in the offset? Maybe an example would be useful. Also, I think my variables are too small to use StringBuilder, so I use normal concatenation.
What you are talking about here is setting up your own protocol for communication. While I have this as the fourth part of my socket tutorial (I'm currently working on part 3, non-blocking sockets) I can explain some things here already.
There are several ways of setting up such a protocol, depending on your needs.
One way of doing it is having a byte in front of each piece of data declaring the size, in bytes. That way, you know the length of the byte array containing the next variable value. This makes it easy to read out whole variables in one go via the System.arraycopy method. This is a method I've used before. If the object being sent is always the same, this is all you need to do. Write the variables in a standardized order, add the size of the variable value and you're done.
If you have to send multiple types of objects through the stream, you might want to add a bit of metadata. This metadata can be used to tell what kind of object is being sent and the order of the variables, and can be put in a header which you add before the actual message. Once again, the values in the header are preceded by the size byte.
In my tutorial I'll write up a complete code example.
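In the meantime, here is a rough sketch of the write side as described above; the message type constant and field name are made up for illustration:

ByteArrayOutputStream bytes = new ByteArrayOutputStream();
DataOutputStream out = new DataOutputStream(bytes);
out.writeByte(MESSAGE_TYPE_POSITION);  // metadata: which kind of object follows
byte[] name = playerName.getBytes("UTF-8");
out.writeByte(name.length);            // size byte in front of the value
out.write(name);                       // the variable value itself
byte[] payload = bytes.toByteArray();  // wrap this in a DatagramPacket and send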
Don't use a String at all. Just process the underlying byte array directly: scan it for delimiters, counts, what have you. You can use a DataInputStream wrapped around a ByteArrayInputStream wrapped around the byte array if you want an API oriented to Java primitive datatypes.
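For instance, a rough sketch of reading one size-prefixed field that way (the layout is illustrative, not the asker's actual format):

DataInputStream in = new DataInputStream(new ByteArrayInputStream(
        incomingPacket.getData(), incomingPacket.getOffset(), incomingPacket.getLength()));
byte type = in.readByte();            // message type from the header
int size = in.readUnsignedByte();     // size byte preceding the next value
byte[] value = new byte[size];
in.readFully(value);                  // read the whole field in one go
String field = new String(value, "UTF-8");
// repeat the size/value reads for the remaining variables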

Best data structure for storing dynamically sized blocks from file input Java

I'm working on a Java program where I'm reading from a file in dynamic, unknown blocks. That is, each block of data will not always be the same size and the size is determined as data is being read. For I/O I'm using a MappedByteBuffer (the file inputs are on the order of MB).
My goal:
Find an efficient way to store each complete block during the input phase so that I can process it.
My constraints:
I am reading one byte at a time from the buffer
My processing method takes a primitive byte array as input
Each block gets processed before the next block is read
What I've tried:
I played around with dynamic structures like Lists but they don't have backing arrays and the conversion time to a primitive array concerns me
I also thought about using a String to store each block and then getBytes() to get the byte[], but it's so slow
Reading the file multiple times in order to find the block size first, and then grab the relevant bytes
I am trying to find an approach that doesn't defeat the purpose of fast I/O. Any advice would be greatly appreciated.
Additional Info:
I'm using a rolling hash to decide where blocks should end
Here's a bit of pseudo-code:
circular_buffer[] = read first 128 bytes
rolling_hash = hash(buffer[])
block_storage = ??? // this is the data structure I'd like to use
while file has more text
    b = next byte
    add b to block_storage
    add b to next index in circular_buffer (if reached end, start adding/overwriting front)
    shift rolling_hash one byte to the right
    if hash has a certain characteristic
        process block_storage as a byte[] // should contain the entire block of data
As you can see, I'm reading one byte at a time, and storing/overwriting that one byte repeatedly. However, once I get to the processing stage, I want to be able to access all of the info in the block. There is no predetermined max size of a block either, so I can't pre-allocate.
It seems to me that you require a dynamically growing buffer. You can use the built-in ByteArrayOutputStream to achieve that: it automatically grows to store all data written to it. You can use write(int b) and toByteArray() to realize add b to block_storage and process block_storage as a byte[].
But take care: this stream grows unbounded. You should implement some sanity checks around it to avoid using up all memory (e.g. count the bytes written to it and bail out by throwing an exception when it exceeds a reasonable amount). Also make sure to close the stream and throw away the reference to it after consuming a block, to allow the GC to free up memory.
Edit: As #marcman pointed out, the buffer can also be reset() and reused.
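A minimal sketch of the read loop from the question with ByteArrayOutputStream as block_storage; the hash test and process method are placeholders for the asker's rolling-hash logic:

ByteArrayOutputStream blockStorage = new ByteArrayOutputStream();
while (buffer.hasRemaining()) {              // buffer is the MappedByteBuffer
    byte b = buffer.get();
    blockStorage.write(b);                   // add b to block_storage
    // ... update the circular buffer and rolling hash here ...
    if (hashHasCharacteristic()) {           // hypothetical block-boundary test
        process(blockStorage.toByteArray()); // the whole block as a byte[]
        blockStorage.reset();                // reuse the buffer for the next block
    }
}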

Update data only by difference between files (delta for java)

UPDATE: I solved the problem with a great external library - https://code.google.com/p/xdeltaencoder/. The way I did it is posted below as the accepted answer
Imagine I have two separate PCs that both have an identical byte[] A.
One of the PCs creates byte[] B, which is almost identical to byte[] A but is a 'newer' version.
For the second PC to update its copy of byte[] A to the latest version (byte[] B), I would need to transmit the whole byte[] B to the second PC. If byte[] B is many GBs in size, this will take too long.
Is it possible to create a byte[] C that is the 'difference between' byte[] A and byte[] B? The requirement for byte[] C is that, knowing byte[] A, it is possible to create byte[] B.
That way, I will only need to transmit byte[] C to the second PC, which in theory would be only a fraction of the size of byte[] B.
I am looking for a solution to this problem in Java.
Thank you very much for any help you can provide :)
EDIT: The nature of the updates to the data in most circumstances is extra bytes being inserted into parts of the array. Of course it is possible that some bytes will be changed or some bytes deleted. The byte[] itself represents a tree of the names of all the files/folders on a target PC. The byte[] is originally created by building a tree of custom objects, marshalling them with JSON, and then compressing that data with a zip algorithm. I am struggling to create an algorithm that can intelligently create object C.
EDIT 2: Thank you so much for all the help everyone here has given, and I am sorry for not being active for such a long time. I'm most probably going to use an external library to do the delta encoding for me. A great thing about this thread is that I now know the name of what I want to achieve! When I find an appropriate solution I will post and accept it so others can see how I solved my problem. Once again, thank you very much for all your help.
Using a collection of "change events" rather than sending the whole array
A solution to this would be to send a serialized object describing the change rather than the actual array all over again.
import java.io.Serializable;
import java.util.Collection;
import java.util.HashSet;

public class ChangePair implements Serializable {
    // glorified struct
    public final int index;
    public final byte newValue;

    public ChangePair(int index, byte newValue) {
        this.index = index;
        this.newValue = newValue;
    }

    public static void main(String[] args) {
        Collection<ChangePair> changes = new HashSet<ChangePair>();
        changes.add(new ChangePair(12, (byte) 2));
        changes.add(new ChangePair(1206, (byte) 3));
    }
}
Generating the "change events"
The most efficient method for achieving this would be to track changes as you go, but assuming that's not possible, you can just brute-force your way through, finding which values differ:
public static Collection<ChangePair> generateChangeCollection(byte[] oldValues, byte[] newValues) {
    // validation
    if (oldValues.length != newValues.length) {
        throw new RuntimeException("new and old arrays are differing lengths");
    }
    Collection<ChangePair> changes = new HashSet<ChangePair>();
    for (int i = 0; i < oldValues.length; i++) {
        if (oldValues[i] != newValues[i]) {
            // generate a change event
            changes.add(new ChangePair(i, newValues[i]));
        }
    }
    return changes;
}
Sending and receiving those change events
As per this answer regarding sending serialized objects over the internet, you could then send your object using the following code:
Collection<ChangePair> changes = generateChangeCollection(oldValues, newValues);
Socket s = new Socket("yourhostname", 1234);
ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
out.writeObject(changes);
out.flush();
On the other end you would receive the object:
ServerSocket server = new ServerSocket(1234);
Socket s = server.accept();
ObjectInputStream in = new ObjectInputStream(s.getInputStream());
Collection<ChangePair> objectReceived = (Collection<ChangePair>) in.readObject();
//use Collection<ChangePair> to apply changes
Using those change events
This collection can then simply be used to modify the array of bytes on the other end
public static void useChangeCollection(byte[] oldValues, Collection<ChangePair> changeEvents) {
    for (ChangePair changePair : changeEvents) {
        oldValues[changePair.index] = changePair.newValue;
    }
}
Locally log the changes to the byte array, like a little version control system. In fact you could use a VCS to create patch files, send them to the other side, and apply them to get an up-to-date file.
If you cannot log changes, you would need to keep a duplicate of the array locally, or (not 100% safe) use an array of checksums on blocks.
The main problem here is data compression.
Kamikaze offers you good compression algorithms for data arrays. It uses Simple16 and PForDelta coding. Simple16 is a good and (as the name says) simple list-compression option. Or you can use Run Length Encoding, or experiment with any compression algorithm you have available in Java...
Anyway, any method you use will be optimized if you first preprocess the data.
You can reduce the data by calculating differences or, as #RichardTingle pointed out, by creating pairs of the locations and values that differ.
You can calculate C as B - A. C will have to be a short or int array, since the difference between two byte values can fall outside the range of a byte. You can then restore B as A + C.
The advantage of combining at least two methods here is that you get much better results.
E.g. if you use the difference method with A = { 1, 2, 3, 4, 5, 6, 7 } and B = { 1, 2, 3, 5, 6, 7, 7 }, the difference array C will be { 0, 0, 0, 1, 1, 1, 0 }. RLE can compress C very effectively, since it is good at compressing data with many repeated numbers in sequence.
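A minimal sketch of that difference step, assuming equal-length arrays (the RLE/Simple16 compression of C is left out):

static int[] difference(byte[] a, byte[] b) {
    int[] c = new int[a.length];
    for (int i = 0; i < a.length; i++) {
        c[i] = b[i] - a[i]; // may be negative, so it does not fit in a byte
    }
    return c;
}

static byte[] restore(byte[] a, int[] c) {
    byte[] b = new byte[a.length];
    for (int i = 0; i < a.length; i++) {
        b[i] = (byte) (a[i] + c[i]);
    }
    return b;
}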
Using the difference method with Simple16 will be good if your data changes in almost every position but the difference between values is small. It can compress an array of 28 single-bit values (0 or 1) or an array of 14 two-bit values into a single 32-bit integer.
Experiment, it all will depend on how your data behaves. And compare the data compression ratios for each experiment.
EDIT: You will have to preprocess the data before the JSON marshalling and zip compression.
Create two sets, old and now. The latter contains all files that exist now. For the former, the old files, you have at least two options:
It could contain all files as they existed when you last sent them to the other PC. You will need to keep a set of what the other PC knows, so you can calculate what has changed since the last synchronization and send only the new data.
Or it could contain all files since you last checked for changes. You can keep a local history of changes and give each version an "id". Then, when you sync, you send the "version id" together with the changed data to the other PC. Next time, the other PC first sends its "version id" (or you keep the "version id" of each PC locally), and then you can send the other PC all the new changes (all the versions that come after the one that PC had).
The changes can be represented by two other sets: newFiles and deletedFiles. (What about files whose content changed? Don't you need to sync those too?) The newFiles set contains the files that only exist in set now (and do not exist in old). The deletedFiles set contains the files that only exist in set old (and do not exist in now).
If you represent each file as a String with the full pathname, you will safely have a unique representation of each file. Or you can use java.io.File.
After you have reduced your changes to the newFiles and deletedFiles sets, you can convert them to JSON, zip them, and do anything else needed to serialize and compress the data.
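A small sketch of computing those two sets, assuming now and old are Set<String> instances holding full pathnames:

Set<String> newFiles = new HashSet<String>(now);
newFiles.removeAll(old);    // keeps only the files that did not exist before
Set<String> deleted = new HashSet<String>(old);
deleted.removeAll(now);     // keeps only the files that no longer exist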
So, what I ended up doing was using this:
https://code.google.com/p/xdeltaencoder/
From my tests it works really, really well. However, you will need to make sure to checksum the source (in my case fileAJson), as it does not do that automatically for you!
Anyways, code below:
//Create delta
String[] deltaArgs = new String[]{fileAJson.getAbsolutePath(), fileBJson.getAbsolutePath(), fileDelta.getAbsolutePath()};
XDeltaEncoder.main(deltaArgs);
//Apply delta
deltaArgs = new String[]{"-d", fileAJson.getAbsolutePath(), fileDelta.getAbsolutePath(), fileBTarget.getAbsolutePath()};
XDeltaEncoder.main(deltaArgs);
//Trivia: surprisingly, this also works
deltaArgs = new String[]{"-d", fileBJson.getAbsolutePath(), fileDelta.getAbsolutePath(), fileBTarget.getAbsolutePath()};
XDeltaEncoder.main(deltaArgs);

How to generate incremental identifier in java

I have a requirement in which I continuously receive messages that need to be written to files. Every time a new message is received, it needs to be written to a separate file. What I want is to generate a unique identifier to be used as the file name. I also want to preserve the order of the messages: by this I mean the identifier generated as a file name should always be incremental.
I was using UUID.randomUUID() to generate file names, but the problem with this approach is that UUID assures only randomness, not incrementality. As a result I lose the ordering of the files (I want the file generated first to appear first in the list).
Approaches known:
1. Use System.currentTimeMillis(), but I can receive multiple messages at the same timestamp.
2. Implement a static long value and increment it whenever a file is to be created, using the long value as the file name. But I am not sure about this approach, and it doesn't seem to be a proper solution to my problem. I think there could be far better solutions than this one.
If someone could suggest a better solution to this problem, it would be highly appreciated.
If you want your id value to increase monotonically even between server restarts, then you must either base it on the system time or have some elaborately robust logic that persists the last ID used. Note that achieving robustness on its own is not hard, but achieving it in a performant and scalable way is.
If you additionally need the id to be unique across multiple nodes in a redundant server cluster, then you need even more elaborate logic, which definitely involves a persistent store to which all the boxes synchronize access. Making this performant is, of course, even harder.
The best option I can see is to have a quite long ID so there's room for these parts:
System.currentTimeMillis() for long-term uniqueness (across restarts);
System.nanoTime() for finer granularity;
a unique id for each server node (determined in a platform-specific way).
The method will still have to remember the last value generated and retry in case of a duplicate. It won't have to retry many times, though, just until the next nanoTime clock tick; it could even busy-wait for it.
Sketch of code without point 3 (single-node implementation):
private static long lastNanos;

public static synchronized String uniqueId() {
    for (;/*ever*/;) {
        final long n = System.nanoTime();
        if (n == lastNanos) continue;
        lastNanos = n;
        return "" + System.currentTimeMillis() + n;
    }
}
Ok, hands up: my last answer was fairly flaky and I've deleted it.
Keeping with the spirit of the site, I thought I'd try a different tack.
If you are keeping these messages in a single file, then you could try creating a unique id out of the size of the file: before you write a message to the file, its id could be the current size of the file.
You could use filename + size as the id if the messages need to be unique across a number of files.
I'll leave the hot potato of synchronization for another day, but you could wrap all of this up in a synchronized object that keeps track of things.
Also, I am assuming that any messages written to the file will not be removed in the future.
ADDITIONAL NOTE:
You could create a message-processing object that opens the file on construction (or via a create method).
This object would get the initial size of the file and use it as the unique id.
As each message is added (in a synchronized manner), the id is incremented by the size of the message.
This would address the performance issues, but it will not work if more than one JVM/node accesses the same file.
Skeletal Idea:
public class MessageSink {
    private long id = 0;

    public MessageSink(String filename) {
        // the initial id is the current size of the file
        id = new java.io.File(filename).length();
    }

    public synchronized void addMessage(Message msg) {
        msg.setId(id);
        // write to file + flush,
        // or add to a stack of messages that need to be written
        // to the file at a later stage
        id = id + msg.getSize();
    }

    public void flushMessages() {
        // open the file, write each message in the stack,
        // then flush and close the file
    }
}
I had the same requirement and found a suitable solution: Twitter Snowflake uses a simple algorithm to generate sortable 64-bit (long) ids. Snowflake is written in Scala, but the approach is simple and can easily be used in Java code.
The id is composed of:
timestamp - 41 bits (millisecond precision with a custom epoch gives us 69 years);
machine id - 10 bits (a MAC address could be used as the hardware id);
sequence number - 12 bits - rolls over after 4096 ids per machine (with protection to avoid rollover within the same millisecond).
Formula looks like: ((timestamp - customEpoch) << timestampShift) | (machineId << machineIdShift) | sequenceNumber;
The shift for each component depends on its bit position in the ID.
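A minimal single-node Java sketch of that layout (the class name and custom epoch here are mine; see the links below for complete implementations):

public class SnowflakeSketch {
    private static final long CUSTOM_EPOCH = 1288834974657L; // any fixed past instant works
    private static final int MACHINE_ID_BITS = 10;
    private static final int SEQUENCE_BITS = 12;
    private static final long MAX_SEQUENCE = (1L << SEQUENCE_BITS) - 1;

    private final long machineId;
    private long lastTimestamp = -1L;
    private long sequence = 0L;

    public SnowflakeSketch(long machineId) {
        this.machineId = machineId & ((1L << MACHINE_ID_BITS) - 1);
    }

    public synchronized long nextId() {
        long timestamp = System.currentTimeMillis();
        if (timestamp == lastTimestamp) {
            sequence = (sequence + 1) & MAX_SEQUENCE;
            if (sequence == 0) { // 4096 ids used in this ms: wait for the next tick
                while (timestamp <= lastTimestamp) {
                    timestamp = System.currentTimeMillis();
                }
            }
        } else {
            sequence = 0;
        }
        lastTimestamp = timestamp;
        return ((timestamp - CUSTOM_EPOCH) << (MACHINE_ID_BITS + SEQUENCE_BITS))
                | (machineId << SEQUENCE_BITS)
                | sequence;
    }
}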
A detailed description and source code can be found on GitHub:
Twitter Snowflake
Basic Java implementation of the Snowflake algorithm

How to initialize huge float arrays in java, android?

I was creating an OpenGL Android application and trying to render an OpenGL object with more than 50,000 vertices.
float itemVerts [] = {
// f 231/242/231 132/142/132 131/141/131
0.172233487787643f, -0.0717437751698985f, 0.228589675538813f,
0.176742968653347f, -0.0680393472738536f, 0.2284149434494f,
0.167979223684599f, -0.0670168837233226f, 0.24286384937854f,
// f 131/141/131 230/240/230 231/242/231
0.167979223684599f, -0.0670168837233226f, 0.24286384937854f,
0.166391290343292f, -0.0686544011752973f, 0.241920432968569f,......
and many more.... But when I do this in a function or constructor, I get an error while compiling: "The code of method <init>() is exceeding the 65535 bytes limit". So I was wondering if there is a different way to do this.
I tried storing the values in a file and reading them back, but the IO operation, with string parsing of such a huge record, is very slow: it takes more than 60 seconds, which is not good.
Please let me know if there is any other way to do this. Thank you for your time and help.
But when I do this in a function or constructor, I get an error while compiling: "The code of method <init>() is exceeding the 65535 bytes limit". So I was wondering if there is a different way to do this.
Put it outside the constructor, as a class variable or field. If this doesn't change, just make it a constant. If it does change, make it a constant anyway and copy it in the constructor.
I tried storing the values in a file and reading them back, but the IO operation, with string parsing of such a huge record, is very slow: it takes more than 60 seconds, which is not good.
If you do decide to keep it in an external file and read it in, don't read it as a string, just serialize it somehow (Java serialization, Protocol Buffers, etc.).
The program doesn't have to parse the floats if you preprocess the data.
Write another program that writes all the floats to a binary file using DataOutputStream.
In your program, read them back using DataInputStream. You might want to chain it with a BufferedInputStream.
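A rough sketch of both sides, with a made-up file name; DataOutputStream writes floats in big-endian order, and DataInputStream reads them back the same way:

// preprocessing tool: dump the float array to a binary file
try (DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(new FileOutputStream("itemVerts.bin")))) {
    for (float f : itemVerts) {
        out.writeFloat(f);
    }
}

// in the application: read the floats back at startup
File file = new File("itemVerts.bin");
float[] verts = new float[(int) (file.length() / 4)]; // 4 bytes per float
try (DataInputStream in = new DataInputStream(
        new BufferedInputStream(new FileInputStream(file)))) {
    for (int i = 0; i < verts.length; i++) {
        verts[i] = in.readFloat();
    }
}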
For cases like this I normally use the assets folder to store files in binary format (you can even define some kind of file format to include the vertices, normals, etc.) and load it on application initialization, as wannik explains.
I would preprocess and store the floats in binary form, then mmap the file as a byte buffer and create a float buffer out of it. This way you get your float data without parsing or extra allocation.
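A rough sketch of that approach, assuming the floats were written big-endian (the DataOutputStream default) into the same made-up file as above:

FileChannel ch = new FileInputStream("itemVerts.bin").getChannel();
MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
FloatBuffer verts = mapped.asFloatBuffer(); // a view over the mapped bytes; no parsing, no copy
// a FloatBuffer can be passed directly to OpenGL ES calls such as glVertexAttribPointer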
