How to use ANTLR4 with binary data? - java

From the homepage:
ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for [...] or binary files.
I have been reading through the docs for some hours now and think I have a basic understanding of ANTLR, but I'm having a hard time finding any references to processing binary files. And it seems I'm not the only one.
I need to create a parser for some binary data and would like to decide if ANTLR is of any help or not.
Binary data structure
The binary data is structured in logical fields, like field1 followed by field2 followed by field3 etc., and each of those fields has a specific purpose. The lengths of the fields may differ AND may not be known at the time the parser is generated: e.g. I know that field1 is always 4 bytes, field2 might simply be 1 byte, and field3 might be 1 to 10 bytes and might be followed by additional field3s of n bytes, depending on the actual value of the data. That is the second problem: I know the fields are there, and e.g. with field1 I know it's 4 bytes, but I don't know the actual value, which is what I'm interested in. The same goes for the other fields; I need the values from all of them.
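To make that layout a bit more concrete (purely as an illustration of the structure described above, not as an answer to the ANTLR question), a hand-written sketch in plain Java might look like this; the field names and the "first byte of field3 encodes how many bytes follow" rule are assumptions taken from the description, not from the actual standard:

import java.nio.ByteBuffer;

// Illustration only: field lengths and the length rule for field3 are assumed.
public class DatagramSketch {
    static void parse(byte[] datagram) {
        ByteBuffer buf = ByteBuffer.wrap(datagram);

        byte[] field1 = new byte[4];       // field1: always 4 bytes
        buf.get(field1);

        byte field2 = buf.get();           // field2: a single byte

        while (buf.hasRemaining()) {       // field3: repeated, variable length
            int length = buf.get() & 0xFF; // assume the first byte encodes the length
            byte[] field3 = new byte[length];
            buf.get(field3);
            // hand the extracted value over to some higher-level "field" object
        }
    }
}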
What I need in ANTLR
This sounds like a common structure and use case for arbitrary binary data to me, but I don't see any special handling of such data in ANTLR. All the examples use text of some kind, and I don't see any value-extraction callbacks or the like. Additionally, I think I would need callbacks that influence the parsing process itself: e.g. one callback is called on the first byte of field3, I check that, decide that one to N additional bytes need to be consumed and that those are logically part of field3, and tell the parser so, so that it's able to proceed "somehow".
In the end, I would get some higher-level "field" objects, and ANTLR would provide the underlying parse logic with its callback and listener infrastructure, tree-walking abilities, etc.
Did anyone ever do something like that and can provide some hints to examples or the concrete documentation I seem to have missed? Thanks!
EN 13757-3:2012
I don't think it makes my question much easier to understand, but the binary data I'm referring to is defined in the standard EN 13757-3:2012:
Communication systems for meters and remote reading of meters - Part 3: Dedicated application layer
The standard is not freely available on the net (anymore?), but the following PDF might give you an overview of what example data looks like; see page 4. In particular, the bytes of the mentioned fields are not constant; only the overall structure of the datagram is defined.
http://fastforward.ag/downloads/docu/FAST_EnergyCam-Protocol-wirelessMBUS.pdf
The tokens for the grammar would be the fields, each spanning a different number of bytes but carrying a value etc. Given ANTLR's self-description, I would have expected such things to work somehow...
Alternative: Kaitai.io
If you're in a position comparable to mine, have a look at Kaitai.io, which looks very promising:
https://stackoverflow.com/a/40527106/2055163

Related

How can I update a serialized HashMap contained in a file?

I have a file that contains a serialized HashMap containing an element of type MyObject:
�� sr java.util.HashMap���`� F
loadFactorI thresholdxp?# w  t (a54d88e06612d820bc3be72877c74f257b561b19sr com.myproject.MyObject C�m�I�/ I partitionL hashcodet Ljava/lang/String;L idt Ljava/lang/Long;L offsetq ~ L timestampq ~ L topicq ~ xp q ~ ppppx
Now, I also have some other MyObject objects that I would like to add to that map. However, I don't want to first read and deserialize the map back into memory, then update it and then write the whole updated map back to the file. How would one update the serialization in the file in a more efficient way?
How would one update the serialization in the file in a more efficient way?
Basically by reverse engineering the binary protocol that Java uses when serializing objects into their binary representation. That would enable you to understand which elements in that binary blob would need to be updated in which way.
Other people have already done that, see here for example.
Anything else is just work. You sitting down and writing code.
Or you write the few lines of code that read in the existing file and write out a new file with that map plus the other objects you need in there.
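That straightforward approach really is only a few lines; a minimal sketch (the file name, the map's generic types, and the value added here are placeholders for illustration):

import java.io.*;
import java.util.HashMap;

public class UpdateSerializedMap {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        File file = new File("map.ser"); // placeholder file name

        // Read the existing map back into memory.
        HashMap<String, Object> map;
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            map = (HashMap<String, Object>) in.readObject();
        }

        // Add the new entries (MyObject instances in the question's case).
        map.put("someKey", "someValue");

        // Write the whole updated map back to the file.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(map);
        }
    }
}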
You see, efficiency depends on the point of view:
do you think the single update of a file with binary serialized objects is so time-critical that it needs to be done by manually "patching" that binary file?
do you think it is more efficient to spend hours and hours learning the underlying binary format in order to correctly update its content?
The only valid reason (that I can think of) to do that: to learn exactly such things, namely binary data formats and how to patch their content. But even then, there might be "better" assignments that give you more insights (of real value in the real world) than ... spending your time re-implementing Java binary serialization.

How to modify/update a large file with small content changes at specific indexes

I need to modify a file. We've already written a reasonably complex component to build sets of indexes describing where interesting things are in this file, but now I need to edit this file using that set of indexes and that's proving difficult.
Specifically, my dream API is something like this
//if you'll let me use Kotlin for a second, assume we have a simple tuple class
data class IdentifiedCharacterSubsequence(val indexOfFirstChar: Int, val existingContent: String)

//given these two structures
List<IdentifiedCharacterSubsequence> interestingSpotsInFile = scanFileAsPerExistingBusinessLogic(file, businessObjects);
Map<IdentifiedCharacterSubsequence, String> newContentByPreviousContentsLocation = generateNewValues(interestingSpotsInFile, moreBusinessObjects);

//I want something like this:
try(MutableFile mutableFile = new com.maybeGoogle.orApache.MutableFile(file)){
    for(IdentifiedCharacterSubsequence seqToReplace : interestingSpotsInFile){
        String newContent = newContentByPreviousContentsLocation.get(seqToReplace);
        mutableFile.replace(seqToReplace.indexOfFirstChar, seqToReplace.existingContent.length(), newContent);
        //very similar to the StringBuilder interface
        //'enqueues' data changes in memory, doesn't actually modify the file until flush is called...
    }
    mutableFile.flush();
    // ...at which point a single write-pass is made.
    // assumption: changes will change many small regions of text (instead of large portions of text)
    // -> buffering makes sense
}
Some notes:
I can't use RandomAccessFile because my changes are not in-place (newContent may be longer or shorter than seq.existingContent).
The files are often many megabytes in size, so simply reading the whole thing into memory and modifying it as an array is not appropriate.
Does something like this exist, or am I reduced to writing my own implementation using BufferedWriters and the like? It seems like such an obvious evolution from io.Streams for a language which typically emphasizes index-based behaviour heavily, but I can't find an existing implementation.
Lastly: I have very little domain experience with files and encoding schemes, so I have made no effort to address the 'two-index' characters described in questions like this one: Java charAt used with characters that have two code units. Any help on this front is much appreciated. Is this perhaps why I'm having trouble finding an implementation like this? Because indexes in UTF-8 encoded files are so pesky and bug-prone?
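For what it's worth, a minimal sketch of the single write-pass idea from the question, assuming the indexes are character offsets, the replacements are sorted by position, and the result goes to a new file (the Replacement record and method names are made up for illustration, and no attempt is made at the two-code-unit issue):

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.List;

public class SingleWritePassReplace {
    // Hypothetical tuple mirroring IdentifiedCharacterSubsequence from the question.
    record Replacement(int indexOfFirstChar, String existingContent, String newContent) {}

    static void rewrite(Path source, Path target, List<Replacement> replacements) throws IOException {
        try (Reader in = Files.newBufferedReader(source, StandardCharsets.UTF_8);
             Writer out = Files.newBufferedWriter(target, StandardCharsets.UTF_8)) {
            long pos = 0;
            for (Replacement r : replacements) {               // assumed sorted by position
                copy(in, out, r.indexOfFirstChar() - pos);     // copy the unchanged region
                skip(in, r.existingContent().length());        // skip the old content
                out.write(r.newContent());                     // write the replacement
                pos = r.indexOfFirstChar() + r.existingContent().length();
            }
            copy(in, out, Long.MAX_VALUE);                     // copy the remaining tail
        }
    }

    static void copy(Reader in, Writer out, long n) throws IOException {
        char[] buf = new char[8192];
        while (n > 0) {
            int read = in.read(buf, 0, (int) Math.min(buf.length, n));
            if (read < 0) return;                              // end of input
            out.write(buf, 0, read);
            n -= read;
        }
    }

    static void skip(Reader in, long n) throws IOException {
        while (n > 0) {
            long skipped = in.skip(n);
            if (skipped <= 0) return;
            n -= skipped;
        }
    }
}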

What is the drawback for using Strings for non-String specific data?

I know this might be a kind of "silly" question. I have created software applications before where I initialized basically all of my variables as strings, and saved them in my database as VARCHARs. Then, I would gather them from the database and convert them as needed. Is there any reason this is not an efficient method for initializing variables and saving them in my database?
I know that for extremely large applications, this can cause an issue with computing time, because I am unnecessarily converting variables that could have been initialized as the appropriate type to begin with. But, for smaller applications, is this "okay" to do?
Some reasons to use proper types
1. Least surprise. If developers are going to grab numerical data from your database, they would find it weird that you're storing them as strings.
2. Developer convenience. Another issue is the nuisance of having to parse the data into the correct type every time. If you just store it as the correct type, you save people the trouble of having to put
int age = 0;
try {
    age = Integer.parseInt(ageStr);
} catch (NumberFormatException e) {
    throw new RuntimeException(e);
}
all over the code.
3. Data quality. The code example above hints at a third problem. Now it's possible for somebody to store "no_age" or "foo" or something in the column, which is a data quality issue. The best way to deal with errors is to make them impossible in the first place.
4. Storage efficiency. Storage efficiency is a factor as well. Different types have different ways of encoding data, and strings are not an efficient way to store numbers, bits, etc.
5. Network efficiency. If you store data in wasteful formats, then that often translates to unnecessary network utilization. This is why binary formats are generally more efficient than text formats like JSON or XML. But web services don't typically treat network efficiency as the driving engineering concern.
6. Processing efficiency. If the data is inherently numeric, then forcing everybody to parse it incurs processing cost.
7. Different types support different rules. In his answer, Hightower makes the good point that different types have special rules for ordering, which impacts ranges and sorts. I like this point because it impacts actual program behavior, whereas the concerns I mention above might be more academic for small apps with a single developer.
An example illustrating the efficiency benefit
Suppose you want to store eight bits. If you were to store that as a string you might have "TFFTFFTF", which under UTF-8 and ASCII would take 64 bits (8 chars x 8 bits per char) to store eight bits of actual information. Relatively speaking that's a big difference.
Incidentally, even if your data is numeric, it's not good to just use BIGINT, for example. The different types of integer in a database have different storage requirements and so you should think about the number of bits you actually need, use unsigned representations if appropriate (no reason to waste a sign bit on numbers that can't be negative), etc. Wrong choices tend to add up quickly as you create new foreign keys that have to be BIGINTs now, new rows that all have a bunch of BIGINTs, etc. Your storage and backup requirements end up being needlessly demanding.
So. Is it "OK" to use strings?
These efficiency concerns may not matter at all for something small, which is what you were asking. Or there may be reasons to prefer an inefficient format over one that's more efficient, as my JSON/XML example above suggests. So as far as whether it's "OK", I can't answer that, but hopefully the considerations above give you some tools to make that decision yourself.
Still, I'd try to get into the habit of using the right type, and I certainly wouldn't go out of my way to store things as strings without some reason. In bitset cases I could see potentially avoiding having to deal with bit manipulation, which can be tricky until you get the hang of it. (But some databases have special bitset types.) You mention not knowing the type, and maybe that's a plausible reason in some cases, though I would lean more towards refactoring here.
There are some reasons. For example, think about searching for a time range. This is easy with datetime fields, but not easy with strings, because you have to do the filtering in your application.
Another point is that sorting on a varchar field is different from sorting on an int field: as a varchar, '10' sorts before '2', but as an int it comes after it.
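A small illustration of the time-range point: with a real TIMESTAMP column the database does the filtering; the table and column names here are made up:

import java.sql.*;

public class TimeRangeQuery {
    // Finds rows in a hypothetical "events" table whose created_at falls in [from, to].
    static void printEventsBetween(Connection conn, Timestamp from, Timestamp to) throws SQLException {
        String sql = "SELECT id, created_at FROM events WHERE created_at BETWEEN ? AND ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, from);
            ps.setTimestamp(2, to);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getTimestamp("created_at"));
                }
            }
        }
        // With the same data stored as VARCHAR, the range check (and correct ordering)
        // would have to be handled in application code instead.
    }
}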

How do I identify that I am at the last byte of a serialized Java object?

Question
What (if any) terminating characters/byte sequences are there in serialized Java objects?
Background
I'm working on a small self-education project where I would like to serialize Java objects and write them to a stream, where they are read and then deserialized. Since I will need to identify the borders between serialized objects and I can't be sure that the current object is not the last one, is there a terminating character that is always there that I can use as my identifier?
I noticed that there is a magic number, ACED, that allows me to identify the start of the object, so how do I identify the end?
EDIT:
If there is no terminating character, are there any safe terminating characters/sequences that I can use (insert) to identify the end of the object?
In theory you should always be able to find the end of an object; in practice you cannot. I understand the problem is that customised writeObject implementations that don't call either defaultWriteObject or writeFields have a non-standard representation.
I've played about with serialisation in the past, including creating streams for use when I've been doing unusual things to ObjectInputStream. It's not pleasant(!).
You can read the details in the spec, and the source is worth a read.
There are none. AFAIK the only requirement is that the deserialiser knows when to stop reading, when given a corresponding serialisation. Subject to that, the serialiser can write whatever it wants, in any position, not just the last.
If you're old skool, dump a 32-bit length field at the beginning and refuse to handle objects bigger than 4 gig.
Nu skool, you just make sure your read and your write logic are consistent and don't care about the length.
You can add a terminating object to your object stream, e.g. null or a special String.
However, I suggest that you instead convert the object stream to a byte[] and write the byte length of the byte[] followed by its data. This way each object stream is independent, and you always know where it finishes.
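A minimal sketch of that length-prefix approach (the method names are mine; the key point is writing the byte count before the serialized bytes so the reader knows exactly how much to consume):

import java.io.*;

public class LengthPrefixedObjects {
    // Serialize the object to a byte[] and write "length, then bytes".
    static void writeFramed(DataOutputStream out, Object obj) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buffer)) {
            oos.writeObject(obj);
        }
        byte[] bytes = buffer.toByteArray();
        out.writeInt(bytes.length); // 32-bit length prefix
        out.write(bytes);           // the serialized object itself
    }

    // Read exactly one framed object back; each frame is independent.
    static Object readFramed(DataInputStream in) throws IOException, ClassNotFoundException {
        int length = in.readInt();
        byte[] bytes = new byte[length];
        in.readFully(bytes);
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }
}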
Have you considered applying a record-marking layer similar to HTTP Chunked encoding?
The Chunked encoding is intended to solve a generalization of this scenario: identifying the end of a message of indeterminate length that both itself contains no identifiable end, and is embedded in a longer stream without ending it.

Developing a (file) exchange format for java

I want to come up with a binary format for passing data between application instances in the form of POFs (Plain Old Files ;)).
Prerequisites:
should be cross-platform
information to be persisted includes a single POJO & arbitrary byte[]s (files actually; the POJO stores their names in a String[])
only sequential access is required
should be a way to check data consistency
should be small and fast
should prevent an average user with archiver + notepad from modifying the data
Currently I'm using DeflaterOutputStream + OutputStreamWriter together with InflaterInputStream + InputStreamReader to save/restore objects serialized with XStream, one object per file. Readers/Writers use UTF8.
Now I need to extend this to support what is described above.
My idea of format:
{serialized to XML object}
{delimiter}
{String file name}{delimiter}{byte[] file data}
{delimiter}
{another String file name}{delimiter}{another byte[] file data}
...
{delimiter}
{delimiter}
{MD5 hash for the entire file}
Does this look sane?
What would you use for a delimiter and how would you determine it?
The right way to calculate MD5 in this case?
What would you suggest to read on the subject?
TIA.
It looks INsane.
why invent a new file format?
why try to prevent only stupid users from changing the file?
why use a binary format (hard to compress)?
why use a format that cannot be parsed while being received? (The receiver has to receive the entire file before being able to act on it.)
XML is already a serialization format that is compressible. So you are serializing a serialized format.
Would serialization of the model (if you are into MVC) not be another way? I'd prefer to use things in the language (or standard libraries) rather than roll my own if possible. The only issue I can see with that is that the file size may be larger than you want.
1) Does this look sane?
It looks fairly sane. However, if you are going to invent your own format rather than just using Java serialization then you should have a good reason. Do you have any good reasons (they do exist in some cases)? One of the standard reasons for using XStream is to make the result human readable, which a binary format immediately loses. Do you have a good reason for a binary format rather than a human readable one? See this question for why human readable is good (and bad).
Wouldn't it be easier just to put everything in a signed jar? There are already standard Java libraries and tools to do this, and you get compression and verification provided.
2) What would you use for a delimiter and how would you determine it?
Rather than a delimiter I'd explicitly store the length of each block before the block. It's just as easy, and prevents you from having to escape the delimiter if it comes up on its own (see the sketch after this answer).
3) The right way to calculate MD5 in this case?
There is example code here which looks sensible.
4) What would you suggest to read on the subject?
On the subject of serialization? I'd read about Java serialization, JSON, and XStream serialization so I understood the pros and cons of each, especially the benefits of human-readable files. I'd also look at a classic file format, for example from Microsoft, to understand possible design decisions from back in the day when every byte mattered, and how these have been extended. For example: the WAV file format.
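As a rough sketch of points 2 and 3 together (length-prefixed blocks instead of delimiters, with an MD5 digest of everything written so far appended at the end; the file name and block contents are placeholders):

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.security.*;

public class BlockWriter {
    public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (FileOutputStream fos = new FileOutputStream("container.bin");
             DataOutputStream out = new DataOutputStream(new DigestOutputStream(fos, md5))) {

            writeBlock(out, "<pojo>...</pojo>".getBytes(StandardCharsets.UTF_8)); // serialized POJO block
            writeBlock(out, "example.dat".getBytes(StandardCharsets.UTF_8));      // file name block
            writeBlock(out, new byte[] {1, 2, 3});                                // file data block

            // Append the MD5 of everything written above. It is written to the
            // underlying stream directly so the digest itself is not digested.
            fos.write(md5.digest());
        }
    }

    static void writeBlock(DataOutputStream out, byte[] data) throws IOException {
        out.writeInt(data.length); // length prefix instead of a delimiter
        out.write(data);
    }
}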
Let's see, this should be pretty straightforward.
Prerequisites:
0. should be cross-platform
1. information to be persisted includes a single POJO & arbitrary byte[]s (files actually; the POJO stores their names in a String[])
2. only sequential access is required
3. should be a way to check data consistency
4. should be small and fast
5. should prevent an average user with archiver + notepad from modifying the data
Well, guess what, you pretty much have it already; it's built into the platform: Object Serialization.
If you need to reduce the amount of data sent over the wire and provide custom serialization (for instance, you can send only 1, 2, 3 for a given object without using the attribute names or anything similar, and read them back in the same sequence), you can use this somewhat "hidden feature".
If you really need it in plain text, you can also encode it; it takes almost the same number of bytes.
For instance this bean:
import java.io.*;

public class SimpleBean implements Serializable {
    private String website = "http://stackoverflow.com";

    public String toString() {
        return website;
    }
}
Could be represented like this:
rO0ABXNyAApTaW1wbGVCZWFuPB4W2ZRCqRICAAFMAAd3ZWJzaXRldAASTGphdmEvbGFuZy9TdHJpbmc7eHB0ABhodHRwOi8vc3RhY2tvdmVyZmxvdy5jb20=
See this answer
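That string is roughly what you get by serializing the bean and Base64-encoding the resulting bytes; a small sketch (using the SimpleBean class above and java.util.Base64):

import java.io.*;
import java.util.Base64;

public class SerializeToBase64 {
    public static void main(String[] args) throws IOException {
        // Serialize the SimpleBean defined above into an in-memory buffer.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(new SimpleBean());
        }
        // Base64-encode the serialized bytes to get a plain-text representation.
        System.out.println(Base64.getEncoder().encodeToString(buffer.toByteArray()));
    }
}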
Additionally, if you need a sound protocol you can also check out Protobuf, Google's data interchange format.
You could use a zip (rar / 7z / tar.gz / ...) library. Many exist, most are well tested, and it'll likely save you some time.
Possibly not as much fun though.
I agree in that it doesn't really sound like you need a new format, or a binary one.
If you truly want a binary format, why not consider one of these first:
Binary XML (fast infoset, Bnux)
Hessian
Google protocol buffers
But besides that, many textual formats should work just fine (or perhaps better) too; easier to debug, extensive tool support, and they compress to about the same size as binary (binary compresses poorly, and information theory suggests that for the same effective information, the same compression rate is achieved; this has been true in my testing).
So perhaps also consider:
JSON works well; binary support via base64 (with, say, http://jackson.codehaus.org/)
XML not too bad either; efficient streaming parsers, some with base64 support (http://woodstox.codehaus.org/, "typed access API" under 'org.codehaus.stax2.typed.TypedXMLStreamReader').
So it kind of sounds like you just want to build something of your own. Nothing wrong with that, as a hobby, but if so you need to consider it as such.
It likely is not a requirement for the system you are building.
Perhaps you could explain how this is better than using an existing file format such as JAR.
Most standard file formats of this type just use a CRC, as it's faster to calculate. MD5 is more appropriate if you want to prevent deliberate modification.
Bencode could be the way to go.
Here's an excellent implementation by Daniel Spiewak.
Unfortunately, the bencode spec doesn't support UTF-8, which is a showstopper for me.
I might come back to this later, but currently XML seems like a better choice (with blobs serialized as a Map).
