Memory-efficient XML manipulation in Java

We are in the process of implementing a transactional system that has two backend components:
Component A generates an initial XML response
Component B modifies the initial response XML
The resulting XML is sent back to the requestor. Since we are likely doing this under heavy load, I'd like to do this in a very CPU/memory efficient way.
What is the best way to perform the above while keeping a tight leash on overall memory utilization?
Specifically, is my best bet to do a DOM parse of the output of Component A and pass that to Component B to modify in memory? Is there a better way to do this using SAX, which may be more memory efficient? Are there standard libraries that do this via SAX or DOM?
Thanks for any insights.
-Raj

Generally, SAX is more memory-efficient than DOM, because the entire document does not need to be loaded into memory for processing. The answer, however, depends on the specifics of your "Component B modifies the initial response XML" requirements.
If each change is local to its own XML sub-tree (i.e. you may need data from all nodes leading to the root of the tree, but not siblings), SAX will work better.
If the changes require referencing siblings to produce the results, DOM will work better, because it would let you avoid constructing your own data structure for storing the siblings.
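For the local-change case, a SAX filter can apply the modification while the document streams through, so only a small window of the document is ever in memory. A minimal sketch, assuming Component B's change is something like stamping a status attribute onto a hypothetical <order> element (the element name and the rewrite rule are made up):

import javax.xml.parsers.SAXParserFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXSource;
import javax.xml.transform.stream.StreamResult;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.AttributesImpl;
import org.xml.sax.helpers.XMLFilterImpl;

// Hypothetical filter: Component B's change is assumed to be local, e.g.
// stamping a status attribute onto every <order> element as it streams past.
class StatusFilter extends XMLFilterImpl {
    @Override
    public void startElement(String uri, String localName, String qName, Attributes atts)
            throws SAXException {
        if ("order".equals(localName)) {                 // assumed element name
            AttributesImpl modified = new AttributesImpl(atts);
            modified.addAttribute("", "status", "status", "CDATA", "processed");
            super.startElement(uri, localName, qName, modified);
        } else {
            super.startElement(uri, localName, qName, atts);
        }
    }
}

public class StreamingRewrite {
    public static void main(String[] args) throws Exception {
        SAXParserFactory spf = SAXParserFactory.newInstance();
        spf.setNamespaceAware(true);
        XMLReader reader = spf.newSAXParser().getXMLReader();

        StatusFilter filter = new StatusFilter();
        filter.setParent(reader);

        // An identity transform pulls events through the filter and serializes
        // them, so the document is never fully materialized in memory.
        TransformerFactory.newInstance().newTransformer().transform(
                new SAXSource(filter, new InputSource("componentA-output.xml")),
                new StreamResult(System.out));
    }
}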

An aspect or filter on Component B that applies an XSLT transformation to the initial XML response might be a clean way to accomplish this. The memory utilization depends on the size of the request and the number of instances in memory; CPU will depend on these two factors as well.
DOM requires that the whole XML document be resident in memory before you modify it. If it's just a couple of elements that have to change, then SAX is a good alternative.
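If you go the XSLT-filter route suggested above, the JAXP wiring is small. A sketch assuming a hypothetical component-b.xsl stylesheet holding Component B's changes; note that most XSLT 1.0 processors still build the whole source tree in memory, so this trades memory for convenience. The compiled Templates object is shared across requests, which is the documented thread-safe pattern, and a cheap Transformer is created per request:

import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical wiring: component-b.xsl carries Component B's modifications.
public class ComponentBXsltFilter {
    private final Templates templates;

    public ComponentBXsltFilter() throws Exception {
        // Compile the stylesheet once; Templates is thread-safe and reusable.
        templates = TransformerFactory.newInstance().newTemplates(
                new StreamSource(getClass().getResourceAsStream("/component-b.xsl")));
    }

    public void apply(InputStream componentAResponse, OutputStream out) throws Exception {
        Transformer t = templates.newTransformer();   // cheap, per-request
        t.transform(new StreamSource(componentAResponse), new StreamResult(out));
    }
}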

SAX is an event-based parsing API. You are notified of events such as startDocument(), startElement(), endElement(), etc., and you save in memory only the things you wish to save. Because you handle only the events you care about, parsing can be faster and memory use lower. How memory-efficient it is depends on what and how much you keep in memory. For the general case, SAX is more memory-efficient than DOM, which keeps the entire document in memory in order to process it.
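For illustration, a minimal handler that keeps only the text of hypothetical <id> elements and discards everything else as it streams past (the element name is made up):

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Memory use stays proportional to what you choose to save, not to the file size.
public class IdCollector extends DefaultHandler {
    private final List<String> ids = new ArrayList<>();
    private final StringBuilder text = new StringBuilder();
    private boolean inId;

    @Override public void startElement(String uri, String local, String qName, Attributes atts) {
        if ("id".equals(qName)) { inId = true; text.setLength(0); }
    }
    @Override public void characters(char[] ch, int start, int length) {
        if (inId) text.append(ch, start, length);
    }
    @Override public void endElement(String uri, String local, String qName) {
        if ("id".equals(qName)) { ids.add(text.toString()); inId = false; }
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        IdCollector handler = new IdCollector();
        parser.parse(new File("large.xml"), handler);
        System.out.println(handler.ids.size() + " ids collected");
    }
}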

Related

Parsing huge XML with non-forward cursor movement

I'm creating a task to parse two large XML files and find a 1-to-1 relation between their elements. I am completely unable to keep the whole file in memory, and I have to "jump" around the file to check up to n^2 combinations.
I am wondering what approach I could take to navigate between nodes without killing my machine. I did some reading on StAX and I liked the idea, but the cursor moves one way only and I would have to go back to check different possibilities.
Could you suggest any other approach? I need one that allows commercial use.
I'd probably consider reading the first file into some sort of structured cache and then read the 2nd XML document, referencing against this cache (the cache could actually be a DB - it doesn't need to be in memory).
Otherwise there's no real solution (that I know of) unless you can read the whole file into memory. The cache approach ought to perform better anyway than going back and forth across the DOM of an XML document.
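A minimal StAX sketch of that cache-and-stream idea, assuming hypothetical <item key="..." value="..."> elements and a set of join keys small enough for a HashMap (a database-backed map could be substituted for the HashMap):

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.FileInputStream;
import java.util.HashMap;
import java.util.Map;

// Two forward-only passes: cache a projection of file A, then probe it while
// streaming file B, so neither document is ever fully in memory as a tree.
public class StreamingJoin {
    public static void main(String[] args) throws Exception {
        XMLInputFactory f = XMLInputFactory.newInstance();

        // Pass 1: build a lightweight cache of file A.
        Map<String, String> cache = new HashMap<>();
        XMLStreamReader a = f.createXMLStreamReader(new FileInputStream("a.xml"));
        while (a.hasNext()) {
            if (a.next() == XMLStreamConstants.START_ELEMENT && "item".equals(a.getLocalName())) {
                cache.put(a.getAttributeValue(null, "key"), a.getAttributeValue(null, "value"));
            }
        }
        a.close();

        // Pass 2: stream file B forward only, probing the cache for matches.
        XMLStreamReader b = f.createXMLStreamReader(new FileInputStream("b.xml"));
        while (b.hasNext()) {
            if (b.next() == XMLStreamConstants.START_ELEMENT && "item".equals(b.getLocalName())) {
                String key = b.getAttributeValue(null, "key");
                if (cache.containsKey(key)) {
                    System.out.println("matched " + key);
                }
            }
        }
        b.close();
    }
}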
One solution would be an XML database. These usually have good join optimizers so as well as saving memory they may be able to avoid the O(n^2) elapsed time.
Another solution would be XSLT, using xsl:key to do "manual" optimization of the join logic.
If you explain the logic in more detail there may turn out to be other solutions using XSLT 3.0 streaming.

Does the complexity of the XML structure have an influence on parsing speed?

From "parsing speed" point of view, how much influence(if any) has number of attributes and depth of XML document on parsing speed?
Is it better to use more elements or as many attributes as possible?
Is "deep" XML structure hard to read?
I am aware that if I would use more attributes, XML would be not so heavy and that adapting XML to parser is not right way to create XML file
thanks
I think it depends on whether you are doing validation or not. If you are validating against a large and complex schema, then proportionately more time is likely to be spent doing the validation than for a simple schema.
For non-validating parsers, the complexity of the schema probably doesn't matter much. The performance will be dominated by the size of the XML.
And of course performance also depends on the kind of parser you are using. A DOM parser will generally be slower because you have to build a complete in-memory representation before you start. With a SAX parser, you can just cherry-pick the parts you need.
Note however that my answer is based on intuition. I'm not aware of anyone having tried to measure the effects of XML complexity on performance in a scientific fashion. For a start, it is difficult to actually characterize XML complexity. And people are generally more interested in comparing parsers for a given sample XML than in teasing out whether input complexity is a factor.
Performance is a property of an implementation. Different parsers are different. Don't try to get theoretical answers about performance, just measure it.
Is it better to use more elements or as many attributes as possible?
What has that got to do with performance of parsing? I find it very hard to believe that any difference in performance will justify distorting your XML design. On the contrary, using a distorted XML design in the belief that it will improve parsing speed will almost certainly end up giving you large extra costs in the applications that generate and consume the XML.
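The "just measure it" advice needs nothing beyond the JDK. A crude timing sketch follows; for any serious comparison, repeat the runs so the JIT warms up, or use a proper benchmark harness:

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;
import java.io.File;
import org.xml.sax.helpers.DefaultHandler;

// Parse the same sample file with DOM and SAX and compare rough wall-clock times.
public class ParseTiming {
    public static void main(String[] args) throws Exception {
        File sample = new File(args[0]);

        long t0 = System.nanoTime();
        DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(sample);
        long dom = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        SAXParserFactory.newInstance().newSAXParser().parse(sample, new DefaultHandler());
        long sax = System.nanoTime() - t1;

        System.out.printf("DOM: %d ms, SAX: %d ms%n", dom / 1_000_000, sax / 1_000_000);
    }
}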
If you are using a SAX parser, it does not matter whether the XML is large or not, since it is a top-down parser and does not hold the full XML in memory. For DOM it matters, because DOM holds the full XML in memory. You can get some idea of how XML parsers compare in my blog post here.

Parsing binary data in Java - high volume, single thread

I need to parse (and transform and write) a large binary file (larger than memory) in Java. I also need to do so as efficiently as possible in a single thread. And, finally, the format being read is very structured, so it would be good to have some kind of parser library (so that the code is close to the complex specification).
The amount of lookahead needed for parsing should be small, if that matters.
So my questions are:
How important is nio v io for a single threaded, high volume application?
Are there any good parser libraries for binary data?
How well do parsers support streaming transformations (I want to be able to stream the data being parsed to some output during parsing - I don't want to have to construct an entire parse tree in memory before writing things out)?
On the nio front my suspicion is that nio isn't going to help much, as I am likely disk limited (and since it's a single thread, there's no loss in simply blocking). Also, I suspect io-based parsers are more common.
Let me try to explain if and how Preon addresses all of the concerns you mention:
I need to parse (and transform and write) a large binary file (larger than memory) in Java.
That's exactly why Preon was created. You want to be able to process the entire file, without loading it into memory entirely. Still, the program model gives you a pointer to a data structure that appears to be in memory entirely. However, Preon will try to load data as lazily as it can.
To explain what that means, imagine that somewhere in your data structure, you have a collection of things that are encoded in a binary representation with a constant size; say that every element will be encoded in 20 bytes. Then Preon will first of all not load that collection in memory at all, and if you're grabbing data beyond that collection, it will never touch that region of your encoded representation at all. However, if you would pick the 300th element of that collection, it would (instead of decoding all elements up to the 300th element), calculate the offset for that element, and jump there immediately.
From the outside, it is as though you have a reference to a list that is fully populated. From the inside, it only goes out to grab an element of the list if you ask for it. (And forget about it immediately afterward, unless you instruct Preon to do things differently.)
I also need to do so as efficiently as possible in a single thread.
I'm not sure what you mean by efficiently. It could mean efficiently in terms of memory consumption, or efficiently in terms of disk IO, or perhaps you mean it should be really fast. I think it's fair to say that Preon aims to strike a balance between an easy programming model, memory use and a number of other concerns. If you really need to traverse all data in a sequential way, then perhaps there are ways that are more efficient in terms of computational resources, but I think that would come at the cost of "ease of programming".
And, finally, the format being read is very structured, so it would be good to have some kind of parser library (so that the code is close to the complex specification).
The way I implemented support for Java byte code, is to just read the byte code specification, and then map all of the structures they mention in there directly to Java classes with annotations. I think Preon comes pretty close to what you're looking for.
You might also want to check out preon-emitter, since it allows you to generate annotated hexdumps (such as in this example of the hexdump of a Java class file) of your data, a capability that I haven't seen in any other library. (Hint: make sure you hover with your mouse over the hex numbers.)
The same goes for the documentation it generates. The aim has always been to make sure it creates documentation that could be posted to Wikipedia, just like that. It may not be perfect yet, but I'm not unhappy with what it's currently capable of doing. (For an example: this is the documentation generated for Java's class file specification.)
The amount of lookahead needed for parsing should be small, if that matters.
Okay, that's good. In fact, that's even vital for Preon. Preon doesn't support lookahead. It does support looking back, though. (That is, sometimes part of the encoding mechanism is driven by data that was read before. Preon allows you to declare dependencies that point back to data read before.)
Are there any good parser libraries for binary data?
Preon! ;-)
How well do parsers support streaming transformations (I want to be able to stream the data being parsed to some output during parsing - I don't want to have to construct an entire parse tree in memory before writing things out)?
As I outlined above, Preon does not construct the entire data structure in memory before you can start processing it. So, in that sense, you're good. However, there is nothing in Preon supporting transformations as first-class citizens, and its support for encoding is limited.
On the nio front my suspicion is that nio isn't going to help much, as I am likely disk limited (and since it's a single thread, there's no loss in simply blocking). Also, I suspect io-based parsers are more common.
Preon uses NIO, but only its support for memory-mapped files.
On NIO vs IO you are right; going with IO should be the right choice - less complexity, stream-oriented, etc.
For a binary parsing library, check out Preon.
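As a baseline for the plain-IO route, a sketch of a single-threaded read-transform-write loop over buffered streams; the record layout (an int id followed by a double value) is invented for illustration, and nothing beyond the current record is ever held in memory:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.FileOutputStream;

public class StreamingTransform {
    public static void main(String[] args) throws Exception {
        try (DataInputStream in = new DataInputStream(
                     new BufferedInputStream(new FileInputStream("input.bin"), 1 << 16));
             DataOutputStream out = new DataOutputStream(
                     new BufferedOutputStream(new FileOutputStream("output.bin"), 1 << 16))) {
            while (true) {
                int id;
                try {
                    id = in.readInt();
                } catch (EOFException eof) {
                    break;                       // clean end of input
                }
                double value = in.readDouble();
                out.writeInt(id);
                out.writeDouble(value * 2.0);    // stand-in for the real transformation
            }
        }
    }
}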
Using a Memory Mapped File you can read through it without worrying about your memory and it's fast.
I think you are correct re NIO vs IO unless you have little endian data as NIO can read little endian natively.
I am not aware of any fast binary parsers, generally you want to call the NIO or IO directly.
Memory mapped files can help with writing from a single thread as you don't have to flush it as you write. (But it can be more cumbersome to use)
You can stream the data however you like; I don't foresee any problems.
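For the memory-mapped route, a minimal read sketch; a single MappedByteBuffer is limited to 2 GB, so a file larger than memory would be mapped in windows, and only one window is shown here:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedRead {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("input.bin", "r");
             FileChannel channel = raf.getChannel()) {
            // Map at most 256 MB at a time; subsequent windows would start at a new offset.
            long window = Math.min(channel.size(), 256L * 1024 * 1024);
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, window);
            long checksum = 0;
            while (buffer.remaining() >= 4) {
                checksum += buffer.getInt();     // stand-in for real record parsing
            }
            System.out.println("checksum of first window: " + checksum);
        }
    }
}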

Output large xml from resultset

We have an application in which an XML string is created from a stored proc resultset and transformed using XSLT to return to the calling servlet. This works fine with smaller datasets but causes an out-of-memory error with large amounts of data. What would be the ideal solution in this case?
XSLT transformations, in general, require the entire dataset to be loaded into memory, so the easiest thing is to get more memory.
If you can rewrite your XSLT, there are Streaming Transformations for XML (STX), which allow for incremental processing of data.
If you're processing the entire XML document at once then it sounds like you'll need to allocate more memory to the Java heap. But that only works up to the defined maximum heap size. Do you know a reasonable maximum data set size or is it unbounded?
Why do you need the database to generate the XML?
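If it doesn't have to, one alternative is to stream the XML straight from the ResultSet with StAX rather than building one large string first. A sketch with made-up table and column names; note that a non-streaming XSLT step afterwards would still pull the whole tree into memory:

import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Writes each row as it is fetched; the output stream would typically be the
// servlet response (or the input to a streaming transform).
public class ResultSetToXml {
    public static void write(Connection conn, OutputStream out) throws Exception {
        XMLStreamWriter xml = XMLOutputFactory.newInstance().createXMLStreamWriter(out, "UTF-8");
        xml.writeStartDocument("UTF-8", "1.0");
        xml.writeStartElement("rows");
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM customer")) {
            while (rs.next()) {
                xml.writeStartElement("row");
                xml.writeAttribute("id", rs.getString("id"));
                xml.writeCharacters(rs.getString("name"));
                xml.writeEndElement();
            }
        }
        xml.writeEndElement();
        xml.writeEndDocument();
        xml.flush();
        xml.close();
    }
}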
A few important things to note.
You mentioned it works fine functionally with a small data set but goes out of memory with large data sets. You need to identify whether it is the creation of the dataset or its transfer within the same process that causes the out-of-memory error.
You are doing something that keeps many objects in memory. Re-check your code and null out references explicitly after use; this will make life easier for the garbage collector. You can also play with the JVM's MaxPermSize setting, which gives additional space for strings.
This approach has the limitation that, even if you are able to transfer large datasets for a single user, it might still go out of memory with multiple users.
A suggestion that might work for you.
Break this into an asynchronous process: make the creation of the large dataset one process and the downloading of that dataset a different process.
While making the dataset available for download, you can control memory consumption well by using stream-based downloading.

Java: Serializing a huge amount of data to a single file

I need to serialize a huge amount of data (around 2gigs) of small objects into a single file in order to be processed later by another Java process. Performance is kind of important. Can anyone suggest a good method to achieve this?
Have you taken a look at google's protocol buffers? Sounds like a use case for it.
I don't know why Java Serialization got voted down, it's a perfectly viable mechanism.
It's not clear from the original post, but is all 2G of data in the heap at the same time? Or are you dumping something else?
Out of the box, Serialization isn't the "perfect" solution, but if you implement Externalizable on your objects, Serialization can work just fine. Serialization's big expense is figuring out what to write and how to write it. By implementing Externalizable, you take those decisions out of its hands, thus gaining quite a boost in performance and a space savings.
While I/O is a primary cost of writing large amounts of data, the incidental costs of converting the data can also be very expensive. For example, you don't want to convert all of your numbers to text and then back again; better to store them in a more native format if possible. ObjectOutputStream and ObjectInputStream have methods to read/write the native types in Java.
If all of your data is designed to be loaded in to a single structure, you could simply do ObjectOutputStream.writeObject(yourBigDatastructure), after you've implemented Externalizable.
However, you could also iterate over your structure and call writeObject on the individual objects.
Either way, you're going to need some "objectToFile" routine, perhaps several. And that's effectively what Externalizable provides, as well as a framework to walk your structure.
The other issue, of course, is versioning, etc. But since you implement all of the serialization routines yourself, you have full control over that as well.
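A minimal Externalizable sketch along those lines, for a hypothetical small record type; the fields and the periodic reset() hint are illustrative, not part of the answer above:

import java.io.BufferedOutputStream;
import java.io.Externalizable;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;
import java.util.List;

// The fields are written as raw primitives, so the stream carries almost no
// per-object class metadata beyond the first occurrence.
public class Sample implements Externalizable {
    private int id;
    private double value;

    public Sample() { }                               // required no-arg constructor
    public Sample(int id, double value) { this.id = id; this.value = value; }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(id);
        out.writeDouble(value);
    }
    @Override public void readExternal(ObjectInput in) throws IOException {
        id = in.readInt();
        value = in.readDouble();
    }

    public static void writeAll(List<Sample> samples, String path) throws IOException {
        try (ObjectOutputStream oos = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(path), 1 << 16))) {
            for (Sample s : samples) {
                oos.writeObject(s);
            }
            // Calling oos.reset() every few thousand objects keeps the stream's
            // back-reference table from pinning all written objects in memory.
        }
    }
}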
The simplest approach coming immediately to my mind is using a memory-mapped buffer from NIO (java.nio.MappedByteBuffer). Use a single buffer (approximately) corresponding to the size of one object and flush/append them to the output file when necessary. Memory-mapped buffers are very efficient.
Have you tried java serialization? You would write them out using an ObjectOutputStream and read 'em back in using an ObjectInputStream. Of course the classes would have to be Serializable. It would be the low effort solution and, because the objects are stored in binary, it would be compact and fast.
I developed JOAFIP as a database alternative.
Apache Avro might also be useful. It's designed to be language-independent and has bindings for the popular languages.
Check it out.
Protocol buffers: makes sense. Here's an excerpt from their wiki: http://code.google.com/apis/protocolbuffers/docs/javatutorial.html
Getting More Speed
By default, the protocol buffer compiler tries to generate smaller files by using reflection to implement most functionality (e.g. parsing and serialization). However, the compiler can also generate code optimized explicitly for your message types, often providing an order of magnitude performance boost, but also doubling the size of the code. If profiling shows that your application is spending a lot of time in the protocol buffer library, you should try changing the optimization mode. Simply add the following line to your .proto file:
option optimize_for = SPEED;
Re-run the protocol compiler, and it will generate extremely fast parsing, serialization, and other code.
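For the original "huge number of small objects in one file" use case, the generated Java messages provide writeDelimitedTo/parseDelimitedFrom, which length-prefix each record. A sketch assuming a hypothetical Point message compiled from a .proto with x and y fields (the generated class is not shown here):

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

// "Point" is a hypothetical protoc-generated message; the setters depend on the .proto.
public class ProtobufStreamExample {
    public static void main(String[] args) throws Exception {
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream("points.bin"))) {
            for (int i = 0; i < 1_000_000; i++) {
                Point.newBuilder().setX(i).setY(i * 2).build().writeDelimitedTo(out);
            }
        }
        try (InputStream in = new BufferedInputStream(new FileInputStream("points.bin"))) {
            Point p;
            long count = 0;
            while ((p = Point.parseDelimitedFrom(in)) != null) {   // null at end of stream
                count++;
            }
            System.out.println(count + " points read");
        }
    }
}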
You should probably consider a database solution: all a database does is optimize the storage of your information, and if you use Hibernate you keep your object model as-is and don't really even think about your DB (I believe that's why it's called Hibernate: just store your data off, then bring it back).
If performance is very important then you need to write it yourself. You should use a compact binary format, because with 2 GB the disk I/O operations dominate; if you use any human-readable format like XML, you increase the data size by a factor of 2 or more.
Depending on the data, it can be sped up by compressing the data on the fly with a low compression level.
A total no-go is Java serialization, because on reading Java checks for every object whether it is a reference to an existing object.
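A sketch of that hand-rolled compact binary route with light on-the-fly compression; the record fields are invented, and Deflater.BEST_SPEED keeps the compression cheap:

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

public class CompactWriter {
    public static void main(String[] args) throws Exception {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);   // low compression level
        try (DataOutputStream out = new DataOutputStream(
                new DeflaterOutputStream(
                        new BufferedOutputStream(new FileOutputStream("data.bin.deflate"), 1 << 16),
                        deflater, 1 << 16))) {
            for (int i = 0; i < 10_000_000; i++) {
                out.writeInt(i);             // id
                out.writeDouble(i * 0.5);    // value
            }
        } finally {
            deflater.end();                  // release native zlib memory
        }
    }
}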
