Is storing temporary files into JackRabbit a good idea? - java

Does anybody know how much overhead Jackrabbit has in comparison with pure filesystem persistence?
I'm using it for a CMS project, but I also have to persist temporary files (that unfortunately have properties/metadata)... I don't know if I should also employ Jackrabbit for that.
I think the overhead is significant enough to avoid this... at least the I/O on the filesystem.
These files are the same as the rest of the files in the repo, but it is certain that they will be deleted within a minute.
Should I create a layer to persist files with properties via the Java I/O API, should I use Jackrabbit, or should I use a database? If so, can it be tuned for performance somehow?

By default, Jackrabbit stores binaries in the FileDataStore, which uses a FileOutputStream, so the overhead is relatively low. However, binaries in the data store remain until garbage collected, which might be a problem for you if you create a huge number of temporary files.
Metadata: it depends on how much metadata you have. The metadata is stored in the persistence manager and possibly in the search index (Lucene). The main performance problem there is usually fulltext search, so disable it if possible.
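For reference, storing a temporary file together with a couple of metadata properties through the plain JCR API looks roughly like the sketch below (the folder name, file name, and mix:title mixin are only example choices, not something Jackrabbit requires; the Session is assumed to already exist):

import javax.jcr.Binary;
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Calendar;

// Sketch: store one temporary file plus metadata via plain JCR.
// "tempFiles", "upload.tmp" and mix:title are illustrative only.
void storeTempFile(Session session) throws RepositoryException, IOException {
    try (InputStream in = new FileInputStream("upload.tmp")) {
        Node folder = session.getRootNode().addNode("tempFiles", "nt:folder");
        Node file = folder.addNode("upload.tmp", "nt:file");
        Node content = file.addNode("jcr:content", "nt:resource");
        Binary binary = session.getValueFactory().createBinary(in); // binary ends up in the DataStore
        content.setProperty("jcr:data", binary);
        content.setProperty("jcr:mimeType", "application/octet-stream");
        content.setProperty("jcr:lastModified", Calendar.getInstance());
        file.addMixin("mix:title");                      // metadata on the file node
        file.setProperty("jcr:title", "temporary upload");
        session.save();
    }
}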
should I use jackrabbit or should I use database
That really depends on your use case. Jackrabbit does not claim to be "faster than a database", but the data model (hierarchical, key value pairs) may be better or easier to use.

Related

The best way to store large set of objects with efficient update/append operations

What is my specific use case?
I have a set of objects representing, e.g., profiles. Objects can be modified (updated), deleted, or added. Each object has several properties, but modifying a single property value just marks the whole object as "modified" (so from the persistence layer's point of view, an object is atomic). There are no relations between the objects.
The size of the set is between 10 and 50,000 objects (but theoretically there's no limit; the user can append additional objects). A single object's size is up to 500 KB (but usually it will be smaller, about 60 KB).
Objects should be read and updated as fast as possible. There's also one more key requirement: they should be persisted on the hard disk with the possibility to copy or move them. My app is written in Java and runs on Windows 7-10.
What was my initial approach?
I came to the conclusion that each object can easily be represented as a single JSON file. The problem lies in keeping such a large set of files on disk: the Windows filesystem doesn't seem to be good at handling that many (even small) files.
Then I thought that my files can be stored in virtual filesystem. The first obvious solution was to pack them in ZIP archive in such way:
profiles.zip:
--- profile1.json
--- profile2.json
...
--- profile10000.json
It would be a great solution in terms of portability, and the read performance is also OK. BUT, it seems that new objects can't be appended to a ZIP archive without copying all the files stored in the archive... Or at least I didn't find a way to do it.
What should I do then...?
I've searched for other solutions. I consider using:
A fast relational database - but that feels like taking a sledgehammer to crack a nut, especially since I don't need to handle relations or transactions (I don't even need a server; it is only for one local user).
NoSQL object databases, e.g. MapDB or Nitrite - they sound OK, but I couldn't find any reliable comparisons or popularity ratings. It is important for me to pick a credible solution (a sketch of what I have in mind with MapDB follows after this list).
Some other virtual filesystems that can be managed in Java? Maybe I missed something?
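For the MapDB option, what I have in mind would look roughly like this sketch (file name, map name, and settings are only guesses based on the MapDB 3.x builder API; jsonBytes stands for a serialized profile):

import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.Serializer;
import java.util.concurrent.ConcurrentMap;

// Sketch: one portable file holding a persistent map of profile id -> JSON bytes.
DB db = DBMaker.fileDB("profiles.db")
        .transactionEnable()          // crash safety at the cost of some write speed
        .fileMmapEnableIfSupported()  // faster access on 64-bit JVMs
        .make();
ConcurrentMap<String, byte[]> profiles = db
        .hashMap("profiles", Serializer.STRING, Serializer.BYTE_ARRAY)
        .createOrOpen();

profiles.put("profile1", jsonBytes);  // add or update one object (jsonBytes is the serialized profile)
byte[] loaded = profiles.get("profile1");
profiles.remove("profile1");
db.commit();
db.close();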
Could you provide any ideas or advice based on experience? I need fast read/update of whole objects in large datasets with portability (achievable in Java on Windows).
It is very hard to answer the question unless we know the size of each object in memory. One suggestion I can give is to try hybrid frameworks which support in memory access as well as persistence to disk.
Ehcache is one of the frameworks which I think will work for you, and it easily supports 50,000 objects in memory. Even Couchbase supports similar options and the flexibility of immediate or eventual persistence.
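If you go the Ehcache route, a rough sketch of a setup with a persistent disk tier could look like this (cache name, sizes, and directory are placeholders; the profile is assumed to be serialized to bytes, here as jsonBytes):

import java.io.File;
import org.ehcache.Cache;
import org.ehcache.PersistentCacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

// Sketch: hot profiles kept on the heap, the full data set persisted on disk.
PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
        .with(CacheManagerBuilder.persistence(new File("profiles-store")))
        .withCache("profiles",
                CacheConfigurationBuilder.newCacheConfigurationBuilder(
                        String.class, byte[].class,
                        ResourcePoolsBuilder.newResourcePoolsBuilder()
                                .heap(1000, EntryUnit.ENTRIES)     // RAM cache for fast reads
                                .disk(4, MemoryUnit.GB, true)))    // persistent disk tier
        .build(true);

Cache<String, byte[]> profiles = cacheManager.getCache("profiles", String.class, byte[].class);
profiles.put("profile1", jsonBytes);   // jsonBytes: the serialized profile (assumed)
byte[] loaded = profiles.get("profile1");
cacheManager.close();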

is h2 a persistent alternative to java collections with disk backend

I'm still looking for Java collections that are persistent and have comparable access times. The real data should stay on disk, but for faster access times I need a cache in RAM, so I can stream the content from the file into main memory.
I read that H2 has such a cache function. Is there an option to cache the whole file on startup?
And can somebody say something about the performance?
Currently, I have more than 100,000 items in a Java HashMap (the value is a custom class which contains a byte array).
Thank you!
Partially. The H2 MVStore can be used as a persistent java.util.Map, but not as a list, stack, or similar. The H2 database is a relational database with SQL and JDBC APIs, and the latest version uses the MVStore as the default storage engine.
Other projects such as MapDB support features similar to the MVStore.
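As a rough sketch of the MVStore usage (file name and cache size are just examples; as far as I know cacheSize is a read cache in MB rather than an option to preload the whole file at startup):

import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

// Sketch: a persistent Map backed by a single file, with a RAM read cache.
MVStore store = new MVStore.Builder()
        .fileName("items.mvstore")
        .cacheSize(256)              // read cache in MB (tune to your data size)
        .open();
MVMap<String, byte[]> items = store.openMap("items");

items.put("key1", new byte[] {1, 2, 3});
byte[] value = items.get("key1");
store.commit();   // persist changes
store.close();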

JackRabbit persistence managers clarification

I'm trying to decide what type of persistence manager to use for my project. I read this wiki entry about persistenceManagers.
First of all, due to JCR-2802 (all non-bundle PMs are deprecated), there are only:
BundleFsPersistenceManager
BundleDbPersistenceManager
MySQL, H2, PostgreSQL, Oracle, Derby, and MSSQL PersistenceManagers
and all those InMem, Object, and XML PersistenceManagers are deprecated. (Is MemoryFileSystem still OK while InMemPM is deprecated?)
So, as I see it, BundleFsPersistenceManager uses LocalFileSystem to persist files on the filesystem (is there a wiki entry that explains how content is stored into files, e.g. different types of node properties such as nt:file?), and BundleDbPersistenceManager uses DbFileSystem to store the exact same files in a DBMS? Otherwise Lucene indexing and full-text searching wouldn't be possible, right?
So the reasons are clustering, the distributed nature of systems, and atomicity... otherwise the database implementation would be redundant, right? This way people have more choices.
MemoryFileSystem still OK while InMemPM is deprecated?
Yes... It's a bit sad that the in-memory persistence manager is deprecated, because it allows running fast unit tests. However, you could also use a database persistence manager together with an in-memory database (such as an in-memory H2 database).
is there a wiki entry that explains how content is stored into files?
No. This is an implementation detail and subject to change; you shouldn't ever need to parse or write such files yourself. Use the Jackrabbit API instead.
like different types of node properties such as nt:file
File content is stored in the DataStore. Node and property data, and the links to the data store, are stored by the persistence manager.
Otherwise Lucene indexing and full-text searching wouldn't be possible, right?
Lucene indexing is independent of the persistence manager and of the data format the persistence manager uses; the Lucene index doesn't access the persistence manager's data directly.
otherwise the database implementation would be redundant, right?
It's just that some people prefer storing all data in a database (for example because they already have a database and know very well how to operate / back up / maintain it). The majority seems to be OK with storing the data in the file system directly; however, there is no built-in transactional file-based persistence manager in Jackrabbit. For this, you would need to use a Jackrabbit extension such as the (commercial) CRX from Adobe (disclaimer: I work for Adobe).

Recommend an indexed file format that can be updated via random access in Java

I need an indexed file format that can hold a few hundred large, variable-sized binary blobs.
Blobs are around 1-5 MB and the file could be as large as 1 GB. I need to be able to quickly find, read, add, and remove blobs without recreating the entire file. I have no need to compress the blobs; however, if blobs are removed, I'd like to reclaim or reuse the space.
Ideally there would be a Java API.
I'm currently doing this with a ZIP format, but there's no known way to update a ZIP file without recreating it and performance is bad.
I've looked into SQLite, but its blob performance was slow, and it's overkill for my needs.
Any thoughts, or should I roll my own?
And if I do roll my own, any book or web page suggestions?
Berkeley DB Java Edition does what you need. It's free.
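A rough sketch of how such a blob store could look with the JE API (directory, database name, and key scheme are placeholders; blobBytes stands for your blob's bytes):

import java.io.File;
import java.nio.charset.StandardCharsets;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;

// Sketch: each blob is stored under a string key; JE reclaims space from
// deleted records through its log cleaner, so removed blobs free space over time.
EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setAllowCreate(true);
Environment env = new Environment(new File("blob-store"), envConfig);

DatabaseConfig dbConfig = new DatabaseConfig();
dbConfig.setAllowCreate(true);
Database db = env.openDatabase(null, "blobs", dbConfig);

DatabaseEntry key = new DatabaseEntry("blob-42".getBytes(StandardCharsets.UTF_8));
db.put(null, key, new DatabaseEntry(blobBytes));          // add or update (blobBytes assumed)

DatabaseEntry found = new DatabaseEntry();
if (db.get(null, key, found, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
    byte[] data = found.getData();                        // read
}
db.delete(null, key);                                     // remove

db.close();
env.close();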
You need some virtual file system. Our SolFS is one of the options, but we only provide a JNI layer, as the engine is written in C. There is one more option, CodeBase, but as they don't provide an evaluation version of their file system, I know little about it.
SolFS is ideally suitable for your task, because it lets you have alternative streams for files and associate searchable metadata with each file or even alternative stream.

How to efficiently manage files on a filesystem in Java?

I am creating a few JAX-WS endpoints, for which I want to save the received and sent messages for later inspection. To do this, I am planning to save the messages (XML files) into filesystem, in some sensible hierarchy. There will be hundreds, even thousands of files per day. I also need to store metadata for each file.
I am considering putting the metadata (just a couple of fields) into a database table, but the XML file content itself into files in a filesystem, in order not to bloat the database with content data (that is seldom read).
Is there some simple library that helps me in saving, loading, deleting etc. the files? It's not that tricky to implement it myself, but I wonder if there are existing solutions? Just a simple library that already provides easy access to the filesystem (preferably across different operating systems).
Or do I even need that, should I just go with raw/custom Java?
Is there some simple library that helps me in saving, loading, deleting etc. the files? It's not that tricky to implement it myself, but I wonder if there are existing solutions? Just a simple library that already provides easy access to the filesystem (preferably across different operating systems).
Java API
Well, if what you need to do is really simple, you should be able to achieve your goal with java.io.File (delete, check existence, read, write, etc.) and a few stream manipulations with FileInputStream and FileOutputStream.
You can also throw in Apache commons-io and its handy FileUtils for a few more utility functions.
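As a small illustration of how far plain java.io plus commons-io already gets you (directory layout and file names are only examples):

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;

// Sketch: save, load and delete one XML message in a date-based directory.
void example(String xmlContent) throws IOException {
    File dir = new File("messages" + File.separator + "2011-01-15");   // example hierarchy
    File msg = new File(dir, "request-0001.xml");

    FileUtils.writeStringToFile(msg, xmlContent, StandardCharsets.UTF_8);   // creates parent dirs and saves
    String loaded = FileUtils.readFileToString(msg, StandardCharsets.UTF_8); // load
    boolean deleted = msg.delete();                                          // delete
}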
Java is independent of the OS. You just need to make sure you use File.separator, or use the constructor File(File parent, String child), so that you don't need to mention the separator explicitly.
The Java file API is relatively high-level to abstract away the differences between the many OSes. Most of the time it's sufficient. It has some shortcomings only if you need a relatively OS-specific feature which is not in the API, e.g. checking the physical size of a file on disk (not the logical size), security rights on *nix, free space/quota of the hard drive, etc.
Most OSes have an internal buffer for file writing/reading. Using FileOutputStream.write and FileOutputStream.flush ensures the data has been handed to the OS, but not necessarily written to the disk. The Java API also supports this low-level integration to manage these buffering issues (example here) for systems such as databases.
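For completeness, forcing the data to the disk from Java looks roughly like this (a sketch; for this use case flush() is normally enough):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch: flush() hands the bytes to the OS; getFD().sync() asks the OS to
// write them to the physical disk before returning (like fsync).
void writeDurably(String path, String xmlContent) throws IOException {
    try (FileOutputStream out = new FileOutputStream(path)) {
        out.write(xmlContent.getBytes(StandardCharsets.UTF_8));
        out.flush();
        out.getFD().sync();
    }
}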
Also, both files and directories are abstracted by File, and you need to check with isDirectory(). This can be confusing, for instance if you have one file x and one directory /x (I don't remember exactly how to handle this issue, but there is a way).
Web service
The web service can use either xs:base64Binary to pass the data, or use MTOM (Message Transmission Optimization Mechanism) if files are large.
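Enabling MTOM on a JAX-WS endpoint is essentially a one-annotation change, roughly like this (class and method names are placeholders):

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.soap.MTOM;

// Sketch: with @MTOM, byte[] parameters (xs:base64Binary) are transmitted as
// binary attachments instead of inline base64, which helps for large files.
@MTOM
@WebService
public class MessageArchiveService {

    @WebMethod
    public void storeMessage(String id, byte[] content) {
        // persist content to the filesystem, metadata to the database
    }
}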
Transactions
Note that the database is transactional and the file system is not, so you might have to add a few checks in case operations fail and are retried.
You could go with a complicated design involving some form of distributed transaction (see this answer), or try to go with a simpler design that provides the level of robustness that you need. A possible design could be:
Update. If the user wants to overwrite a file, you actually create a new one. The level of indirection between the logical file name and the physical file is stored in the database. This way you never overwrite a physical file once it is written, to ensure rollback is consistent.
Create. Same story when the user wants to create a file.
Delete. If the user wants to delete a file, you do it only in the database first. A periodic job polls the file system to identify files which are not listed in the database and removes them. This two-phase delete ensures that the delete operation can be rolled back.
This is not as robust as writing BLOBs in a real transactional database, but it provides some robustness. You could otherwise have a look at commons-transaction, but I feel like the project is dead (2007).
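A minimal sketch of the update/create step under this design (saveMapping stands for your own DAO call; it is not a real API):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

// Sketch: every logical update writes a brand-new physical file; only the
// logicalName -> physicalName mapping changes inside the DB transaction,
// so a rollback simply leaves an orphan file for the cleanup job to remove.
void updateFile(String logicalName, byte[] xmlBytes) throws IOException {
    Path storageDir = Paths.get("messages");
    Files.createDirectories(storageDir);

    String physicalName = UUID.randomUUID().toString() + ".xml";
    Files.write(storageDir.resolve(physicalName), xmlBytes);

    saveMapping(logicalName, physicalName); // hypothetical DAO call, runs in the DB transaction
}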
There is DataNucleus, a Java persistence provider. It is a little too heavy for this case, but it supports the JPA and JDO Java standards with different datastores (RDBMS, object storage, XML, JSON, Excel, etc.). If the product is already using JPA or JDO, it might be worth considering using DataNucleus, as saving data into different datastores should be transparent. I suppose DataNucleus supports splitting the data into several files, creating the sensible directory/file structure I wanted (in my question), but this is just a guess.
Support for XML and JSON seems to be experimental.
