Ehcache without database table - java

I have an entity that is currently stored in my database via Hibernate.
I'd like to remove it from the database (I'm not interested in relating it to other data or querying it) and instead persist it in EHCache, dumping all of the data to a file once a day.
I was wondering if I could do that without having an entity linked to a database table.
What is your experience?

Unfortunately, it can't work the way you expect it to work.
Hibernate stores a dehydrated entity representation, so using the EHCache data directly would require you to implement the hydration/dehydration logic yourself.
If you plan on moving to a non-standard data store, like using a persistent cache as a database, you need more control than Hibernate offers you.
I would try to replace the Hibernate second-level cache with a service-layer caching implementation (e.g. Spring's caching abstraction or your own caching layer). This way you control how the data is serialized and deserialized.
But this is a significant amount of work, so I suggest you take a look at Redis.
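As a rough illustration, here is a minimal sketch of service-layer caching with Spring's caching abstraction; the service, entity, and cache names are placeholders, and the backing store (Ehcache, Redis, a file dump) is whatever CacheManager you configure:

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class PreferenceService {

    // On a cache miss, load from the slow backing store (file dump, DB, ...);
    // the configured CacheManager decides where the cached value lives.
    @Cacheable(cacheNames = "preferences", key = "#userId")
    public Preference findByUser(long userId) {
        return loadFromBackingStore(userId);
    }

    // Evict on update so the next read repopulates the cache.
    @CacheEvict(cacheNames = "preferences", key = "#preference.userId")
    public void update(Preference preference) {
        writeToBackingStore(preference);
    }

    private Preference loadFromBackingStore(long userId) { return new Preference(userId); }
    private void writeToBackingStore(Preference preference) { }
}

class Preference {
    private final long userId;
    Preference(long userId) { this.userId = userId; }
    public long getUserId() { return userId; }
}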

Related

JPA - What exactly does it mean for an entity object to persist? What is the definition of persistence?

I'm fairly new to java web applications and I am undertaking the task of learning JPA. However, it is not explicitly clear what it means for an entity object to persist. I think I have an idea, but I would rather not assume its meaning.
I am referencing the Oracle JPA documentation, but it keeps using words like "persist" or "persistence" when describing persistent fields/properties. Can someone shed some light on this idea of persistence? And maybe define what it means for an instance of an entity to be persistent?
And if you could not use the word "persistent" (or any form of the word) in your definition that would be much appreciated. A simple answer would be great, but more in-depth explanations are definitely welcome! Thanks so much!
Persistence simply means storing something permanently.
In Java we work with objects and need to store their values in a database (usually an RDBMS).
JPA provides an implementation of Object-Relational Mapping (ORM), so that we can store an object directly in the database as a new tuple (row).
In JPA, an object is mapped as an entity so that it can correspond to a table in the database.
So persisting an entity means permanently storing the object (entity) in the database.
Hope this helps!
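To make that concrete, here is a minimal JPA sketch (the Customer entity and DAO are made-up names): a new object lives only in memory until persist() is called inside a transaction, after which the corresponding row survives application shutdown.

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity                                   // this class maps to a database table
public class Customer {
    @Id @GeneratedValue
    private Long id;
    private String name;

    public void setName(String name) { this.name = name; }
}

class CustomerDao {
    void save(EntityManager em, String name) {
        Customer c = new Customer();      // transient: exists only in memory
        c.setName(name);
        em.persist(c);                    // managed: scheduled to be written to the table
        // once the transaction commits, the row outlives the JVM - that is persistence
    }
}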
"Persist" means "lives on after the application is shut down". The object is not just in volatile memory; it's in more permanent storage on disk. If the application is shut down, or the user ends their session and begins a new one, the old data is still available from permanent storage on disk.
Databases store information on disks, unless they are in-memory versions that give you the advantage of using SQL but little else. If you use a relational SQL database, you get a query language that makes it easy to Create/Read/Update/Delete information without having to worry about how it's stored on the disk.
SQL databases store relations on disk using various data structures (e.g. B-trees). Relations are defined in terms of tables and columns, and each record in a table is a tuple of row values. Applications have to map tables and columns to objects and attributes using object-relational mapping. JPA generalizes this idea and builds it into Java EE, following the example of implementations like TopLink and Hibernate.
NoSQL databases, like MongoDB, also store information on disk as documents rather than relations.
Object databases serialize an object and all its children using formats like Java serialization, XML, JSON, or custom formats (e.g. Google protocol buffers).
Graph databases, like Neo4J, can be thought of as more general cases of object databases.

Use hibernate when saving objects to the database

In my Java application I have some serialized entity classes with inheritance. When saving instances of these classes, I convert them to a byte array and save it to a longblob column in my database table. Is there any advantage to using Hibernate for this? As far as I understand, Hibernate is used to map entities to database tables in a proper way, but here I don't have a relational model to map the entities' attributes to; I am saving them as opaque objects. Am I missing something? Please clarify. Thanks in advance.
If you don't have a relational data model to save those objects and you can't change your schema, then you can use your current approach.
If you use PostgreSQL you might be interested in JSON storage as well. That way you can store your hierarchies using JSON objects and you can even run native SQL queries against them (although not inheritance-aware, but you can cope with that if you use some _class column to differ between object types).
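A rough sketch of that JSON-column idea, assuming a PostgreSQL jsonb column, a Jackson ObjectMapper for serialization, and made-up table/column names (the class column plays the role of the _class discriminator):

import java.sql.Connection;
import java.sql.PreparedStatement;

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonEntityStore {
    private final ObjectMapper mapper = new ObjectMapper();

    // Hypothetical table: CREATE TABLE entity_store(id bigserial, class text, body jsonb)
    public void save(Connection con, Object entity) throws Exception {
        String sql = "INSERT INTO entity_store(class, body) VALUES (?, ?::jsonb)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, entity.getClass().getName()); // discriminator for the object type
            ps.setString(2, mapper.writeValueAsString(entity));
            ps.executeUpdate();
        }
    }
}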
The cleanest approach is to keep the relational model in sync with your business domain model (a minimal mapping sketch follows this list). That way you can benefit from:
optimistic locking (preventing lost updates phenomena)
caching (2nd level cache and query cache)
query-able hierarchies
an external DBA could update your hierarchies using plain SQL
auditing
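For comparison, here is a minimal sketch of what a mapped hierarchy could look like; the class and column names are placeholders, with @Version providing the optimistic locking and the subclass fields staying query-able:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Inheritance;
import javax.persistence.InheritanceType;
import javax.persistence.Version;

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)   // hierarchy in one table with a discriminator
public class StoredDocument {

    @Id @GeneratedValue
    Long id;

    @Version        // optimistic locking: prevents lost updates
    long version;

    String title;
}

@Entity
class Invoice extends StoredDocument {
    String customerNumber;   // query-able: select i from Invoice i where i.customerNumber = :nr
}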

JDBC Query Caching and Precaching

Scenario:
I have a need to cache the results of database queries in my web service. There are about 30 tables queried during the cycle of a service call. I am confident data in a certain date range will be accessed frequently by the service, and I would like to pre-cache that data. This would mean caching around 800,000 rows at application startup; the data is read-only. The data does not need to be dynamically refreshed, this is reference data. The cache can't be loaded on each service call, there's simply too much data for that. Data outside of this 'frequently used' window is not time critical and can be lazy loaded. Most queries would return 1 row, and none of the tables have a parent/child relationship to each other, though there will be a few joins. There is no need for dynamic SQL support.
Options:
I intended to use myBatis, but there isn't a good method to warm up the cache. myBatis can't understand that the service query select * from table where key = ? is already covered by the startup pre-cache query select * from table.
As far as I understand it (documentation overload), Hibernate has the same problem. Additionally, these tables were designed with composite keys and no primary key, which is an extra hassle for Hibernate.
Question:
Preferred: Is there a myBatis solution for this problem ? I'd very much like to use it. (Familiarity, simplicity, performance, funny name, etc)
Alternatively: Is there an ORM or DB-friendly cache that offers what I'm looking for ?
You can use a distributed caching solution like NCache or TayzGrid, which provide indexing and query features along with a cache startup loader.
You can configure indexes on attributes of your entities in the cache. A cache startup loader can be configured to load all data from the database into the cache at cache startup. While loading the data, the cache will create indexes for all entities in memory.
The Object Query Language (OQL) feature, which provides SQL-like queries, can then be used to query the in-memory data.
The variety of options for third-party products (free and paid) is too broad and too dependent on your particular requirements and operational capabilities to try to "answer" here.
However, I will suggest an alternative to an explicit cache of your read-only data.
You clearly believe that the memory footprint of your dataset will fit into RAM on a reasonably sized server. My suggestion is that you use your database engine directly (no additional external cache), but configure the database with an internal cache large enough to hold your whole dataset. If all of your data resides in the database server's RAM, it will be accessed very quickly.
I have used this technique successfully with MySQL, but I expect the same applies to all major database engines. If you cannot figure out how to configure your chosen database appropriately, I suggest that you ask a separate, detailed question.
You can warm the cache by executing representative queries when you start your system. These queries will be relatively slow because they have to actually do the disk I/O to pull the relevant blocks of data into the cache. Subsequent queries that access the same blocks of data will be much faster.
This approach should give you a huge performance boost with no additional complexity in your code or your operational environment.
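As a hedged illustration of that warm-up step, the sketch below runs a handful of representative queries at startup over a plain JDBC DataSource; the table names, date column, and date literal are placeholders for your roughly 30 reference tables:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class CacheWarmer {

    // Placeholder queries covering the "hot" date range of the reference tables.
    private static final String[] WARMUP_QUERIES = {
        "SELECT * FROM rate_table      WHERE effective_date >= DATE '2024-01-01'",
        "SELECT * FROM zone_table      WHERE effective_date >= DATE '2024-01-01'",
        "SELECT * FROM surcharge_table WHERE effective_date >= DATE '2024-01-01'"
    };

    public void warmUp(DataSource ds) throws Exception {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement()) {
            for (String sql : WARMUP_QUERIES) {
                try (ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        // iterate so every row is actually fetched; the goal is only
                        // to pull the relevant blocks into the database buffer cache
                    }
                }
            }
        }
    }
}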
Sormula may do what you want. You would need to annotate each POJO to be cached like:
@Cached(type=ReadOnlyCache.class)
public class SomePojo {
...
}
Pre-populate the cache by invoking selectAll method for each:
Database db = new Database(/* one of the JNDI constructors */);
Table<SomePojo> t = db.getTable(SomePojo.class);
t.selectAll();
The key is that the cache is stored in the Table object, t. So you would need to keep a reference to t and use it for subsequent queries. Or, in the case of many tables, keep a reference to the database object, db, and use db.getTable(...) to get tables to query.
See javadoc and tests in org.sormula.tests.cache.readonly package.

XML vs. object trees

In my current project (an order management system built from scratch), we are handling orders in the form of XML objects which are saved in a relational database.
I would outline the requirements like this:
Selecting various details from anywhere in the order
Updating / enriching data (e.g. from the CRM system)
Keeping a record of the changes (invalidating old data, inserting new values)
Details of orders should be easily selectable by SQL queries (for 2nd level support)
What we did:
The serialization is done with proprietary code, disassembling the order into tables like customer, address, phone_number, order_position etc.
Whenever an order is processed a bit further (e.g. due to an incoming event), it is read completely from the database and assembled back into an XML document.
Selection of data is done by XPath (scattered over code).
Most updates are done directly in the database (the order will then be reloaded for the next step).
The problems we face:
The order structure (XSD) evolves with every release. Therefore the XPaths and the custom persistence often break and produce bugs.
We ended up with a mixture of working with the document and working with the database (because the persistence layer cannot persist the changes made in the document).
Performance is not really an issue (yet), since it is an offline system and orders are often intentionally delayed by days.
I do not expect free consultancy here, but I am a little confused on how the approach could be improved (next time, basically).
What would you think is a good solution for handling these requirements?
Would working with an object graph, something like JXPath and OGNL and an OR mapper be a better approach? Or using XML support of e.g. the Oracle database?
If your schema changes often, I would advise against using any kind of object-mapping. You'd keep changing boilerplate code just for the heck of it.
Instead, use the declarative schema definition to validate data changes and access.
Consider an order as a single datum, expressed as an XML document.
Use a document-oriented store like MongoDB, Cassandra or one of the many XML databases to manipulate the document directly. Don't bother with cutting it into pieces to store it in a relational db.
Making the data accessible via reporting tools in a relational database might be considered secondary. A simple map-reduce job on a MongoDB, for example, could populate the required order details into a relational database whenever required, separating the two use cases quite naturally.
The standard Java EE approach is to represent your data as POJOs and use JPA for the database access and JAXB to convert the objects to/from XML.
JPA
Object-to-Relational standard
Supported by all the application server vendors.
Multiple implementations available: EclipseLink, Hibernate, etc.
Powerful query language JPQL (that is very similar to SQL)
Handles query optimization for you.
JAXB
Object-to-XML standard
Supported by all the application server vendors.
Multiple implementations available: EclipseLink MOXy, Metro, Apache JaxMe, etc.
Example
http://bdoughan.blogspot.com/2010/08/creating-restful-web-service-part-15.html
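As a rough sketch of that approach (class and field names are made up), a single POJO can carry both mappings: JPA annotations for the relational side and JAXB annotations for the XML side:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@Entity
@XmlRootElement
public class PurchaseOrder {

    @Id
    private Long id;

    private String customerName;

    @OneToMany(mappedBy = "order")
    private List<OrderPosition> positions = new ArrayList<>();

    @XmlElement
    public String getCustomerName() { return customerName; }
    public void setCustomerName(String customerName) { this.customerName = customerName; }

    @XmlElement(name = "position")
    public List<OrderPosition> getPositions() { return positions; }
}

@Entity
class OrderPosition {
    @Id
    private Long id;

    @ManyToOne
    private PurchaseOrder order;   // owning side of the association
}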

ORM Technologies vs JDBC?

My question is regarding ORM and JDBC technologies: on what criteria would you decide to go for an ORM technology as compared to JDBC, and the other way round?
Thanks.
JDBC
With JDBC, the developer has to write code to map an object model's data representation to a relational data model and its corresponding database schema.
With JDBC, the mapping of Java objects to database tables and the conversion back has to be handled manually by the developer, in hand-written code.
JDBC supports only native Structured Query Language (SQL). The developer has to work out the efficient way to access the database, i.e. to pick the most effective of several queries that perform the same task.
An application that uses JDBC to handle persistent data (database tables) ends up with a large amount of database-specific code. The code written to map table data to application objects, and vice versa, really maps table fields to object properties; whenever a table or the database changes, the object structure and the mapping code have to change as well.
With JDBC, it is the developer's responsibility to process the JDBC ResultSet and convert it to Java objects before the persistent data can be used in the application. So with JDBC, mapping between Java objects and database tables is done manually.
With JDBC, caching has to be hand-coded.
In JDBC there is no built-in check that every user always works with up-to-date data; this check has to be added by the developer. (A minimal sketch of the manual mapping follows.)
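Here is a hedged sketch of that manual mapping with plain JDBC; the employee table and its columns are made-up examples:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class EmployeeJdbcDao {

    // Every column is copied into the object by hand; when the table changes,
    // this mapping code has to change as well.
    public List<Employee> findByDepartment(Connection con, String dept) throws SQLException {
        String sql = "SELECT id, name, salary FROM employee WHERE department = ?";
        List<Employee> result = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, dept);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    Employee e = new Employee();
                    e.id = rs.getLong("id");
                    e.name = rs.getString("name");
                    e.salary = rs.getBigDecimal("salary");
                    result.add(e);
                }
            }
        }
        return result;
    }
}

class Employee {
    long id;
    String name;
    BigDecimal salary;
}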
HIBERNATE.
Hibernate is a flexible and powerful ORM solution for mapping Java classes to database tables. Hibernate itself takes care of this mapping using XML files (or annotations), so the developer does not need to write mapping code.
Hibernate provides transparent persistence: the developer does not need to write code explicitly to map database table tuples to application objects while interacting with the RDBMS.
Hibernate provides a powerful, database-independent query language (Hibernate Query Language) that is expressed in a familiar SQL-like syntax and includes full support for polymorphic queries. Hibernate also supports native SQL statements, and it selects an effective way to perform each database operation for the application.
Hibernate provides the mapping itself. The actual mapping between tables and application objects is done in XML files (or annotations); if the database or a table changes, only the mapping needs to be updated.
Hibernate reduces lines of code by maintaining the object-table mapping itself and returning results to the application as Java objects. It relieves the programmer of the manual handling of persistent data, reducing development time and maintenance cost.
With Hibernate's transparent persistence, a cache is kept in the application's workspace, and relational tuples are moved into this cache as the result of a query. This improves performance if the client application reads the same data many times, and it lets the developer concentrate on business logic rather than on plumbing code.
Hibernate lets the developer define a version field on the entity; Hibernate then updates the version column of the database table every time the corresponding row is updated through that entity. So if two users retrieve the same row and both modify it, the first save bumps the version; when the other user then tries to save the now-stale row, Hibernate refuses the update because that user no longer has the current data. (A minimal sketch follows.)
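For contrast with the JDBC sketch above, here is a hedged JPA/Hibernate version of the same lookup; the entity is a made-up example, and the @Version field provides the stale-update check just described:

import java.math.BigDecimal;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Employee {

    @Id
    Long id;
    String name;
    String department;
    BigDecimal salary;

    @Version     // Hibernate bumps this column on every update and rejects stale writes
    long version;
}

class EmployeeJpaDao {

    // No manual ResultSet handling: the provider maps rows to Employee objects.
    List<Employee> findByDepartment(EntityManager em, String dept) {
        return em.createQuery(
                "select e from Employee e where e.department = :dept", Employee.class)
            .setParameter("dept", dept)
            .getResultList();
    }
}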
Complexity.
ORM: if your application is domain-driven, the relationships among objects are complex, or you need the object model to define what the app does.
JDBC/SQL: if your application is simple enough to just present data directly from the database, or the relationships between tables are simple.
The book "Patterns of enterprise application architecture" by Martin Fowler explains much better the differences between these two types:
See: Domain Model and Transaction Script
I think you forgot to look at "Functional Relational Mapping"
I would sum up by saying:
If you want to focus on the data-structures, use an ORM like JPA/Hibernate
If you want to focus on the processing itself, take a look at FRM libraries: QueryDSL or jOOQ
If you need to tune your SQL requests to specific databases, use JDBC and native SQL requests
The strength of the various "relational mapping" technologies is portability: you ensure your application will run on most ACID databases.
Otherwise, you will have to cope with the differences between SQL dialects when you write the SQL requests by hand.
Of course you can restrict yourself to the SQL-92 standard (and then do some functional programming), or you can reuse some concepts of functional programming with ORM frameworks.
The strengths of an ORM are built around a session object, which can act as a bottleneck:
it manages the lifecycle of the objects as long as the underlying database transaction is running.
it maintains a one-to-one mapping between your Java objects and your database rows (and uses an internal cache to avoid duplicate objects).
it automatically detects association updates and the orphan objects to delete.
it handles concurrency issues with optimistic or pessimistic locking.
Nevertheless, its strengths are also its weaknesses:
The session must be able to compare objects, so you need to implement equals/hashCode methods (a short sketch follows this list of drawbacks).
But object equality must be rooted in "business keys", not the database id (new transient objects have no database id yet!).
However, some reified concepts have no business equality (an operation, for instance).
A common workaround relies on GUIDs, which tend to upset database administrators.
The session must spy on relationship changes, but its mapping rules push you toward collections that are unsuitable for your business algorithms.
Sometimes you would like to use a HashMap, but the ORM will require the key to be another "rich domain object" instead of a lightweight one...
Then you have to implement object equality on the rich domain object acting as a key...
But you can't, because this object has no counterpart in the business world.
So you fall back to a simple list that you have to iterate over (and performance suffers as a result).
The ORM APIs are sometimes unsuitable for real-world use.
For instance, real-world web applications try to enforce session isolation by adding some "WHERE" clauses when you fetch data...
Then "Session.get(id)" doesn't suffice and you need to turn to a more complex DSL (HQL, the Criteria API) or go back to native SQL.
The database objects conflict with objects dedicated to other frameworks (such as OXM frameworks, i.e. Object/XML Mapping).
For instance, your REST services may use the Jackson library to serialize a business object,
but the Jackson-mapped class is exactly the Hibernate one.
Then either you merge the two and a strong coupling between your API and your database appears,
or you must implement a translation, and all the code you saved thanks to the ORM is lost there...
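A short, hedged sketch of the business-key equality mentioned above; the Book entity and its ISBN key are made-up examples:

import java.util.Objects;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Equality is based on a business key (the ISBN), not on the generated database id,
// so transient and persistent instances of the same book compare as equal.
@Entity
public class Book {

    @Id @GeneratedValue
    private Long id;          // not used for equality: it is null until the insert

    private String isbn;      // natural / business key

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Book)) return false;
        return Objects.equals(isbn, ((Book) o).isbn);
    }

    @Override
    public int hashCode() {
        return Objects.hash(isbn);
    }
}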
On the other side, FRM is a trade-off between Object-Relational Mapping (ORM) and native SQL queries (with JDBC).
The best way to explain the differences between FRM and ORM is to adopt a DDD (Domain-Driven Design) perspective.
Object-Relational Mapping encourages the use of "rich domain objects", Java classes whose state is mutable during the database transaction.
Functional Relational Mapping relies on "poor domain objects", which are immutable (so much so that you have to build a new copy each time you want to alter the content).
It lifts the constraints placed on the ORM session and relies most of the time on a DSL over SQL (so portability is not an issue).
But on the other hand, you have to look into the transaction details and the concurrency issues yourself. A typical QueryDSL query looks like this:
// 'person' is typically the generated QPerson.person metamodel instance and
// 'queryFactory' a JPAQueryFactory built from the EntityManager
List<Person> persons = queryFactory.selectFrom(person)
        .where(
            person.firstName.eq("John"),
            person.lastName.eq("Doe"))
        .fetch();
It also depends on the learning curve.
Ebean ORM has a pretty low learning curve (simple API, simple query language) if you are happy enough with JPA annotations for mapping (@Entity, @Table, @OneToMany, etc.).
