In our application we have configured Hibernate to work with EHCache. The aim is that once an object has been loaded into the cache, no DB call should ever be made unless the object changes.
To test this, I am fetching the object and printing its identity hash code (via System.identityHashCode(this)), and I notice that the identity hash code changes on every call, which makes us think the object is being loaded every time.
But in the logs, we do not see Hibernate issuing any SQL calls to the database.
Can someone please advise whether my test is correct?
There are many things that might explain the difference. Also, not hitting the database may simply mean that you are getting objects from the session cache (aka the first-level cache). Make sure you create the object in one session and retrieve it twice in another session (the first retrieval might hit the database, the second shouldn't).
The ideal would be to ask Hibernate whether the object was retrieved from the cache. The easiest way is to enable statistics collection and then print the hits/misses:
Statistics stats = sessionFactory.getStatistics();
stats.setStatisticsEnabled(true);   // can also be enabled via hibernate.generate_statistics
... // do your work
long hitCount = stats.getQueryCacheHitCount();    // query cache hits
long missCount = stats.getQueryCacheMissCount();  // query cache misses
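If it is the entities themselves (rather than query results) that you expect EHCache to serve, the second-level cache counters are probably the more relevant ones. A minimal sketch, reusing the same stats object as above:

long l2HitCount  = stats.getSecondLevelCacheHitCount();   // entity/collection cache hits
long l2MissCount = stats.getSecondLevelCacheMissCount();  // entity/collection cache misses
long l2PutCount  = stats.getSecondLevelCachePutCount();   // entries written to the cache
System.out.println("L2 hits=" + l2HitCount + " misses=" + l2MissCount + " puts=" + l2PutCount);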
Since you don't see any calls to the database, it's pretty safe to say that the cache is working.
The reason you see different identity hash codes is that EHCache doesn't store the objects as-is. Rather, it stores a serialized version, which it deserializes when there's a cache hit. The deserialized version is a new object, hence the different identityHashCode.
It's still faster than going to the database.
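If you want a test that doesn't depend on identityHashCode at all, a minimal sketch along these lines (Product and the id 1L are made-up names) uses two separate sessions plus the statistics shown above:

// Hypothetical entity "Product" with id 1L; adjust to your own model.
Session s1 = sessionFactory.openSession();
Product first = (Product) s1.get(Product.class, 1L);   // may hit the database, populates the L2 cache
s1.close();                                             // closing discards the first-level cache

Session s2 = sessionFactory.openSession();
Product second = (Product) s2.get(Product.class, 1L);   // should now be served from EHCache
s2.close();

// A different identityHashCode is expected: the cached entry is deserialized into a new object.
System.out.println(System.identityHashCode(first) + " vs " + System.identityHashCode(second));
System.out.println("L2 hits: " + sessionFactory.getStatistics().getSecondLevelCacheHitCount());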
When changing the object passed to the MapStore, my changes don't seem to be accepted (I'm setting the MySQL DB id on persistence). The change does not appear in the object when it is checked out later. To me that means the store method call is an additional call that happens after serialization.
Do I have to put the object into the Hazelcast map once more?
The problem is that currently in Hazelcast the serialization of the value happens before the database interaction. So no matter what you do in your MapStore, it will not be visible in the serialized content. Your conclusion is correct.
Combining a MapStore with database-generated ids, or with optimistic locking using a version field, is currently a pain. This is an issue we are looking at for some other customers and hope to provide a solution for ASAP.
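A minimal sketch of what that means in practice, assuming Hazelcast 3.x packages; Order, OrderDao, setDbId and findGeneratedId are made-up names used only to illustrate the re-put:

import com.hazelcast.core.IMap;
import com.hazelcast.core.MapStoreAdapter;

public class OrderMapStore extends MapStoreAdapter<Long, Order> {
    private final OrderDao dao = new OrderDao();   // hypothetical DAO

    @Override
    public void store(Long key, Order value) {
        long generatedId = dao.insert(value);      // MySQL assigns the id here
        value.setDbId(generatedId);                // only changes this local, deserialized copy
    }
}

// Caller side: the entry held by the map still has no id, so it has to be
// put again once the generated id is known.
IMap<Long, Order> orders = hz.getMap("orders");    // hz: your HazelcastInstance
orders.put(key, order);                            // triggers store(); the id set there is lost
order.setDbId(dao.findGeneratedId(order));         // hypothetical lookup of the generated id
orders.put(key, order);                            // second put so the map entry carries the id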
In my Grails application, we call a stored procedure that may update several thousand records. After the stored-proc call, I need to send many of these records back to the UI in JSON format. But Hibernate continues to see the old objects after the stored proc completes. I have tried evict() on each of those objects and loaded them again using HQL, but to no avail.
What is the best way out of this problem?
The answer lies in the question. :) Use refresh(); see the documentation for refresh().
If you want to clear the Hibernate session altogether, you can use session.clear(); see the documentation for clear().
For that you would need to get hold of the current session, which you can do in one of two ways:
Get hold of the sessionFactory, get the current session, and clear it:
grailsApplication.mainContext.sessionFactory.currentSession.clear()
Use the withSession closure:
DomainABC.withSession { s -> s.clear() }
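At the Hibernate level (which is what the Grails calls above delegate to), the pattern looks roughly like this; MyRecord and recordId are placeholders:

// After the stored procedure has run (however it is invoked), objects already
// in the session still hold their pre-procedure state. Force a re-read:
MyRecord record = (MyRecord) session.get(MyRecord.class, recordId); // may return the stale cached copy
session.refresh(record);   // re-selects the row and overwrites the in-memory state

// Or, when thousands of rows changed, throw away the whole first-level cache instead:
session.clear();
MyRecord fresh = (MyRecord) session.get(MyRecord.class, recordId);  // now hits the database again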
I was wondering whether there is a way to tell Hibernate to emit some kind of console warning when it has too many objects of a certain type in the session cache. I would like to do this for load testing, as we occasionally have OutOfMemoryError problems with BLOB loading from Oracle.
We are still on Hibernate 3.6.10 for now. Our best approach for this testing at the moment is to generate more data than the system could handle in a normal use case, try to load the parent object, and see if it crashes. Doing it this way just feels kind of bad.
Any suggestions are welcome.
One note I forgot to mention: this "logging" idea is something I would like to be able to leave in production code to pinpoint specific problems.
- EDIT -
Here's an example of what I'm trying to do:
Say I have an @Entity ClassX that has a lazy-loaded list of @Entity ClassY objects. Somehow, I would like a log message to be emitted when 100 or more instances of ClassY are loaded into the session cache. This way, during development I can load a ClassX object and notice whether I (or another developer on the team) happen to be accessing that list when I shouldn't be.
You could attach an Interceptor to listen for object load events, maintaining a count for each entity type and logging a warning whenever it goes past a certain threshold. The documentation shows how to define a session-scoped interceptor by passing it in at creation time:
Session session = sf.openSession( new AuditInterceptor() );
Most likely you're not creating your Session manually, so this may not be directly helpful, but the way you obtain your session may still offer a way to pass an Interceptor through.
It's easier to declare a SessionFactory-scoped Interceptor but it doesn't seem to give you any reference back to the Session that the object is being created within, otherwise you'd be able to knock up some sort of counter in a WeakHashMap (with Session as the key so that you don't leak memory). If you're using the default Thread-local session strategy then you could always ask sessionFactory.getCurrentSession().
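A minimal sketch of such a session-scoped interceptor, assuming Hibernate 3.6's EmptyInterceptor; the threshold, class name and logging are placeholders:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

// Counts loaded instances per entity class within one Session and warns past a threshold.
public class LoadCountingInterceptor extends EmptyInterceptor {

    private static final int THRESHOLD = 100;   // tune to your use case
    private final Map<String, Integer> counts = new HashMap<String, Integer>();

    @Override
    public boolean onLoad(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        String entityName = entity.getClass().getName();
        Integer current = counts.get(entityName);
        int updated = (current == null) ? 1 : current + 1;
        counts.put(entityName, updated);
        if (updated == THRESHOLD) {
            // plain stderr here; swap in your logging framework
            System.err.println("WARNING: this session has loaded " + updated
                    + " instances of " + entityName);
        }
        return false;   // we did not modify the entity state
    }
}

// Usage, when you do control session creation:
Session session = sessionFactory.openSession(new LoadCountingInterceptor());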
I have a Hibernate entity in my code. I fetch it and, based on the value of one of its properties, say "isProcessed", go on to:
change the value of "isProcessed" to "Yes" (the property I checked)
add a task to a DelayedExecutor.
In my performance test I found that if I hammer the function, a classic dirty-read scenario occurs and I add too many tasks to the Executor, all of which end up being executed. I can't rely on any kind of equality check against objects already in the queue; Java will simply execute everything that gets added.
How can I use Hibernate's dirty-object tracking to check "isProcessed" before adding the task to the executor? Would that work?
I hope I have explained this clearly enough.
If you can do all of your queries to dispatch your tasks using the same Session, you can probably patch something together. The caveat is that you have to understand how hibernate's caching mechanisms (yes, that's plural) work. The first-level cache that is associated with the Session is going to be the key here. Also, it's important to know that executing a query and hydrating objects will not look into and return objects from the first-level cache...the right hand is not talking to the left hand.
So, to accomplish what you're trying to do (assuming you can keep using the same Session...if you can't do this, then I think you're out of luck) you can do the following:
execute your query
for each returned object, re-load it with Session's get method
check the isProcessed flag and dispatch if need be
By calling get, you'll be sure to get the object from the first-level cache...where all the dirty objects pending flush are held.
For background, this is an extremely well-written and helpful document about hibernate caching.
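A rough sketch of those three steps, with a made-up Task entity carrying the isProcessed flag and a made-up executor; everything runs against the same Session:

List<?> candidates = session
        .createQuery("from Task t where t.processed = :done")
        .setParameter("done", Boolean.FALSE)
        .list();

for (Object row : candidates) {
    Long id = ((Task) row).getId();
    // get() consults the first-level cache, so changes made (but not yet
    // flushed) earlier in this same Session are visible here.
    Task current = (Task) session.get(Task.class, id);
    if (!current.isProcessed()) {
        current.setProcessed(true);                    // mark before dispatching
        delayedExecutor.submit(new ProcessTask(id));   // made-up executor and task class
    }
}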
My understanding of Hibernate is that as objects are loaded from the DB they are added to the Session. At various points, depending on your configuration, the session is flushed. At this point, modified objects are written to the database.
How does Hibernate decide which objects are 'dirty' and need to be written?
Do the proxies generated by Hibernate intercept assignments to fields, and add the object to a dirty list in the Session?
Or does Hibernate look at each object in the Session and compare it with the object's original state?
Or something completely different?
Hibernate does/can use bytecode generation (CGLIB) so that it knows a field is dirty as soon as you call the setter (or even assign to the field afaict).
This immediately marks that field/object as dirty, but doesn't reduce the number of objects that need to be dirty-checked during flush. All it does is impact the implementation of org.hibernate.engine.EntityEntry.requiresDirtyCheck(). It still does a field-by-field comparison to check for dirtiness.
I say the above based on a recent trawl through the source code (3.2.6GA), with whatever credibility that adds. Points of interest are:
SessionImpl.flush() triggers an onFlush() event.
SessionImpl.list() calls autoFlushIfRequired(), which triggers an onAutoFlush() event on the tables of interest. That is, queries can invoke a flush. Interestingly, no flush occurs if there is no transaction.
Both those events eventually end up in AbstractFlushingEventListener.flushEverythingToExecutions(), which ends up (amongst other interesting locations) at flushEntities().
That loops over every entity in the session (source.getPersistenceContext().getEntityEntries()) calling DefaultFlushEntityEventListener.onFlushEntity().
You eventually end up at dirtyCheck(). That method does make some optimizations with respect to the CGLIB dirty flags, but we've still ended up looping over every entity.
Hibernate takes a snapshot of the state of each object that gets loaded into the Session. On flush, each object in the Session is compared with its corresponding snapshot to determine which ones are dirty. SQL statements are issued as required, and the snapshots are updated to reflect the state of the (now clean) Session objects.
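The practical consequence is that attached entities need no explicit update() call. A hedged sketch, with a made-up Customer entity:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Customer customer = (Customer) session.get(Customer.class, 42L);   // snapshot taken here
customer.setEmail("new@example.com");                              // plain setter, no save()/update()

tx.commit();    // flush compares the entity with its snapshot and issues the UPDATE
session.close();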
Take a look at org.hibernate.event.def.DefaultFlushEntityEventListener.dirtyCheck.
Every entity in the session goes through this method, which determines whether it is dirty by comparing it with an untouched version (one from the cache or one from the database).
Hibernate's default dirty-checking mechanism traverses the currently attached entities and matches all their properties against their initial loading-time values.
These answers are incomplete (at best; I am not an expert here). If you have a Hibernate-managed entity in your session and do NOTHING to it, you can still get an UPDATE issued when you call save() on it. When? When another session updates that object between your load() and save(). Here is my example of this: hibernate sets dirty flag (and issues update) even though client did not change value