When I change the object passed to the MapStore (setting its MySQL database id on persistence), the change does not stick: it is missing from the object when I read it back later. That suggests to me that the store() method is an additional call made after serialization.
Do I have to put the object into the Hazelcast map once more?
The problem is that Hazelcast currently serializes the value before the database interaction happens. So no matter what you do in your MapStore, it will not be visible in the serialized content. Your conclusion is correct.
Combining the MapStore with database-generated ids or optimistic locking using a version field is currently a PITA. This is an issue we are looking at for some other customers, and we hope to provide a solution ASAP.
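For illustration, here is a minimal sketch of the usual workaround, assuming Hazelcast 3.x and a hypothetical Customer value class: instead of letting MySQL assign the id inside MapStore.store() (where the assignment is invisible, since serialization has already happened), assign a cluster-wide unique id on the application side before the put.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.IdGenerator;

public class PutWithPreassignedId {

    static class Customer implements java.io.Serializable {
        long id;
        final String name;
        Customer(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IdGenerator ids = hz.getIdGenerator("customer-ids"); // cluster-wide sequence
        IMap<Long, Customer> customers = hz.getMap("customers");

        Customer c = new Customer("Alice");
        c.id = ids.newId();      // id is set before the value is serialized
        customers.put(c.id, c);  // MapStore.store() now receives the final state
    }
}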
Related
In our application we have configured Hibernate to work with EhCache. The aim is that once an object is loaded into the cache, no database call should ever be made unless the object has changed.
To test this, I am fetching the object and printing its identity hash code [using System.identityHashCode(this)]. I notice that the identity hash code changes on every call, which makes us think the object is being loaded every time.
But in the logs, we do not see Hibernate making any SQL calls to the database.
Can someone please advise whether my test is correct or not?
There are many things that might explain the difference. Note that not hitting the database might also mean that you are getting objects from the session cache (a.k.a. the first-level cache). Make sure you create the object in one session and retrieve it twice in another session (the first retrieval might hit the database; the second shouldn't).
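In code, that test looks roughly like this (a minimal sketch; Product stands in for your cached entity, and its mapping and cache configuration are assumed):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class SecondLevelCacheTest {

    static class Product { } // placeholder for a mapped, cache-enabled entity

    static void run(SessionFactory sessionFactory, Long id) {
        // Session 1: may hit the database and populate the second-level cache.
        Session s1 = sessionFactory.openSession();
        s1.get(Product.class, id);
        s1.close(); // closing throws away the first-level (session) cache

        // Session 2: a hit here can only be served by the second-level cache.
        Session s2 = sessionFactory.openSession();
        s2.get(Product.class, id);
        s2.close();
    }
}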
The ideal would be to ask Hibernate whether the object was retrieved from the cache. The easiest way is to enable statistics collection and then print the hits/misses:
import org.hibernate.stat.Statistics;

Statistics stats = sessionFactory.getStatistics();
stats.setStatisticsEnabled(true);
... // do your work
// For entities fetched by id, watch the second-level cache counters;
// the query-cache counters only move for cached query results.
long hitCount = stats.getSecondLevelCacheHitCount();
long missCount = stats.getSecondLevelCacheMissCount();
Since you don't see any calls to the database, it's pretty safe to say that the cache is working.
The reason you see different identity hash codes is that EhCache doesn't store the objects as-is. Rather, it stores a serialized form, which it deserializes on every cache hit. Each deserialized copy is a new object, hence the different identityHashCode.
It's still faster than going to the database.
In my Grails application, we call a stored procedure that may update several thousand records. After the stored-proc call, I need to send many of these records back to the UI in JSON format, but Hibernate keeps seeing the old state of the objects after the stored procedure completes. I have tried evict() on each of those objects and loaded them again using HQL, but to no avail.
What is the best way out of this problem?
The answer lies in the question. :) Use refresh(), which re-reads an instance's state from the database.
If you want to clear the Hibernate session altogether, you can use session.clear().
For that you would need to get hold of the current session, which you can do in one of two ways:
Get hold of the sessionFactory, get the current session, and clear it:
grailsApplication.mainContext.sessionFactory.currentSession.clear()
Use the withSession closure:
DomainABC.withSession { s -> s.clear() }
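In plain Hibernate terms, the two options look roughly like this (a sketch; sessionFactory and order are placeholders for your own session access and the entity touched by the stored procedure):

import org.hibernate.Session;

Session session = sessionFactory.getCurrentSession();
// re-read this one instance's state from the database:
session.refresh(order);
// or drop every cached instance at once and re-query afterwards:
session.clear();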
I've got a Hibernate-interfaced MySQL database with a load of different types of objects, some of which are periodically retrieved and altered by other pieces of code running in JADE agents. Because of the way the objects are retrieved (in queries returning collections of objects), they don't seem to be managed by the entity manager, and they definitely aren't managed once they're passed to agents that have no entity manager factory or entity manager.
The objects from the database are passed around between agents before arriving back at the database. At that point I want to update the version of the object in the database, but each time I merge the object, it creates a new row in the database.
I'm fairly sure that I'm not using the merge method properly. Can anyone suggest a good way to combine the updated object with the existing database object without knowing in advance which properties have changed? Possibly something along the lines of finding the existing object and deleting it, then adding the new one, but I'm not sure how to do that without messing up primary keys etc.
Hibernate has a saveOrUpdate() method which either saves the object or updates it, depending on whether an object with the same ID already exists:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/objectstate.html#objectstate-saveorupdate
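A minimal sketch of the detached-update flow, assuming the entity keeps its database id while travelling between agents (if the id, or a version field, is lost in transit, Hibernate treats the object as new and inserts a fresh row, which matches the symptom you describe):

import org.hibernate.Session;
import org.hibernate.Transaction;

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
// 'entity' is the detached object that came back from the agents:
// saveOrUpdate() inserts when the id is unset, updates when it is set.
session.saveOrUpdate(entity);
// Alternative: merge() copies the detached state onto the managed copy
// and returns that managed instance.
// Object managed = session.merge(entity);
tx.commit();
session.close();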
I have an application which can read/write changes to a database table. Another instance of the same application should be able to see the updated values in the database. I am using Hibernate for this purpose. If I run two instances of the application and make changes to the database from one instance for the first time, the updated values can be seen from the second. But any further changes from the first instance are not reflected in the second. Please throw some light on this.
This looks like a problem with your cache settings. By default, Hibernate assumes that it's the only one changing the database, which allows it to cache objects efficiently. If several parties can change tables in the DB, then you must switch off caching for those tables/instances.
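For example, switching Hibernate's shared caches off looks roughly like this (a sketch using programmatic configuration; the same properties can equally go into hibernate.cfg.xml):

import org.hibernate.cfg.Configuration;

Configuration cfg = new Configuration();
// Do not keep entity state between sessions, so every load sees
// the changes committed by the other application instance.
cfg.setProperty("hibernate.cache.use_second_level_cache", "false");
cfg.setProperty("hibernate.cache.use_query_cache", "false");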
You can use hibernate.connection.autocommit=true.
This makes Hibernate commit each SQL update to the database immediately, and you should be able to see the changes from the other application.
However, I would strongly discourage you from doing so. As Aaron pointed out, you should only use one Hibernate SessionFactory per database.
If you do need multiple applications to stay in sync, think about using a shared cache, e.g. GemStone.
I am getting this exception in a controller of a Spring-based web application that uses Hibernate. I have tried many ways to counter it but could not resolve it.
In the controller's method, handleRequestInternal, calls are made to the database, mainly for reads, unless it's a submit action.
I have been using Spring's session handling, then moved to getHibernateTemplate(), and the problem still remains.
Basically, it is the second call to the database that throws this exception. That is:
1) getEquipmentsByNumber(number) { first an Equipment is fetched from the DB based on the 'number'; it has a list of properties, and each property has a list of values. I loop through those values (primitive objects, Strings) to read them into variables }
2) getMaterialById(id) { fetches materials based on the id }
I do understand that the second call most probably makes the session flush, but I am only reading objects, so why does the second call throw the stale object state exception on the Equipment property if nothing has changed?
I cannot clear the cache after the call since doing so causes LazyInitializationExceptions on objects that I pass to the view.
I have read this:
https://forums.hibernate.org/viewtopic.php?f=1&t=996355&start=0
but could not solve the problem based on the suggestions provided.
How can I solve this issue? Any ideas and thoughts are appreciated.
UPDATE:
What I just tested: in getEquipmentsByNumber(), after reading the variables from the list of properties, I call getHibernateTemplate().flush(); and now the exception is thrown on this line rather than on the call that fetches the material (that is, getMaterialById(id)).
UPDATE:
Before explicitly calling flush, I am evicting the object from the session cache so that no stale object remains in the cache:
getHibernateTemplate().evict(equipment); // detach the instance from the session
getHibernateTemplate().flush();          // then push pending changes to the DB
OK, so now the problem has moved to the next fetch from the DB after I did this. I suppose I would have to mark the methods as synchronized and evict the objects as soon as I am finished reading their contents! That doesn't sound very good.
UPDATE:
Made the handleRequestInternal method synchronized, and the error disappeared. Of course not the best solution, but what to do!
Tried closing the current session in handleRequestInternal and opening a new one, but it would cause other parts of the app to stop working properly. Tried using a ThreadLocal; that did not work either.
You're misusing Hibernate in some way that causes it to think you're updating or deleting objects in the database.
That's why calling flush() is throwing an exception.
One possibility: you're incorrectly "sharing" a Session or entities via member field(s) of your servlet or controller. This is the main reason synchronized would change your error symptoms. Short solution: don't ever do this. Sessions and entities shouldn't and don't work this way; each request should be processed independently.
Another possibility: unsaved-value defaults to 0 for int PK fields. You may be able to type these as Integer instead if you really want to use 0 as a valid PK value.
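To illustrate the unsaved-value point (a hedged sketch; the class names are made up):

class EquipmentWithPrimitiveId {
    // A new instance already carries id == 0, indistinguishable from a
    // real key of 0 under unsaved-value="0".
    private int id;
}

class EquipmentWithWrapperId {
    // "Not yet saved" is represented by null, leaving 0 free to be a
    // legitimate primary key value.
    private Integer id;
}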
Third suggestion: use the Hibernate Session explicitly, learn to write simple correct code that works, then read the Java source of the Hibernate/Spring libraries so you can understand what they are actually doing for you.
I have also been struggling with this exception. When it continued to recur even after I put a lock on the object (in a test environment where I knew I was the only process touching the object), I decided to give the parenthetical in the stack trace its due consideration:
org.hibernate.StaleObjectStateException: Row was updated or deleted by
another transaction (or unsaved-value mapping was incorrect):
[com.rc.model.mexp.MerchantAccount#59132]
In our case it turned out that the mapping was wrong: we had type="text" in the mapping for a field that was a mediumtext type in the database, and it seems that Hibernate really hates that, at least under certain circumstances. We removed the type specification from the mapping for this field altogether, and the problem was resolved.
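For reference, the change was along these lines (a sketch of the hbm.xml property mapping; "description" is a made-up property name):

<!-- before: explicit type that clashed with the MEDIUMTEXT column -->
<property name="description" type="text"/>

<!-- after: let Hibernate derive the type from the mapped Java property -->
<property name="description"/>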
Now the weird thing is that in our production environment, with the supposedly problematic mapping in place, we do NOT get this exception. Does anybody have any idea why that might be? We are using the same version of MySQL, "5.0.22-log" (I don't know what the "-log" means), in both the dev and production environments.
Here are three possibilities (I do not know exactly which kind of Hibernate session handling you are using). Add them one after another and test:
Use a bi-directional mapping with inverse=true between the parent and child objects, so that a change in the parent or child propagates properly to the other end of the relation.
Add support for optimistic locking using a timestamp or version column (see the sketch after this list).
Use a join query to fetch the whole object graph [parent + children] together, avoiding the second call altogether.
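Here is a minimal sketch of the version-column approach from the second point, assuming an annotated entity (with hbm.xml mappings, the equivalent is a <version> element):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Equipment {

    @Id
    private Long id;

    // Hibernate increments this on every update and includes it in the
    // WHERE clause, so a concurrent change surfaces as a clean
    // StaleObjectStateException instead of a silent lost update.
    @Version
    private int version;
}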
Lastly, if and only if nothing else works:
Load the parent again by id (you already have it), populate the modified data, then update.
Life will be good! :)
This problem was something I had experienced too, and it was quite frustrating. That said, there has to be something a little odd going on in your DAO/Hibernate calls, because if you're doing a lookup by ID there is no reason to get a stale state: that is just a simple lookup of an object.
First, make sure all your methods are annotated with Spring's @Transactional.
However, this exception is usually thrown when you try to make changes to an object that has been detached from the session it was retrieved from. The solution is often not simple and would require more code to be posted so we can see exactly what is going on; my general suggestion would be to create a @Service that performs these kinds of operations within a single transaction.
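Something along these lines (a hedged sketch; the class, method, and parameter names are placeholders):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EquipmentService {

    private final SessionFactory sessionFactory;

    public EquipmentService(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Both reads run inside one transaction and one Session, so the
    // entities stay attached and no stale detached copy gets flushed.
    @Transactional(readOnly = true)
    public void readEquipmentAndMaterial(String number, Long materialId) {
        Session session = sessionFactory.getCurrentSession();
        // the getEquipmentsByNumber(number) and getMaterialById(materialId)
        // logic would live here, sharing this session
    }
}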