I was wondering if there is a way to tell Hibernate to generate some kind of console warning when it has too many objects of a certain type in the session cache. I would like to do this for load testing, as we have OutOfMemoryError problems on occasion with BLOB loading from Oracle.
We are still using Hibernate 3.6.10 for now. Our best approach for this testing at the moment is to just generate more data than the system would be able to handle in a normal use case and try to load the parent object and see if it crashes. Doing it this way just feels kind of bad.
Any suggestions are welcome.
One note that I forgot to mention is that this "logging" idea is something I would like to be able to leave in production code to pinpoint specific problems.
- EDIT -
Here's an example of what I'm trying to do:
Say I have an @Entity ClassX that has a lazy loaded list of @Entity ClassY objects. Somehow, I would like to have a log message spit out when 100 or more instances of ClassY are loaded into the session cache. This way, during development I can load a ClassX object and notice if I (or another developer on the team) happen to be accessing that list when I shouldn't be.
You could attach an Interceptor to listen to object load events, maintaining a count for each unique entity type and logging a warning whenever it goes past a certain threshold. The documentation shows you how to define a session-scoped interceptor, by passing it in at creation time:
Session session = sf.openSession( new AuditInterceptor() );
Most likely you're not creating your Session manually, so this may not be helpful, but whatever mechanism you use to obtain your Session may still offer a way to pass an Interceptor through.
It's easier to declare a SessionFactory-scoped Interceptor, but that doesn't seem to give you any reference back to the Session the object is being loaded into; otherwise you could keep a counter per Session in a WeakHashMap (with the Session as the key so that you don't leak memory). If you're using the default thread-local session strategy, you can always ask sessionFactory.getCurrentSession().
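Here is a minimal sketch of such a counting interceptor, assuming Hibernate 3.x's EmptyInterceptor and SLF4J for the logging; the class name, the 100-instance threshold and the log message are illustrative, not something Hibernate provides out of the box:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoadCountingInterceptor extends EmptyInterceptor {

    private static final Logger LOG = LoggerFactory.getLogger(LoadCountingInterceptor.class);
    private static final int THRESHOLD = 100; // warn once this many instances of one type are loaded

    // one interceptor instance per Session, and a Session is single-threaded,
    // so a plain HashMap is sufficient here
    private final Map<String, Integer> countsByEntity = new HashMap<String, Integer>();

    @Override
    public boolean onLoad(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        String entityName = entity.getClass().getName();
        Integer count = countsByEntity.get(entityName);
        count = (count == null) ? 1 : count + 1;
        countsByEntity.put(entityName, count);
        if (count == THRESHOLD) {
            LOG.warn("Session has now loaded {} instances of {}", count, entityName);
        }
        return false; // we only count; the loaded state is not modified
    }
}

You would then open the session with it, as above: Session session = sessionFactory.openSession(new LoadCountingInterceptor());. Because the check is cheap (one map lookup per load), this is the kind of warning that can reasonably stay in production code.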
Related
What is the meaning of the "could not initialize proxy - no Session" error? What is a lazy object, and why does this occur? And how can I avoid it without changing the Hibernate lazy property and without using Hibernate.initialize(), because that doesn't work for me?
There are a couple of things here; first of all, you appear to be asking "what is lazy loading".
If you have an object to be loaded from the database which has a relationship with another object, lazy loading allows you to only load the required object, and the related object will only be loaded when you need it.
The exception you are having is caused by trying to access the related object after the objects have been disconnected from the database session (obviously a db session is required to load them).
There are a few ways to deal with the situation; the most appropriate will depend entirely on your application.
You can always load all the data you need inside the transaction where the object is first loaded (Hibernate.initialize(), or sometimes just calling the getter, will work), and this will remove your exception. The downside is that you will find you are regularly loading a lot of data and could run into performance issues.
Another way is to pass the id to wherever you need to use the object, load a fresh instance from the database and do your work inside the transaction. Passing ids around a lot is not very OO, but sometimes it's the best option.
If, for instance, you are having this error in UI bindings or other places where you only want to "get", you may want to consider an "Open Session in View" approach, which will provide a database session for your lazy loading.
I cannot tell you which is the best option without knowing about your application and its architecture.
If you wish to discuss any of this further, please let me know.
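As a minimal sketch of the first option above (loading what you need while the session/transaction is still open); the Order entity, its lazy getItems() collection and the surrounding class are illustrative placeholders, not taken from the question:

import org.hibernate.Hibernate;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class OrderLoader {

    // Order and its lazy getItems() collection are placeholder names
    public Order loadWithItems(SessionFactory sessionFactory, Long orderId) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            Order order = (Order) session.get(Order.class, orderId);
            Hibernate.initialize(order.getItems()); // or simply order.getItems().size()
            tx.commit();
            return order; // the items can now be read even after the session is closed
        } finally {
            session.close();
        }
    }
}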
You should mark your method with the @Transactional annotation:
@Transactional
void method() {
    Entity e = ...; // loaded from the database while the transaction (and session) is open
    e.getLazyField(); // the lazy field can be initialized because the session is still active
}
I get an entity 'A' using
getHibernateTemplate().get(A.class, 100)
from the database. Let's say this entity 'A' has a 'value' property of 200 in the database.
Now, in my Java code, I change a property of this entity. Let's say I change the 'value' property to 500 and then add it to some list.
Now, if I again do getHibernateTemplate().get(A.class, 100) for the same entity, I get the updated entity (the one that has a value of 500). How do I force Hibernate to give me the entity from the database, and not the one updated in my code?
Is this what is called 'First Level Caching'?
Your assumption (about first-level caching) is correct. As stated, for example, in the Interface Session documentation:
The main runtime interface between a Java application and Hibernate.
This is the central API class abstracting the notion of a persistence service.
Or here: Chapter 2. Architecture, 2.1. Overview:
Extract: Session (org.hibernate.Session)
A single-threaded, short-lived object representing a conversation between the application and the persistent store. It wraps a JDBC connection and is a factory for Transaction. Session holds a mandatory first-level cache of persistent objects that are used when navigating the object graph or looking up objects by identifier.
And you can also see the methods available to us for removing an object from the session:
evict(Object object):
Remove this instance from the session cache.
refresh(Object object):
Re-read the state of the given instance from the underlying database.
clear():
Completely clear the session.
And many more. In this case, evict() should work: we take the current instance ('A') and explicitly evict it from the session.
If we've already loaded more stuff and do not know what to evict(), but simply need fresh data, we can call clear() to completely reset the session and start again.
That is a bit radical, because none of the objects in the session will be updated/inserted on the session flush()... but it could be what we want in this scenario (very often used for testing: load, clear... change and flush).
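A minimal sketch of those options using Spring's HibernateTemplate as in the question; entity 'A', its value property and the id 100 come from the question, while the DAO wrapper and method name are illustrative:

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

public class ADao extends HibernateDaoSupport {

    public A loadFreshCopy() {
        A a = (A) getHibernateTemplate().get(A.class, 100);
        a.setValue(500); // in-memory change only; nothing has been flushed yet

        // Option 1: evict the cached instance, then re-read a fresh copy from the database
        getHibernateTemplate().evict(a);
        A fresh = (A) getHibernateTemplate().get(A.class, 100); // hits the database again; value is 200

        // Option 2 (instead of option 1): overwrite the in-memory change with the database state
        // getHibernateTemplate().refresh(a); // a.getValue() would be 200 again

        // Option 3 (radical): discard the whole first-level cache
        // getHibernateTemplate().clear();

        return fresh;
    }
}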
I suggest searching Google for hibernate commit, flush, and detach and reading up on when they write to the database. Better yet, I recommend reading a good book on Hibernate if you haven't already done so (search amazon.com for good reviews on a book) to get a good grasp of the technology.
My reason for responding to this post is not to answer your question directly, but to suggest that you edit your hibernate.cfg.xml file and set the following property to true:
<property name="hibernate.show_sql">true</property>
This will cause every SQL statement sent to the database to be listed in your console window. This way, you can see exactly when a write to the database occurs. You can then experiment with what you research/read and verify that it works as you expect.
I'm here for a bit of advice. I'm using Hibernate with Java. I've implemented a controller interface to distance the User interface from the actual communication with the database. For the given interface I've implemented a database controller class that does the actual communication. This is fed to the user interface by a static controller factory.
Now I discover that Hibernate doesn't actually load everything I want into memory. For each controller method call I'm always opening a session, doing my stuff, and closing the session. Therefore, when I try to access my object structure I get the error
could not initialize proxy - no Session
With a little effort and googling I concluded that the object my active object references is not in memory.
Now I have the option to keep the session open from the moment I start using my objects to the end. But that seems a bit redundant and inefficient. I guess I won't lose much by keeping the session open, but I kind of intended to keep the user interface purely out of the database business. Having my controller interface expose "tearDown" (and "setUp") methods for the user interface seems a bit against that logic.
When you use lazy loading - which is quite often the default in Hibernate - you can't access unloaded instances after the session is closed.
For example, you have a parent table and a child table, mapped in a 1:n relation (in the mapping file or as an annotation). Then you do something like this:
1) open session
2) load parent
3) close session
4) call parent.getChild() (or something like that)
Then in step 4) you'll get an error message, because Hibernate didn't load the item before; it wants to do so now (lazy loading), but it can't, because the session is already closed.
If you want to close the session, make sure all necessary data is already loaded. For example, if you had done step 4) before step 3) in that example, it would have worked, and after closing the session you could even access that child again, because it would already be loaded. But you wouldn't be able to store it in the database later, because of the closed session.
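A minimal sketch of those four steps (Parent, getChildren() and the surrounding class are placeholder names):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class ParentLoader {

    public void demonstrate(SessionFactory sessionFactory, Long id) {
        Session session = sessionFactory.openSession();          // 1) open session
        Parent parent = (Parent) session.get(Parent.class, id);  // 2) load parent
        // parent.getChildren().size(); // doing step 4 here, before closing, would work
        session.close();                                         // 3) close session
        parent.getChildren().size();                             // 4) fails with "could not initialize proxy - no Session"
    }
}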
I don't understand. You can load entity objects from the database and use them after closing the session; their state will then be detached. You may sometimes have to reattach them to a session to synchronize their state with the database.
I am getting this exception in a controller of a web application based on the Spring Framework using Hibernate. I have tried many ways to counter this but could not resolve it.
In the controller's method, handleRequestInternal, there are calls made to the database, mainly for 'read', unless it's a submit action.
I had been using Spring's Session but moved to getHibernateTemplate(), and the problem still remains.
Basically, the second call to the database throws this exception. That is:
1) getEquipmentsByNumber(number): first an equipment is fetched from the DB based on the 'number'; it has a list of properties, and each property has a list of values. I loop through those values (primitive String objects) to read them into variables.
2) getMaterialById(id): fetches materials based on the id.
I do understand that the second call most probably makes the session 'flush', but I am only 'reading' objects, so why does the second call throw the stale object state exception on the Equipment property if nothing has changed?
I cannot clear the cache after the call since that causes LazyInitializationExceptions on objects that I pass to the view.
I have read this:
https://forums.hibernate.org/viewtopic.php?f=1&t=996355&start=0
but could not solve the problem based on the suggestions provided.
How can I solve this issue? Any ideas and thoughts are appreciated.
UPDATE:
What I just tested: in the function getEquipmentsByNumber(), after reading the variables from the list of properties, I do this: getHibernateTemplate().flush(); and now the exception is on this line rather than on the call to fetch material (that is, getMaterialById(id)).
UPDATE:
Before explicitly calling flush, I am removing the object from the session cache so that no stale object remains in the cache.
getHibernateTemplate().evict(equipment);
getHibernateTemplate().flush();
OK, so now the problem has moved to the next fetch from the DB after I did this. I suppose I have to mark the methods as synchronized and evict the objects as soon as I am finished reading their contents! That doesn't sound very good.
UPDATE:
Made the handleRequestInternal method "synchronized". The error disappeared. Of course, it's not the best solution, but what can you do!
I tried closing the current session and opening a new one in handleRequestInternal, but that caused other parts of the app not to work properly. I also tried using ThreadLocal, but that did not work either.
You're mis-using Hibernate in some way that causes it to think you're updating or deleting objects from the database.
That's why calling flush() is throwing an exception.
One possibility: you're incorrectly "sharing" Session or Entities via member field(s) of your servlet or controller. This is the main reason 'synchronized' would change your error symptoms. Short solution: don't ever do this. Sessions and Entities shouldn't & don't work this way -- each Request should get processed independently.
Another possibility: unsaved-value defaults to 0 for "int" PK fields. You may be able to type these as "Integer" instead, if you really want to use 0 as a valid PK value.
Third suggestion: use Hibernate Session explicitly, learn to write simple correct code that works, then load the Java source for Hibernate/ Spring libraries so you can read & understand what these libraries are actually doing for you.
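A sketch of the second suggestion, under the assumption that you map with annotations (the Equipment name is borrowed from the question; the fields are illustrative). With a wrapper type, a null id marks an unsaved instance, so 0 stays available as a real key; with hbm.xml the equivalent is an appropriate unsaved-value setting:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Equipment {

    @Id
    @GeneratedValue
    private Integer id; // Integer instead of int: a null id clearly means "not yet saved"

    private String number;

    public Integer getId() { return id; }
    public String getNumber() { return number; }
    public void setNumber(String number) { this.number = number; }
}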
I also have been struggling with this exception, but when it continued to recur even when I put a lock on the object (and in a test environment, where I knew I was the only process touching the object), I decided to give the parenthetical in the stack trace its due consideration.
org.hibernate.StaleObjectStateException: Row was updated or deleted by
another transaction (or unsaved-value mapping was incorrect):
[com.rc.model.mexp.MerchantAccount#59132]
In our case it turned out that the mapping was wrong; we had type="text" in the mapping for one field that was a mediumtext type in the database, and it seems that Hibernate really hates that, at least under certain circumstances. We removed the type specification altogether from the mapping for this field, and the problem was resolved.
Now the weird thing is that in our production environment, with the supposedly problematic mapping in place, we do NOT get this exception. Does anybody have any idea why this might be? We are using the same version of MySQL - "5.0.22-log" (I don't know what the "-log" means) - in dev and production envs.
Here are 3 possibilities (as I do not know exactly which kind of Hibernate session handling you are using). Add them one after another and test:
Use a bi-directional mapping with inverse=true between the parent object and child object, so a change in the parent or child will propagate properly to the other end of the relation.
Add support for optimistic locking using a timestamp or version column (see the sketch below).
Use a join query to fetch the whole object graph [parent + children] together, to avoid the second call altogether (also sketched below).
Lastly, if and only if nothing works:
Load the parent again by Id (you have that already) and populate modified data then update.
Life will be good! :)
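A sketch of suggestions 2 and 3, assuming an Equipment entity with a properties collection and a number field as described in the question; the DAO class and query are illustrative, not the poster's actual code:

import java.util.List;

import org.hibernate.Session;

public class EquipmentDao {

    // Suggestion 3: fetch the whole graph (parent + children) in one query,
    // so there is no second round trip at all
    @SuppressWarnings("unchecked")
    public List<Equipment> findByNumberWithProperties(Session session, String number) {
        return session.createQuery(
                "select distinct e from Equipment e left join fetch e.properties where e.number = :number")
            .setParameter("number", number)
            .list();
    }
}

// Suggestion 2 amounts to a single extra field on the entity, e.g.:
//     @javax.persistence.Version
//     private int version;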
This problem was something that I had experienced and was quite frustrating, although there has to be something a little odd going on in your DAO/Hibernate calls, because if you're doing a lookup by ID there is no reason to get a stale state, since that is just a simple lookup for an object.
First, make sure all your methods are annotated with @Transactional (check the exact attributes you need for your setup).
However, this exception is usually thrown when you try to make changes to an object that has been detached from the session it was retrieved from. The solution is often not simple and would require more code to be posted so we can see exactly what is going on; my general suggestion would be to create a @Service that performs these kinds of operations within a single transaction.
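As a sketch of that suggestion (the service, repository and method names are placeholders, not from the original post), assuming Spring's annotation-driven transactions are enabled:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EquipmentService {

    private final EquipmentRepository equipmentRepository; // placeholder DAO

    @Autowired
    public EquipmentService(EquipmentRepository equipmentRepository) {
        this.equipmentRepository = equipmentRepository;
    }

    @Transactional
    public Equipment loadEquipment(String number) {
        // the lookup and any lazy access happen inside one transaction/session,
        // so the entity is never detached in between
        return equipmentRepository.findByNumber(number);
    }
}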
An object I mapped with Hibernate has strange behavior. In order to know why the object behaves strangely, I need to know what makes that object dirty. Can somebody help and give me a hint?
The object is a Java class in a Java/Spring context, so I would prefer an answer targeting the Java platform.
Edit: I would like to gain access to the Hibernate dirty state and how it changes on an object attached to a session. I don't know how a piece of code would help.
As for the actual problem: inside a transaction managed by a Spring TransactionManager, I do some (read) queries on objects, and without an explicit save these objects are written back by the TransactionManager because Hibernate thinks that some of them (and not all) are dirty. Now I need to know why Hibernate thinks those objects are dirty.
I would use an interceptor. The onFlushDirty method gets the current and previous state, so you can compare them. Extend EmptyInterceptor (which implements the Interceptor interface), overriding onFlushDirty. Then add an instance of that class using configuration.setInterceptor (Spring may require you to do this differently). You can also add an interceptor to an individual session rather than at startup.
Here is the documentation on interceptors.
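A minimal sketch of such an interceptor, assuming Hibernate 3.x's EmptyInterceptor and SLF4J; the class name and log format are illustrative:

import java.io.Serializable;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DirtyLoggingInterceptor extends EmptyInterceptor {

    private static final Logger LOG = LoggerFactory.getLogger(DirtyLoggingInterceptor.class);

    @Override
    public boolean onFlushDirty(Object entity, Serializable id,
                                Object[] currentState, Object[] previousState,
                                String[] propertyNames, Type[] types) {
        for (int i = 0; i < propertyNames.length; i++) {
            Object current = currentState[i];
            Object previous = (previousState == null) ? null : previousState[i];
            boolean changed = (current == null) ? previous != null : !current.equals(previous);
            if (changed) {
                LOG.warn("Dirty property on " + entity.getClass().getSimpleName() + "#" + id
                        + ": '" + propertyNames[i] + "' changed from [" + previous + "] to [" + current + "]");
            }
        }
        return false; // we only log; the state is not modified
    }
}

Register it when building the SessionFactory, e.g. configuration.setInterceptor(new DirtyLoggingInterceptor());, or pass it to an individual session as described above.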
Create a test case or similar, so you can reproduce the problem with a single click.
Enable logging for org.hibernate and check the log output for the string "dirty" (you don't actually need all of org.hibernate, but I don't know the exact logger).
Find two spots in the program, one where the entity is not dirty and one where it is dirty. Find the middle of the code between the two points and put a logging statement there, logging the isDirty value (see the sketch after this list). Continue with the strategy until you have reduced the code to a single line.
Check out the Hibernate code. Find the code that does the dirty checking. Use a debugger to step through it.
Assuming that the state of the object cannot be accessed directly (e.g. no public or package protected fields) and is not fiddled with by reflection, you can put a breakpoint at the start of all of the object's methods and run through the scenario that makes the object dirty in the debugger.
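For the logging-between-two-points strategy, a small helper like this might do (a sketch that assumes you can reach the SessionFactory and use SLF4J; Session.isDirty() reports whether the session contains changes that would be synchronized with the database):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DirtyCheckLogger {

    private static final Logger LOG = LoggerFactory.getLogger(DirtyCheckLogger.class);

    // call this at the two spots you want to compare, then keep narrowing the gap
    public static void logDirtyState(SessionFactory sessionFactory, String where) {
        Session session = sessionFactory.getCurrentSession();
        LOG.debug("session dirty at '" + where + "': " + session.isDirty());
    }
}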