Hibernate Monitoring Solution - java

I would like to monitor Hibernate activity.
I found the zentracker monitoring solution on the internet, which can monitor a lot of Hibernate activity.
But is it compatible with the latest version of Hibernate, 3.5.x?
If it isn't, do you have a solution to monitor query execution time, session factories opened, persistence objects created, ... ?
Thank you in advance for your help.
Best regards,
Florent
P.S.: I'm French, sorry for my English.

I found the zentracker monitoring solution on the internet, which can monitor a lot of Hibernate activity. But is it compatible with the latest version of Hibernate, 3.5.x?
Why don't you get the sources and recompile the project against a more recent version of Hibernate Core? Well, I did because I was curious, and it doesn't compile; there are a few API changes that require some modifications, but nothing overcomplicated. And since the project doesn't seem to be very active, your best option would be to make those changes yourself.
If it isn't, do you have a solution to monitor query execution time, session factories opened, persistence objects created, ... ?
Well, as I said, you can make it compatible...
I personally gather Statistics via JMX and use a custom tool. From the documentation:
20.6.2. Metrics
Hibernate provides a number of metrics, from basic information to more specialized information that is only relevant in certain scenarios. All available counters are described in the Statistics interface API, in three categories:
Metrics related to the general Session usage, such as number of open sessions, retrieved JDBC connections, etc.
Metrics related to the entities, collections, queries, and caches as a whole (aka global metrics).
Detailed metrics related to a particular entity, collection, query or cache region.
For example, you can check the cache hit, miss, and put ratio of entities, collections and queries, and the average time a query needs. Be aware that the number of milliseconds is subject to approximation in Java. Hibernate is tied to the JVM precision and on some platforms this might only be accurate to 10 seconds.
Have a look at Performance Monitoring using Hibernate for more inspiration.
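Those counters can also be read programmatically. A minimal sketch against the Hibernate 3.5 Statistics API, assuming hibernate.generate_statistics is set to true and a SessionFactory built elsewhere (the class name StatsReporter is my own):

```java
import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

class StatsReporter {

    // Dump a few of the counters described in section 20.6.2.
    static void report(SessionFactory sessionFactory) {
        Statistics stats = sessionFactory.getStatistics();
        System.out.println("Open sessions:      " + stats.getSessionOpenCount());
        System.out.println("Queries executed:   " + stats.getQueryExecutionCount());
        System.out.println("Slowest query (ms): " + stats.getQueryExecutionMaxTime());
        System.out.println("Entities loaded:    " + stats.getEntityLoadCount());
        System.out.println("Cache hit ratio:    " + hitRatio(
                stats.getSecondLevelCacheHitCount(),
                stats.getSecondLevelCacheMissCount()));
    }

    // Hit ratio as hits / (hits + misses); 0 when there has been no traffic yet.
    static double hitRatio(long hits, long misses) {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }
}
```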
See also
Hibernate Profiler: a commercial tool
Related questions
Tool for monitoring Hibernate cache usage
References
Hibernate Core Reference Guide
20.6. Monitoring performance


Hibernate + MySQL Best practices for reporting data

I am creating a webapp in Spring Boot (Spring + Hibernate + MySQL).
I have already created all the CRUD operations for the data of my app, and now I need to process the data and create reports.
Given the complexity of these reports, I will create some summary or pre-processed tables. That way, I can trigger report creation once and then retrieve the reports efficiently.
My question is whether I should build all the reports in Java or in stored procedures in MySQL.
Pros of doing it in Java:
More logging
More control over the structures (entities, maps, lists, etc.)
Catching exceptions
Portability if I change my DB engine (it probably won't happen, but you never know)
Cons of doing it in Java:
Maybe memory usage?
Any thoughts on this?
Thanks!
Java, though both are possible. It depends on what is most important, what skills are available for maintenance, and the cost of maintaining it. Stored procedures are usually very fast, but availability and performance also depend on which exact database you use. You will need special skills, and then everything works only on that specific database.
Hibernate comes with a dialect written for every database to get the best performance out of the persistence layer. It's not as fast as a stored procedure, but it comes pretty close. With Spring Data on top of that, much of the difficulty is gone. Maintenance will not cost that much, and people who know Spring Data are easier to find than specialists in any particular database vendor.
You can still create various "difficult" queries easily with HQL, so no blocker there. But Hibernate comes with more possibilities: you can have your caching done by Ehcache, and with Hibernate Envers you will have your auditing done in no time. That's the nice thing about this framework: it's widely used, and many free-to-use Maven dependencies are there for the taking. And if in the future you want to change your database, you can do it by changing about three parameters in your application.properties file when using Spring Data.
You can play with some annotations and see what performs better. For example, you have the @Inheritance annotation, with which you can have some classes end up in the same table or split them across more tables. You also have @MappedSuperclass, with which you can have one JpaObject with the id that all your entities extend. If you want some more tricks for JPA, maybe check this post with my answer on how to use a superclass and a general repository.
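A minimal sketch of that @MappedSuperclass idea, assuming JPA annotations on the classpath (the subclass and its field are invented for illustration):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;

// Common base class: every entity extending it inherits the id mapping.
@MappedSuperclass
abstract class JpaObject {

    @Id
    @GeneratedValue
    private Long id;

    public Long getId() {
        return id;
    }
}

// Gets its own table, with the id column inherited from JpaObject.
@Entity
class SummaryReport extends JpaObject {
    private String title;
}
```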
Given the complexity of these reports, I will create some summary or pre-processed tables. That way, I can trigger report creation once and then retrieve the reports efficiently.
My first thought is: is this required? It seems like adding complexity to the application that perhaps isn't needed. Premature optimisation and all that. Try writing the reports in SQL and running an execution plan. If it's good enough, you have less code to maintain and no added batch jobs to administer. Consider load testing, e.g. with JMeter or Gatling, to see how it holds up under stress.
Consider using QueryDSL or jOOQ for reporting. Both provide a database abstraction layer and a fluent API for querying databases, which delivers the benefits listed in the "Pros of doing it in Java" section of the question and may be better suited to the problem. The blog post jOOQ vs. Hibernate: When to Choose Which is well worth a read.
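As a sketch of what that fluent style looks like in jOOQ (the orders table, its status column, and the connection passed in are invented for illustration; a typed generated schema would normally replace the string-based field/table calls):

```java
import static org.jooq.impl.DSL.count;
import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.table;
import static org.jooq.impl.DSL.using;

import java.sql.Connection;
import org.jooq.DSLContext;
import org.jooq.Record2;
import org.jooq.Result;
import org.jooq.SQLDialect;

class OrderReport {

    // Count orders per status: SELECT status, COUNT(*) FROM orders GROUP BY status
    static Result<Record2<Object, Integer>> ordersPerStatus(Connection connection) {
        DSLContext ctx = using(connection, SQLDialect.MYSQL);
        return ctx.select(field("status"), count())
                  .from(table("orders"))
                  .groupBy(field("status"))
                  .fetch();
    }
}
```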

Hibernate: force 'with (NOLOCK)' hint on all select queries, without changing isolation level

Quick background story:
I work on a very old application that has recently been having issues with locks on the database. The app is written in Java and uses Hibernate. One of the issues we identified is transactions that are kept alive unnaturally long while also having their isolation level changed between READ_COMMITTED and READ_UNCOMMITTED frequently. While we acknowledge that the clean solution is refactoring the code so that transactions are smaller, this would be an enormous effort that we cannot afford entirely right now (the most used parts of the app are being migrated to a new system, but this procedure is relatively slow).
So, because we use READ_UNCOMMITTED for all our select operations and READ_COMMITTED for everything else, a DBA who has been helping us identified a possible solution: changing the isolation level to a global READ_COMMITTED and changing all select queries to include the 'with (NOLOCK)' hint. He says that functionally there should be no difference in the way data is retrieved (since we use dirty reads right now with no problem), while it gives us the advantage of not having to frequently change the isolation level within a transaction. I believe his idea also relates to recent reports we've been having about database locks being caused by isolation level changes.
So: can we (and if so, how?) tell Hibernate to add a 'with (NOLOCK)' hint to all queries automatically generated from mapped Java objects and HQL (and maybe even existing SQL being passed to Hibernate, though this seems like pushing it :) ) WITHOUT changing the isolation level?
Final side notes: we are using an older version of Hibernate, 3.5, and right now an upgrade is unlikely; some incredibly 'smart' people decided to taint it at some point, inserting some of their own code that the application uses. Upgrading has been tried and has failed multiple times.
Also: I have checked quite a few related threads, and the general idea seems to be: don't use NOLOCK, change the isolation level instead, which, as stated, we're not looking to do.
Edit 1: Since the app has been continuously developed over the past 12 years, there are loads of modules that haven't been glanced over even once by the current dev team, so the ideal solution would be something that doesn't require identifying every single bit of Java code that uses persisted objects.
Edit 2: A possible way to go about this, should Hibernate allow it, would be to add some form of interceptor that receives the formatted SQL query before it is passed to the DB driver. I would then take care of adding the hints myself, using some form of regex.
Thank you very much in advance.
You cannot use (NOLOCK) with HQL. You can, however, with native SQL, if you decide to change your queries. Something like:
getCurrentSession().createSQLQuery("select * from table with(NOLOCK)").list();
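For the generated SQL, the interceptor idea from Edit 2 is workable in Hibernate 3.x: Interceptor.onPrepareStatement(String) receives each SQL string before it reaches the driver and may return a rewritten version. Below is a sketch of the regex part only, which is pure string manipulation. It is deliberately naive: it handles a plain FROM clause with an optional alias, and would need hardening (joins, subqueries, quoted identifiers, comma-separated table lists) before real use.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// In the real interceptor this class would extend org.hibernate.EmptyInterceptor
// and call addNoLockHints(sql) from an onPrepareStatement(String) override.
class NoLockRewriter {

    // "from <table> [<alias>]" where the alias must not be a following keyword.
    private static final Pattern FROM_CLAUSE = Pattern.compile(
            "(?i)\\bfrom\\s+(\\w+)"
            + "(\\s+(?!where\\b|inner\\b|left\\b|right\\b|join\\b|group\\b|order\\b|on\\b|union\\b)\\w+)?");

    static String addNoLockHints(String sql) {
        Matcher m = FROM_CLAUSE.matcher(sql);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Append the hint after the table (and alias, if one is present).
            m.appendReplacement(out,
                    Matcher.quoteReplacement(m.group() + " with (NOLOCK)"));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

The keyword lookahead is what keeps the hint in the right place whether or not the query aliases the table: "from orders where ..." becomes "from orders with (NOLOCK) where ...", and "from orders o" becomes "from orders o with (NOLOCK)".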

Spring + Hibernate web application - When to use cache?

I'm building a Java web application which may generate a lot of traffic in the future.
All in all it uses some quite simple queries against the database, but some kind of cache may still be necessary to keep latency low and to prevent a high database access rate.
Should I bother with a cache from the start? Is it a necessity?
Is it hard to implement, or can I use some open-source solution on top of the existing queries? And how will such a cache know when the database state has changed?
It all depends on how much traffic you expect. Do you have some estimate of the maximum volume or the number of users?
Most of the time you don't need to worry about the cache from the beginning and can add a Hibernate second-level cache later on.
You can start development without a cache configured, then add one later by choosing a cache provider and plugging it in as the second-level cache provider. Ehcache is a frequent choice.
You can then annotate your entities with @Cache, with different strategies: for example, read-only, etc.
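For example, a sketch using Hibernate's annotations (the Country entity and its fields are invented for illustration, and the second-level cache itself must be enabled in the Hibernate configuration, e.g. hibernate.cache.use_second_level_cache=true together with an Ehcache region factory):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Rarely-changing reference data is a good candidate for a read-only region.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
class Country {

    @Id
    private Long id;

    private String name;
}
```

For data that is updated through the application, READ_WRITE or NONSTRICT_READ_WRITE would be used instead of READ_ONLY, at the cost of invalidation overhead.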

What are the best practices for migrating an Oracle 10g database to Microsoft SQL 2008 R2? Application is using Hibernate

Basically what the title says. Going forward, we need to start supporting both database platforms (and will start writing migrations accordingly), but we need to do the initial "port" first.
Our DBAs are confident they can convert the schema, tables, data types, etc., but our developers are less confident that the DAOs will "just work". Can someone point us towards some resources we can review? Ideally common pitfalls to avoid, specific tests to run, etc. We will of course run the full suite of database tests at the application layer, but we want to do as much preparation as possible before then.
Pay attention to and test performance under load. Oracle does some things fundamentally differently than other database vendors. Tom Kyte's excellent book Expert Oracle Database Architecture points out several differences. A couple of highlights:
Oracle never locks data just to read it. Many other databases do.
A writer of data in Oracle never blocks a reader. A reader of data never blocks a writer. Again, many other vendors do.
Not paying attention to things like this can cause big headaches after a conversion when locking issues surface. This is not to imply a superiority of one product over another, rather it just means that what works well with one vendor's product may fail miserably in another, and custom approaches depending on the database may be required.
Ditto (although on a quite simple schema, I have to say). It "just worked". Hibernate magic.
I had peace of mind because we had 100% test coverage for the DAO layer. So when the schema was recreated on MS SQL and some table and column names were updated in the mapping (I don't remember why, but the DBAs asked for it, maybe a naming convention), we just ran our tests and found no failures.
P.S. I recalled one interesting detail: the functional tests were all OK, but when PTE started on the MS SQL database, we found that concurrent access to one particular table was several times slower than on Oracle due to lock propagation. We had to redesign that functionality.
I think the first step would be to get an empty MS SQL schema, enable hbm2ddl, and let Hibernate create the tables there. Then show this to your DBAs and ask if it makes sense.
Populating data is less of a problem; I'd guess queries would be more slippery (especially if you use raw JDBC in some places). You might also want to check the query plans of commonly used queries and see if those make sense, too.
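That schema-export step is driven by configuration; a sketch of the relevant Hibernate properties (the dialect shown targets SQL Server, and the create value drops and recreates the schema, so point it only at the empty test schema):

```properties
hibernate.dialect=org.hibernate.dialect.SQLServerDialect
hibernate.hbm2ddl.auto=create
# Log the generated DDL/SQL so the DBAs can review it.
hibernate.show_sql=true
```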

What is the best way to serialize an EMF model instance?

I have an Eclipse RCP application with an instance of an EMF model populated in memory. What is the best way to store that model for external systems to access? Access may occur during and after run time.
Reads and writes of the model are pretty balanced and can occur several times a second.
I think a database populated using Hibernate + Teneo + EMF would work nicely, but I want to know what other options are out there.
I'm using CDO (Connected Data Objects) in conjunction with EMF to do something similar. If you use the examples in the Eclipse wiki, it doesn't take too long to get it running. A couple of caveats:
For data that changes often, you will probably want to use non-audit mode for your persistence. Otherwise, you'll save a new version of your EObject with every commit, retaining the old ones as well.
You can choose to commit every time your data changes, or to commit at less frequent intervals, depending on how often you need to publish your updates.
You also have fairly flexible locking options if you choose to use them.
My application uses Derby for persistence, though it will be migrated to SQL Server before long.
There's a 1 hour webinar on Eclipse Live (http://live.eclipse.org/node/635) that introduces CDO and gives some good examples of its usage.
I'd go with Teneo to do the heavy lifting unless performance is a real problem (which it won't be unless your models are vast). Even if it is slow, you can tune it using JPA annotations.
