Theoretically, the session.get() method is supposed to hit the database every time, regardless of whether the entity is already cached. But whether I use session.get() or session.load(), neither hits the database a second time.
Session session = factory.openSession();
tx = session.beginTransaction();
Customer cust = (Customer)session.get(Customer.class,2);
System.out.println(cust.getCid()+","+cust.getFirstName()+","+cust.getLastName()+","+cust.getPhone());
Customer cust2 = (Customer)session.get(Customer.class,2);
System.out.println(cust2.getCid()+","+cust2.getFirstName()+","+cust2.getLastName()+","+cust2.getPhone());
tx.commit();
session.close();
and this is the output,
Hibernate: select customer0_.cid as cid1_1_0_, customer0_.firstName as firstNam2_1_0_, customer0_.lastName as lastName3_1_0_, customer0_.email as email4_1_0_, customer0_.phone as phone5_1_0_, customer0_.aid as aid6_1_0_ from mycustomers customer0_ where customer0_.cid=?
2,Sam,pp,9799999999
2,Sam,pp,9799999999
The select query is executed only once; the second time, the entity is retrieved from the cache. I get the same output with session.load() as well.
Am I missing something here? Please clarify.
Here's what's happening:
The first query on the console
session.load() always returns a proxy. For example, session.load(Customer.class, 2) returns a proxy object that holds only the identifier value and nothing else. You can imagine it to be something like this:
customer.id = 2;
customer.fname = null;
customer.lname = null;
customer.address = null;
//rest all properties are null
Hibernate hits the database as soon as you access the proxy's properties. In your case you immediately call cust.getCid(), so it immediately hits the database to fetch those properties. That is why the first query you see on your console appears in both cases (session.get() and session.load()).
Try doing this and see what your console looks like:
Session session = factory.openSession();
tx = session.beginTransaction();
Customer cust = (Customer)session.get(Customer.class,2);
//do not call any getter.
You'll see the difference on your console.
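The proxy behaviour described above can be modelled in plain Java (Hibernate actually generates bytecode proxies; the class, names, and values below are invented for illustration): the object starts out knowing only its identifier, and the first access to any other property triggers the database hit.

```java
// Plain-Java sketch of a lazy-loading proxy (Hibernate uses generated
// bytecode proxies; this only models the observable behaviour).
class ProxyDemo {
    static int dbHits = 0;

    static class Customer {
        final int cid;
        String firstName;          // null until the proxy is initialized
        boolean loaded = false;

        Customer(int cid) { this.cid = cid; }

        // Accessing a non-id property initializes the proxy.
        String getFirstName() {
            if (!loaded) {
                dbHits++;          // this is the SELECT you see on the console
                firstName = "Sam";
                loaded = true;
            }
            return firstName;
        }

        // The identifier is already known, so no query is needed for it.
        int getCid() { return cid; }
    }

    // Models session.load(): returns immediately, no SQL yet.
    static Customer load(int id) { return new Customer(id); }
}
```

Calling load() alone issues no query; the first getter on a real property does.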
Why the second query does not appear
Both get() calls happen inside the same Session, so the second call is served from the first-level cache (the Session's identity map). You are asking for the same entity you already loaded, so Hibernate returns the cached instance instead of fetching it from the database again; a second-level cache, if configured, plays the same role across sessions.
You'll find a detailed example of this scenario on this page: Hibernate Second Level Cache.
(See the last example; it's similar to what you're getting.)
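The caching behaviour itself is easy to simulate outside Hibernate. The sketch below (plain Java, not the Hibernate API; all names are invented) models a session-scoped identity map: the first get() loads from the "database" and caches the result, the second get() for the same id is served from the map, and evicting the entry, as session.evict(cust) would, forces a reload.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java model of a session-scoped entity cache; "database" loads
// are counted so the number of selects is visible.
class SessionCacheDemo {
    static int dbLoads = 0;

    static class Customer {
        final int cid;
        final String firstName;
        Customer(int cid, String firstName) { this.cid = cid; this.firstName = firstName; }
    }

    // Stand-in for the session's identity map.
    static final Map<Integer, Customer> sessionCache = new HashMap<>();

    // Models session.get(): consult the cache before touching the database.
    static Customer get(int id) {
        return sessionCache.computeIfAbsent(id, SessionCacheDemo::loadFromDb);
    }

    // Models the actual SELECT.
    static Customer loadFromDb(int id) {
        dbLoads++;
        return new Customer(id, "Sam");
    }
}
```

Two get(2) calls in a row produce exactly one "select"; removing the entry forces a reload, which is what evicting from the session does in real Hibernate.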
Related
I am using the following code for an update:
private void updateAvatarPath(Integer param1, String param2, String param3, boolean param4) {
    Transaction avatarUpdatePathTransaction = session.beginTransaction();
    String updateQuery = "query goes here with param";
    Query query = session.createSQLQuery(updateQuery);
    query.executeUpdate();
    avatarUpdatePathTransaction.commit();
    session.flush();
}
This function is called from a loop, so the updates are slow: each iteration opens its own transaction and hits the database. To improve performance I am planning to execute the updates in batches, but I have no idea how to do it.
session.doWork() is one solution I have found; I would like to know what other options are available.
You should move Transaction avatarUpdatePathTransaction = session.beginTransaction(); to before the start of your loop and avatarUpdatePathTransaction.commit(); to after the end of it.
The recommended pattern is one session per "unit of work"; in your case that seems to be modifying multiple entities in a single session/transaction.
The session.flush(); should not be necessary: committing the transaction already flushes the session.
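The restructuring suggested above can be sketched with counters in place of the real Session (the method names below are stand-ins, not Hibernate API): one transaction spans the whole loop, and the persistence context is flushed and cleared every BATCH_SIZE operations so it does not grow without bound. (With native executeUpdate() statements the flush/clear matters less, since they bypass the persistence context; the single transaction is the main win.)

```java
import java.util.List;

// Shape of the batched update loop; counters stand in for the real
// Session/Transaction so the pattern can be verified.
class BatchUpdateSketch {
    static int begins = 0, commits = 0, flushes = 0, statements = 0;
    static final int BATCH_SIZE = 50;

    static void beginTransaction()        { begins++; }
    static void commitTransaction()       { commits++; }
    static void flushAndClear()           { flushes++; }    // session.flush(); session.clear();
    static void executeUpdate(String sql) { statements++; } // session.createSQLQuery(sql).executeUpdate();

    // One transaction around the whole loop; flush/clear every BATCH_SIZE
    // operations so the persistence context stays small.
    static void updateAll(List<String> updates) {
        beginTransaction();
        int i = 0;
        for (String sql : updates) {
            executeUpdate(sql);
            if (++i % BATCH_SIZE == 0) {
                flushAndClear();
            }
        }
        commitTransaction(); // the commit flushes anything left over
    }
}
```

With real Hibernate you would additionally set hibernate.jdbc.batch_size so the JDBC driver groups the statements into true batches.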
I am using both the second-level cache and the query cache. Here is the code snippet:
//first block
session = factory.openSession();
company1=(Company)session.get(Company.class, 1);
session.close();
//second block
session = factory.openSession();
tx = session.beginTransaction();
Query updateQuery=session.createQuery("update Company set companyName = 'newCompany' where companyId=1");
updateQuery.setCacheable(true);
updateQuery.executeUpdate();
tx.commit();
session.close();
//Third block
session = factory.openSession();
company1=(Company)session.get(Company.class, 1); // line 1
session.close();
In the second block I do the update via a query. In the third block I fetch the company record through the second-level cache. I expected to get the same result in the third block as in the first, but at line 1 I got the updated record ("newCompany") produced by the query update in the second block.
So it looks like the query cache and the second-level cache are in sync with each other: an update done through a query is picked up by the second-level cache.
UPDATE:
So how do the query cache and the second-level cache stay in sync? Does the query cache first check the second-level cache to see whether there has been any update for the given query parameters?
The query cache stores IDs returned by previous executions of a cacheable select query.
Let's say you execute the following cacheable query:
select line from OrderLine line join line.order order
where line.status = ? and order.date = ?
If you execute it once, Hibernate will store the IDs of the lines returned by the query in its query cache. And it will store the lines themselves in the second-level cache.
If you execute the same query a second time, with the same parameters, Hibernate will extract the IDs from the query cache without executing the select query. Then it will get every line by ID (which should be fast, since the lines are in the second-level cache).
If you insert, update or delete a line or an order, Hibernate will detect it. Since this modification could affect the result of the cached query, the entries associated with this query in the query cache are evicted. So the next time you execute this query, it will run against the database again, and the results will be stored anew in the query cache.
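The interplay just described can be modelled in a few lines of plain Java (not the Hibernate API; all names are invented): the query cache maps a query-plus-parameters key to a list of ids, the second-level cache maps ids to entities, and a DML update invalidates the query cache for the affected table. (Real Hibernate evicts the affected second-level-cache regions; the model below simply writes through for brevity.)

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java model of how the query cache and the second-level cache cooperate.
class QueryCacheDemo {
    static int selectsAgainstDb = 0;

    // The "database": id -> company name.
    static final Map<Integer, String> db = new HashMap<>(Map.of(1, "oldCompany"));
    // Second-level cache: id -> entity.
    static final Map<Integer, String> secondLevelCache = new HashMap<>();
    // Query cache: query+parameters -> list of matching ids.
    static final Map<String, List<Integer>> queryCache = new HashMap<>();

    static List<String> cacheableQuery(String key) {
        List<Integer> ids = queryCache.get(key);
        if (ids == null) {                     // miss: run the real select
            selectsAgainstDb++;
            ids = new ArrayList<>(db.keySet());
            queryCache.put(key, ids);
            secondLevelCache.putAll(db);       // entities land in the 2nd-level cache
        }
        List<String> result = new ArrayList<>();
        for (int id : ids) {                   // hits resolve ids via the 2nd-level cache
            result.add(secondLevelCache.get(id));
        }
        return result;
    }

    // A DML update invalidates the query cache (real Hibernate evicts the
    // affected regions; this model writes through for brevity).
    static void update(int id, String newName) {
        db.put(id, newName);
        secondLevelCache.put(id, newName);
        queryCache.clear();
    }
}
```

Running the same query twice costs one select; an intervening update invalidates the query cache, so the next run hits the database and sees the new value, which matches the behaviour observed in the three blocks above.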
In my database I have three rows: two have defaultFlag set to 0 and one has it set to 1. During processing I change the default property of one object from 0 to 1, but I have not saved that object yet.
Before saving, I need to query the database to find whether any row has the default flag set; there should be only one.
So before doing the update I run a query to check whether a default is set, and I get two results. If I check directly in the database there is only one row with the default set, but the query returns two because this object's default property has changed from 0 to 1, even though the object has not yet been saved to the database.
I am really confused as to why the Hibernate query returns 2 when there is only one row in the database with the default set, the other being an unsaved object whose default property changed.
Any thoughts would be helpful. I can provide the query if needed.
Update
Following the suggestions, I added session.clear() before running the query:
session.clear();
String sql = "SELECT * FROM BANKACCOUNTS WHERE PARTYID = :partyId AND CURRENCYID = :currencySymbol AND ISDEFAULTBANKACCOUNT= :defaultbankAccount";
SQLQuery q = session.createSQLQuery(sql);
q.addEntity(BankAccount.class);
q.setParameter("partyId", partyId);
q.setParameter("currencySymbol", currencySymbol);
q.setParameter("defaultbankAccount", 1);
return q.uniqueResult();
and it returns one row as expected, but now I am getting:
nested exception is org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session
Either query which row has the "default flag" set before you start changing anything, or query for the list of rows with the default flag set and clear all of them except the one you are trying to set.
It's simpler to stop relying on the current brittle approach, which will break under concurrency or whenever the data is in an inconsistent state. Use a reliable approach instead, one that always moves the data to a valid state:
protected void makeAccountDefault(BankAccount acc) {
    // find & clear any existing "default" accounts, other than the specified one
    String sql = "SELECT * FROM BANKACCOUNTS WHERE PARTYID = :partyId AND CURRENCYID = :currencySymbol AND ISDEFAULTBANKACCOUNT = :defaultbankAccount";
    SQLQuery q = session.createSQLQuery(sql);
    q.addEntity(BankAccount.class);
    q.setParameter("partyId", partyId);
    q.setParameter("currencySymbol", currencySymbol);
    q.setParameter("defaultbankAccount", 1);

    List<BankAccount> existingDefaults = q.list();
    for (BankAccount existing : existingDefaults) {
        if (!existing.equals(acc)) {
            existing.setDefaultBankAccount(false);
        }
    }
    // set the specified account as the default
    acc.setDefaultBankAccount(true);
}
This is how you write proper code: keep it simple and reliable. Never make, or depend on, weak assumptions about the reliability of data or internal state; always read and process the "beforehand" state before you perform the operation. Implement the code cleanly and it will serve you well.
I think your second query isn't executed at all, because the entity is already in the first-level cache.
As your transaction is not yet committed, you don't see the changes in the underlying database.
(This is only a guess.)
That's only a guess because you're not giving many details, but I suppose you call myObject.setMyDefaultProperty(1) while your session is open.
In that case, be aware that you don't need to call session.update(myObject) for the change to be saved: in the nominal case, the database update is performed transparently by Hibernate's dirty checking.
So, in fact, I think your change is saved (but not committed, of course, and thus not visible when you check the database).
To verify this, enable the hibernate.show_sql option and watch for an UPDATE statement (I advise always enabling this option during development anyway).
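For reference, show_sql is typically switched on in hibernate.cfg.xml (these are standard Hibernate property names):

```xml
<!-- log every SQL statement Hibernate issues (development only) -->
<property name="hibernate.show_sql">true</property>
<!-- pretty-print the logged SQL -->
<property name="hibernate.format_sql">true</property>
```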
Due to legacy code issues I need to calculate a unique index manually and can't use auto_increment, when inserting a new row to the database.
The problem is that multiple inserts of multiple clients (different machines) can occur simultaneously. Therefore I need to lock the row with the highest id from being read by other transactions while the current transaction is active. Alternatively I could lock the whole table from any reads. Time is not an issue in this case because writes/reads are very rare (<1 op per second)
I tried setting the isolation level to 8 (SERIALIZABLE), but then MySQL throws a deadlock exception. Interestingly, the SELECT to determine the next ID is still executed, which contradicts my understanding of SERIALIZABLE.
Setting the lock mode of the select to PESSIMISTIC_READ doesn't seem to help either.
public void insert(T entity) {
    EntityManager em = factory.createEntityManager();
    try {
        EntityTransaction transaction = em.getTransaction();
        try {
            transaction.begin();
            int id = 0;
            TypedQuery<MasterDataComplete> query = em.createQuery(
                    "SELECT m FROM MasterDataComplete m ORDER BY m.id DESC", MasterDataComplete.class);
            query.setMaxResults(1);
            query.setLockMode(LockModeType.PESSIMISTIC_READ);
            List<MasterDataComplete> results = query.getResultList();
            if (!results.isEmpty()) {
                MasterDataComplete singleResult = results.get(0);
                id = singleResult.getId() + 1;
            }
            entity.setId(id);
            em.persist(entity);
            transaction.commit();
        } finally {
            if (transaction.isActive()) {
                transaction.rollback();
            }
        }
    } finally {
        em.close();
    }
}
Some words about the application: it is a standalone Java application that runs on multiple clients connecting to the same DB server, and it should work with several database products (Sybase Anywhere, Oracle, MySQL, ...).
Currently the only idea I have left is to just do the insert, catch the exception that occurs when the ID is already in use, and try again. This works because I can assume the column is a primary key/unique.
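That insert-and-retry idea can be sketched in plain Java, with a Set standing in for the unique/primary-key column (all names here are invented for illustration):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of "insert and retry on duplicate key": the Set stands in for
// the table's unique/primary-key column.
class RetryInsertDemo {
    static final Set<Integer> usedIds = new HashSet<>();

    // Models "SELECT MAX(id) FROM ..." + 1.
    static int computeNextId() {
        return usedIds.stream().max(Integer::compare).orElse(-1) + 1;
    }

    // Models the insert; add() returns false on a "duplicate key",
    // like the constraint violation the real insert would raise.
    static boolean tryInsert(int id) {
        return usedIds.add(id);
    }

    // Read the current maximum, try to insert, and start over on collision.
    static int insertWithRetry() {
        while (true) {
            int id = computeNextId();
            if (tryInsert(id)) {
                return id;
            }
        }
    }
}
```

In the real application the "collision" is the constraint-violation exception thrown on insert or commit; catching it and re-reading MAX(id) plays the role of the failed add() here.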
The problem is that with PESSIMISTIC_READ you are only blocking others' UPDATEs on the row with the highest ID. If you want to block others' SELECTs as well, you need PESSIMISTIC_WRITE.
I know it seems strange, since you're not going to UPDATE that row. But if you want others to block while you execute your SELECT, you have to lie and say "I read this row and will UPDATE it", so that they are not allowed to read it: the DB engine assumes you will modify it before the commit.
SERIALIZABLE itself, according to the MySQL documentation, converts all plain SELECT statements to SELECT ... LOCK IN SHARE MODE, so it does no more than what you are already doing explicitly.
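The shared-versus-exclusive distinction behind the two lock modes is the same one java.util.concurrent makes, which gives a runnable analogy (this is a JVM-level illustration, not database code): a shared lock, like PESSIMISTIC_READ / LOCK IN SHARE MODE, admits any number of readers at once, while an exclusive lock, like PESSIMISTIC_WRITE / SELECT ... FOR UPDATE, blocks them.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Shared vs. exclusive locking, mirroring LOCK IN SHARE MODE vs. FOR UPDATE.
class LockModeAnalogy {
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Two "readers" can hold the shared lock at the same time.
    static boolean twoSharedReadersAllowed() {
        lock.readLock().lock();
        final boolean[] secondReaderGotIn = new boolean[1];
        Thread reader = new Thread(() -> {
            secondReaderGotIn[0] = lock.readLock().tryLock();
            if (secondReaderGotIn[0]) lock.readLock().unlock();
        });
        reader.start();
        join(reader, 0);
        lock.readLock().unlock();
        return secondReaderGotIn[0];
    }

    // Once a "writer" holds the exclusive lock, a reader blocks.
    static boolean readerBlockedByWriter() {
        lock.writeLock().lock();
        Thread reader = new Thread(() -> {
            lock.readLock().lock();
            lock.readLock().unlock();
        });
        reader.start();
        join(reader, 200);                    // give the reader time to block
        boolean blocked = reader.isAlive();   // still alive = still waiting
        lock.writeLock().unlock();
        join(reader, 0);                      // now the reader gets through
        return blocked;
    }

    private static void join(Thread t, long millis) {
        try { t.join(millis); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }
}
```

This is exactly why PESSIMISTIC_READ did not stop the other clients' SELECTs: a shared lock never excludes other shared locks.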
I'm trying to insert a new object into my database. I followed a step-by-step tutorial, but it doesn't seem to work for me. The tutorial contained the following line:
Transaction tx = dao.GetSession().beginTransaction();
GetSession() doesn't show up in code completion; I get the error "GetSession() is not visible from DaoHibernateSupport". So I replaced the line with the following:
Transaction tx = dao.getSessionFactory().getCurrentSession().beginTransaction();
but then I got a NullPointerException on the current session. I read up online and added the current_session_context_class property, set to "thread".
Everything seems to work now; I don't get any exception, but still no rows are inserted into my MySQL database. The table is InnoDB.
Here is my final code:
Banner banner = new Banner();
banner.setUrl(url);
banner.setCategorie(categorie);
banner.setCuvinteCheie(cuvinte_cheie);
banner.setMaxCpc(cpc);
banner.setPath(cale);
banner.setPaththumb(caleThumb);
banner.setAdvertiserId(Integer.parseInt(session.getAttribute("UserID").toString()));
BannerDAO dao = new BannerDAO();
SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
dao.setSessionFactory(sessionFactory);
Transaction tx = dao.getSessionFactory().getCurrentSession().beginTransaction();
dao.save(banner);
tx.commit();
dao.getSessionFactory().getCurrentSession().close();
So no exceptions are raised here, but when I look in the database there are no rows in the table.
Can you please help me? Thank you!
You may try
Transaction tx = dao.getSessionFactory().openSession().beginTransaction();
instead of
Transaction tx = dao.getSessionFactory().getCurrentSession().beginTransaction();
Note that a Session obtained from openSession() is not bound to the current thread, so you must close it yourself when you are done with it.
I figured it out: when I used reverse engineering in MyEclipse, I had created a Spring DAO instead of a basic DAO. With that fixed, the getSession() method works fine.