I am using both the second-level cache and the query cache. Here is the code snippet:
//first block
session = factory.openSession();
company1=(Company)session.get(Company.class, 1);
session.close();
//second block
session = factory.openSession();
tx = session.beginTransaction();
Query updateQuery=session.createQuery("update Company set companyName = 'newCompany' where companyId=1");
updateQuery.setCacheable(true);
updateQuery.executeUpdate();
tx.commit();
session.close();
//Third block
session = factory.openSession();
company1=(Company)session.get(Company.class, 1); // line 1
session.close();
In the second block I updated the record with an HQL update query. In the third block I fetch the company record again, through the second-level cache. I was expecting to get the same result in the third block as in the first, but at line 1 I got the updated record ("newCompany") written by the query in the second block.
So it looks like the query cache and the second-level cache are in sync with each other, since an update done through a query is picked up by the second-level cache.
UPDATE:
So how do the query cache and the second-level cache stay in sync? I mean, does the query cache first check the second-level cache to see whether there has been any update for the given query parameters?
The query cache stores IDs returned by previous executions of a cacheable select query.
Let's say you execute the following cacheable query:
select line from OrderLine line join line.order order
where line.status = ? and order.date = ?
If you execute it once, Hibernate will store the IDs of the lines returned by the query in its query cache. And it will store the lines themselves in the second-level cache.
If you execute the same query a second time with the same parameters, Hibernate will extract the IDs from the query cache without executing the select. Then it will fetch each line by ID (which should be fast, since the lines are in the second-level cache).
If you insert, update or delete a line or an order, Hibernate will detect it. Since that modification could affect the result of the cached query, the entries associated with this query in the query cache are evicted. So the next time you execute the query, it runs against the database and the results are stored in the query cache again.
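The bookkeeping described above can be sketched as a toy model in plain Java: one map per cache, and a blanket eviction on write. Real Hibernate tracks update timestamps per "query space" (table) rather than clearing everything, so this is only the shape of the mechanism, not its implementation.

```java
import java.util.*;

// Toy model of the two caches: the query cache maps a query key to a
// list of IDs, the second-level cache maps an ID to entity data, and
// any write evicts the cached query results.
public class CacheSketch {
    static Map<String, List<Integer>> queryCache = new HashMap<>();
    static Map<Integer, String> secondLevelCache = new HashMap<>();

    static List<Integer> runCacheableQuery(String key, List<Integer> dbResult) {
        // On a hit, skip the select and resolve entities by ID.
        if (queryCache.containsKey(key)) return queryCache.get(key);
        queryCache.put(key, dbResult);                 // cache the IDs
        for (Integer id : dbResult) secondLevelCache.put(id, "row-" + id);
        return dbResult;
    }

    static void updateTable(int id, String newValue) {
        secondLevelCache.put(id, newValue);            // entity cache refreshed
        queryCache.clear();                            // cached results may be stale: evict
    }

    public static void main(String[] args) {
        runCacheableQuery("q1", Arrays.asList(1, 2));
        updateTable(1, "newCompany");
        // After the update, the query cache is empty and the next
        // execution of the query would go back to the database.
        System.out.println(queryCache.isEmpty());      // true
    }
}
```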
Related
I am using the below set of code for an update:
private void updateAvatarPath(Integer param1, String param2, String param3, boolean param4){
Transaction avatarUpdatePathTransaction = session.beginTransaction();
String updateQuery = "query goes here with param";
Query query = session.createSQLQuery(updateQuery);
query.executeUpdate();
avatarUpdatePathTransaction.commit();
session.flush();
}
This function is called from a loop, so the updates are slow: every iteration hits the database. To improve performance I plan to execute them as batches instead of hitting the DB every time, but I have no idea how to do it.
session.doWork() is one solution I found. I want to know what other options are available.
You should move Transaction avatarUpdatePathTransaction = session.beginTransaction(); to before the start of your loop, and avatarUpdatePathTransaction.commit(); to after the end of it.
The recommended pattern is one session per "unit of work"; in your case that seems to be modifying multiple entities in a single session/transaction.
The session.flush(); should not be necessary: committing the transaction already flushes the session (and in your code it runs after the commit, which is too late to have any effect anyway).
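A minimal sketch of the suggested refactoring, with a stand-in session (the counter and empty method bodies are placeholders, not Hibernate API) so the effect is visible without a database: one transaction wraps all updates instead of one commit per iteration.

```java
import java.util.*;

// Sketch of the refactoring: begin/commit moved outside the loop, so a
// list of n updates costs one commit instead of n.
public class BatchedUpdateSketch {
    static int commits = 0;

    static void beginTransaction() { /* open tx */ }
    static void executeUpdate(String sql) { /* buffered write */ }
    static void commit() { commits++; }

    // All paths updated in a single unit of work.
    static void updateAvatarPaths(List<String> paths) {
        beginTransaction();                 // moved BEFORE the loop
        for (String p : paths) {
            executeUpdate("update avatar set path = '" + p + "'");
        }
        commit();                           // moved AFTER the loop
    }

    public static void main(String[] args) {
        updateAvatarPaths(Arrays.asList("/a", "/b", "/c"));
        System.out.println(commits);        // 1, not 3
    }
}
```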
I am trying to persist many records in the database while reading a file with many lines.
I am using a forEach to iterate over the list of objects read from the file:
logs.stream().forEach(log -> save(log));
private LogData save(LogData log) {
return repository.persist(log);
}
But the inserts are slow.
Is there a way to speed up the inserts?
Your way takes a long time because you persist the elements one by one, so you make n trips to the database. I would use batch processing instead, with one transaction instead of n, so the persist method can be:
public void persist(List<Logs> logs) {
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
logs.forEach(log -> session.save(log)); // from the comment of @shmosel
// for very large lists, also flush and clear the session every few dozen
// saves (and set hibernate.jdbc.batch_size) to keep memory bounded
tx.commit();
session.close();
}
Use a batch insert. Google "Hibernate batch insert", or substitute the name of your ORM if it isn't Hibernate.
https://www.tutorialspoint.com/hibernate/hibernate_batch_processing.htm
Inserting at every line makes this program slow. Why not collect n lines and insert them together at once?
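The chunking idea above can be sketched like this. The insertBatch body is a placeholder for a real multi-row insert; roundTrips just counts how many statements would reach the database.

```java
import java.util.*;

// Sketch of "collect n lines, insert together": split the list into
// chunks of batchSize and issue one statement per chunk, so n rows cost
// ceil(n / batchSize) round trips instead of n.
public class ChunkedInsertSketch {
    static int roundTrips = 0;

    static void insertBatch(List<String> chunk) {
        roundTrips++;                       // one statement for the whole chunk
    }

    static void saveAll(List<String> logs, int batchSize) {
        for (int i = 0; i < logs.size(); i += batchSize) {
            insertBatch(logs.subList(i, Math.min(i + batchSize, logs.size())));
        }
    }

    public static void main(String[] args) {
        List<String> logs = new ArrayList<>();
        for (int i = 0; i < 95; i++) logs.add("log-" + i);
        saveAll(logs, 50);
        System.out.println(roundTrips);     // 2 instead of 95
    }
}
```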
Theoretically, the session.get() method is supposed to always hit the database, whether or not the entity is stored in the cache. But when I use session.get() or session.load(), neither hits the database a second time.
Session session = factory.openSession();
tx = session.beginTransaction();
Customer cust = (Customer)session.get(Customer.class,2);
System.out.println(cust.getCid()+","+cust.getFirstName()+","+cust.getLastName()+","+cust.getPhone());
Customer cust2 = (Customer)session.get(Customer.class,2);
System.out.println(cust2.getCid()+","+cust2.getFirstName()+","+cust2.getLastName()+","+cust2.getPhone());
tx.commit();
session.close();
and this is the output,
Hibernate: select customer0_.cid as cid1_1_0_, customer0_.firstName as firstNam2_1_0_, customer0_.lastName as lastName3_1_0_, customer0_.email as email4_1_0_, customer0_.phone as phone5_1_0_, customer0_.aid as aid6_1_0_ from mycustomers customer0_ where customer0_.cid=?
2,Sam,pp,9799999999
2,Sam,pp,9799999999
The select query is executed only once; the next time, the entity is retrieved from the cache. I get the same output if I use session.load().
Am I missing something here? Please clarify.
Here's what's happening:
Why the first query appears
session.load() always returns a proxy. For example, session.load(Customer.class, 2) returns a proxy object that holds just the identifier value and nothing else. You can imagine it to be somewhat like this:
customer.id = 2;
customer.fname = null;
customer.lname = null;
customer.address = null;
//rest of the properties are null
Hibernate hits the database as soon as you access one of the other properties. In your case the println immediately calls getters such as cust.getFirstName(), so the proxy is initialized right away. session.get(), by contrast, fetches the entity immediately. That is why the first query appears on your console in both cases (session.get() and session.load()).
Try doing this and see what your console looks like:
Session session = factory.openSession();
tx = session.beginTransaction();
Customer cust = (Customer)session.get(Customer.class,2);
//do not call any getter.
You'll see the difference on your console.
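The proxy behaviour can be modelled in plain Java as a hand-written lazy wrapper (not Hibernate's bytecode-generated proxy; dbHits stands in for the select being issued):

```java
// Toy model of session.load(): the object starts with only its
// identifier, and the "database" is hit the first time another
// property is read.
public class LazyProxySketch {
    static int dbHits = 0;

    static class CustomerProxy {
        final int id;               // only the identifier is populated
        String firstName;           // everything else starts out null
        CustomerProxy(int id) { this.id = id; }

        String getFirstName() {
            if (firstName == null) {        // first getter access...
                dbHits++;                   // ...triggers the select
                firstName = "Sam";
            }
            return firstName;
        }
    }

    public static void main(String[] args) {
        CustomerProxy cust = new CustomerProxy(2);  // load(): no query yet
        System.out.println(dbHits);                 // 0
        cust.getFirstName();                        // getter fires the query
        System.out.println(dbHits);                 // 1
    }
}
```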
Why the second query does not appear
Both calls happen in the same session, so the second session.get() is served from the first-level cache (the session's persistence context): Hibernate returns the entity instance it has already loaded instead of querying the database again. The first-level cache is always enabled and cannot be turned off; the second-level cache only comes into play when the same entity is requested from different sessions.
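The first-level cache can be modelled as a per-session map keyed by id (a toy stand-in for the persistence context, not Hibernate's implementation):

```java
import java.util.*;

// Minimal model of the first-level (session) cache: the first get() for
// an id hits the "database", the second returns the already-loaded
// instance from the session's map without another query.
public class SessionCacheSketch {
    static int dbHits = 0;
    Map<Integer, String> persistenceContext = new HashMap<>();

    String get(int id) {
        return persistenceContext.computeIfAbsent(id, k -> {
            dbHits++;                        // only a cache miss reaches the DB
            return "Sam";
        });
    }

    public static void main(String[] args) {
        SessionCacheSketch session = new SessionCacheSketch();
        session.get(2);                      // select runs
        session.get(2);                      // served from the session cache
        System.out.println(dbHits);          // 1
    }
}
```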
I am very confused with the output of the following code that tries to avoid Hibernate caching.
I open a fresh Hibernate session, run a query, and check the result when execution stops at the indicated breakpoint. Before continuing, I go to MySQL and delete or add a row. When I continue, the Hibernate query still shows the old data and the old row count, in spite of the evictAllRegions() call on the Hibernate cache, while the plain JDBC query shows the updated count (as expected).
Setting hibernate.cache.use_second_level_cache and hibernate.cache.use_query_cache to false didn't help. I guess it shouldn't matter anyway, since the cache is being cleared manually.
So, why is Hibernate not hitting the database?
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb...");
for (int i = 0; i < 15; i++) {
session = HibernateUtil.getSessionFactory().openSession();
// Old data keeps being returned
list = session.createCriteria(Language.class).list();
// JDBC fetches expected count
Statement statement = conn.createStatement();
ResultSet resultSet = statement.executeQuery("select * from language");
int x = 0;
while (resultSet.next()) x++; // count the rows
[Breakpoint here]
session.close();
HibernateUtil.getSessionFactory().getCache().evictAllRegions();
}
I believe this is a result of having the transaction isolation level set to REPEATABLE READ in MySQL.
When you issue the query from your code, MySQL creates a snapshot of the language table that it continues to present for the remainder of that transaction. So the data is effectively cached at MySQL rather than by Hibernate. (Your plain JDBC connection runs its own, separate transaction, which is why it sees the new count.)
http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html#isolevel_repeatable-read
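A toy model of that snapshot behaviour, with a map copy standing in for InnoDB's MVCC snapshot (nothing more than the visible semantics):

```java
import java.util.*;

// Under REPEATABLE READ the transaction keeps reading the snapshot taken
// at its first query, so rows inserted by other connections afterwards
// stay invisible until a new transaction starts.
public class RepeatableReadSketch {
    static Map<Integer, String> table = new HashMap<>();

    static class Transaction {
        Map<Integer, String> snapshot;
        int count() {
            if (snapshot == null) snapshot = new HashMap<>(table); // first read
            return snapshot.size();          // later reads reuse the snapshot
        }
    }

    public static void main(String[] args) {
        table.put(1, "en");
        Transaction tx = new Transaction();
        System.out.println(tx.count());      // 1
        table.put(2, "fr");                  // another connection inserts a row
        System.out.println(tx.count());      // still 1: old snapshot
        System.out.println(new Transaction().count()); // 2: fresh transaction
    }
}
```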
I noticed weird behavior in my application. It looks like committed data is not visible right after the commit. The algorithm looks like this:
connection1 - insert into table row with id = 5
connection1 - commit, close
connection2 - open
connection2 - select from table row with id = 5 (no results)
connection2 - insert into table row with id = 5 (PRIMARY KEY VIOLATION, result is in db)
If the select on connection2 returns no results, I do an insert; otherwise an update.
The server has many databases (~200). It looks like the commit completes but the changes show up in the DB later. I use Java and JDBC. Any ideas would be appreciated.
This behavior corresponds to the REPEATABLE READ isolation mode, see SET TRANSACTION:
REPEATABLE READ
All statements of the current transaction can only see rows committed before the first query or data-modification statement was executed in this transaction.
Try connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED) to see if it makes a difference.
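As an aside, a race-free alternative to select-then-insert is to attempt the insert and fall back to an update on a duplicate-key error (in MySQL, INSERT ... ON DUPLICATE KEY UPDATE does this in one statement). A toy sketch of the control flow, with a map standing in for the table and putIfAbsent for the keyed insert:

```java
import java.util.*;

// Attempt the insert first; a duplicate key (non-null putIfAbsent
// result) is handled as an update, so no stale select is trusted.
public class UpsertSketch {
    static Map<Integer, String> table = new HashMap<>();

    static String upsert(int id, String value) {
        if (table.putIfAbsent(id, value) != null) { // "duplicate key" path
            table.put(id, value);
            return "updated";
        }
        return "inserted";
    }

    public static void main(String[] args) {
        System.out.println(upsert(5, "a"));  // inserted
        System.out.println(upsert(5, "b"));  // updated
    }
}
```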