I understand that @CachePut calls the method no matter what and updates the result in the cache. Then why do we need this annotation in the first place, if the method is called every time?
Consider an API that simply returns certain data from a database using the @Cacheable annotation. If another API updates the same data in the primary data source, it also needs to update it in the cache. For that second API, you use the @CachePut annotation.
Why do we need this annotation in the first place?
That's the declarative way of telling the Spring container to cache the method's result. @CachePut overwrites any existing entry with the same key in the cache.
Real life scenario
A product-refresh operation, where you want a specific product's details to be recalculated whenever its price changes, and the result stored in the cache for future reads. @CachePut overwrites any existing entry with the same key in the cache.
@CachePut calls the method every time and writes the result back to the cache, so stale entries get replaced.
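A minimal sketch of the two annotations working together (assuming caching is enabled with @EnableCaching; the service, repository, cache, and field names here are made up):

@Service
public class ProductService {

    private final ProductRepository productRepository; // hypothetical repository

    public ProductService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    // Read path: the body is skipped whenever the key is already in the cache.
    @Cacheable(cacheNames = "products", key = "#productId")
    public Product getProduct(Long productId) {
        return productRepository.findOne(productId);
    }

    // Refresh path: the body always runs, and the returned value replaces the
    // cached entry for the same key, so later getProduct() calls see fresh data.
    @CachePut(cacheNames = "products", key = "#productId")
    public Product refreshProduct(Long productId, BigDecimal newPrice) {
        Product product = productRepository.findOne(productId);
        product.setPrice(newPrice);
        return productRepository.save(product);
    }
}

With something like this, the refresh operation keeps the primary store and the cache in sync, and getProduct() never serves a stale product.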
Related
I have a Spring Boot web service with some method that does a get-from-DB followed by insert-if-not-present (with some logic in between that I'd rather keep in Java for now).
The method is annotated with @Transactional, but of course with the default isolation level (read committed) it's possible to end up with the same row inserted twice if two calls run in parallel.
If I change the isolation level to serializable, I take a performance hit.
Would it be better to use plain Java synchronized, and to synchronize on a global object that uniquely represents the item being queried/added? Basically I would intern the string param that gets passed to the method, which represents some item name, and synchronize against that.
Obviously I wouldn't be able to scale horizontally, but let's assume this instance of the web service is the only client of the DB.
Add a unique constraint for the inserted data. That way you will not be able to insert the same row twice.
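In JPA terms that could look something like the following sketch (the entity and column names are made up):

@Entity
@Table(name = "items",
       uniqueConstraints = @UniqueConstraint(columnNames = "name"))
public class Item {

    @Id
    @GeneratedValue
    private Long id;

    // the database-level unique constraint rejects a second insert of the
    // same name, so a concurrent duplicate fails with a constraint violation
    // instead of silently producing a second row
    @Column(nullable = false)
    private String name;

    // constructor, getters and setters omitted
}

The caller then catches the constraint-violation exception (in Spring, typically DataIntegrityViolationException) and simply re-reads the row that the other transaction inserted, ideally in a fresh transaction.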
If you can be sure that the web service is the only client and that only one instance is running, then synchronized will do the job perfectly well, but it's not good practice. You are better off relying on strict DB isolation, or even implementing coherence checks and constraints at the DB level.
I would like to update a field by a unique ID in a MySQL database.
First method: fetch the object (select * from ...) from the database by its unique ID (via uniqueResult()), set the desired value on the object, and then call saveOrUpdate().
Second method: write an update query in a DAO implementation (update tab set tab.name = 123 where ..., then executeUpdate()), which gives the same result.
Which is a good way to perform update operation and why?
Well, if you are using Hibernate, why would you do it in native SQL when you can just use the Hibernate Session's .get(), .load(), .merge(), and .update() methods?
Here's an example from Hibernate documentation to modify a persistent object:
DomesticCat cat = (DomesticCat) sess.load( Cat.class, new Long(69) );
cat.setName("PK");
sess.flush(); // changes to cat are automatically detected and persisted
For further reading you can check the Modifying persistent objects and Modifying detached objects sections in the Hibernate documentation.
And according to the documentation:
The most straightforward way to update the state of an object is to load() it and then manipulate it directly while the Session is open.
I hope this answers both parts of your question: which is a good way to perform the update operation, and why.
There is no single "good way" to perform the update you want; it entirely depends on your needs.
Both methods work, but the first lets you update more than one field without having to modify your SQL query. It is then the developer's responsibility to take care of the object's state before calling saveOrUpdate().
The second method ensures that no other field is updated in the database.
So answer based on your future needs. Do you only need this one field updated? Can that change in the future? What will be the impact of each method on the application?
Then you will have your answer.
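For reference, the second method from the question could look roughly like this inside a Hibernate DAO (a sketch; the entity, variable, and parameter names are made up):

// a bulk HQL update touches exactly the listed column(s) and nothing else
int updatedRows = session.createQuery(
        "update Tab t set t.name = :name where t.id = :id")
    .setParameter("name", "123")
    .setParameter("id", tabId)
    .executeUpdate();

Note that such a bulk update bypasses the Session's first-level cache and automatic dirty checking, which is exactly why it cannot accidentally touch any other field.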
I have the following line in my code:
String productName = Utils.getProductName(productId, productRepository, language);
This util method retrieves the product using findOne(productId), but has some additional logic as well. It is used in multiple places in my code.
In one place, a few lines lower, I need the Product object itself, so I do the following:
Product product = productRepository.findOne(productId);
Here I retrieve the Product again, so it looks like the same database call happens again. But I believe that JPA (Hibernate) caches the object, so I don't have to worry about it and performance won't be affected. Am I right?
Of course, I try to avoid such duplication. But in this case refactoring the getProductName method would affect the other places where I use it, so I'd like to leave it as it is. However, if the performance cost were noticeable, I'd rather tweak the code.
Yes, there is a first-level cache on the entity manager: within one persistence context (typically one transaction), repeated lookups of the same entity are served from that cache, which reduces the number of queries fired at the database.
http://www.developer.com/java/using-second-level-caching-in-a-jpa-application.html
Just be sure not to create "inconsistent" state without informing the entity manager or flushing the changes to the DB.
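As a rough illustration of what that means for the code in the question (assuming both calls run inside the same transaction, i.e. the same persistence context; the method name here is made up):

@Transactional
public void showProduct(Long productId, String language) {
    // 1st lookup: getProductName() calls findOne() internally, which issues
    // one SELECT and loads the Product into the persistence context
    String productName = Utils.getProductName(productId, productRepository, language);

    // 2nd lookup with the same id inside the same persistence context:
    // served from the first-level cache, no additional SELECT is issued
    Product product = productRepository.findOne(productId);

    // ... use productName and product ...
}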
In our application we have configured Hibernate to work with EHCache. The aim is that once an object has been loaded into the cache, no DB call should ever be made unless the object changes.
To test this, I am fetching the object and printing its identity hash code (using System.identityHashCode(this)), and I notice that the identity hash code changes on every call, which makes us think the object is being loaded every time.
But in the logs, we do not see Hibernate making any SQL calls to the database.
Can someone please advise whether my test is correct?
There are many things that might explain the difference. Not hitting the database might also mean that you are getting objects from the session cache (a.k.a. the first-level cache). Make sure you create the object in one session and retrieve it twice in another session (the first retrieval might hit the database; the second shouldn't).
The ideal would be to ask Hibernate whether the object was retrieved from the cache. The easiest way is to enable statistics collection and then print the hits/misses:
Statistics stats = sessionFactory.getStatistics();
stats.setStatisticsEnabled(true);
... // do your work
// for entity caching, the second-level cache counters are the relevant ones
long hitCount = stats.getSecondLevelCacheHitCount();
long missCount = stats.getSecondLevelCacheMissCount();
Since you don't see any calls to the database, it's pretty safe to say that the cache is working.
The reason you see different identity hash codes is that EHCache doesn't store the objects as-is. Rather, it stores a serialized/dehydrated representation, which is turned back into a new object instance on a cache hit, hence the different identityHashCode.
It's still faster than going to the database.
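Putting both suggestions together, a test along these lines should make the behaviour visible (a sketch; the entity name and id are made up, and the entity is assumed to be mapped as cacheable, e.g. with @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)):

Statistics stats = sessionFactory.getStatistics();
stats.setStatisticsEnabled(true);

// session 1: loads the entity from the DB and puts it into the second-level cache
Session s1 = sessionFactory.openSession();
Product p1 = (Product) s1.get(Product.class, 1L);
s1.close();

// session 2: a fresh session, so the first-level cache cannot answer;
// the entity should now come from EHCache, not from the database
Session s2 = sessionFactory.openSession();
Product p2 = (Product) s2.get(Product.class, 1L);
s2.close();

System.out.println("2nd-level cache hits:   " + stats.getSecondLevelCacheHitCount());
System.out.println("2nd-level cache misses: " + stats.getSecondLevelCacheMissCount());

Different identityHashCode values for p1 and p2 are expected either way; the hit/miss counters are what tell you whether the second load came from the cache.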
I use the Google App Engine Datastore, and I need to update one of my entities that has already been saved in it (I use the Objectify framework to perform operations on the Datastore).
However, I only need to update one field of the entity.
For now I load the entity (calling load()), use a getter/setter to modify the field, and then call save() to persist it. I am sure there is a better way. What is the preferred method?
That's the normal way.
I guess you're looking for an SQL-like UPDATE? The Google Datastore doesn't have such a thing. Update and insert are the same operation: you put an entity for a key.
See docs: https://cloud.google.com/appengine/docs/java/datastore/entities#Java_Updating_an_entity
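So the load-modify-save cycle you describe is already the idiomatic pattern. With Objectify it is only a few lines (a sketch; the entity and field names are made up, and ofy() is assumed to be statically imported from ObjectifyService):

// load the entity by key, change the one field, and write the whole entity back;
// the Datastore has no partial update, so a put always rewrites the full entity
Product product = ofy().load().type(Product.class).id(productId).now();
product.setPrice(newPrice);
ofy().save().entity(product).now();

If concurrent updates to the same entity are possible, wrap the read-modify-write in an Objectify transaction so you don't overwrite somebody else's change.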