Here is my code:
Company cc = em.find(Company.class, clientUser.getCompany().getId());
System.out.println(cc.getCompany_code());
Map<String, Object> findProperties = new HashMap<String, Object>();
findProperties.put(QueryHints.CACHE_RETRIEVE_MODE, CacheRetrieveMode.BYPASS);
Company oo = em.find(Company.class, clientUser.getCompany().getId(), findProperties);
System.out.println(oo.getCompany_code());
Just like the example "Used as EntityManager properties" here.
But there is nothing different between the two outputs.
What are you expecting to be different and why?
Note that CACHE_RETRIEVE_MODE only affects the shared (2nd-level) cache, not the persistence context (1st-level/transactional cache); object identity must always be maintained in the persistence context for objects that have already been read.
If you have changed your database and expect the new data, then try the BYPASS with a new EntityManager, or try using refresh().
EclipseLink also provides the query hint "eclipselink.maintain-cache"="false" to bypass the persistence context.
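For example, a rough sketch reusing the identifiers from your code (not a verified recipe) could combine BYPASS with that hint, or just refresh the instance you already hold:
Map<String, Object> props = new HashMap<String, Object>();
props.put(QueryHints.CACHE_RETRIEVE_MODE, CacheRetrieveMode.BYPASS);
props.put("eclipselink.maintain-cache", "false"); // also skip the persistence context copy
Company fresh = em.find(Company.class, clientUser.getCompany().getId(), props);
System.out.println(fresh.getCompany_code());
// or simply re-read the state of the instance you already have from the database:
em.refresh(cc);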
What version of EclipseLink are you using? I believe there was a bug in BYPASS in the 2.0 release that was fixed in 2.1. Try the latest release.
Let's suppose we have this situation:
We have Spring Data configured in the standard way: there is a Repository object, an Entity object, and everything works well.
Now, for some complex reasons, I have to use EntityManager (or JdbcTemplate, or whatever sits at a lower level than Spring Data) directly to update the table associated with my Entity, with a native SQL query. So I'm not using the Entity object, but simply doing a manual database update on the table I use for the entity (more precisely, the table from which I get values; see the next rows).
The reason is that I had to bind my Spring Data Entity to a MySQL view that makes a UNION of multiple tables, not directly to the table I need to update.
What happens is:
In a functional test, I call the "manual" update method (on the table from which the MySQL view is created) as previously described (through the entity manager), and if I then make a simple Repository.findOne(objectId), I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
Is there a way to "synchronize" (out of the box) objects (or force some refresh) in Spring Data? Or am I asking for a miracle?
I'm not being ironic; maybe I'm just not expert enough and this is probably my own ignorance. If so, please explain why, and (if you want) share some advanced knowledge about this amazing framework.
If I make a simple Repository.findOne(objectId) I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
The first-level cache is active for the duration of a session. Any entity previously retrieved in the context of a session will be returned from the first-level cache unless there is a reason to go back to the database.
Is there a reason to go back to the database after your SQL update? Well, as the book Pro JPA 2 notes (p. 199) regarding bulk update statements (whether via JPQL or SQL):
The first issue for developers to consider when using these [bulk update] statements is that the persistence context is not updated to reflect the results of the operation. Bulk operations are issued as SQL against the database, bypassing the in-memory structures of the persistence context.
which is what you are seeing. That is why you need to call refresh() to force the entity to be reloaded from the database, as the persistence context is not aware of any potential modifications.
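For illustration, a minimal sketch (the table, column, and entity names here are placeholders, not taken from your code):
// native update: bypasses the persistence context entirely
entityManager.createNativeQuery(
        "UPDATE backing_table SET some_column = :value WHERE id = :id")
        .setParameter("value", newValue)
        .setParameter("id", objectId)
        .executeUpdate();

MyEntity entity = repository.findOne(objectId); // still the stale, cached instance
entityManager.refresh(entity);                  // re-reads its state from the database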
The book also notes the following about using Native SQL statements (rather than JPQL bulk update):
CAUTION: Native SQL update and delete operations should not be executed on tables mapped by an entity. The JPQL operations tell the provider what cached entity state must be invalidated in order to remain consistent with the database. Native SQL operations bypass such checks and can quickly lead to situations where the in-memory cache is out of date with respect to the database.
Essentially then, should you have a second-level cache configured, updating any entity currently in that cache via a native SQL statement is likely to result in stale data in the cache.
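If that is your case, one option (using the standard JPA 2 Cache API; the entity and id names are placeholders) is to evict the affected entry right after the native update:
entityManager.getEntityManagerFactory()
             .getCache()
             .evict(MyEntity.class, objectId); // drop the possibly stale second-level cache entry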
In a Spring Boot JpaRepository:
If our modifying query changes entities contained in the persistence context, then this context becomes outdated.
To fetch the latest records from the database, use @Modifying(clearAutomatically = true).
The @Modifying annotation has a clearAutomatically attribute that defines whether it should clear the underlying persistence context after executing the modifying query.
Example:
@Modifying(clearAutomatically = true)
@Query("UPDATE NetworkEntity n SET n.network_status = :network_status WHERE n.network_id = :network_id")
int expireNetwork(@Param("network_id") Integer network_id, @Param("network_status") String network_status);
Based on the way you described your usage, fetching from the repository should retrieve the updated object without needing to refresh it, as long as the method that used the entity manager to merge is annotated with @Transactional.
Here's a sample test:
@DirtiesContext(classMode = ClassMode.AFTER_CLASS)
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ApplicationConfig.class)
@EnableJpaRepositories(basePackages = "com.foo")
public class SampleSegmentTest {

    @Resource
    SampleJpaRepository segmentJpaRepository;

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    @Test
    public void test() {
        Segment segment = new Segment();
        ReflectionTestUtils.setField(segment, "value", "foo");
        ReflectionTestUtils.setField(segment, "description", "bar");

        segmentJpaRepository.save(segment);
        assertNotNull(segment.getId());
        assertEquals("foo", segment.getValue());
        assertEquals("bar", segment.getDescription());

        ReflectionTestUtils.setField(segment, "value", "foo2");
        entityManager.merge(segment);

        Segment updatedSegment = segmentJpaRepository.findOne(segment.getId());
        assertEquals("foo2", updatedSegment.getValue());
    }
}
I have a Spring Boot 1.3.M1 web application using Spring Data JPA. For optimistic locking, I am doing the following:
Annotate the version column in the entity: @Version private long version;. I confirmed, by looking at the database table, that this field increments properly.
When a user requests an entity for editing, I send the version field as well.
When the user presses submit after editing, I receive the version field back as a hidden field or similar.
Server side, I fetch a fresh copy of the entity and then update the desired fields, along with the version field. Like this:
User user = userRepository.findOne(id);
user.setName(updatedUser.getName());
user.setVersion(updatedUser.getVersion());
userRepository.save(user);
I was expecting this to throw an exception when the versions didn't match, but it doesn't. Googling, I found some posts saying that we can't set the @Version property of an attached entity, like I'm doing in the third statement above.
So I am guessing that I'll have to manually check for the version mismatch and throw the exception myself. Would that be the correct way, or am I missing something?
Unfortunately (at least for Hibernate), changing the @Version field manually is not going to make it another "version", i.e. optimistic concurrency checking is done against the version value retrieved when the entity was read, not the version field of the entity when it is updated.
e.g.
This will work
Foo foo = fooRepo.findOne(id); // assume version is 2 here
foo.setSomeField(....);
// Assume that at this point someone else changes the record in the DB,
// incrementing the version in the DB to 3
fooRepo.flush(); // forcing an update; an optimistic concurrency exception will be thrown
However, this will not work:
Foo foo = fooRepo.findOne(id); // assume version is 2 here
foo.setSomeField(....);
foo.setVersion(1);
fooRepo.flush(); // forcing an update, no optimistic concurrency exception
// because Hibernate is "smart" enough to use the original version 2 for the comparison
There are some ways to work around this. The most straightforward is probably implementing the optimistic concurrency check yourself. I used to have a util that does the "DTO to Model" data population, and I put the version-checking logic there. Another way is to put the logic in setVersion(), which, instead of really setting the version, does the version check:
class User {

    private int version = 0;
    // .....

    public void setVersion(int version) {
        // instead of storing the value, compare it against the version loaded from the database
        if (this.version != version) {
            throw new YourOwnOptimisticConcurrencyException();
        }
    }
    // .....
}
You can also detach the entity after reading it from the DB; this will lead to a version check as well.
User user = userRepository.findOne(id);
userRepository.detach(user);
user.setName(updatedUser.getName());
user.setVersion(updatedUser.getVersion());
userRepository.save(user);
Spring repositories don't have a detach method; you must implement it yourself. An example:
public class BaseRepositoryImpl<T, PK extends Serializable> extends QuerydslJpaRepository<T, PK> {

    private final EntityManager entityManager;

    public BaseRepositoryImpl(JpaEntityInformation entityInformation, EntityManager entityManager) {
        super(entityInformation, entityManager);
        this.entityManager = entityManager;
    }

    public void detach(T entity) {
        entityManager.detach(entity);
    }
    ...
}
Part of @AdrianShum's answer is correct.
The version-comparing behavior basically follows these steps:
1. Retrieve the versioned entity together with its version number; let's call it V1.
2. Suppose you modify some property of the entity; Hibernate then increments the version number to V2 "in memory". It doesn't touch the database.
3. You commit the changes (or they are committed automatically by the environment), and Hibernate tries to update the entity, including its version number, with the V2 value. The UPDATE query generated by Hibernate will modify the entity's row only if it matches both the ID and the previous version number (V1).
4. After the row is successfully modified, the entity takes V2 as its actual version value.
Now suppose that between steps 1 and 3 the entity was modified by another transaction, so its version number at step 3 isn't V1. Then, since the version numbers differ, the UPDATE query won't modify any row; Hibernate realizes that and throws the exception.
You can easily test this behavior and check that the exception is thrown by altering the version number directly in your database between steps 1 and 3.
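As a rough sketch of such a test (the exception type shown is Spring's wrapper; repository and field names are placeholders):
User user = userRepository.findOne(id); // step 1: version loaded, say 2
user.setName("changed");                // step 2: flush will try to move the version from 2 to 3

// ... now bump the version directly in the database (e.g. from a SQL console)

try {
    userRepository.flush();             // step 3: UPDATE ... WHERE id = ? AND version = 2 matches 0 rows
} catch (ObjectOptimisticLockingFailureException e) {
    // Spring wraps Hibernate's StaleObjectStateException / JPA's OptimisticLockException
}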
Edit.
I don't know which JPA persistence provider you are using with Spring Data JPA, but for more details about optimistic locking with JPA + Hibernate, I suggest you read chapter 10, section "Controlling concurrent access", of the book Java Persistence with Hibernate (Hibernate in Action).
In addition to @AdrianShum's answer, I want to show how I solved this problem. If you want to manually change the version of an entity and perform an update to cause an OptimisticConcurrencyException, you can simply copy the entity with all its fields, thus causing the entity to leave its persistence context (the same as EntityManager.detach()). This way, it behaves properly.
Entity entityCopy = new Entity();
entityCopy.setId(id);
... //copy fields
entityCopy.setVersion(0L); //set invalid version
repository.saveAndFlush(entityCopy); //boom! OptimisticConcurrencyException
EDIT:
The assembled copy works only if the Hibernate cache does not already contain an entity with the same id. This will not work:
Entity entityCopy = new Entity();
entityCopy.setId(repository.findOne(id).getId()); //instance loaded and cached
... //copy fields
entityCopy.setVersion(0L); //will be ignored due to cache
repository.saveAndFlush(entityCopy); //no exception thrown
I got to know about the possibility of dynamic entity creation in EclipseLink from here. I'm trying to create dynamic entities and map them to static entities that are already present in the same persistence unit, as described in the examples given here.
I'm using refreshMetadata (with an empty map of properties) of EntityManagerFactoryImpl to refresh the metadata.
But the dynamic entities are not getting listed in the metamodel of the EntityManagerFactory.
Can somebody let me know where I am going wrong?
I expect they won't, as the dynamic entity API adds mappings to the native EclipseLink session, while the JPA metamodel is built from JPA mappings. refreshMetadata is used to rebuild the native EclipseLink session using any new JPA metadata (orm.xml etc.), but it does not go the other way.
I was able to refresh the metamodel by setting a new metamodel built from the current session, using the following code snippet:
Metamodel metamodel = new MetamodelImpl((AbstractSession) dynamicHelper.getSession());
((EntityManagerFactoryImpl) emf).setMetamodel(metamodel);
Though this didn't solve my main problem, it solved the problem I asked here.
I have an entity and one of my properties is an ArrayList of objects, which is serialized. I am trying to delete one of the elements of the list and persist the entity. Everything works fine locally, but not when deployed.
My code:
@Inject
public Repository<User> userRepo;
...
Leader leader = (Leader) item.getModelObject();
...
MySession.get().getUser().getLeaders().remove(leader);
JDOHelper.makeDirty(MySession.get().getUser(), "leaders");
userRepo.persist(MySession.get().getUser());
Property definition in the User entity:
@Persistent(defaultFetchGroup = "true", serialized = "true")
@Extension(vendorName = "datanucleus", key = "gae.unindexed", value = "true")
private ArrayList<Leader> leaders = new ArrayList<Leader>();
I am using datanucleus-core version 1.1.6, jdo2-api 2.3-eb and datanucleus-appengine 1.0.10
It works fine when I add new items to the list, but not when I remove something. Why is that? And how can I make it work?
Making something dirty makes it dirty and nothing more; persist/flush happens after ... start of the next transaction (as per the JDO/JPA spec) or close of the PM/EM; no call to makePersistent/persist will change that. This isn't DataNucleus "deciding for itself" not to persist an object; it's simply following the spec.
If you use a recent GAE release (v2.0), you can have non-transactional atomic persist/delete (an extension to the specs). If you use SVN trunk (v2.1), you can also have non-transactional atomic updates (extending it further still), i.e. with the latest code you have the equivalent of JDBC "autocommit".
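For older versions, a minimal sketch of doing the removal inside an explicit JDO transaction (this assumes direct access to the underlying PersistenceManager, since the Repository in the question is not a standard JDO API):
void removeLeader(PersistenceManager pm, User user, Leader leader) {
    Transaction tx = pm.currentTransaction();
    try {
        tx.begin();
        user.getLeaders().remove(leader);
        JDOHelper.makeDirty(user, "leaders"); // flag the serialized field as changed
        pm.makePersistent(user);
        tx.commit();                          // the update is actually written here
    } finally {
        if (tx.isActive()) {
            tx.rollback();
        }
    }
}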
I am developing a web application using JSF 2, JPA 2, and EJB 3 on JBoss 7.1.
I have an Entity (Forum) which contains a list of child entities (Topic).
When I try to get the list of Topics by forumId for the first time, the data is loaded from the DB.
List<Topic> topics = entityManager.find(Forum.class, 1).getTopics();
After that I add a few more child entities (Topics) to the Forum and then try to retrieve the list of Topics by forumId again. But I get only the old cached results; the newly inserted child records are not loaded from the DB.
I am able to load the child entities (Topics) by using the following methods:
Method 1: calling entityManager.clear() before entityManager.find()
Method 2: using
em.createQuery("select t from Topic t where t.forum.forumId=?1", Topic.class);
or
em.createQuery("SELECT t FROM Topic t JOIN t.forum f WHERE f.forumId = ?1", Topic.class);
I am aware of setting QueryHints on NamedQueries. But the em.find() method is in a super CrudService which is extended by all DAOs (stateless EJBs), so setting QueryHints won't work for me.
So I want to know: how can I make the em.find() method load data from the DB instead of the cache?
PS: I am using the EXTENDED persistence context type.
@PersistenceContext(unitName="forum", type=PersistenceContextType.EXTENDED)
protected EntityManager em;
You can specify the behavior of individual find operations by setting additional properties that control the entity manager's interaction with the second-level cache.
Map<String, Object> props = new HashMap<String, Object>();
props.put("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
entityMgr.find(Forum.class, 1, props).getTopics();
Is it possible that the relation between Forum and Topic was only added in one direction in your entity beans? If you set the forum id on the topic, you should also add that topic to the Forum object to have consistent data inside the first-level cache. You should also make sure that you are not using two different entity managers for the update and the find: the first-level cache is kept per entity manager, and another em can still contain an older version of the entity.
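A small sketch of keeping both sides in sync (accessor names are assumed, not taken from your code):
Topic topic = new Topic();
topic.setForum(forum);          // owning side: sets the foreign key
forum.getTopics().add(topic);   // inverse side: the Forum cached in the persistence context now sees the new Topic
em.persist(topic);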
Probably unrelated, but with JPA 2 you also have a minimal API to evict entities from the second-level cache, which could be used after an update:
em.getEntityManagerFactory().getCache().evict(Forum.class, forumId);
Put @Cacheable(false) on the Forum class.
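That is, roughly (this disables the shared/second-level cache for the entity, not the persistence context):
@Entity
@Cacheable(false) // Forum instances are no longer stored in the second-level cache
public class Forum {
    // ... fields and mappings unchanged
}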