I'm using Hibernate with the JPA API in Java and I'm running the following test:
public void test1()
{
    A a = new A();
    a.setValue(1);
    getDao().store(a);

    a.setValue(2);

    A loaded = getDao().load(a.getId());
    assertEquals(1, loaded.getValue());
}
I expect the test to pass because the second part of the code, with the load call, may be invoked by a different method that loads the object for a different purpose. I expect the state of the loaded object to reflect the state of the database, and that is not the case here!
So I found out that the reason this happens is that the persistence context is caching the object unless we detach it. So the way I implemented the store method is:
void store(T object)
{
    EntityManager em = getEntityManager();
    em.getTransaction().begin();
    if (object.getId() == 0)
    {
        em.persist(object);
    }
    else
    {
        em.merge(object);
    }
    em.getTransaction().commit();
    em.detach(object); // Detaches this very object from the 1st level cache so that it always reflects the DB state
}
public T load(long id)
{
    return getEntityManager().find(getPersistentClass(), id);
}
The detach call made the test pass, but since then I have had a different problem: when I store an entity that holds a reference to another object, and a getter that uses that referenced object is annotated for validation, the validation fails because the persistence context (or the Hibernate session) passes a different object to the validator (a copy that does not have the same state). Let me give you an example:
class B
{
    private int value;

    @Transient
    private C c = new C(); // Shall not be persisted

    @Size(min = 1)
    public String getName()
    {
        return c.getName();
    }

    public void setName(String name)
    {
        c.setName(name);
    }
}
public void test2()
{
    B b = new B();
    b.setName("name");
    getDao().store(b); // Includes detach, as above
    // Validation has passed. c.name was "name".

    getDao().store(b);
    // Validation has failed. c.name was empty.
}
So c.name was empty on the second attempt although I had not changed the object. The only thing in between is the detach() call. If I remove it, test2 passes but test1 fails again.
What kind of magic is going on here? How do I fix this?
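For reference, here is a minimal sketch (em stands in for the DAO's EntityManager) that illustrates what I am seeing: merge() hands back a different instance, and the transient c on that instance is not the one I populated.
// Sketch only: illustrates the observation above, not a fix.
B b = new B();
b.setName("name");
getDao().store(b);                    // persists and then detaches b
B managedCopy = em.merge(b);          // merge returns a different, managed instance
System.out.println(managedCopy == b); // false
// The transient field c is not part of the persistent state, so the managed
// copy gets a fresh C from its field initializer and loses the name set on b.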
Related
I have a service (which for some reason I call a controller) that is injected into the Jersey resource method.
@Named
@Transactional
public class DocCtrl {
    ...

    public void changeDocState(List<String> uuids, EDocState state, String shreddingCode)
            throws DatabaseException, WebserviceException, RepositoryException, ExtensionException,
                   LockException, AccessDeniedException, PathNotFoundException, UnknowException {
        List<Document2> documents = doc2DAO.getManyByUUIDs(uuids);
        for (Document2 doc : documents) {
            if (EDocState.SOFT_DEL == state) {
                computeShreddingFor(doc, shreddingCode); // here the state change happens and it is persisted to db
            }

            if (EDocState.ACTIVE == state)
                unscheduleShredding(doc);
        }
    }
}
doc2DAO.getManyByUUIDs(uuids) gets the entity objects from the database.
@Repository
public class Doc2DAO {

    @PersistenceContext(name = Vedantas.PU_NAME, type = PersistenceContextType.EXTENDED)
    private EntityManager entityManager;

    public List<Document2> getManyByUUIDs(List<String> uuids) {
        if (uuids.isEmpty())
            uuids.add("-3");

        TypedQuery<Document2> query = entityManager.createNamedQuery("getManyByUUIDs", Document2.class);
        query.setParameter("uuids", uuids);
        return query.getResultList();
    }
}
However, when I make a second request to my API, I see the state of this entity object unchanged, that is, the same as before the logic above occurred.
In the DB, the status is still the changed one.
After a restart of the API service, I get the entity in the correct state.
As I understand it, Hibernate uses its L2 cache for the managed objects.
So can you please point out what I am doing wrong here? Obviously, I need to get the cached entity with the changed state without a service restart, and I would like to keep the entities attached to the persistence context for performance reasons.
In the logic I am making some changes to this object. After the completion of the changeDocState method, the state is properly changed and persisted in the database.
Thanks for the answers;
I am trying to use an Objectify transaction, but I have an issue when I need to reload an object created within the same transaction.
Take this sample code:
@Entity
public class MyObject
{
    @Parent
    Key<ParentClass> parent;

    @Index
    String foo;
}
ofy().transact(new VoidWork()
{
    @Override
    public void vrun()
    {
        ParentClass parent = load(); // load the parent
        String fooValue = "bar";
        Key<ParentClass> parentKey = Key.create(ParentClass.class, parent.getId());

        MyObject myObject = new MyObject(parentKey);
        myObject.setFoo(fooValue);

        ofy().save().entity(myObject).now();

        MyObject reloaded = ofy().load().type(MyObject.class).ancestor(parentKey).filter("foo", fooValue).first().now();
        if (reloaded == null)
        {
            throw new RuntimeException("error");
        }
    }
});
The reloaded object is always null. Maybe I am missing something, but logically, within a transaction, shouldn't I be able to query an object that was created in that same transaction?
Thanks
Cloud Datastore differs from relational databases in this particular case. The documentation states that -
Unlike with most databases, queries and gets inside a Cloud Datastore
transaction do not see the results of previous writes inside that
transaction. Specifically, if an entity is modified or deleted within
a transaction, a query or lookup returns the original version of the
entity as of the beginning of the transaction, or nothing if the
entity did not exist then.
https://cloud.google.com/datastore/docs/concepts/transactions#isolation_and_consistency
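If the goal is just to verify the write, one possibility (a sketch only, reusing parentKey from the snippet above and assuming the check can be moved outside the transaction) is to have the transactional work return the Key from save() and load it after the commit:
// Sketch, not a drop-in fix: return the Key from the transactional work and
// verify the entity only after the transaction has committed.
Key<MyObject> savedKey = ofy().transact(new Work<Key<MyObject>>()
{
    @Override
    public Key<MyObject> run()
    {
        MyObject myObject = new MyObject(parentKey);
        myObject.setFoo("bar");
        return ofy().save().entity(myObject).now(); // now() returns the Key
    }
});

// Outside the transaction the write is visible.
MyObject reloaded = ofy().load().key(savedKey).now();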
(This is a simplification of the real problem)
Let's start with the following little class:
@Entity
class Test {

    Test() {
        // no-arg constructor required by JPA
    }

    Test(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Id
    private int id;

    @Column
    private String name;

    @Override
    public int hashCode() {
        return id;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj instanceof Test) {
            return id == ((Test) obj).id;
        }
        return false;
    }
}
If we execute the following, no exception occurs:
EntityManagerFactory factory = Persistence.createEntityManagerFactory("local_h2_persistence");
EntityManager theManager = factory.createEntityManager();
EntityTransaction t = theManager.getTransaction();

Test obj1 = new Test(1, "uno");

t.begin();
theManager.persist(obj1);
theManager.persist(obj1); // <-- No exception
t.commit();
I guess the second call is ignored, or maybe the object is saved to the DB again. Either way, there is no problem in saving the same entity twice. Now let's try the following:
EntityManagerFactory factory = Persistence.createEntityManagerFactory("local_h2_persistence");
EntityManager theManager = factory.createEntityManager();
EntityTransaction t = theManager.getTransaction();

Test obj1 = new Test(1, "uno");
Test obj1_ = new Test(1, "uno");

t.begin();
theManager.persist(obj1);
theManager.persist(obj1_); // <-- javax.persistence.EntityExistsException: a different object with the same identifier value was already associated with the session
t.commit();
What? How could it possibly be relevant that the object is in a different memory location? Somehow it is and the code throws an exception.
How can I make the second example work just like the first?
I am just rewriting what @jb-nizet wrote in the comments, which feels like the answer to me:
Hibernate doesn't use ==. It simply does what you're telling it to do.
persist's contract is: associate this object with the session. If it's
already associated to the session, it's a noop. If it isn't, it is
associated to the session to be inserted in the database later. If
what you want to do is make sure the state of this object is copied to
a persistent entity, and give me back that persistent entity, then
you're looking for merge().
So the solution was to just use
theManager.merge(obj1);
instead of
theManager.persist(obj1);
In the first case, you save the same object twice, which is allowed.
But in the second case, you try to save two different objects to the database, and both have the same primary key. That is a database constraint violation.
In the first example you pass a reference to an object to save it, and in the second call you pass exactly the same reference; both point to the same object in memory.
However, in the second example you allocate two objects with two new calls, which creates them at two different memory addresses; they are two different objects. The first reference points to a different memory address than the second object's reference. If you tried this in the second example, it would return false: obj1 == obj1_
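A quick illustration of that point, using the Test class from the question:
Test obj1 = new Test(1, "uno");
Test obj1_ = new Test(1, "uno");

System.out.println(obj1 == obj1_);      // false -- two distinct instances in memory
System.out.println(obj1.equals(obj1_)); // true  -- equals() only compares the id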
I'm new to JPA and I'm trying to transition my code from JdbcTemplate to JPA. Originally I updated a subset of my columns by taking in a map of the columns with their values, building the SQL UPDATE string myself, and executing it through a DAO. I was wondering what would be the best way to do something similar using JPA?
EDIT:
How would I transform this code from my DAO to something equivalent in JPA?
public void updateFields(String userId, Map<String, String> fields) {
    StringBuilder sb = new StringBuilder();
    for (Entry<String, String> entry : fields.entrySet()) {
        sb.append(entry.getKey());
        sb.append("='");
        sb.append(StringEscapeUtils.escapeEcmaScript(entry.getValue()));
        sb.append("', ");
    }

    String str = sb.toString();
    if (str.length() > 2) {
        str = str.substring(0, str.length() - 2); // remove ", "

        String sql = "UPDATE users_table SET " + str + " WHERE user_id=?";
        jdbcTemplate.update(sql, new Object[] { userId },
                new int[] { Types.VARCHAR });
    }
}
You have to read more about JPA for sure :)
Once an entity is in the persistence context, it is tracked by the JPA provider until the end of the persistence context's life or until the EntityManager#detach() method is called. When the transaction finishes (commits), the state of the managed entities in the persistence context is synchronized with the database and all changes are written.
If your entity is new, you can simply put it in the persistence context by invoking the EntityManager#persist() method.
In your case (an update of an existing entity), you have to get the row from the database and somehow turn it into a managed entity. That can be done in many ways, but the simplest is to call the EntityManager#find() method, which returns a managed entity. The returned object is also placed in the current persistence context, so if there is an active transaction, you can change whatever properties you like (except the primary key) and just finish the transaction by invoking commit (or, if this is a container-managed transaction, by simply finishing the method).
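To make that concrete, here is a minimal sketch (User, setEmail and the ids are placeholders, not something from your code):
// Load a managed entity, modify it inside a transaction, and let the provider
// flush the change on commit -- no hand-written UPDATE statement is needed.
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

User user = em.find(User.class, userId); // the returned instance is managed
user.setEmail("new@example.com");        // the change is tracked by the persistence context

em.getTransaction().commit();            // the change is flushed to the database here
em.close();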
update
After your comment I see your point. I think you should redesign your app to fit the JPA standards and capabilities. Anyway, if you already have a map of <attribute_name, attribute_value> pairs, you can make use of something called the Metamodel. A simple usage is shown below. This is a naive implementation and works well only with basic attributes; you have to take care of relationships etc. yourself (more information about an attribute is available via methods like attr.getJavaType() or attr.getPersistentAttributeType()).
Metamodel meta = entityManager.getMetamodel();
EntityType<User> user_ = meta.entity(User.class);

CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaUpdate<User> update = cb.createCriteriaUpdate(User.class);
Root<User> e = update.from(User.class);

for (Attribute<? super User, ?> attr : user_.getAttributes()) {
    if (map.containsKey(attr.getName())) {
        update.set(attr.getName(), map.get(attr.getName()));
    }
}

update.where(cb.equal(e.get("id"), idOfUser));
entityManager.createQuery(update).executeUpdate();
Please note that criteria update queries have only been available in JPA since version 2.1.
Here you can find more information about metamodel generation.
As an alternative to the metamodel you can just use Java's reflection mechanisms.
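A rough sketch of that reflection-based alternative (again with a placeholder User entity, and assuming the map keys match the field names and the fields are Strings):
// Copy map values onto matching fields of a managed entity; dirty checking
// persists the changes when the transaction commits.
User user = entityManager.find(User.class, userId);
for (Map.Entry<String, String> entry : fields.entrySet()) {
    try {
        Field field = User.class.getDeclaredField(entry.getKey());
        field.setAccessible(true);
        field.set(user, entry.getValue());
    } catch (NoSuchFieldException | IllegalAccessException ex) {
        throw new IllegalArgumentException("Unknown or inaccessible field: " + entry.getKey(), ex);
    }
}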
JPA handles the update. Retrieve the dataset as an entity using the EntityManager, change the values, and call persist; committing the transaction will store the changed data in your DB.
In case you are using Hibernate (as the JPA provider), here's an example.
Entity
@Entity
@Table(name="PERSON")
public class Person {

    @Id @GeneratedValue(strategy=GenerationType.IDENTITY)
    private int id;

    @Column(name="NAME", nullable=false)
    private String name;

    // other fields...
}
DAO
public interface PersonDao {

    Person findById(int id);

    void persist(Person person);

    ...
}
DaoImpl
#Repository("personDao")
public class PersonDaoImpl extends AnAbstractClassWithSessionFactory implements PersonDao {
public Person findById(int id) {
return (Person) getSession().get(Person.class, id);
}
public void persist(Person person){
getSession().persist(person);
}
}
Service
#Service("personService")
#Transactional
public class PersonServiceImpl implements PersonService {
#Autowired
PersonDao personDao;
#Override
public void createAndPersist(SomeSourceObject object) {
//create Person object and populates with the source object
Person person = new Person();
person.name = object.name;
...
personDao.persist(person);
}
#Override
public Person findById(int id) {
return personDao.findById(id);
}
public void doSomethingWithPerson(Person person) {
person.setName(person.getName()+" HELLO ");
//here since we are in transaction, no need to explicitly call update/merge
//it will be updated in db as soon as the methods completed successfully
//OR
//changes will be undone if transaction failed/rolledback
}
}
The JPA documentation is indeed a good resource for details.
From a design point of view, if you have a web interface, I tend to suggest including one more service delegate layer (e.g. PersonDelegateService) which maps the data received from the UI to the Person entity (and vice versa: for display, it populates the view object from the Person entity) and delegates to the service for the actual Person entity processing.
Can somebody explain this behaviour?
Given an Entity MyEntity below, the following code
EntityManagerFactory emf = Persistence.createEntityManagerFactory("greetingPU");
EntityManager em = emf.createEntityManager();
MyEntity e = new MyEntity();
e.setMessage1("hello"); e.setMessage2("world");
em.getTransaction().begin();
em.persist(e);
System.out.println("-- Before commit --");
em.getTransaction().commit();
System.out.println("-- After commit --");
results in output indicating multiple calls to the "setter" methods of MyEntity by the EclipseLink EntityManager or its associates. Is this behaviour to be expected? Possibly for some internal performance or structural reasons? Do other JPA implementations show the same behaviour?
-- Before commit --
setId
setId
setMessage1
setMessage2
setId
setMessage1
setMessage2
-- After commit --
There seem to be two different kinds of reassignments. First, an initial set of the Id. Second, two consecutive settings of the whole Entity.
Debugging shows that all calls of a given "setter" have the same object as their parameter.
@Entity
public class MyEntity {

    private Long id;
    private String message1;
    private String message2;

    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE)
    public Long getId() { return id; }
    public void setId(Long i) {
        System.out.println("setId");
        id = i;
    }

    public String getMessage1() { return message1; }
    public void setMessage1(String m) {
        message1 = m;
        System.out.println("setMessage1");
    }

    public String getMessage2() { return message2; }
    public void setMessage2(String m) {
        message2 = m;
        System.out.println("setMessage2");
    }
}
Are you using weaving? http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Advanced_JPA_Development/Performance/Weaving
EclipseLink must call setId once to set the generated ID on the managed entity instance. It will also create an instance and set its values for the shared cache, which explains the other setId and setter calls. If you are not using weaving, then because the EntityManager still exists, EclipseLink will also create a backup instance to compare against for future changes: any changes to the managed entity after the transaction commits still need to be tracked.
If this isn't desirable, weaving allows attribute change tracking to be used instead, so that backup copies aren't needed to track changes. You can also turn off the shared cache, but unless you are running into performance or stale-data issues, this is not recommended.
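For reference, a sketch of how those two settings might be passed when bootstrapping the factory (the property names are the standard JPA shared-cache switch and the documented eclipselink.weaving property; the values are illustrative only):
// Overrides passed to createEntityManagerFactory instead of (or in addition to) persistence.xml.
Map<String, String> props = new HashMap<>();
props.put("javax.persistence.sharedCache.mode", "NONE"); // turn off the shared (L2) cache
props.put("eclipselink.weaving", "static");              // if the entities were woven at build time

EntityManagerFactory emf = Persistence.createEntityManagerFactory("greetingPU", props);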