Following a tutorial on Java Spring, I'm trying to understand how @Transactional works with setters, and from other questions/sources I can't find a beginner-friendly explanation for it.
Let's say I have a user entity with getters and setters:
@Entity
public class User {

    // Id set up
    private Long id;

    private String name;
    private String email;
    private String password;

    // Other constructors, setters and getters

    public void setName(String name) {
        this.name = name;
    }
}
And in the UserService I have a getUserName method:
@Service
public class UserService {

    private final UserRepository userRepository;

    @Autowired
    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @Transactional
    public void getUserName(Long id) {
        User user = userRepository.findById(id).orElseThrow();
        user.setName("new user name"); // Why will this update db?
    }
}
With @Transactional on the method, the setter call does update the DB. Is this the Spring way of updating data? Can someone explain in layman's terms how @Transactional works with setters under the hood?
Edit:
Without @Transactional, the setter alone won't update the DB; to mutate the DB you have to call userRepository.save(user). And in the video, the instructor simply says that @Transactional will handle the JPQL for us, and uses setters along with it to update the DB.
Resource update:
Spring Transaction Management: @Transactional In-Depth, hope this is helpful.
Firstly, it is the underlying JPA provider (assume it is Hibernate) that is responsible for updating the entity, not Spring. Spring just provides the integration support with Hibernate.
To update an entity loaded from the DB, you generally need to make sure the following happens in order (a plain-JPA sketch follows the list):
1. Begin a DB transaction.
2. Use an EntityManager to load the entity that you want to update. The loaded entity is said to be managed by this EntityManager, which keeps track of all changes made to its state and will automatically generate the UPDATE SQL needed in step (4).
3. Make some changes to the entity's state. You can do this by any means, such as calling any method on it; you are not restricted to setters.
4. Flush the EntityManager. It will then generate the UPDATE SQL and send it to the DB.
5. Commit the DB transaction.
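For orientation, here is roughly what those five steps look like in plain JPA, without Spring (a sketch only; the class and method names are illustrative). With @Transactional, steps (1), (4) and (5) are handled for you:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;

public class ManualUpdateExample {

    private final EntityManagerFactory emf; // assumed to be configured elsewhere

    public ManualUpdateExample(EntityManagerFactory emf) {
        this.emf = emf;
    }

    public void renameUser(Long id, String newName) {
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        tx.begin();                                   // (1) begin a DB transaction
        try {
            User user = em.find(User.class, id);      // (2) load -> the entity is now managed
            user.setName(newName);                    // (3) change its state (any method will do)
            em.flush();                               // (4) dirty checking generates the UPDATE SQL
            tx.commit();                              // (5) commit the transaction
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            em.close();
        }
    }
}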
Also note the following:
Spring provides @Transactional, which is a declarative way to execute (1) and (5) by annotating a method.
By default, Hibernate will perform (4) automatically before executing (5), so you do not need to call flush explicitly.
A Spring Data JPA repository internally uses an EntityManager to load the user, so the user returned from the repository is managed by that EntityManager.
So in short, @Transactional is necessary to update the entity. And updating the entity has nothing to do with setters: Hibernate only cares whether the entity's state has changed in the end, and you can change it without using a setter.
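For example, a made-up domain method (not a setter) is picked up just the same, because dirty checking only compares the entity's state at flush time:

@Entity
public class User {

    @Id
    private Long id;

    private String email;

    // Not a setter, but it still changes state, so dirty checking will flush an UPDATE for it
    public void scrubEmail() {
        if (this.email != null) {
            this.email = this.email.trim().toLowerCase();
        }
    }
}

// Inside a @Transactional service method:
// userRepository.findById(id).orElseThrow().scrubEmail();   // persisted on commit, no save() needed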
Spring Data JPA uses Hibernate as its ORM under the hood (by default).
When you call userRepository.findById, the Hibernate EntityManager is called under the hood; it retrieves the entity from the database and at the same time makes this entity managed (you can read separately about Hibernate managed entities).
What this means, in simple words, is that Hibernate 'remembers' the reference to this entity in its internal structures, in the so-called session. In fact, it 'remembers' all entities it retrieves from the database (even lists of entities obtained by queries) during a single transaction (in the basic case).
When you make a method @Transactional, by default the Hibernate session is flushed when that method finishes: session.flush() is called under the hood.
Once the session gets flushed, Hibernate pushes all changes made to these managed entities back to the database.
That is why your changes reached the database once the method finished, without any additional calls.
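To make that concrete, here is a hedged sketch contrasting the two cases (the method names are made up; the repository calls mirror the question):

@Service
public class UserService {

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // With @Transactional: the entity stays managed and the session flush on commit
    // writes the UPDATE, so no explicit save() call is needed
    @Transactional
    public void renameManaged(Long id, String newName) {
        User user = userRepository.findById(id).orElseThrow();
        user.setName(newName);
    }

    // Without @Transactional: findById runs in its own short transaction and the
    // entity comes back detached, so the change must be saved explicitly
    public void renameDetached(Long id, String newName) {
        User user = userRepository.findById(id).orElseThrow();
        user.setName(newName);
        userRepository.save(user); // merges the detached entity in a new transaction
    }
}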
To dig deeper into the topic, you can read more about Hibernate managed entities, the session flush mode, and repository.save() vs repository.saveAndFlush() in Spring Data.
Related
I need to implement an MVC web service. I selected Spring MVC / Spring Data JPA for this purpose.
So my service needs to:
Load some entities
Apply some business logic to them
Update the entities and store them
All of the above needs to happen in an atomic manner.
Some code snippet to clarify:
@Service
public class AService {

    @Autowired
    private Repository1 repository1;
    @Autowired
    private Repository2 repository2;
    @Autowired
    private Repository3 repository3;

    @Transactional
    public Result getResult(Long id) {
        Entity1 e1 = repository1.findById(id);
        Entity2 e2 = repository2.findById(id);
        Entity3 e3 = repository3.findById(id);
        e1.setField(doSomeLogic(...));
        e2.setField(doSomeLogic(...));
        e3.setField(doSomeLogic(...));
        repository1.save(e1);
        repository2.save(e2);
        repository3.save(e3);
        return Result.combine(e1, e2, e3);
    }
}
I guess ACID is guaranteed here (depending on the isolation level?).
What about locking the rows that Entities 1-3 represent for the duration of the method? Is it possible that some other transaction updates the rows that Entities 1-3 represent while doSomeLogic(...) is running? How can this be improved?
What data is locked by the @Transactional annotation?
None. @Transactional, in combination with the proper transaction support setup, just starts/joins a transaction and commits it or rolls it back at the end of a method call.
Locking is done by the JPA implementation and the database.
What you normally want to use is optimistic locking.
To enable it, all you have to do is add a numeric attribute annotated with @Version to each of your entities.
This will make a transaction fail when another transaction has changed the data between the time it was read and the time it is written.
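A minimal sketch, using one of the entities from the question (the field name is arbitrary):

@Entity
public class Entity1 {

    @Id
    @GeneratedValue
    private Long id;

    // Hibernate increments this column on every UPDATE; a concurrent transaction
    // that read the old version fails with an optimistic locking exception on commit
    @Version
    private long version;

    // other fields, getters and setters ...
}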
If you actually want to block the operation you need to look into pessimistic locks.
You can make operations in Spring Data JPA acquire pessimistic locks by adding a @Lock annotation to the repository method.
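For example, a sketch that reuses the repository from the question (the lock mode and return type are one possible choice):

import java.util.Optional;
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface Repository1 extends JpaRepository<Entity1, Long> {

    // Typically translated to SELECT ... FOR UPDATE, so the row stays locked
    // until the surrounding @Transactional method commits or rolls back
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Override
    Optional<Entity1> findById(Long id);
}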
In my web application, in the service layer, I'm using a proxy for the "restaurant" entity (FetchType.LAZY on the "restaurant" field).
User user = userRepository.get(userId);

/*
 * Getting a proxy here, not a Restaurant object
 */
Restaurant userRestaurantRef = user.getRestaurant();
if (userRestaurantRef != null) {
    restaurantRepository.decreaseRating(userRestaurantRef.getId());
}
restaurantRepository.increaseRating(restaurantId);

/*
 * "getReference" invokes "getOne()"
 */
user.setRestaurant(restaurantRepository.getReference(restaurantId));
userRepository.save(user);
After calling this method via the controller in tests, all other RestaurantRepository read methods (such as findById()) also return a proxy.
But if I call the findById() method before my service's method, everything is OK.
For example:
mockMvc.perform(put(REST_URL + RESTAURANT1_ID)
        .param("time", "10:30")
        .with(userHttpBasic(USER)))
        .andExpect(status().isNoContent());
Restaurant restaurant = restaurantRepository.get(RESTAURANT1_ID);
// "restaurant" is a PROXY

Restaurant restaurantBefore = restaurantRepository.get(RESTAURANT1_ID);
mockMvc.perform(put(REST_URL + RESTAURANT1_ID)
        .param("time", "10:30")
        .with(userHttpBasic(USER)))
        .andExpect(status().isNoContent());
Restaurant restaurantAfter = restaurantRepository.get(RESTAURANT1_ID);
// "restaurantAfter" is a real object
"get()" into repository:
#Override
public Restaurant get(int id) {
return repository.findById(id).orElse(null);
}
Do you have a @Transactional annotation on the method or on the service class itself?
This could explain the observed behavior.
When a method is executed in a transaction, entities acquired from or merged/saved to the database are cached until the end of the transaction (usually the end of the method). That means that any call for an entity with the same ID will be served directly from the cache and will not hit the database.
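A quick way to observe this (a sketch, assuming a repository like the one in the question):

@Transactional
public void firstLevelCacheDemo(int id) {
    Restaurant first = restaurantRepository.findById(id).orElseThrow();
    Restaurant second = restaurantRepository.findById(id).orElseThrow();
    // Same persistence context + same id -> the exact same instance,
    // and the second call does not issue another SELECT
    System.out.println(first == second); // true inside one transaction
}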
Here are some articles on Hibernate's caching and proxies:
Understanding Hibernate First Level Cache with Example
How does a JPA Proxy work and how to unproxy it with Hibernate
The best way to initialize LAZY entity and collection proxies with JPA and Hibernate
Back to your example:
calling findById(id) first and then getOne(id) returns the same entity object for both
calling getOne(id) first and then findById(id) returns the same proxy for both
That's because they share the same id and are executed in the same transaction.
Documentation on getOne() states that it could return an instance instead of a reference (HibernateProxy), so having it return an entity could be expected:
T getOne(ID id)
Returns a reference to the entity with the given identifier. Depending on how the JPA persistence provider is implemented this is very likely to always return an instance and throw an EntityNotFoundException on first access. Some of them will reject invalid identifiers immediately.
Parameters: id - must not be null.
Returns: a reference to the entity with the given identifier.
Documentation on findById(), on the other hand, gives no hint that it could return anything but an Optional of the entity or an empty Optional:
Optional<T> findById(ID id)
Retrieves an entity by its id.
Parameters: id - must not be null.
Returns: the entity with the given id or Optional#empty() if none found
I've spent some time looking for a better explanation but failed to find one, so I'm not sure whether it is a bug in the implementation of findById() or just a not (well) documented feature.
As workarounds to the problem I could suggest:
Do not acquire the same entity twice in the same transactional method. :)
Avoid using @Transactional when not needed. Transactions can be managed manually too. Here are some good articles on that subject:
5 common Spring #Transactional pitfalls
Spring Transactional propagation modes
Spring pitfalls: transactional tests considered harmful.
Detach the first loaded entity/proxy before (re-)loading it using the other method:
import javax.persistence.EntityManager;
import org.springframework.transaction.annotation.Transactional;

@Transactional
@Service
public class SomeServiceImpl implements SomeService {

    private final SomeRepository repository;
    private final EntityManager entityManager;

    // constructor, autowiring

    @Override
    public void someMethod(long id) {
        SomeEntity getOne = repository.getOne(id);           // Proxy -> added to the cache
        entityManager.detach(getOne);                        // removes getOne from the cache
        SomeEntity findById = repository.findById(id).get(); // Entity from the DB
    }
}
Similar to the 3rd approach, but instead of removing a single object from the cache, remove all at once using the clear() method:
import javax.persistence.EntityManager;
import org.springframework.transaction.annotation.Transactional;

@Transactional
@Service
public class SomeServiceImpl implements SomeService {

    private final SomeRepository repository;
    private final EntityManager entityManager;

    // constructor, autowiring

    @Override
    public void someMethod(long id) {
        SomeEntity getOne = repository.getOne(id);           // Proxy -> added to the cache
        entityManager.clear();                               // clears the cache
        SomeEntity findById = repository.findById(id).get(); // Entity from the DB
    }
}
Related articles:
When use getOne and findOne methods Spring Data JPA
Hibernate Session: evict() and merge() Example
clear(), evict() and close() methods in Hibernate
JPA - Detaching an Entity Instance from the Persistence Context
Difference between getOne and findById in Spring Data JPA?
EDIT:
Here is a simple project demonstrating the problem or the feature (depending on the point of view).
Some extension to the already accepted answer:
If you use Spring Boot, it automatically enables the Open Session In View filter, which basically keeps the Hibernate session open for the whole request.
If you want to turn off this feature, add the following line to application.properties:
spring.jpa.open-in-view=false
OSIV is really a bad idea from a performance and scalability perspective.
I have been using Spring 4's UserDetailsManager to create users; the schema is the one suggested by their docs for the USERS and AUTHORITIES tables.
I have also been using Spring Data @Repository annotated interfaces to manage data in a separate REGISTRATIONS table, which is defined to have a relation on the username field in the USERS table.
The problem I've been facing is that when I wish to delete a user, I first delete the record from the REGISTRATIONS table using the injected Spring Data repository, followed by a call to deleteUser() using the UserDetailsManager. (This is simply two consecutive calls in a @Transactional method in a @Service annotated class.)
For example
registrationsRepository.delete(uuid);
userDetailsManager.deleteUser(registration.getUsername());
However, the deletion of the user fails because the record in the REGISTRATIONS table (1st line) has not been deleted. Subsequently I get an exception (2nd line) complaining about not being able to delete the user, as there are foreign key constraints in the REGISTRATIONS table preventing it from being deleted.
If these updates happen in the same transaction, why does this fail?
EDIT:
@Repository
public interface RegistrationsRepository extends CrudRepository<Registration, UUID> {
    // No EntityManager injected - uses Spring Data method queries
    // No additional methods defined
}
Registrations table defined as follows:
CREATE TABLE Registrations (
username varchar(64) NOT NULL REFERENCES Users (username),
uuid UUID NOT NULL PRIMARY KEY
);
So, as I understand it, Spring's UserDetailsManager uses JDBC calls to delete the 'users' and 'authorities' from the respective tables.
My 'registrations' entities were being managed by an EntityManager which had no defined relationship (at the ORM level) to the 'user' records. This relationship was specified purely at the DB level.
The EntityManager would only mark the 'registration' entity for deletion, while the UserDetailsManager would actually delete the 'user', and this happens before the EntityManager is flushed at the end of the transaction. It fails because the 'registration' row hasn't been deleted yet (the transaction is still not complete), but the JDBC calls have already attempted to delete the 'users' and 'authorities'.
To fix this I did the following.
class DefaultService implements MyService {

    private final EntityManagerFactory emf;
    // Inject RegistrationsRepository and UserDetailsManager...

    @Inject
    public DefaultService(EntityManagerFactory emf, ...) {
        // ...
        this.emf = emf;
    }

    @Override
    @Transactional
    public void serviceMethod(UUID uuid, String username) {
        registrationsRepository.delete(uuid);
        // Flush the entity manager to remove this record from the DB first.
        EntityManagerHolder entityManagerHolder =
                (EntityManagerHolder) TransactionSynchronizationManager.getResource(emf);
        entityManagerHolder.getEntityManager().flush();
        // These will be JDBC calls; 'users' are not managed entities
        userDetailsManager.deleteUser(username);
    }
}
I obtained the EntityManager in this way to ensure I get the correct one bound to this thread for this transaction. If this is overkill or there is a better way of doing this, please comment!
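For what it's worth, a possibly simpler alternative (a sketch, assuming a single persistence unit) is to inject the EntityManager with @PersistenceContext; Spring injects a shared proxy that delegates to the persistence context bound to the current transaction, so calling flush() on it has the same effect:

import java.util.UUID;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.provisioning.UserDetailsManager;
import org.springframework.transaction.annotation.Transactional;

class DefaultService implements MyService {

    @PersistenceContext
    private EntityManager entityManager; // transaction-bound proxy

    @Autowired
    private RegistrationsRepository registrationsRepository;
    @Autowired
    private UserDetailsManager userDetailsManager;

    @Override
    @Transactional
    public void serviceMethod(UUID uuid, String username) {
        registrationsRepository.delete(uuid);
        entityManager.flush(); // push the DELETE to the DB before the JDBC calls below
        userDetailsManager.deleteUser(username);
    }
}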
Hope that helps someone. And is correct!
I am using Spring with Hibernate. I am running a JUnit test like this:
String number = invoiceNumberService.nextInvoiceNumber();
and the invoiceNumberService method is:
InvoiceNumber invoiceNumber = invoiceNumberRepository.findOne(1L);
It uses a simple Spring Data repository method and works well. But when I override this method to use locking:
@Lock(LockModeType.PESSIMISTIC_READ)
@Override
InvoiceNumber findOne(Long id);
I am getting "javax.persistence.OptimisticLockException: Row was updated or deleted by another transaction"
I can't understand why it's an optimistic lock exception while I am using pessimistic locking. And where is the part where another transaction changes this entity?
I have already dug through a lot of similar questions and I am quite desperate about this. Thanks for any help.
Solution:
The problem was in my init function in test class:
@Before
public void init() {
    InvoiceNumber invoiceNumber = new InvoiceNumber(1);
    em.persist(invoiceNumber);
    em.flush();
}
It was missing
em.flush();
which writes the data to the database, so findOne() can now retrieve it.
Question: Have you put the @Transactional annotation on the DAO or on the service layer?
It happens when two transactions simultaneously try to change the data of the same table. So if you remove all the annotations from the DAO layer and put them in the service layer, it should solve the problem, I think, because I faced a similar kind of problem.
Hope it helps.
Just for the sake of it I'll post the following; if anyone disagrees, please correct me. In general, in Java you are advised to use Spring/Hibernate and JPA. Hibernate implements JPA, so you will need dependencies for Spring and Hibernate.
Next, let Spring/Hibernate manage your transactions and the committing part. It is bad practice to flush/commit your data yourself.
For instance, let's assume the following method:
public void changeName(long id, String newName) {
    CustomEntity entity = dao.find(id);
    entity.setName(newName);
}
Nothing will happen after this method (you could call merge and commit yourself). But if you annotate it with @Transactional, your entity will be managed, and at the end of the @Transactional method Spring/Hibernate will commit your changes. So this is enough:
@Transactional
public void changeName(long id, String newName) {
    CustomEntity entity = dao.find(id);
    entity.setName(newName);
}
No need to call flush; Spring/Hibernate will handle all the mess for you. Just don't forget that your tests have to call @Transactional methods, or should be @Transactional themselves.
I am writing an application that has two typical entities: User and UserGroup. The latter may contain one or more instances of the former. I have the following (more or less) mapping for that:
User:
public class User {

    @Id
    @GeneratedValue
    private long id;

    @ManyToOne(cascade = {CascadeType.MERGE})
    @JoinColumn(name = "GROUP_ID")
    private UserGroup group;

    public UserGroup getGroup() {
        return group;
    }

    public void setGroup(UserGroup group) {
        this.group = group;
    }
}
User group:
public class UserGroup {

    @Id
    @GeneratedValue
    private long id;

    @OneToMany(mappedBy = "group", cascade = {CascadeType.REMOVE}, targetEntity = User.class)
    private Set<User> users;

    public void setUsers(Set<User> users) {
        this.users = users;
    }
}
Now I have a separate DAO class for each of these entities (UserDao and UserGroupDao). All my DAOs have an EntityManager injected using the @PersistenceContext annotation, like this:
@Transactional
public class SomeDao<T> {

    private Class<T> persistentClass;

    @PersistenceContext
    private EntityManager em;

    public T findById(long id) {
        return em.find(persistentClass, id);
    }

    public void save(T entity) {
        em.persist(entity);
    }
}
With this layout I want to create a new user and assign it to an existing user group. I do it like this:
UserGroup ug = userGroupDao.findById(1);
User u = new User();
u.setName("john");
u.setGroup(ug);
userDao.save(u);
Unfortunately I get the following exception:
object references an unsaved transient instance - save the transient
instance before flushing: x.y.z.model.User.group ->
x.y.z.model.UserGroup
I investigated it and I think it happens because each DAO instance has a different entityManager assigned (I checked that: the references to the entity manager in each DAO are different), and the user's entityManager does not manage the passed UserGroup instance.
I've tried to merge the user group assigned to the user into the UserDao's entity manager. There are two problems with that:
It still doesn't work: the entity manager wants to overwrite the existing UserGroup and gets an exception (obviously)
Even if it worked, I would end up writing merge code for each related entity
The described case works when both the find and the persist are made using the same entity manager. This leads to a few questions:
Is my design broken? I think it is pretty similar to the one recommended in this answer. Should there be a single EntityManager for all DAOs (the web claims otherwise)?
Or should the group assignment be done inside the DAO? In that case I would end up writing a lot of code in the DAOs.
Should I get rid of DAOs? If yes, how do I handle data access nicely?
Any other solution?
I am using Spring as container and Hibernate as JPA implementation.
Different instances of EntityManager are normal in Spring. It creates proxies that dynamically use the entity manager that is currently in a transaction if one exists; otherwise, a new one is created.
The problem is that your transactions are too short. Retrieving your user group executes in a transaction (because the findById method is implicitly @Transactional), but then the transaction commits and the group becomes detached. When you save the new user, a new transaction is created, and it fails because the user references a detached entity.
The way to solve this (and to do such things in general) is to create a method that performs the whole operation in a single transaction. Just create that method in a service class (any Spring-managed component will work) and annotate it with @Transactional as well.
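A minimal sketch of such a method, reusing the DAOs from the question (the service name is made up):

@Service
public class UserRegistrationService {

    private final UserDao userDao;
    private final UserGroupDao userGroupDao;

    public UserRegistrationService(UserDao userDao, UserGroupDao userGroupDao) {
        this.userDao = userDao;
        this.userGroupDao = userGroupDao;
    }

    // One transaction spans both the find and the persist,
    // so the UserGroup stays managed while the User is saved
    @Transactional
    public User createUserInGroup(String name, long groupId) {
        UserGroup group = userGroupDao.findById(groupId);
        User user = new User();
        user.setName(name);
        user.setGroup(group);
        userDao.save(user);
        return user;
    }
}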
I don't know Spring, but the JPA issue is that you are persisting a User that has a reference to a UserGroup, but JPA thinks the UserGroup is transient.
transient is one of the life-cycle states a JPA entity can be in. It means it has just been created with the new operator but has not been persisted yet (it does not have a persistent identity yet).
Since you obtain your UserGroup instance via a DAO, it seems like something is wrong there. Your instance should not be transient but detached. Can you print the Id of the UserGroup instance just after you receive it from the DAO? And perhaps also show the findById implementation?
You don't have cascade persist on the group relation, so this should normally just work if the entity was indeed detached. With a truly transient entity, JPA simply has no way to set the FK correctly, since it would need the Id of the UserGroup instance, and that (seemingly) doesn't exist.
A merge should also not "overwrite" your detached entity. What is the exception that you're getting here?
I only partially agree with the answers given by the others here about having to put everything in one transaction. Yes, this may indeed be more convenient, as the UserGroup instance will still be 'attached', but it should not be necessary. JPA is perfectly capable of persisting new entities with references to either other new entities or existing (detached) entities that were obtained in another transaction. See e.g. JPA cascade persist and references to detached entities throws PersistentObjectException. Why?
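As an illustration of that last point, one way to sidestep the transient/detached question entirely is to attach the group by id inside the persisting transaction via EntityManager.getReference, which returns a managed lazy proxy without loading the row (a sketch; assumes em is the injected EntityManager):

@Transactional
public User createUser(String name, long groupId) {
    // A managed reference to the existing row; only its id (the FK value) is needed
    UserGroup groupRef = em.getReference(UserGroup.class, groupId);
    User user = new User();
    user.setName(name);
    user.setGroup(groupRef);
    em.persist(user);
    return user;
}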
I am not sure how, but I've managed to solve this. The user group I was trying to assign the user to had a NULL version field in the database (the field annotated with @Version). I figured out it was the issue when I was testing GWT RequestFactory against this table. When I set the field to 1, everything started to work (no changes in transaction handling were needed).
If the NULL version field really caused the problem, then this is one of the most misleading exception messages I have ever got.