Using transactions when combining a RepositoryRestResource with another repository - java

I have a simple JpaRepository annotated with @RepositoryRestResource:
@RepositoryRestResource
public interface ItemRepository extends JpaRepository<Item, UUID> { }
Whenever something is changed in the database, I want to update a file. I do this using a RepositoryEventHandler:
@Component
@RepositoryEventHandler
public class ItemRepositoryEventHandler {

    @HandleAfterCreate
    @HandleAfterSave
    @HandleAfterDelete
    public void itemChanged(Item item) {
        writeToFile();
    }
}
What I want to do is if there is an error while writing the contents to file, then the database should be rolled back.
I tried adding the @Transactional annotation to the ItemRepository, but it didn't work. Debugging revealed that the RepositoryRestResource performs three steps: emitting the BeforeXXX events, persisting to the database, then emitting the AfterXXX events. It only uses a transaction for the persistence step, not across all three.
So I see no way to use a transaction across the whole operation. The only alternative I see is to not use @RepositoryRestResource, but to implement the web layer manually and use a service that employs a transaction across both repositories. Is there an easier way?

One approach is to implement your business logic with a custom controller and a service, but that neutralizes the advantages of Spring Data REST.
Another option (in my opinion more natural for SDR) is to use events published from aggregate roots. In this case you extend your entities from AbstractAggregateRoot and implement a method that publishes an 'event'. You can then handle this event (with the help of @EventListener) in the same transaction, during the process of saving your entity. For example:
@Entity
public class Order extends AbstractAggregateRoot<Order> {
    //...

    public void registerItems(List<Item> items) {
        this.registerEvent(new RegisterItemsEvent(this, items));
    }
}

@Getter
@RequiredArgsConstructor
public class RegisterItemsEvent {
    private final Order order;
    private final List<Item> items;
}

@RequiredArgsConstructor
@Component
public class EventHandler {

    private final ItemRepo itemRepo;

    @EventListener
    @Transactional(propagation = MANDATORY)
    public void handleRegisterItemsEvent(RegisterItemsEvent e) {
        Order order = e.getOrder();
        List<Item> items = e.getItems();
        // update items with order - skipped...
        itemRepo.saveAll(items);
    }
}
Usage example:
@Component
@RepositoryEventHandler
public class OrderEventHandler {

    @HandleBeforeCreate
    public void handleOrderCreate(Order order) {
        // prepare a List of items - skipped...
        order.registerItems(items);
    }
}
When SDR saves the Order, it emits the RegisterItemsEvent, which is handled in the handleRegisterItemsEvent method of your EventHandler, which saves the prepared items in the same transaction (we use the propagation = MANDATORY parameter of the @Transactional annotation to make sure a transaction is already present).
Additional info: Domain event publication from aggregate roots
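The mechanics behind AbstractAggregateRoot can be illustrated without Spring: the aggregate buffers events, and the repository drains and dispatches them as part of the same save operation. The sketch below is a simplified stand-in for what Spring Data does internally (all class names besides Order/RegisterItemsEvent are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal stand-in for AbstractAggregateRoot: the aggregate buffers
// domain events until the repository publishes them on save.
abstract class AggregateRoot {
    private final List<Object> domainEvents = new ArrayList<>();

    protected void registerEvent(Object event) {
        domainEvents.add(event);
    }

    List<Object> drainEvents() {
        List<Object> copy = new ArrayList<>(domainEvents);
        domainEvents.clear();
        return copy;
    }
}

class Order extends AggregateRoot {
    void registerItems(List<String> items) {
        registerEvent(new RegisterItemsEvent(this, items));
    }
}

record RegisterItemsEvent(Order order, List<String> items) {}

// The "repository": persisting and event dispatch happen in one unit of work.
class OrderRepository {
    private final Consumer<Object> eventBus;

    OrderRepository(Consumer<Object> eventBus) { this.eventBus = eventBus; }

    void save(Order order) {
        // 1. persist the order (skipped)  2. publish the buffered events
        order.drainEvents().forEach(eventBus);
    }
}
```

Because the listener is invoked while the save is still in progress, Spring can run @EventListener handlers inside the saving transaction; note also that the buffer is drained, so events fire only once per save.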
UPDATED
Regarding your particular task, you can create a class ItemChangedEvent:
@Getter
@RequiredArgsConstructor
public class ItemChangedEvent {
    private final Item item;
}
Implement a method markAsChanged in the Item entity:
@Entity
public class Item extends AbstractAggregateRoot<Item> {
    //...

    public void markAsChanged() {
        this.registerEvent(new ItemChangedEvent(this));
    }
}
When the item changes, you will mark it as "changed":
@Component
@RepositoryEventHandler
public class ItemRepositoryEventHandler {

    @HandleBeforeCreate
    @HandleBeforeSave
    @HandleBeforeDelete
    public void itemChanged(Item item) {
        item.markAsChanged();
    }
}
And write it to a file in the ItemChangedEvent handler in the same transaction:
@Component
public class EventHandler {

    @EventListener
    @Transactional(propagation = MANDATORY)
    public void handleItemChangedEvent(ItemChangedEvent e) {
        Item item = e.getItem();
        writeToFile(item);
    }
}
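One caveat for the rollback goal: by default Spring rolls a transaction back only for unchecked exceptions, so if writing the file throws a checked IOException, the handler should rethrow it unchecked (or the @Transactional should declare rollbackFor = IOException.class). A minimal sketch of the rethrow, with ItemFileWriter and its target path as hypothetical names:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

class ItemFileWriter {
    private final Path target;

    ItemFileWriter(Path target) { this.target = target; }

    // Rethrow the checked IOException unchecked so a surrounding
    // @Transactional method rolls back on failure (Spring's default
    // rollback rule covers only RuntimeException and Error).
    void writeToFile(String contents) {
        try {
            Files.writeString(target, contents);
        } catch (IOException e) {
            throw new UncheckedIOException("writing item file failed", e);
        }
    }
}
```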

Related

check if an entity exists and then delete it in one transaction in a Spring app

I have a service layer and a repository layer in my Spring Boot application (I also use Spring Data, MVC, etc.).
Before deleting an entity from the database, I want to check whether such an entity exists and, if not, throw an EntityNotFoundException.
For example, my repository:
public interface RoomRepository extends CrudRepository<Room, Long> {

    @Query("from Room r left join fetch r.messages where r.id = :rId")
    Optional<Room> findByIdWithMessages(@Param("rId") long id);

    @Override
    List<Room> findAll();
}
and service:
@Service
@Loggable
public class RoomService implements GenericService<Room> {

    private final RoomRepository roomRepository;
    private final RoomDtoMapper roomMapper;

    public RoomService(RoomRepository roomRepository, RoomDtoMapper roomMapper) {
        this.roomRepository = roomRepository;
        this.roomMapper = roomMapper;
    }

    @Override
    public Room getById(long id) {
        return roomRepository.findById(id).orElseThrow(
                () -> new EntityNotFoundException(String.format("room with id = %d wasn't found", id)));
    }

    @Override
    public void delete(Room room) {
        getById(room.getId());
        roomRepository.delete(room);
    }
}
In the delete method I call
getById(room.getId())
(so that it throws an EntityNotFoundException if the entity does not exist)
before
roomRepository.delete(room);
It seems to me that this code is not thread-safe and the operation is not atomic (between the check in this thread and the delete, another request on another thread may already have deleted the same entity), and I don't know if I'm doing the right thing.
Maybe I should add the @Transactional annotation? Would that make the method atomic? Like this:
@Override
@Transactional
public void delete(Room room) {
    getById(room.getId());
    roomRepository.delete(room);
}
Maybe I should set some kind of isolation level?
You can test whether the object exists before deleting it by autowiring the repository (in your case RoomRepository; instead of User in my example you can use Room). For example:
public ResponseEntity<Object> deleteUserById(Long id) {
    if (userRepository.findById(id).isPresent()) {
        userRepository.deleteById(id);
        return ResponseEntity.ok().body("User deleted with success");
    } else {
        return ResponseEntity.unprocessableEntity().body("user to be deleted does not exist");
    }
}
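The race the question worries about is the classic check-then-act problem, and checking with findById before deleting (as above) still has it. The usual fix is to make the delete itself report whether it removed anything, in one atomic step, rather than checking first. The same idea in plain Java, using a concurrent map as a stand-in for the table (RoomStore is a hypothetical name):

```java
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class RoomStore {
    private final ConcurrentMap<Long, String> rooms = new ConcurrentHashMap<>();

    void save(long id, String name) { rooms.put(id, name); }

    // Non-atomic: another thread can delete between the check and the remove.
    void deleteCheckThenAct(long id) {
        if (!rooms.containsKey(id)) {
            throw new NoSuchElementException("room " + id + " wasn't found");
        }
        rooms.remove(id);
    }

    // Atomic: the removal itself reports whether the entry existed,
    // mirroring a SQL "delete ... where id = ?" that affected 0 rows.
    void deleteAtomic(long id) {
        if (rooms.remove(id) == null) {
            throw new NoSuchElementException("room " + id + " wasn't found");
        }
    }
}
```

On the database side the equivalent is a single delete statement whose affected-row count you inspect; @Transactional alone does not make check-then-delete atomic unless you also lock the row or raise the isolation level.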

How to use Mongo Auditing and a UUID as id with Spring Boot 2.2.x?

I would like to have Documents stored with a UUID id and createdAt / updatedAt fields. My solution was working with Spring Boot 2.1.x. After I upgraded from Spring Boot 2.1.11.RELEASE to 2.2.0.RELEASE, my test for MongoAuditing failed with createdAt = null. What do I need to do to get the createdAt field filled again?
This is not just a test problem. I ran the application and it behaves the same way: all auditing fields stay null.
I have a Configuration to enable MongoAuditing and UUID generation:
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public GenerateUUIDListener generateUUIDListener() {
        return new GenerateUUIDListener();
    }
}
The listener hooks into onBeforeConvert - I guess that's where the trouble starts.
public class GenerateUUIDListener extends AbstractMongoEventListener<IdentifiableEntity> {

    @Override
    public void onBeforeConvert(BeforeConvertEvent<IdentifiableEntity> event) {
        IdentifiableEntity entity = event.getSource();
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
    }
}
The document itself (I dropped the getters and setters):
@Document
public class MyDocument extends InsertableEntity {
    private String name;
}

public abstract class InsertableEntity extends IdentifiableEntity {

    @CreatedDate
    @JsonIgnore
    private Instant createdAt;
}

public abstract class IdentifiableEntity implements Persistable<UUID> {

    @Id
    private UUID id;

    @JsonIgnore
    public boolean isNew() {
        return getId() == null;
    }
}
A complete minimal example (including a test) can be found here: https://github.com/mab/auditable
With 2.1.11.RELEASE the test succeeds; with 2.2.0.RELEASE it fails.
For me the best solution was to switch from event-based UUID generation to a callback-based one. By implementing Ordered we can make the new callback execute after the AuditingEntityCallback.
public class IdEntityCallback implements BeforeConvertCallback<IdentifiableEntity>, Ordered {

    @Override
    public IdentifiableEntity onBeforeConvert(IdentifiableEntity entity, String collection) {
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
        return entity;
    }

    @Override
    public int getOrder() {
        return 101;
    }
}
I registered the callback in the MongoConfiguration. For a more general solution you might want to take a look at how MongoAuditingBeanDefinitionParser registers the AuditingEntityCallback.
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public IdEntityCallback registerCallback() {
        return new IdEntityCallback();
    }
}
MongoTemplate works in the following way in doInsert():
1. this.maybeEmitEvent - emits an event (onBeforeConvert, onBeforeSave, and so on) that any AbstractMappingEventListener can catch and act upon, as you did with GenerateUUIDListener
2. this.maybeCallBeforeConvert - calls the before-convert callbacks, such as Mongo auditing
as you can see in the source code of MongoTemplate (lines 831-832):
protected <T> T doInsert(String collectionName, T objectToSave, MongoWriter<T> writer) {
    BeforeConvertEvent<T> event = new BeforeConvertEvent(objectToSave, collectionName);
    T toConvert = ((BeforeConvertEvent) this.maybeEmitEvent(event)).getSource(); // emit event
    toConvert = this.maybeCallBeforeConvert(toConvert, collectionName); // call before-convert callbacks
    ...
}
Mongo auditing sets createdAt only on new entities, which it detects by checking entity.isNew() == true.
Because your code already set the id (the UUID), the entity is not considered new, so createdAt is not populated.
You can do the following (ordered from best to worst):
1. Forget about the UUID and use String for your id; let Mongo itself create and manage its entities' ids (this is how MongoTemplate actually works, lines 811-812).
2. Keep the UUID at the code level, converting from/to String when inserting into and retrieving from the db.
3. Create a custom repository like in this post.
4. Stay with 2.1.11.RELEASE.
5. Set updatedAt in GenerateUUIDListener as well as the id (rename it NewEntityListener or something); basically implement the auditing yourself.
6. Implement new isNew() logic that doesn't depend only on the entity id.
In version 2.1.11.RELEASE the order of the two methods was flipped (MongoTemplate lines 804-805), so your code worked fine.
As a more abstract point, events are by nature send-and-forget (async-compatible), so it is very bad practice to mutate the object itself in an event listener: there is NO guarantee of computation order, if any.
This is why the auditing is built on callbacks and not events, and why Pivotal doesn't (need to) keep their order stable between versions.
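Option 6 above (an isNew() that doesn't depend only on the id) can be done with a transient flag that flips after the first persist. A framework-free sketch of the idea, with markPersisted as a hypothetical hook that a Spring Data AfterSaveCallback would call in real code:

```java
import java.util.UUID;

// isNew() based on a transient "persisted" flag instead of a null id,
// so assigning the UUID up front no longer hides that the entity is new.
abstract class IdentifiableEntity {
    private final UUID id = UUID.randomUUID(); // id assigned at construction

    private transient boolean persisted; // not stored in the database

    public UUID getId() { return id; }

    public boolean isNew() { return !persisted; }

    // In real code, call this from an after-save callback.
    public void markPersisted() { this.persisted = true; }
}

class MyDocument extends IdentifiableEntity {
    private final String name;
    MyDocument(String name) { this.name = name; }
}
```

With this, auditing sees isNew() == true on the first save even though the id is already set, which is exactly the condition the upgrade broke.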

Hibernate LazyInitializationException if entity is fetched in JWTAuthorizationFilter

I'm using Spring Rest. I have an Entity called Operator that goes like this:
@Entity
@Table(name = "operators")
public class Operator {

    //various properties

    private List<OperatorRole> operatorRoles;

    //various getters and setters

    @LazyCollection(LazyCollectionOption.TRUE)
    @OneToMany(mappedBy = "operator", cascade = CascadeType.ALL)
    public List<OperatorRole> getOperatorRoles() {
        return operatorRoles;
    }

    public void setOperatorRoles(List<OperatorRole> operatorRoles) {
        this.operatorRoles = operatorRoles;
    }
}
I also have a corresponding OperatorRepository that extends JpaRepository.
I defined a controller that exposes this API:
@RestController
@RequestMapping("/api/operators")
public class OperatorController {

    private final OperatorRepository operatorRepository;

    @Autowired
    public OperatorController(OperatorRepository operatorRepository) {
        this.operatorRepository = operatorRepository;
    }

    @GetMapping(value = "/myApi")
    @Transactional(readOnly = true)
    public MyResponseBody myApi(@ApiIgnore @AuthorizedConsumer Operator operator) {
        if (operator.getOperatorRoles() != null) {
            for (OperatorRole current : operator.getOperatorRoles()) {
                //do things
            }
        }
    }
}
This used to work before I made the operatorRoles list lazy; now if I try to iterate over the list it throws a LazyInitializationException.
The Operator parameter is fetched from the DB by a filter that extends Spring's BasicAuthenticationFilter and is then somehow autowired into the API call.
I can get other, non-lazily-initialized properties without problems. If I do something like operator = operatorRepository.getOne(operator.getId());, everything works, but I would need to change this in too many places in the code.
From what I understand, the problem is that the session used to fetch the Operator in the BasicAuthenticationFilter is no longer open by the time I reach the actual API method in OperatorController.
I managed to wrap everything in an OpenSessionInViewFilter, but it still doesn't work.
Anyone has any ideas?
I was having this very same problem for a long time and was using FetchType.EAGER, but today something clicked in my head.
@Transactional didn't work, so I thought: if declarative transactions don't work, maybe programmatic ones do. And they do!
Based on the Spring programmatic transaction management docs:
public class JwtAuthorizationFilter extends BasicAuthenticationFilter {

    private final TransactionTemplate transactionTemplate;

    public JwtAuthorizationFilter(AuthenticationManager authenticationManager,
                                  PlatformTransactionManager transactionManager) {
        super(authenticationManager);
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        // Set your desired propagation behavior, isolation level, readOnly, etc.
        this.transactionTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
    }

    private void doSomething() {
        transactionTemplate.execute(transactionStatus -> {
            // execute your queries
            return null;
        });
    }
}
It may be too late for you, but I hope it helps others.

how to design storage for transactions involving multiple tables in dynamodb?

I am trying to add transaction support to an existing dynamodb storage which looks like this:
public interface Storage<T> {
    T put(T entity);
    ...
}

public abstract class AbstractDynamoStorage<T> implements Storage<T> {

    @Override
    public T put(T entity) {
        ...
    }
}

public class DynamoOrderStorage extends AbstractDynamoStorage<CoreOrder> {
    ...
}

public class DynamoCustomerStorage extends AbstractDynamoStorage<CoreCustomer> {
    ...
}
Now I want to add transaction support using the newly launched DDB transactions, to be able to commit multiple operations (put, write, update, ...) across multiple tables.
Here's my approach:
interface TransactDAO {
    void commitWriteTransaction(TransactWriteRequest writeReq);
}

class DynamoTransactImpl implements TransactDAO {

    @Override
    public void commitWriteTransaction(TransactWriteRequest request) {
        //dynamodb.transactWriteItems();
    }
}

class DynamoDBTransactWriteItem implements TransactWriteRequest {
    List<DynamoTransactWriteItem<T>> transactWriteItems;
}

class DynamoTransactWritePutItem<T> implements DynamoTransactionWriteItem<T> {
    String tableName;
    String data;
    ...
}
My worry is that the concrete storage classes (DynamoOrderStorage and DynamoCustomerStorage) have different type parameters, so my approach might not work. Is there a better way to achieve this?
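One way around the heterogeneous-type worry is to erase the entity type at the transaction boundary: each storage serializes its entity into a (table name, attribute map) pair, and the transaction layer only ever sees that pair, which mirrors how DynamoDB's transactWriteItems works on raw items. A sketch with hypothetical names (in real code the commit step would map each pair to AWS SDK Put/TransactWriteItem objects):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Type-erased write item: the transaction layer never sees CoreOrder
// or CoreCustomer, only a table name plus an attribute map.
record TransactPutItem(String tableName, Map<String, Object> item) {}

interface TransactSerializable {
    TransactPutItem toTransactPut();
}

// Hypothetical collector; a real commit() would translate each item to
// an SDK TransactWriteItem and call dynamoDb.transactWriteItems(...).
class TransactWriteRequest {
    private final List<TransactPutItem> items = new ArrayList<>();

    TransactWriteRequest add(TransactSerializable entity) {
        items.add(entity.toTransactPut());
        return this;
    }

    List<TransactPutItem> items() { return List.copyOf(items); }
}

record CoreOrder(String orderId) implements TransactSerializable {
    public TransactPutItem toTransactPut() {
        return new TransactPutItem("orders", Map.of("orderId", orderId));
    }
}

record CoreCustomer(String customerId) implements TransactSerializable {
    public TransactPutItem toTransactPut() {
        return new TransactPutItem("customers", Map.of("customerId", customerId));
    }
}
```

Entities of different types can now share one transaction request because the generics stop at each storage's serialization step.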

Java: How to handle multiple Hibernate transactions in one request?

I'm not sure where to open my Transaction object: inside the service layer or the controller layer?
My Controller basically has two services, let's call them AService and BService. Then my code goes something like:
public class Controller {

    public AService aService = new AService();
    public BService bService = new BService();

    public void doSomething(SomeData data) {
        //Transaction transaction = HibernateUtil.getSession().openTransaction();
        if (data.getSomeCondition()) {
            aService.save(data.getSomeVar1());
            bService.save(data.getSomeVar2());
        } else {
            bService.save(data.getSomeVar2());
        }
        //transaction.commit(); or optional try-catch with rollback
    }
}
The behavior I want is that if bService#save fails, I can invoke transaction#rollback so that whatever aService saved is rolled back as well. That only seems possible if I create one single transaction covering both saves.
But looking at it from a different perspective, it looks really ugly that my Controller depends on the Transaction. It would be better to create the Transaction inside the respective services (something like how Spring's @Transactional works), but then I don't see how to achieve what I want.
EDIT: Fixed code, added another condition. I am not using any Spring dependencies, so @Transactional is out of the question.
You can accomplish what you're asking with another layer of abstraction and composition.
public class CompositeABService {

    @Autowired
    private AService aservice;

    @Autowired
    private BService bservice;

    @Transactional
    public void save(Object value1, Object value2) {
        aservice.save(value1);
        bservice.save(value2);
    }
}

public class AService {

    @Transactional
    public void save(Object value) {
        // joins an existing transaction if one exists, creates a new one otherwise
    }
}

public class BService {

    @Transactional
    public void save(Object value) {
        // joins an existing transaction if one exists, creates a new one otherwise
    }
}
This same pattern is typically used when you need to interact with multiple repositories as part of a single unit of work (i.e. a transaction).
Now all your controller needs to depend on is CompositeABService, or whatever you wish to name it.
