Limiting number of transactions with Spring Data Repository - java

I am writing a Spring Boot application using Spring Data repositories. I have a method that resets the database and fills it with sample data. It works, but Spring uses hundreds of transactions to do this. Is there any way to limit the number of transactions created by the repositories to 1, or to not use them at all?
I would like to reuse the same transaction within the fillApples and fillBananas methods. I've tried using different combinations of @Transactional(propagation = Propagation.SUPPORTS), but it does not change anything.
interface AppleRepository extends CrudRepository<Apple, Long> {}
interface BananaRepository extends JpaRepository<Banana, Long> {}

@Service
public class FruitService {
    @Autowired
    private AppleRepository appleRepository;
    @Autowired
    private BananaRepository bananaRepository;

    public void reset() {
        clearDb();
        fillApples();
        fillBananas();
        //more fill methods
    }

    private void clearDb() {
        appleRepository.deleteAll();
        bananaRepository.deleteAll();
    }

    private void fillApples() {
        for (int i = 0; i < n; i++) {
            Apple apple = new Apple(...);
            appleRepository.save(apple);
        }
    }

    private void fillBananas() {
        for (int i = 0; i < n; i++) {
            Banana banana = new Banana(...);
            bananaRepository.save(banana);
        }
    }
}

@RestController
public class FruitController {
    @Autowired
    private FruitService fruitService;

    @RequestMapping(...)
    public void reset() {
        fruitService.reset();
    }
}

You have to annotate your reset() method with @Transactional and a propagation setting that makes sure the method runs in a transaction - either creating a new one or reusing an existing one, for example Propagation.REQUIRED (the default for @Transactional).
Your code shows no @Transactional, but in your comment you wrote that you have one and that you use the "wrong" propagation, SUPPORTS. The meaning of SUPPORTS is:
SUPPORTS: Support a current transaction, execute non-transactionally if none exists.
So it will not create a new transaction if there is none; @Transactional(propagation = SUPPORTS) effectively does nothing with the transaction.
So you have to use @Transactional(propagation = Propagation.REQUIRED):
@Transactional(propagation = Propagation.REQUIRED)
public void reset() {
    clearDb();
    fillApples();
    fillBananas();
    //more fill methods
}
See the Propagation Javadoc.
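As a side note, here is a minimal sketch of how the whole service could look with a single transaction. It assumes Spring Data 2.x, where CrudRepository exposes saveAll; the buildApples/buildBananas helpers are hypothetical and only stand in for the sample-data creation:
@Service
public class FruitService {

    private final AppleRepository appleRepository;
    private final BananaRepository bananaRepository;

    public FruitService(AppleRepository appleRepository, BananaRepository bananaRepository) {
        this.appleRepository = appleRepository;
        this.bananaRepository = bananaRepository;
    }

    // One transaction for the whole reset: the deletes and inserts all commit or all roll back together.
    @Transactional
    public void reset() {
        appleRepository.deleteAll();
        bananaRepository.deleteAll();
        appleRepository.saveAll(buildApples());   // hypothetical helper building the sample apples
        bananaRepository.saveAll(buildBananas()); // hypothetical helper building the sample bananas
    }
}
Using saveAll instead of save in a loop also cuts down the number of repository calls, though the key point remains the single @Transactional boundary on reset().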

Related

Hibernate LazyInitializationException if entity is fetched in JWTAuthorizationFilter

I'm using Spring Rest. I have an Entity called Operator that goes like this:
@Entity
@Table(name = "operators")
public class Operator {

    //various properties

    private List<OperatorRole> operatorRoles;

    //various getters and setters

    @LazyCollection(LazyCollectionOption.TRUE)
    @OneToMany(mappedBy = "operator", cascade = CascadeType.ALL)
    public List<OperatorRole> getOperatorRoles() {
        return operatorRoles;
    }

    public void setOperatorRoles(List<OperatorRole> operatorRoles) {
        this.operatorRoles = operatorRoles;
    }
}
I also have the corresponding OperatorRepository extends JpaRepository
I defined a controller that exposes this API:
@RestController
@RequestMapping("/api/operators")
public class OperatorController {

    private final OperatorRepository operatorRepository;

    @Autowired
    public OperatorController(OperatorRepository operatorRepository) {
        this.operatorRepository = operatorRepository;
    }

    @GetMapping(value = "/myApi")
    @Transactional(readOnly = true)
    public MyResponseBody myApi(@ApiIgnore @AuthorizedConsumer Operator operator) {
        if (operator.getOperatorRoles() != null) {
            for (OperatorRole current : operator.getOperatorRoles()) {
                //do things
            }
        }
    }
}
This used to work before I made the OperatorRoles list lazy; now if I try to iterate through the list it throws LazyInitializationException.
The Operator parameter is fetched from the DB by a filter that extends Spring's BasicAuthenticationFilter, and is then somehow autowired into the API call.
I can get other, non-lazily-initialized properties without a problem. If I do something like operator = operatorRepository.getOne(operator.getId());, everything works, but I would need to change this in too many places in the code.
From what I understand, the problem is that the session used to fetch the Operator in the BasicAuthenticationFilter is no longer open by the time I reach the actual API in the OperatorController.
I managed to wrap everything in an OpenSessionInViewFilter, but it still doesn't work.
Anyone has any ideas?
I was having this very same problem for a long time and was using FetchType.EAGER, but today something clicked in my head ...
@Transactional didn't work, so I thought: if declarative transactions don't work, maybe programmatic ones will. And they do!
Based on Spring Programmatic Transactions docs:
public class JwtAuthorizationFilter extends BasicAuthenticationFilter {

    private final TransactionTemplate transactionTemplate;

    public JwtAuthorizationFilter(AuthenticationManager authenticationManager,
                                  PlatformTransactionManager transactionManager) {
        super(authenticationManager);
        this.transactionTemplate = new TransactionTemplate(transactionManager);
        // Set your desired propagation behavior, isolation level, readOnly, etc.
        this.transactionTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
    }

    private void doSomething() {
        transactionTemplate.execute(transactionStatus -> {
            // execute your queries
        });
    }
}
It could be late for you, but I hope it helps others.
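For context, a hedged sketch of how this could be used inside the filter's doFilterInternal. The operatorRepository field, the findByUsername query, and the extractUsername helper are assumptions for illustration, not part of the original code:
@Override
protected void doFilterInternal(HttpServletRequest request,
                                HttpServletResponse response,
                                FilterChain chain) throws IOException, ServletException {
    // Run the DB work inside a programmatic transaction so the session stays open
    // while the lazy operatorRoles collection is initialized.
    Operator operator = transactionTemplate.execute(status -> {
        Operator op = operatorRepository.findByUsername(extractUsername(request)); // hypothetical lookup
        op.getOperatorRoles().size(); // force initialization while the session is still open
        return op;
    });
    // ... build the Authentication from the fully initialized operator, then continue the chain
    chain.doFilter(request, response);
}
The important detail is that the lazy collection is touched inside the execute callback, so it is fully loaded before the transaction (and session) ends.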

Creating a Spring @Repository and @Controller for every item I'm working with (from the database)

While working on a project that involves requesting multiple data types from a database, I came to the following question:
Let's say I have 2 Java classes that correspond to database entities:
Routes
public class Route {
    public Route(int n, int region, Date fdate, boolean changed, int points,
                 int length) {
        super();
        this.n = n;
        this.region = region;
        this.fdate = fdate;
        this.changed = changed;
        this.points = points;
        this.length = length;
    }
}
Carrier
public class Carrier {
    public Carrier(...) {
        this.id = src.getId();
        this.name = src.getName();
        this.instId = src.getInstId();
        this.depotId = src.getDepotId();
    }
}
If so, what's the correct approach to creating DAO interfaces and classes? I'm doing it like this:
@Repository
public class CarrierDaoImpl implements CarrierDao {
    @Autowired
    DataSource dataSource;

    public List<Carrier> getAllOrgs() { ... }
}

@Repository
public class RoutesDaoImpl implements RoutesDao {
    @Autowired
    DataSource dataSource;

    public ArrayList<AtmRouteItem> getRoutes(AtmRouteFilter filter) { ... }
}
I'm creating a @Repository DAO for every Java class / DB entity, and then 2 separate controllers for requests about carriers and routes, like this:
@RestController
@RequestMapping(path = "/routes")
public class RoutesController {
    @Autowired
    RoutesDao routesDao;

    @GetMapping(value = {"/getRoutes/", "/getRoutes"})
    public ArrayList<Route> getRoutes() { ... }
}
And the same for the Carriers controller. Is this correct, and if not, what's the correct approach?
Sorry for styling issues, that's my first question on stackoverflow :)
I would suggest creating services marked with the @Service annotation (i.e. a CarrierService interface and a CarrierServiceImpl implementation), then injecting them into the controllers. Use the repositories within the services, because some database operations will require transactions, and a better place for managing transactions is the service layer. Services can also do more specialized work that requires access to multiple repositories, so you can inject several of them. And don't forget to mark your services with the @Transactional annotation.
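A minimal sketch of that layering, reusing the CarrierDao and getAllOrgs from the question; the CarrierService/CarrierServiceImpl names and the getAllCarriers method are illustrative assumptions:
public interface CarrierService {
    List<Carrier> getAllCarriers();
}

@Service
public class CarrierServiceImpl implements CarrierService {

    private final CarrierDao carrierDao;

    @Autowired
    public CarrierServiceImpl(CarrierDao carrierDao) {
        this.carrierDao = carrierDao;
    }

    // The transaction boundary lives in the service, not in the controller or the DAO.
    @Transactional(readOnly = true)
    @Override
    public List<Carrier> getAllCarriers() {
        return carrierDao.getAllOrgs();
    }
}
The controller then depends only on CarrierService, and the DAO stays free of transaction concerns.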
It's correct to have a DAO for each entity.
When working with JPA repositories you have no choice but to provide the entity. For instance:
public interface FooRepository extends JpaRepository<Foo,Long>{}
Same for the REST controllers, you have to bring together functionalities by object as you do.
You can improve your mapping to be more RESTful. To retrieve all routes, don't specify a path:
@GetMapping
public ArrayList<RouteResource> getRoutes() { ... }
(I haven't used @GetMapping yet, but it should work like that.)
And if you want a specific route:
@GetMapping("/get/{id}")
public RouteResource getRoute(@PathVariable("id") long id) { ... }
You should return resources instead of entities to the client.
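For that last point, one possible (illustrative) shape of such a resource class and its mapping. The field names and the getters on Route are assumptions based on the Route constructor shown in the question:
public class RouteResource {

    private final int n;
    private final int region;
    private final int length;

    public RouteResource(Route route) {
        // copy only what the client needs; the entity itself never leaves the service layer
        this.n = route.getN();           // assumes a getter exists on Route
        this.region = route.getRegion();
        this.length = route.getLength();
    }

    public int getN() { return n; }
    public int getRegion() { return region; }
    public int getLength() { return length; }
}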

Java: How to handle multiple Hibernate transactions in one request?

I'm not sure where to open my Transaction object. Inside the service layer? Or the controller layer?
My Controller basically has two services, let's call them AService and BService. Then my code goes something like:
public class Controller {
    public AService aService = new AService();
    public BService bService = new BService();

    public void doSomething(SomeData data) {
        //Transaction transaction = HibernateUtil.getSession().openTransaction();
        if (data.getSomeCondition()) {
            aService.save(data.getSomeVar1());
            bService.save(data.getSomeVar2());
        }
        else {
            bService.save(data.getSomeVar2());
        }
        //transaction.commit(); or optional try-catch with rollback
    }
}
The behavior I want is that if bService#save fails, I can invoke transaction#rollback so that whatever was saved through aService is rolled back as well. This only seems possible if I create one single transaction for both saves.
But looking at it from a different perspective, it looks really ugly that my Controller depends on the Transaction. It would be better if I created the Transaction inside the respective services (something like how Spring's @Transactional works), but if I do it that way, I don't know how to achieve what I want.
EDIT: Fixed code, added another condition. I am not using any Spring dependencies so the usage of #Transactional is out of the question.
You can accomplish what you're asking with another layer of abstraction and using composition.
public class CompositeABService {

    @Autowired
    private AService aservice;

    @Autowired
    private BService bservice;

    @Transactional
    public void save(Object value1, Object value2) {
        aservice.save(value1);
        bservice.save(value2);
    }
}

public class AService {
    @Transactional
    public void save(Object value) {
        // joins an existing transaction if one exists, creates a new one otherwise.
    }
}

public class BService {
    @Transactional
    public void save(Object value) {
        // joins an existing transaction if one exists, creates a new one otherwise.
    }
}
This same pattern is typically used when you need to interact with multiple repositories as a part of a single unit of work (e.g. transaction).
Now all your controller needs to depend upon is CompositeABService or whatever you wish to name it.
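For illustration, the controller from the question would then shrink to something like this. This is only a sketch: it assumes the services are Spring-managed beans (e.g. annotated with @Service) rather than created with new, since @Transactional only works through Spring proxies:
public class Controller {

    @Autowired
    private CompositeABService compositeABService;
    @Autowired
    private BService bService;

    public void doSomething(SomeData data) {
        if (data.getSomeCondition()) {
            // one transactional unit of work covering both saves
            compositeABService.save(data.getSomeVar1(), data.getSomeVar2());
        } else {
            bService.save(data.getSomeVar2());
        }
    }
}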

Spring cannot propagate transaction to ForkJoin's RecursiveAction

I am trying to implement a multi-threaded solution so I can parallelize my business logic that includes reading and writing to a database.
Technology stack: Spring 4.0.2, Hibernate 4.3.8
Here is some code to discuss on:
Configuration
@Configuration
public class PartitionersConfig {

    @Bean
    public ForkJoinPoolFactoryBean forkJoinPoolFactoryBean() {
        final ForkJoinPoolFactoryBean poolFactory = new ForkJoinPoolFactoryBean();
        return poolFactory;
    }
}
Service
@Service
@Transactional
public class MyService {

    @Autowired
    private OtherService otherService;

    @Autowired
    private ForkJoinPool forkJoinPool;

    @Autowired
    private MyDao myDao;

    public void performPartitionedActionOnIds() {
        final ArrayList<UUID> ids = otherService.getIds();
        MyIdsPartitioner task = new MyIdsPartitioner(ids, myDao, 0, ids.size() - 1);
        forkJoinPool.invoke(task);
    }
}
Repository / DAO
@Repository
@Transactional(propagation = Propagation.MANDATORY)
public class IdsDao {

    public MyData getData(List<UUID> list) {
        // ...
    }
}
RecursiveAction
public class MyIdsPartitioner extends RecursiveAction {

    private static final long serialVersionUID = 1L;
    private static final int THRESHOLD = 100;

    private ArrayList<UUID> ids;
    private int fromIndex;
    private int toIndex;
    private MyDao myDao;

    public MyIdsPartitioner(ArrayList<UUID> ids, MyDao myDao, int fromIndex, int toIndex) {
        this.ids = ids;
        this.fromIndex = fromIndex;
        this.toIndex = toIndex;
        this.myDao = myDao;
    }

    @Override
    protected void compute() {
        if (computationSetIsSmallEnough()) {
            computeDirectly();
        } else {
            int leftToIndex = fromIndex + (toIndex - fromIndex) / 2;
            MyIdsPartitioner leftPartitioner = new MyIdsPartitioner(ids, myDao, fromIndex, leftToIndex);
            MyIdsPartitioner rightPartitioner = new MyIdsPartitioner(ids, myDao, leftToIndex + 1, toIndex);
            invokeAll(leftPartitioner, rightPartitioner);
        }
    }

    private boolean computationSetIsSmallEnough() {
        return (toIndex - fromIndex) < THRESHOLD;
    }

    private void computeDirectly() {
        final List<UUID> subList = ids.subList(fromIndex, toIndex);
        final MyData myData = myDao.getData(subList);
        modifyTheData(myData);
    }

    private void modifyTheData(MyData myData) {
        // ...
        // write to DB
    }
}
After executing this I get:
No existing transaction found for transaction marked with propagation 'mandatory'
I understood that this is perfectly normal since the transaction doesn't propagate through different threads. So one solution is to create a transaction manually in every thread as proposed in another similar question. But this was not satisfying enough for me so I kept searching.
In Spring's forum I found a discussion on the topic. One paragraph I find very interesting:
"I can imagine one could manually propagate the transaction context to another thread, but I don't think you should really try it. Transactions are bound to single threads with a reason - the basic underlying resource - jdbc connection - is not threadsafe. Using one single connection in multiple threads would break fundamental jdbc request/response contracts and it would be a small wonder if it would work in more then trivial examples."
So the first question arises: is it worth it to parallelize reading from/writing to the database, and can this really hurt DB consistency?
If the quote above is not true, which I doubt, is there a way to achieve the following:
make MyIdsPartitioner Spring-managed - with @Scope("prototype") - pass it the needed arguments for the recursive calls, and that way leave the transaction management to Spring?
After further reading I managed to solve my problem. Kind of (as I see it now, there wasn't a problem in the first place).
Since the reading I do from the DB is in chunks, and I am sure the results won't get edited during that time, I can do it outside a transaction.
The writing is also safe in my case, since all values I write are unique and no constraint violations can occur. So I removed the transaction from there too.
By "I removed the transaction" I mean that I just override the method's propagation mode in my DAO, like this:
@Repository
@Transactional(propagation = Propagation.MANDATORY)
public class IdsDao {

    @Transactional(propagation = Propagation.SUPPORTS)
    public MyData getData(List<UUID> list) {
        // ...
    }
}
Or if you decide you need the transaction for some reason then you can still leave the transaction management to Spring by setting the propagation to REQUIRED.
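If each worker thread did need its own transaction, a programmatic per-thread transaction would be one option. The following is only a sketch: it assumes a PlatformTransactionManager is passed into the partitioner alongside the DAO and kept in a transactionManager field, which is not part of the original code:
private void computeDirectly() {
    TransactionTemplate txTemplate = new TransactionTemplate(transactionManager);
    txTemplate.execute(status -> {
        // each fork/join worker thread runs its work in its own transaction and JDBC connection
        List<UUID> subList = ids.subList(fromIndex, toIndex);
        MyData myData = myDao.getData(subList);
        modifyTheData(myData);
        return null;
    });
}
This keeps the one-transaction-per-thread rule from the quoted forum post intact while still letting Spring manage commit and rollback.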
So the solution turns out to be much much simpler than I thought.
And to answer my other questions:
Is it worth it to parallelize reading from/writing to the database, and can this really hurt DB consistency?
Yes, it's worth it. And as long as you have a transaction per thread you are fine.
Is there a way to make MyIdsPartitioner Spring-managed - with @Scope("prototype") - pass it the needed arguments for the recursive calls, and that way leave the transaction management to Spring?
Yes, there is a way, by using a pool (see another Stack Overflow question). Or you can define your bean as @Scope(value = "prototype", proxyMode = ScopedProxyMode.TARGET_CLASS), but then it won't work if you need to set parameters on it, since every usage of the instance will give you a new instance. For example:
@Autowired
MyIdsPartitioner partitioner;

public void someMethod() {
    ...
    partitioner.setIds(someIds);
    partitioner.setFromIndex(fromIndex);
    partitioner.setToIndex(toIndex);
    ...
}
This will create 3 instances, and you won't be able to use the object usefully since the fields won't be set.
So in short - there is a way, but I didn't need to go for it in the first place.
This should be possible with atomikos (http://www.atomikos.com) and optionally with nested transactions.
If you do this, take care to avoid deadlocks if multiple threads of the same root transaction write to the same tables in the database.

Java Spring #Transactional method not rolling back as expected

Below is a quick outline of what I'm trying to do. I want to push a record to two different tables in the database from one method call. If anything fails, I want everything to roll back. So if insertIntoB fails, I want anything that would be committed in insertIntoA to be rolled back.
public class Service {

    MyDAO dao;

    public void insertRecords(List<Record> records) {
        for (Record record : records) {
            insertIntoAAndB(record);
        }
    }

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW)
    public void insertIntoAAndB(Record record) {
        insertIntoA(record);
        insertIntoB(record);
    }

    @Transactional(propagation = Propagation.REQUIRED)
    public void insertIntoA(Record record) {
        dao.insertIntoA(record);
    }

    @Transactional(propagation = Propagation.REQUIRED)
    public void insertIntoB(Record record) {
        dao.insertIntoB(record);
    }

    public void setMyDAO(final MyDAO dao) {
        this.dao = dao;
    }
}
Where MyDAO dao is an interface that is mapped to the database using MyBatis and is set using Spring injection.
Right now, if insertIntoB fails, everything from insertIntoA still gets pushed to the database. How can I correct this behavior?
EDIT:
I modified the class to give a more accurate description of what I'm trying to achieve. If I run insertIntoAAndB directly, the rollback works if there are any issues, but if I call insertIntoAAndB from insertRecords, the rollback doesn't work when issues arise.
I found the solution!
Apparently Spring can't intercept internal method calls to transactional methods. So I took out the method calling the transactional method, and put it into a separate class, and the rollback works just fine. Below is a rough example of the fix.
public class Foo {

    @Autowired
    private Service myService; // injected as a Spring bean so the @Transactional proxy is applied

    public void insertRecords(List<Record> records) {
        for (Record record : records) {
            myService.insertIntoAAndB(record);
        }
    }
}
public class Service {

    MyDAO dao;

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW)
    public void insertIntoAAndB(Record record) {
        insertIntoA(record);
        insertIntoB(record);
    }

    @Transactional(propagation = Propagation.REQUIRED)
    public void insertIntoA(Record record) {
        dao.insertIntoA(record);
    }

    @Transactional(propagation = Propagation.REQUIRED)
    public void insertIntoB(Record record) {
        dao.insertIntoB(record);
    }

    public void setMyDAO(final MyDAO dao) {
        this.dao = dao;
    }
}
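An alternative that avoids the extra class (a sketch, not from the original answer): in recent Spring versions a singleton bean can autowire itself, which gives it a reference to its own proxy; routing the internal call through that reference makes the transactional interceptor apply.
@Service
public class Service {

    MyDAO dao;

    // Spring injects the proxy, not the raw instance, so calls through it are intercepted.
    @Autowired
    private Service self;

    public void insertRecords(List<Record> records) {
        for (Record record : records) {
            self.insertIntoAAndB(record); // goes through the transactional proxy
        }
    }

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW)
    public void insertIntoAAndB(Record record) {
        insertIntoA(record);
        insertIntoB(record);
    }

    // insertIntoA / insertIntoB / setMyDAO as in the original code
}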
I think the behavior you encounter depends on what ORM / persistence provider and database you're using. I tested your case using Hibernate & MySQL and all my transactions rolled back fine.
If you do use Hibernate, enable SQL and transaction logging to see what it's doing:
log4j.logger.org.hibernate.SQL=DEBUG
log4j.logger.org.hibernate.transaction=DEBUG
// for hibernate 4.2.2
// log4j.logger.org.hibernate.engine.transaction=DEBUG
If you're on plain jdbc (using spring JdbcTemplate), you can also debug SQL & transaction on Spring level
log4j.logger.org.springframework.jdbc.core=DEBUG
log4j.logger.org.springframework.transaction=DEBUG
Double-check your autocommit settings and database-specific peculiarities (e.g. most DDL is committed right away; you won't be able to roll it back even though Spring/Hibernate did so).
This is because the JDK dynamic proxy only applies advice to the method that is invoked on the proxy, looking up the annotations for that method on the target class.
For example, you have method A annotated with @Transactional, and method B which calls method A but is not annotated. When you invoke method B through the proxy, Spring AOP checks whether method B of the target class has any applicable advice.
So if the method you call from outside is not annotated with @Transactional, no advice is applied, regardless of the annotations on the methods it calls internally.
Finally, the relevant source code, from org.springframework.aop.framework.JdkDynamicAopProxy:
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    ......
    // Get the interception chain for this method.
    List<Object> chain = this.advised.getInterceptorsAndDynamicInterceptionAdvice(method, targetClass);
    // Check whether we have any advice. If we don't, we can fallback on direct
    // reflective invocation of the target, and avoid creating a MethodInvocation.
    if (chain.isEmpty()) {
        // We can skip creating a MethodInvocation: just invoke the target directly
        // Note that the final invoker must be an InvokerInterceptor so we know it does
        // nothing but a reflective operation on the target, and no hot swapping or fancy proxying.
        retVal = AopUtils.invokeJoinpointUsingReflection(target, method, args);
    }
    else {
        // We need to create a method invocation...
        invocation = new ReflectiveMethodInvocation(proxy, target, method, args, targetClass, chain);
        // Proceed to the joinpoint through the interceptor chain.
        retVal = invocation.proceed();
    }
}
