I am trying to implement a multi-threaded solution so I can parallelize my business logic that includes reading and writing to a database.
Technology stack: Spring 4.0.2, Hibernate 4.3.8
Here is some code to discuss:
Configuration
@Configuration
public class PartitionersConfig {

    @Bean
    public ForkJoinPoolFactoryBean forkJoinPoolFactoryBean() {
        final ForkJoinPoolFactoryBean poolFactory = new ForkJoinPoolFactoryBean();
        return poolFactory;
    }
}
Service
@Service
@Transactional
public class MyService {

    @Autowired
    private OtherService otherService;

    @Autowired
    private ForkJoinPool forkJoinPool;

    @Autowired
    private MyDao myDao;

    public void performPartitionedActionOnIds() {
        final ArrayList<UUID> ids = otherService.getIds();
        MyIdsPartitioner task = new MyIdsPartitioner(ids, myDao, 0, ids.size() - 1);
        forkJoinPool.invoke(task);
    }
}
Repository / DAO
@Repository
@Transactional(propagation = Propagation.MANDATORY)
public class IdsDao {

    public MyData getData(List<UUID> list) {
        // ...
    }
}
RecursiveAction
public class MyIdsPartitioner extends RecursiveAction {

    private static final long serialVersionUID = 1L;
    private static final int THRESHOLD = 100;

    private ArrayList<UUID> ids;
    private int fromIndex;
    private int toIndex;
    private MyDao myDao;

    public MyIdsPartitioner(ArrayList<UUID> ids, MyDao myDao, int fromIndex, int toIndex) {
        this.ids = ids;
        this.fromIndex = fromIndex;
        this.toIndex = toIndex;
        this.myDao = myDao;
    }

    @Override
    protected void compute() {
        if (computationSetIsSmallEnough()) {
            computeDirectly();
        } else {
            int leftToIndex = fromIndex + (toIndex - fromIndex) / 2;
            MyIdsPartitioner leftPartitioner = new MyIdsPartitioner(ids, myDao, fromIndex, leftToIndex);
            MyIdsPartitioner rightPartitioner = new MyIdsPartitioner(ids, myDao, leftToIndex + 1, toIndex);
            invokeAll(leftPartitioner, rightPartitioner);
        }
    }

    private boolean computationSetIsSmallEnough() {
        return (toIndex - fromIndex) < THRESHOLD;
    }

    private void computeDirectly() {
        // toIndex is inclusive here, while subList's second argument is exclusive
        final List<UUID> subList = ids.subList(fromIndex, toIndex + 1);
        final MyData myData = myDao.getData(subList);
        modifyTheData(myData);
    }

    private void modifyTheData(MyData myData) {
        // ...
        // write to DB
    }
}
After executing this I get:
No existing transaction found for transaction marked with propagation 'mandatory'
I understood that this is perfectly normal since the transaction doesn't propagate through different threads. So one solution is to create a transaction manually in every thread as proposed in another similar question. But this was not satisfying enough for me so I kept searching.
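For illustration, here is a minimal sketch of that per-thread approach, assuming the partitioner is handed a PlatformTransactionManager via its constructor (that field and its wiring are not part of the original code):

import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

// inside MyIdsPartitioner, assuming a transactionManager field was passed in
private void computeDirectly() {
    TransactionTemplate txTemplate = new TransactionTemplate(transactionManager);
    txTemplate.execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            // each worker thread opens its own transaction (and therefore its own JDBC connection)
            final List<UUID> subList = ids.subList(fromIndex, toIndex + 1);
            final MyData myData = myDao.getData(subList);
            modifyTheData(myData);
        }
    });
}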
In Spring's forum I found a discussion on the topic. One paragraph I find very interesting:
"I can imagine one could manually propagate the transaction context to another thread, but I don't think you should really try it. Transactions are bound to single threads with a reason - the basic underlying resource - jdbc connection - is not threadsafe. Using one single connection in multiple threads would break fundamental jdbc request/response contracts and it would be a small wonder if it would work in more then trivial examples."
So the first question arises: Is it worth it to parallelize the reading/writing to the database, and can this really hurt DB consistency?
If the quote above is not true, which I doubt, is there a way to achieve the following:
MyIdsPartitioner to be Spring managed - with @Scope("prototype") - and pass the needed arguments for the recursive calls to it, and that way leave the transaction management to Spring?
After further reading I managed to solve my problem. Kind of (as I see it now, there wasn't a problem in the first place).
Since the reading I do from the DB is in chunks and I am sure that the results won't get edited during that time, I can do it outside a transaction.
The writing is also safe in my case since all values I write are unique and no constraint violations can occur. So I removed the transaction from there too.
What I mean by "I removed the transaction" is that I just override the method's propagation mode in my DAO, like this:
@Repository
@Transactional(propagation = Propagation.MANDATORY)
public class IdsDao {

    @Transactional(propagation = Propagation.SUPPORTS)
    public MyData getData(List<UUID> list) {
        // ...
    }
}
Or if you decide you need the transaction for some reason then you can still leave the transaction management to Spring by setting the propagation to REQUIRED.
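For example (the same DAO method as above, just with Spring creating or joining a transaction for each calling thread):

@Transactional(propagation = Propagation.REQUIRED)
public MyData getData(List<UUID> list) {
    // ...
}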
So the solution turns out to be much much simpler than I thought.
And to answer my other questions:
Is it worth it to parallelize the reading/writing to the database, and can this really hurt DB consistency?
Yes, it's worth it. And as long as you have a transaction per thread you are fine.
Is there a way to achieve the following: MyIdsPartitioner to be Spring managed - with @Scope("prototype") - and pass the needed arguments for the recursive calls to it, and that way leave the transaction management to Spring?
Yes, there is a way, by using a pool (another Stack Overflow question). Or you can define your bean as @Scope(value = "prototype", proxyMode = ScopedProxyMode.TARGET_CLASS), but then it won't work if you need to set parameters on it, since every use of the proxied instance will give you a new instance. For example:
@Autowired
MyIdsPartitioner partitioner;

public void someMethod() {
    ...
    partitioner.setIds(someIds);
    partitioner.setFromIndex(fromIndex);
    partitioner.setToIndex(toIndex);
    ...
}
This will create three instances, and you won't be able to use the object usefully, since each setter call hits a different instance and the fields will never all be set on any one of them.
So in short - there is a way, but I didn't need to go for it in the first place.
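As a hedged sketch of that option (assuming MyIdsPartitioner is declared as a prototype bean named "myIdsPartitioner"; the factory method below is illustrative and not from the original code), the service could ask the ApplicationContext for a new instance and hand over the constructor arguments itself:

@Autowired
private ApplicationContext applicationContext;

private MyIdsPartitioner createPartitioner(ArrayList<UUID> ids, MyDao myDao, int fromIndex, int toIndex) {
    // getBean(name, args...) passes the arguments to the prototype's constructor,
    // so every recursive split gets its own Spring-managed instance
    return (MyIdsPartitioner) applicationContext.getBean("myIdsPartitioner", ids, myDao, fromIndex, toIndex);
}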
This should be possible with atomikos (http://www.atomikos.com) and optionally with nested transactions.
If you do this, then take care to avoid deadlocks if multiple threads of the same root transaction write to the same tables in the database.
Related
I'm using Spring Boot with an embedded Jetty web server for a web application.
I want to be 100% sure that the repo class is thread safe.
The repo class
@Repository
@Scope("prototype")
public class RegistrationGroupRepositoryImpl implements RegistrationGroupRepository {

    private RegistrationGroup rg = null;
    Integer sLastregistrationTypeID = 0;
    private UserAccountRegistration uar = null;
    private List<RegistrationGroup> registrationGroup = new ArrayList<>();

    private NamedParameterJdbcTemplate jdbcTemplate;

    @Autowired
    public RegistrationGroupRepositoryImpl(DataSource dataSource) {
        this.jdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
    }

    public List<RegistrationGroup> getRegistrationGroups(Integer regId) {
        // Some logic here whose results are stored in the instance variables;
        // registrationGroup is returned from the method
        return this.registrationGroup;
    }
}
And the Service class which invoke the getRegistrationGroups method from the repo.
@Service
public class RegistrationService {

    @Autowired
    private Provider<RegistrationGroupRepository> registrationGroupRepository;

    public List<RegistrationGroup> getRegistrationGroup() {
        return registrationGroupRepository.getRegistrationGroups(1);
    }
}
Can I have a race condition if two or more requests execute the getRegistrationGroups(1) method?
I guess I'm on the safe side because I'm using method injection (a Provider) with a prototype bean, so every invocation gets a new instance?
First of all, making your Bean a prototype Bean doesn't ensure an instance is created for every method invocation (or every usage, whatever).
In your case you're okay on that point, thanks to the Provider usage.
I noticed however that you're accessing the getRegistrationGroups directly.
return registrationGroupRepository.getRegistrationGroups(1);
How can this code compile? You should call get() on the Provider instance.
return registrationGroupRepository.get().getRegistrationGroups(1);
Answering your question, you should be good to go with this code. I don't like the fact that you're maintaining some sort of state inside RegistrationGroupRepositoryImpl, but that's your choice.
I always prefer having all my fields as final. If one of them requires me to remove the final modifier, there is something wrong with the design.
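As an illustration of that point (a sketch only, assuming the query logic can build its result locally), the repository could drop the mutable fields entirely and then be safely shared as a plain singleton:

@Repository
public class RegistrationGroupRepositoryImpl implements RegistrationGroupRepository {

    private final NamedParameterJdbcTemplate jdbcTemplate;

    @Autowired
    public RegistrationGroupRepositoryImpl(DataSource dataSource) {
        this.jdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
    }

    public List<RegistrationGroup> getRegistrationGroups(Integer regId) {
        // build the result in local variables instead of instance fields;
        // with no shared mutable state, concurrent requests cannot interfere
        List<RegistrationGroup> result = new ArrayList<>();
        // ... query via jdbcTemplate and fill 'result'
        return result;
    }
}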
I am writing a Spring Boot application using Spring Data repositories. I have a method that resets the database and fills it with sample data. It works; however, Spring uses hundreds of transactions to do this. Is there any way to limit the number of transactions created by the repositories to 1, or to not use them at all?
I would like to reuse the same transaction within the fillApples and fillBananas methods. I've tried using different combinations of @Transactional(propagation = Propagation.SUPPORTS) but it does not change anything.
interface AppleRepository extends CrudRepository<Apple, Long>
interface BananaRepository extends JpaRepository<Banana, Long>
@Service
public class FruitService {

    @Autowired
    private AppleRepository appleRepository;

    @Autowired
    private BananaRepository bananaRepository;

    public void reset() {
        clearDb();
        fillApples();
        fillBananas();
        // more fill methods
    }

    private void clearDb() {
        appleRepository.deleteAll();
        bananaRepository.deleteAll();
    }

    private void fillApples() {
        for (int i = 0; i < n; i++) {
            Apple apple = new Apple(...);
            appleRepository.save(apple);
        }
    }

    private void fillBananas() {
        for (int i = 0; i < n; i++) {
            Banana banana = new Banana(...);
            bananaRepository.save(banana);
        }
    }
}
@RestController
public class FruitController {

    @Autowired
    private FruitService fruitService;

    @RequestMapping(...)
    public void reset() {
        fruitService.reset();
    }
}
You have to annotate your reset() method with @Transactional and a propagation setting that makes sure the method runs in a transaction, creating a new one or reusing an existing one - for example Propagation.REQUIRED (the default for @Transactional).
Your code shows no @Transactional, but in your comment you wrote that you have one - you just use the "wrong" propagation, SUPPORTS. The meaning of SUPPORTS is:
SUPPORTS: Support a current transaction, execute non-transactionally if none exists.
So SUPPORTS will not create a new transaction if there is none; @Transactional(propagation = Propagation.SUPPORTS) effectively does nothing with the transaction.
So you have to use @Transactional(propagation = Propagation.REQUIRED):
@Transactional(propagation = Propagation.REQUIRED)
public void reset() {
    clearDb();
    fillApples();
    fillBananas();
    // more fill methods
}
See the Propagation javadoc.
In my application, I have a scenario where I have to refresh the cache every 24 hours.
I'm expecting database downtime, so I need to implement a use case that refreshes the cache after 24 hours only if the database is up and running.
I'm using Spring with Ehcache and I did implement a simple cache that refreshes every 24 hours, but I'm unable to get my head around how to keep the cached data while the database is down.
Conceptually you could split the scheduling and the cache eviction into two modules and only clear your cache if a certain condition (in this case, the database's health check returning true) is met:
SomeCachedService.java:
class SomeCachedService {

    @Autowired
    private YourDao dao;

    @Cacheable("your-cache")
    public YourData getData() {
        return dao.queryForData();
    }

    @CacheEvict("your-cache")
    public void evictCache() {
        // no body needed
    }
}
CacheMonitor.java
class CacheMonitor {

    @Autowired
    private SomeCachedService service;

    @Autowired
    private YourDao dao;

    // annotation attributes need compile-time constants, so spell out one day in milliseconds
    @Scheduled(fixedDelay = 24 * 60 * 60 * 1000L)
    public void conditionallyClearCache() {
        if (dao.isDatabaseUp()) {
            service.evictCache();
        }
    }
}
Ehcache also allows you to create a custom eviction algorithm but the documentation doesn't seem too helpful in this case.
Essence:
How can I auto-rollback my hibernate transaction in a JUnit Test run with JBehave?
The problem seems to be that JBehave wants the SpringAnnotatedEmbedderRunner but annotating a test as @Transactional requires the SpringJUnit4ClassRunner.
I've tried to find some documentation on how to implement either rollback with SpringAnnotatedEmbedderRunner or to make JBehave work using the SpringJUnit4ClassRunner but I couldn't get either to work.
Does anyone have a (preferably simple) setup that runs JBehave stories with Spring and Hibernate and transaction auto-rollback?
Further info about my setup so far:
Working JBehave with Spring - but not with auto-rollback:
@RunWith(SpringAnnotatedEmbedderRunner.class)
@Configure(parameterConverters = ParameterConverters.EnumConverter.class)
@UsingEmbedder(embedder = Embedder.class, generateViewAfterStories = true, ignoreFailureInStories = false, ignoreFailureInView = false)
@UsingSpring(resources = { "file:src/main/webapp/WEB-INF/test-context.xml" })
@UsingSteps
@Transactional // << won't work
@TransactionConfiguration(...) // << won't work
// both require the SpringJUnit4ClassRunner
public class DwStoryTests extends JUnitStories {

    protected List<String> storyPaths() {
        String searchInDirectory = CodeLocations.codeLocationFromPath("src/test/resources").getFile();
        return new StoryFinder().findPaths(searchInDirectory, Arrays.asList("**/*.story"), null);
    }
}
In my test steps I can @Inject everything nicely:
@Component
@Transactional // << won't work
public class PersonServiceSteps extends AbstractSmockServerTest {

    @Inject
    private DatabaseSetupHelper databaseSetupHelper;

    @Inject
    private PersonProvider personProvider;

    @Given("a database in default state")
    public void setupDatabase() throws SecurityException {
        databaseSetupHelper.createTypes();
        databaseSetupHelper.createPermission();
    }

    @When("the service $service is called with message $message")
    public void callServiceWithMessage(String service, String message) {
        sendRequestTo("/personService", withMessage("requestPersonSave.xml")).andExpect(noFault());
    }

    @Then("there should be a new person in the database")
    public void assertNewPersonInDatabase() {
        Assert.assertEquals("Service did not save person: ", personProvider.count(), 1);
    }
}
(yes, the databaseSetupHelper methods are all transactional)
PersonProvider is basically a wrapper around org.springframework.data.jpa.repository.support.SimpleJpaRepository. So there is access to the entityManager, but taking control over the transactions (with begin/rollback) didn't work, I guess because of all the @Transactionals that are applied under the hood inside that helper class.
Also, I read that JBehave runs in a different context/session/something, which causes loss of control over the transaction started by the test? Pretty confusing stuff.
Edit:
I rephrased the post to reflect my current knowledge and shortened the whole thing so that the question becomes more obvious and the setup less obtrusive.
I think you can skip the SpringAnnotatedEmbedderRunner and provide the necessary configuration to JBehave yourself. For example instead of
@UsingEmbedder(embedder = Embedder.class, generateViewAfterStories = true, ignoreFailureInStories = false, ignoreFailureInView = false)
you can do
configuredEmbedder()
    .embedderControls()
    .doGenerateViewAfterStories(true)
    .doIgnoreFailureInStories(false)
    .doIgnoreFailureInView(false);
Besides: why do you want to roll back the transaction? Typically you use JBehave for acceptance tests, which run in a production-like environment. For example, you first set up some data in the database, access it via browser/Selenium and check the results. For that to work, the DB transaction has to be committed. If you need to clean up manually after your tests, you can do that in @AfterStories or @AfterScenario annotated methods.
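A minimal sketch of such a clean-up hook, reusing the databaseSetupHelper from the question (deleteCreatedTypesAndPermissions is a hypothetical helper method, not from the original code):

@AfterStories
public void cleanUpTestData() {
    // undo whatever the @Given steps inserted; other committed data stays untouched
    databaseSetupHelper.deleteCreatedTypesAndPermissions(); // hypothetical helper
}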
I made it work by controlling the transaction scope manually, rolling the transaction back after each scenario. Just follow the official guide on how to use Spring with JBehave and then do the trick as shown below.
@Component
public class MySteps {

    @Autowired
    MyDao myDao;

    @Autowired
    PlatformTransactionManager transactionManager;

    TransactionStatus transaction;

    @BeforeScenario
    public void beforeScenario() {
        transaction = transactionManager.getTransaction(new DefaultTransactionDefinition());
    }

    @AfterScenario
    public void afterScenario() {
        if (transaction != null)
            transactionManager.rollback(transaction);
    }

    @Given("...")
    public void persistSomething() {
        myDao.persist(new Foo());
    }
}
I'm not familiar with JBehave, but it appears you're searching for this annotation.
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
You could also set defaultRollback to true in your testContext.
My previous question How to wrap Wicket page rendering in a Spring / Hibernate transaction? has led me to thinking about transaction demarcation in Wicket.
Whilst the example there was easily solved by moving business logic down into a Spring-managed layer, there are other places where this is not possible.
I have a generic DAO class, implemented with Hibernate:
public class HibernateDAO<T> implements DAO<T> {

    protected final Class<T> entityClass;
    private final SessionFactory sessionFactory;

    @Transactional
    public T load(Serializable id) {
        return (T) getSession().get(entityClass, id);
    }

    @Transactional
    public void saveOrUpdate(T object) {
        getSession().saveOrUpdate(object);
    }
}
and a generic model to fetch it
public class DAOEntityModel<T> extends LoadableDetachableModel<T> {

    private DAO<T> dao;
    private final Serializable id;

    public DAOEntityModel(DAO<T> dao, Serializable id) {
        this.dao = dao;
        this.id = id;
    }

    public <U extends Entity> DAOEntityModel(DAO<T> dao, U entity) {
        this(dao, entity.getId());
    }

    public Serializable getId() {
        return id;
    }

    @Override
    protected T load() {
        return dao.load(id);
    }
}
Now I have a minimal form that changes an entity
public class ScreenDetailsPanel extends Panel {

    @SpringBean(name = "screenDAO")
    private DAO<Screen> dao;

    public ScreenDetailsPanel(String panelId, Long screenId) {
        super(panelId);
        final IModel<Screen> screenModel = new DAOEntityModel<Screen>(dao, screenId);
        Form<Screen> form = new Form<Screen>("form") {
            @Override
            protected void onSubmit() {
                Screen screen = screenModel.getObject();
                dao.saveOrUpdate(screen);
            }
        };
        form.add(new TextField<String>("name", new PropertyModel<String>(screenModel, "name")));
        add(form);
    }
}
So far so good - thanks for sticking with it!
So my issue is this: when the form is submitted, the PropertyModel will load the screenModel, which happens in the transaction demarcated by the @Transactional dao.load(id). The changes will then be committed when the (different) transaction started for dao.saveOrUpdate(object) is committed. In between these times all bets are off, so the object may no longer exist in the DB by the time the save is committed.
I'm never entirely sure with DB code and transactions. Should I just shrug this off as unlikely, although I could construct other more complicated but more dangerous scenarios? If not, I can't see how to demarcate the whole page logic in a single transaction, which is what my instinct tells me I should be aiming for.
Typically you would solve this by putting the #Transactional annotation on a service-level class, used by your front-end layer code, which wraps around the DAO operations - so that the load and save happens within the same transaction. In other words, you can solve this by creating a layer of code between the form and the DAO code, a "service layer", which provides the business-level logic and hides the presence of DAOs from the presentation layer.
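A sketch of what that could look like (ScreenService, renameScreen and setName are illustrative names, not from the original code):

@Service
public class ScreenService {

    @Autowired
    private DAO<Screen> screenDao;

    @Transactional
    public void renameScreen(Serializable screenId, String newName) {
        // load and save run in the same transaction (and Hibernate session),
        // so the entity cannot vanish between the read and the write
        Screen screen = screenDao.load(screenId);
        screen.setName(newName);
        screenDao.saveOrUpdate(screen);
    }
}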
I've not yet implemented it, but I'm pretty sure that @ireddick's solution in How to control JPA persistence in Wicket forms? of lazily starting a tx in the Wicket request cycle is the best solution here. I'm going to accept this as a proxy for it, to stop Stack Overflow nagging me to accept an answer.