I use Spring Boot JPA in a standalone GUI (Swing) Java application with an embedded H2 database.
I use Spring Boot 1.3.0 and this is my additional configuration:
private static final String dataSourceUrl = "jdbc:h2:./database;DB_CLOSE_ON_EXIT=FALSE";

@Bean
public DataSource dataSource() {
    return DataSourceBuilder.create().url(dataSourceUrl).username("user").password("pwd").build();
}
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
    em.setDataSource(dataSource);
    em.setPackagesToScan(new String[] { "packages.to.scan" });
    JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
    em.setJpaVendorAdapter(vendorAdapter);
    Properties properties = new Properties();
    properties.setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
    properties.setProperty("hibernate.hbm2ddl.auto", "update");
    em.setJpaProperties(properties);
    return em;
}
In my application.properties file I have only one line: spring.aop.proxy-target-class=true.
For my repositories I extend JpaRepository.
Everything is working; the only problem I had recently: a Mac that was running the application had some kind of problem and crashed. Afterwards, none of the modifications made before the crash were actually stored in the database. I use the @Transactional annotation to modify data in the database.
I'm not very experienced with databases, but after googling around I guess the changes are cached by the persistence context (not sure if the terminology is correct) and are only actually persisted when the application is closed. I checked the database file and made some manipulations through the GUI (including some queries), and the modification date of the database file changed only when I closed the application.
As this is a standalone GUI application, there will be no performance issues if every transaction is directly persisted in the database. Am I on the right track, and how could I achieve that every transaction is directly persisted in the database? Is there any configuration I have to do, or do I have to add code after every call to a repository's save() method?
If not, I have absolutely no idea how to debug this kind of problem, as I have to admit I'm not quite sure what's actually going on under the hood.
Hibernate decides on its own when to write to the database (flushing the persistence context), based on optimization parameters and the configured flushing strategy.
Maybe you can take a look here and adjust the behavior according to your needs:
https://docs.jboss.org/hibernate/orm/4.0/devguide/en-US/html/ch03.html
Information about the flush modes will also help you:
http://docs.jboss.org/hibernate/orm/4.3/javadocs/org/hibernate/FlushMode.html
Spring's @Transactional follows the container-managed transaction paradigm. By default, if one @Transactional method invokes a @Transactional method in another component/service/repository, the transaction is propagated. When the outermost @Transactional method completes, the transaction is committed to the database.
JPA may flush data to the database multiple times within the same transaction, but everything in the transaction is either committed or rolled back when the transaction completes. If you have @Transactional on a @Controller, the transaction completes after the DispatcherServlet has called the handler method (more specifically, it happens inside the CGLIB or JDK proxy created using Spring AOP).
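For the standalone Swing case above, that means each @Transactional service method entered from non-transactional GUI code commits when it returns; there is no need to wait until the application shuts down. A minimal sketch under that assumption (NoteService, NoteRepository and Note are made-up names; saveAndFlush() is the JpaRepository variant of save() that forces an immediate flush):
@Service
public class NoteService {

    private final NoteRepository noteRepository; // a JpaRepository<Note, Long>

    public NoteService(NoteRepository noteRepository) {
        this.noteRepository = noteRepository;
    }

    @Transactional
    public Note saveNote(Note note) {
        // saveAndFlush() sends the INSERT/UPDATE to the database right away;
        // the change becomes durable when this method returns and the
        // surrounding transaction commits.
        return noteRepository.saveAndFlush(note);
    }
}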
Related
My application works with multiple data sources and two databases, Oracle and PostgreSQL (I don't need global transactions).
I don't know which transaction manager to use. Both have some advantages and disadvantages.
Atomikos supports global transactions, which I don't need, and logs some information about transactions to the file system, which I want to avoid:
public void setEnableLogging(boolean enableLogging)
Specifies if disk logging should be enabled or not. Defaults to true.
It is useful for JUnit testing, or to profile code without seeing the
transaction manager's activity as a hot spot but this should never be
disabled on production or data integrity cannot be guaranteed.
Its advantage is that it uses just one transaction manager.
When using DataSourceTransactionManager I need one per DataSource:
@Bean
@Primary
DataSourceTransactionManager transactionManager1() {
    DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
    transactionManager.setDataSource(dataSource1());
    return transactionManager;
}

@Bean
DataSourceTransactionManager transactionManager2() {
    DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
    transactionManager.setDataSource(dataSource2());
    return transactionManager;
}
This is a problem because I need to specify the name of the transaction manager in the annotation:
#Transactional("transactionManager1")
public void test() {
}
But I don't know the name in advance, because at runtime the application can switch which database to use.
Are there other options, or am I missing something about these two transaction managers?
You should solve this with option 2, using one DataSourceTransactionManager per data source. You will need to keep track of the transaction manager for each data source.
Additionally, if you need to be able to roll back transactions on both databases, you will have to set up a ChainedTransactionManager covering both.
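A sketch of what that could look like with the two transaction managers from the question (ChainedTransactionManager comes from spring-data-commons, package org.springframework.data.transaction; it only chains the delegates, it is not a true two-phase commit):
@Bean
public PlatformTransactionManager chainedTransactionManager(
        DataSourceTransactionManager transactionManager1,
        DataSourceTransactionManager transactionManager2) {
    // commits/rolls back the delegate transactions one after another;
    // a failure between the two commits can still leave the databases out of sync
    return new ChainedTransactionManager(transactionManager1, transactionManager2);
}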
I recently migrated my Spring Boot/Batch Java application from spring-boot/spring-framework (respectively) 1.x.x/4.x.x to 2.x.x/5.x.x (2.2.4/5.2.3 to be specific). The problem is that something is definitely wrong (in my opinion) with the transaction/entity manager: when the .saveAll() method is called from the JpaRepository class of my database persistence layer, it jumps into the Spring AOP framework/library code and into an infinite loop. I see it returning a "DefaultTransaction" object from a method (invoke()). On 1.x.x/4.x.x, when my application worked, it would return the actual ArrayList of my entities here. I am using spring-boot-starter, spring-boot-starter-web, spring-boot-starter-data-jpa, spring-boot-starter-batch, and hibernate/hibernate-envers/hibernate-entitymanager (also, of course, many other dependencies; let me know if you would like me to list them).
After some research, I'm finding that people say the Spring Batch @EnableBatchProcessing annotation sets up a default transaction manager, which could be causing issues since I'm using JPA. Reference:
https://github.com/spring-projects/spring-boot/issues/2363
wilkinsona suggested defining this bean in my @Configuration class:
@Bean
public BatchConfigurer batchConfigurer(DataSource dataSource, EntityManagerFactory entityManagerFactory) {
    return new BasicBatchConfigurer(dataSource, entityManagerFactory);
}
I'm getting an error when I do this because it says the BasicBatchConfigurer constructor has protected access. What is the best way to instantiate it?
I also saw some people saying that removing the @EnableBatchProcessing annotation fixes the persistence-to-database issue, but when I remove it, I lose the ability to autowire my JobBuilderFactory and StepBuilderFactory. Is there a way to remove the annotation and still get these objects in my code so I can at least test whether this works? Sorry, I'm not completely a master of Spring Batch/Spring.
In my @Configuration class, I am using the PlatformTransactionManager. I am setting up my JobRepository something like this:
@Bean
public JobRepository jobRepository(PlatformTransactionManager transactionManager,
        @Qualifier("dataSource") DataSource dataSource) throws Exception {
    JobRepositoryFactoryBean jobRepositoryFactoryBean = new JobRepositoryFactoryBean();
    jobRepositoryFactoryBean.setDataSource(dataSource);
    jobRepositoryFactoryBean.setTransactionManager(transactionManager);
    jobRepositoryFactoryBean.setDatabaseType("POSTGRES");
    return jobRepositoryFactoryBean.getObject();
}
I can provide any other information if needed. Another question: if I was using basically the same code, transaction manager, entity manager, etc., how was my old code working on 1.x.x? Could I have a wrong dependency somewhere in my pom.xml, such that my newly migrated code is using a wrong method or something from the wrong dependency?
By default, @EnableBatchProcessing configures Spring Batch to use a DataSourceTransactionManager if you provide a DataSource. This transaction manager knows nothing about your JPA context. So if you want to use a JPA repository to save data, you need to configure Spring Batch to use a JpaTransactionManager.
Now in order to provide a custom transaction manager, you need to register a BatchConfigurer and override the getTransactionManager() method, something like:
@Bean
public BatchConfigurer batchConfigurer(DataSource dataSource) {
    return new DefaultBatchConfigurer(dataSource) {
        @Override
        public PlatformTransactionManager getTransactionManager() {
            return new JpaTransactionManager();
        }
    };
}
This is explained in the Configuring A Job section and in the Javadoc of @EnableBatchProcessing.
I am working with Spring Batch and JPA, and I ran into the TransactionManager bean conflict. I found a solution by setting the transaction manager to a JpaTransactionManager on a step. But according to this link (https://github.com/spring-projects/spring-batch/issues/961), that is not correct, even though it works for me.
@Autowired
private JpaTransactionManager transactionManager;

private Step buildTaskletStep() {
    return stepBuilderFactory.get("SendCampaignStep")
            .<UserAccount, UserAccount>chunk(pushServiceConfiguration.getCampaignBatchSize())
            .reader(userAccountItemReader)
            .processor(userAccountItemProcessor)
            .writer(userAccountItemWriter)
            .transactionManager(transactionManager)
            .build();
}
I tried the suggested solution of implementing a BatchConfigurer, but it conflicts with how I disable the metadata tables using this code:
@Configuration
@EnableAutoConfiguration
@EnableBatchProcessing
public class BatchConfiguration extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // override to not set the datasource even if one exists;
        // initialize() will then use a Map-based JobRepository (instead of the database)
    }
}
What would be the problem with the first solution of setting the transaction manager on a step?
In Spring Batch, there are two places where a transaction manager is used:
In the proxy created around the JobRepository to create transactional methods when interacting with the job repository
In each step definition to drive the step's transaction
Typically, the same transaction manager is used in both places, but this is not a requirement. It is perfectly fine to use a ResourcelessTransactionManager with the job repository to not store any meta-data and a JpaTransactionManager in the step to persist data in a database.
By default, when you use @EnableBatchProcessing and you provide a DataSource bean, Spring Batch will create a DataSourceTransactionManager and set it in both places, because this is the most typical case. But nothing prevents you from using a different transaction manager for the step. In this case, you should accept that business data and technical meta-data can get out of sync when things go wrong.
That's why the expected way to provide a custom transaction manager is via a custom BatchConfigurer#getTransactionManager, in which case your custom transaction manager is set in both places. This was not clearly documented, but it has been fixed since v4.1. Here is the section that mentions it: Configuring a JobRepository. This is also mentioned in the Javadoc of @EnableBatchProcessing:
In order to use a custom transaction manager, a custom BatchConfigurer should be provided.
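Applied to the code in the question, one way to combine the two snippets could look roughly like the sketch below: keep the map-based job repository by not setting the data source, and return the step's JpaTransactionManager from getTransactionManager(), so it is used in both places. This is only a sketch, under the assumption that the JpaTransactionManager bean from the question is available for injection:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration extends DefaultBatchConfigurer {

    private final JpaTransactionManager jpaTransactionManager;

    public BatchConfiguration(JpaTransactionManager jpaTransactionManager) {
        this.jpaTransactionManager = jpaTransactionManager;
    }

    @Override
    public void setDataSource(DataSource dataSource) {
        // intentionally empty: keeps the Map-based JobRepository instead of the meta-data tables
    }

    @Override
    public PlatformTransactionManager getTransactionManager() {
        // used for the job repository proxy and as the default step transaction manager
        return jpaTransactionManager;
    }
}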
I am having trouble finding information about this issue I am running into. I am interested in implementing row-level security on my Postgres database, and I am looking for a way to set Postgres session variables automatically through some form of interceptor. Now, I know that with Hibernate you are able to do row-level security using @Filter and @FilterDef; however, I would like to additionally set policies on my database.
A very simple way of doing this would be to execute the SQL statement SET variable=value prior to every query, though I have not been able to find any information on this.
This is being used in a Spring Boot application, and every request is expected to have access to a request-specific value of the variable.
Since your application uses spring, you could try accomplishing this in one of a few ways:
Spring AOP
In this approach, you write an advice that you ask Spring to apply to specific methods. If your methods use the @Transactional annotation, you could have the advice applied to them immediately after the transaction has started.
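A minimal sketch of that idea, assuming the transaction advice is ordered to run before this aspect (for example via @EnableTransactionManagement(order = 0)) so the advice executes inside the already-started transaction. The app.current_user setting name and the CurrentUser helper are made up for illustration:
@Aspect
@Component
public class RowLevelSecurityAspect {

    @PersistenceContext
    private EntityManager entityManager;

    @Before("@annotation(org.springframework.transaction.annotation.Transactional)")
    public void setPostgresSessionVariable() {
        // set_config(name, value, true) scopes the value to the current transaction,
        // so nothing leaks when the connection returns to the pool
        entityManager.createNativeQuery("SELECT set_config('app.current_user', ?1, true)")
                .setParameter(1, CurrentUser.get())
                .getSingleResult();
    }
}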
Extended TransactionManager Implementation
Let's assume your transaction is using JpaTransactionManager.
public class SecurityPolicyInjectingJpaTransactionManager extends JpaTransactionManager {

    @Autowired
    private EntityManager entityManager;

    // constructors

    @Override
    protected void prepareSynchronization(DefaultTransactionStatus status, TransactionDefinition definition) {
        super.prepareSynchronization(status, definition);
        if (status.isNewTransaction()) {
            // Use entityManager to execute your database policy param/values.
            // I would suggest you also register an after-completion callback synchronization.
            // This after-completion would clear all the policy param/values
            // regardless of whether the transaction succeeded or failed,
            // since this happens just before the connection is returned to the pool.
        }
    }
}
Now simply configure your JPA environment to use your custom JpaTransactionManager class.
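Registering it could then look roughly like this (a sketch; JpaTransactionManager needs the EntityManagerFactory it should manage):
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
    SecurityPolicyInjectingJpaTransactionManager transactionManager = new SecurityPolicyInjectingJpaTransactionManager();
    transactionManager.setEntityManagerFactory(entityManagerFactory);
    return transactionManager;
}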
There are likely others, but these are the two that come to mind that I've explored.
I was working on a project using Spring Boot, Spring MVC, and Hibernate. I encountered this problem, which had already taken me two days.
My project was an imitation of Twitter. When I started to work on the project, I used JPA to get the Hibernate Session. Here is the code in my BaseDaoImpl class:
@Autowired
private EntityManagerFactory entityManagerFactory;

public Session getSession() {
    return entityManagerFactory.createEntityManager().unwrap(Session.class);
}
In my Service class, I used the @Transactional annotation:
#Service("userServ")
#Transactional(propagation=Propagation.REQUIRED, readOnly=false,rollbackFor={Exception.class, RuntimeException.class})
public class UserServImpl implements IUserServ {}
And finally, an overview of my main class:
@SpringBootApplication
@EnableTransactionManagement
@EntityScan(basePackages = {"edu.miis.Entities"})
@ComponentScan({"edu.miis.Controllers", "edu.miis.Service", "edu.miis.Dao"})
@EnableAutoConfiguration
@Configuration
public class FinalProjectSpringbootHibernateDruidApplication {

    public static void main(String[] args) {
        SpringApplication.run(FinalProjectSpringbootHibernateDruidApplication.class, args);
    }
}
With this setting, everything seemed fine - until I got to the point where I started to add the "post" feature. I could add posts and comments to the database. However, I could not do this many times. After adding about four posts, the program ceased to respond - no exceptions, no errors - the page just got stuck.
I looked it up online and realized that the problem was probably due to the entityManagerFactory. I was told that entityManagerFactory.createEntityManager().unwrap(Session.class) opens a new Hibernate session each time, instead of the traditional sessionFactory.getCurrentSession(), which returns an existing session.
So I started to work on it. I changed my Dao configuration into this:
@Autowired
private EntityManagerFactory entityManagerFactory;

public Session getSession() {
    Session session = entityManagerFactory.unwrap(SessionFactory.class).getCurrentSession();
    return session;
}
My idea was to use the autowired EntityManagerFactory to obtain the Hibernate SessionFactory so that the getCurrentSession() method could then be used.
But then I got a problem:
Since I switched to this setting, any operation that goes from the controller through the service and DAO to the database throws an exception: No Transaction Is In Progress.
But the weird thing is: although the system broke due to no visible transaction being in progress, Hibernate still generates new SQL statements, and data still gets synchronized into the database.
Can anybody help me over how to get this issue resolved?
Sincerely thanks!
Following @M. Deinum's suggestion, I finally had this issue resolved.
The reason the @Transactional annotation didn't work in my code in the first place was that, in my original code, I used plain Hibernate features - Session, SessionFactory, getCurrentSession(), etc.
In order for these features to work, I would need to specifically configure the transaction manager as a Hibernate transaction manager (under the default setting, Spring Boot auto-configures a JPA transaction manager).
But the problem is that most of the methods used to support the plain Hibernate features are now deprecated; the JPA approach is now the mainstream (a short DAO sketch follows below):
Use EntityManager instead of Session.
Use EntityManager.persist instead of Session.save.
Use EntityManager.merge instead of Session.update.
Use EntityManager.remove instead of Session.delete.
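For reference, a DAO along those lines could look roughly like this (UserDaoImpl and User are made-up names; the injected EntityManager joins the transaction opened by @Transactional on the service):
@Repository
public class UserDaoImpl {

    @PersistenceContext
    private EntityManager entityManager;

    public void save(User user) {
        entityManager.persist(user);
    }

    public User update(User user) {
        return entityManager.merge(user);
    }

    public void delete(User user) {
        // remove() expects a managed entity, so re-attach a detached one first
        entityManager.remove(entityManager.contains(user) ? user : entityManager.merge(user));
    }
}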
That's all.
Thanks!