I am trying to update a node in Neo4j, but what ends up happening is that it creates a duplicate node. I read that the update has to happen in a single transaction, so I added @Transactional, but I still get the same result. Here is what I have. I also tried reading and deleting the old node and then saving the new one, and that appears to work, but I don't think that is the right approach. Why is the @Transactional annotation not working? Thank you.
@EnableNeo4jRepositories("com.example.graph.repo")
@EnableTransactionManagement
@org.springframework.context.annotation.Configuration
public class Neo4JConfig {

    @Bean
    public Configuration configuration() {
        Configuration cfg = new Configuration();
        cfg.driverConfiguration()
           .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
           .setURI("http://neo4j:neo4j@localhost:7474");
        return cfg;
    }

    @Bean
    public SessionFactory sessionFactory() {
        return new SessionFactory(configuration(), "com.example");
    }

    @Bean
    public Neo4jTransactionManager transactionManager() {
        return new Neo4jTransactionManager(sessionFactory());
    }
}
@Service
public class UserService {

    @Autowired
    UserRepository userRepository;

    @Transactional
    public void updateUser(User user) {
        User existingUser = userRepository.getExistingUser(user.getUserName());
        if (existingUser != null) {
            user.setSomeValue(existingUser.getSomeValue());
            userRepository.save(user);
        }
    }
}
Spring AOP uses the JDK dynamic-proxy mechanism by default, which means a @Transactional method must be invoked through an interface method.
So you should split your service into an interface, UserService, and an implementation (say, UserServiceImpl), autowire the interface wherever you currently autowire the implementation, and then invoke the transactional method through the interface.
P.S. Another approach is to force Spring to use CGLIB proxies, since that mechanism is not limited to interfaces. More details on both mechanisms: https://docs.spring.io/spring/docs/3.0.0.M3/reference/html/ch08s06.html
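For illustration, a minimal sketch of the interface/implementation split suggested above, reusing the names from the question (UserServiceImpl is the new implementation class):

public interface UserService {
    void updateUser(User user);
}

@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private UserRepository userRepository;

    @Override
    @Transactional // now applied via the JDK proxy created for the interface
    public void updateUser(User user) {
        User existingUser = userRepository.getExistingUser(user.getUserName());
        if (existingUser != null) {
            user.setSomeValue(existingUser.getSomeValue());
            userRepository.save(user);
        }
    }
}

Callers autowire UserService (the interface) and therefore receive the transactional proxy rather than the bare implementation.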
I was setting up a basic CRUD web app with JPA in plain Spring (no Spring Boot or Spring Data JPA) for educational purposes and faced a strange problem: Spring doesn't translate exceptions for my repository. According to the Spring documentation (here and here), it is sufficient to mark the repository with the @Repository annotation and Spring will automatically enable exception translation for this repository.
However, when I did so and triggered a UNIQUE constraint violation, I was still getting a JPA PersistenceException (with a Hibernate ConstraintViolationException inside) instead of Spring's DataIntegrityViolationException.
I used pure Java Spring configuration and it took me quite some time to realize that I should compare it with the XML configuration in the documentation. Compared to the pure Java configuration, the XML configuration adds a PersistenceExceptionTranslationPostProcessor into the context. When I added it manually with #Bean, it worked, but now I have a question.
Have I misconfigured something? The Spring documentation doesn't require registering that post-processor manually for pure Java configuration. Maybe there is another way to register it, say an #EnableXXX annotation?
Here is the summary of my configuration.
@Configuration
@ComponentScan("com.example.secured_crm")
public class SpringConfiguration {

    // the problem is solved if I uncomment this
    //@Bean
    //public PersistenceExceptionTranslationPostProcessor exceptionTranslation() {
    //    return new PersistenceExceptionTranslationPostProcessor();
    //}
}
@Configuration
@PropertySource("classpath:db.properties")
@EnableTransactionManagement
public class DataSourceConfiguration {

    @Value("${jdbc.driver}")
    private String driverClass;

    @Value("${jdbc.url}")
    private String url;

    // ...

    @Value("${hibernate.debug}")
    private String hibernateDebug;

    @Bean
    public DataSource dataSource() {
        var dataSource = new ComboPooledDataSource();
        // ...
        return dataSource;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        var emFactory = new LocalContainerEntityManagerFactoryBean();
        emFactory.setDataSource(dataSource());
        emFactory.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        emFactory.setPackagesToScan("com.example.secured_crm.entities");
        var properties = new Properties();
        properties.setProperty("hibernate.dialect", hibernateDialect);
        properties.setProperty("hibernate.show_sql", hibernateDebug);
        properties.setProperty("hibernate.format_sql", "true");
        emFactory.setJpaProperties(properties);
        return emFactory;
    }

    @Bean
    public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
        var txManager = new JpaTransactionManager();
        txManager.setEntityManagerFactory(entityManagerFactory);
        return txManager;
    }
}
public interface UserRepository {
    User findByName(String username);
    List<User> findAll();
    void save(User user);
    boolean deleteById(int id);
    User findById(int id);
}

@Repository
public class UserJpaRepository implements UserRepository {

    @PersistenceContext
    EntityManager em;

    @Override
    public void save(User user) {
        if (user.getId() == null) {
            em.persist(user);
        } else {
            em.merge(user);
        }
    }

    // and so on...
}
By the way, when I tried to add the post-processor in DataSourceConfiguration, it disabled the @PropertySource effect. So far my impression of Spring is that it's one big hack...
You need to register the PersistenceExceptionTranslationPostProcessor manually in order for the exception translation to take effect.
The documentation you mentioned simply hasn't been updated yet to show a fully working Java configuration; it should mention registering this post-processor. (So feel free to provide a PR to update the docs.)
If you check its Javadoc, it already mentions that the PersistenceExceptionTranslationPostProcessor needs to be registered:
As a consequence, all that is usually needed to enable automatic
exception translation is marking all affected beans (such as
Repositories or DAOs) with the @Repository annotation, along with
defining this post-processor as a bean in the application context.
P.S. If you are using Spring Boot, it detects that PersistenceExceptionTranslationPostProcessor is on the classpath and registers it automatically by default, so you do not need to register it manually.
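Once the post-processor is registered, calling code can rely on Spring's translated exception hierarchy. A minimal sketch (the duplicate-user setup is illustrative; depending on where the flush happens, the exception may surface at the repository call or at transaction commit):

try {
    userRepository.save(duplicateUser); // violates the UNIQUE constraint
} catch (DataIntegrityViolationException e) {
    // translated by the @Repository exception-translation proxy from the
    // underlying PersistenceException/ConstraintViolationException
}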
In our application we use both a MySQL server and Redis. We use Redis as a database, not just a cache. We use both of them in one service method, and I want to make the method @Transactional to let Spring manage my transactions, so that if a RuntimeException is thrown in the middle of the transactional method, all work on both Redis and MySQL is rolled back. I have followed the Spring docs and configured my @SpringBootApplication class as follows:
@SpringBootApplication
@EnableTransactionManagement
public class TransactionsApplication {

    @Autowired
    DataSource dataSource;

    public static void main(String[] args) {
        SpringApplication.run(TransactionsApplication.class, args);
    }

    @Bean
    public StringRedisTemplate redisTemplate() {
        StringRedisTemplate template = new StringRedisTemplate(redisConnectionFactory());
        // explicitly enable transaction support
        template.setEnableTransactionSupport(true);
        return template;
    }

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        return new LettuceConnectionFactory(new RedisStandaloneConfiguration("localhost", 6379));
    }

    @Bean
    public PlatformTransactionManager transactionManager() throws SQLException, IOException {
        return new DataSourceTransactionManager(dataSource);
    }
}
And this is my service method:
@Service
@RequiredArgsConstructor
@Slf4j
public class FooService {

    private final StringRedisTemplate redisTemplate;
    private final FooRepository fooRepository;

    @Transactional
    public void bar() {
        Foo foo = Foo.builder()
                .id(1001)
                .name("fooName")
                .email("foo@mail.com")
                .build();
        fooRepository.save(foo);
        ValueOperations<String, String> values = redisTemplate.opsForValue();
        values.set("foo-mail", foo.getEmail());
    }
}
However, after the test method of TestService is called, there is no user in the MySQL db, and I think it's because there is no active transaction for it. Is there any solution to this problem? Should I use Spring's ChainedTransactionManager class, and if so, how? Or can I only manage Redis transactions manually through MULTI?
After playing around with the FooService class, I found that using @Transactional in a service method that works with Redis, and especially one that reads a value from Redis (and I think most service methods are supposed to read some value from the DB), is somewhat useless, since any read operation inside the transaction returns null: Redis queues all operations of a transaction and executes them at the end. Summing up, I think using the MULTI and EXEC operations directly is preferable, since it gives more control over using the data in Redis.
After all, any suggestions on using Redis transactions are appreciated.
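For reference, a minimal sketch of driving MULTI/EXEC yourself through Spring Data Redis's SessionCallback, assuming the StringRedisTemplate from the question (the key and value are illustrative):

// Queues commands between multi() and exec() on a single connection.
List<Object> results = redisTemplate.execute(new SessionCallback<List<Object>>() {
    @Override
    @SuppressWarnings("unchecked")
    public <K, V> List<Object> execute(RedisOperations<K, V> operations) {
        operations.multi();
        operations.opsForValue().set((K) "foo-mail", (V) "foo@mail.com");
        // Reads issued here only queue the command and return null;
        // the actual values arrive in the list returned by exec().
        return operations.exec();
    }
});

The exec() result list contains the reply for each queued command, in order.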
I'm getting this error when I try to upload a picture in my project. The project executes fine until it has to actually upload the picture to the database (I'm using PostgreSQL), but this last step never works.
The following code has been updated to take the answers below into account.
Here's my controller (a part of it):
@Autowired
private FileUploadImpl fileUploadImpl;

...

@RequestMapping(value = "publish4", method = RequestMethod.POST)
public ModelAndView publish4(@Valid @ModelAttribute("fourthPublicationForm") final FourthPublicationForm form, final BindingResult errors,
        @RequestParam("type") String type, @RequestParam("operation") String operation, @RequestParam CommonsMultipartFile[] fileUpload) {

    if (errors.hasErrors()) {
        //return helloPublish3(form, operation, type);
    }
    System.out.println("operation: " + operation);
    System.out.println("type: " + type);
    ps.create(form.getTitle(), form.getAddress(), operation, form.getPrice(), form.getDescription(),
            type, form.getBedrooms(), form.getBathrooms(), form.getFloorSize(), form.getParking());
    if (fileUpload != null && fileUpload.length > 0) {
        for (CommonsMultipartFile aFile : fileUpload) {
            System.out.println("Saving file: " + aFile.getOriginalFilename());
            UploadFile uploadFile = new UploadFile();
            uploadFile.setAddress(form.getAddress());
            uploadFile.setData(aFile.getBytes());
            fileUploadImpl.save(uploadFile);
        }
    }
    return new ModelAndView("redirect:/hello/home");
}
This is the FileUploadDao interface:
public interface FileUploadDao {
    void save(UploadFile uploadFile);
}
This is in services:
@Service
public class FileUploadImpl {

    @Autowired
    private FileUploadDao fileUploadDao;

    public FileUploadImpl() {
    }

    @Transactional
    public void save(UploadFile uploadFile) {
        fileUploadDao.save(uploadFile);
    }
}
The following is in the persistence layer:
@Repository
public class FileUploadDAOImpl implements FileUploadDao {

    @Autowired
    private SessionFactory sessionFactory;

    public FileUploadDAOImpl() {
    }

    public FileUploadDAOImpl(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void save(UploadFile uploadFile) {
        sessionFactory.getCurrentSession().save(uploadFile);
    }
}
I have this in WebConfig.java (among other stuff):
@Bean
public LocalSessionFactoryBean sessionFactory() {
    LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
    sessionFactory.setDataSource(dataSource());
    sessionFactory.setPackagesToScan(
            new String[] { "ar.edu.itba.paw" }
    );
    //sessionFactory.setHibernateProperties(hibernateProperties());
    return sessionFactory;
}

@Autowired
@Bean(name = "fileUploadDao")
public FileUploadDao getUserDao(SessionFactory sessionFactory) {
    return new FileUploadDAOImpl(sessionFactory);
}

@Bean(name = "multipartResolver")
public CommonsMultipartResolver getCommonsMultipartResolver() {
    CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
    multipartResolver.setMaxUploadSize(20971520);  // 20MB
    multipartResolver.setMaxInMemorySize(1048576); // 1MB
    return multipartResolver;
}

@Bean
@Autowired
public HibernateTransactionManager transactionManager(
        SessionFactory sessionFactory) {
    HibernateTransactionManager txManager = new HibernateTransactionManager();
    txManager.setSessionFactory(sessionFactory);
    return txManager;
}
A little bit more of the error:
org.hibernate.HibernateException: No Session found for current thread
at org.springframework.orm.hibernate4.SpringSessionContext.currentSession(SpringSessionContext.java:106)
at org.hibernate.internal.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:1014)
at ar.edu.itba.paw.persistence.FileUploadDAOImpl.save(FileUploadDAOImpl.java:25)
at ar.edu.itba.paw.webapp.controller.HelloWorldController.publish4(HelloWorldController.java:260)
I've seen other questions where the answer was a missing @Transactional. I'm using that annotation here, but I'm not sure the way I use it is 100% correct.
First, remove @Transactional from FileUploadDAOImpl.
Then change the base package accordingly:
sessionFactory.setPackagesToScan(
        new String[] { "base.package.to.scan" }
);
base.package.to.scan is just a placeholder, not a valid base package name; in your case it should be ar.edu.itba.paw.
You need a transaction manager to make use of @Transactional. Add it to WebConfig:
@Bean
@Autowired
public HibernateTransactionManager transactionManager(
        SessionFactory sessionFactory) {
    HibernateTransactionManager txManager = new HibernateTransactionManager();
    txManager.setSessionFactory(sessionFactory);
    return txManager;
}
This might get the code working; give it a try.
UPDATE: Also make sure the following annotations are present on the WebConfig class:
@Configuration
@ComponentScan({"ar.edu.itba.paw"})
@EnableTransactionManagement(mode = AdviceMode.PROXY)
public class WebConfig {
    // code
}
As you said yourself, you have confused the actual layers. You could still make it work in its current shape, but let's discuss your implementation a bit.
FileUploadDao: is it a DAO or is it a service?
In FileUploadImpl it seems you're confusing @Service with @Repository; maybe reading these will help you: Spring Data Repositories, Spring Service Annotation.
You've made a transactional method, save, in which I cannot say what you want to achieve exactly. You are also autowiring both FileUploadDao and SessionFactory, although you only want to implement the first, and inside the method you are trying to persist the object twice: first by calling save on the repository (which looks like a StackOverflowError waiting to happen, but you are lucky because Spring knows what to autowire) and then by calling save a second time on Hibernate's SessionFactory, which breaks the JPA abstraction. Also, if you noticed, the error in the logs you posted comes from the second save.
As for @Transactional, I'm not going to discuss how it works here, as you haven't posted your whole app config, but again, you can read this for more info.
So, based on the examples you shared, I am going to prepare two cases which might help you understand what's going on underneath.
First case: Spring Data style, where you don't really care whether Hibernate or another JPA provider sits underneath.
Your FileUploadImpl becomes FileUploadService:
@Service
public class FileUploadService {

    @Autowired
    private FileUploadDao fileUploadDao;

    public FileUploadService() {
    }

    @Transactional
    public void save(UploadFile uploadFile) {
        fileUploadDao.save(uploadFile);
    }
}
Inside your controller, you autowire the Service (layer), not the Repository/DAO (layer) directly. Nothing stops you from doing otherwise, though; it's just a matter of design (if you still don't get that point, raise another question).
Part of your controller:
@Autowired
private FileUploadService fileUploadService;

@RequestMapping(value = "publish4", method = RequestMethod.POST)
public ModelAndView publish4(@Valid @ModelAttribute("fourthPublicationForm") final FourthPublicationForm form, final BindingResult errors,
        @RequestParam("type") String type, @RequestParam("operation") String operation, @RequestParam CommonsMultipartFile[] fileUpload) {
    .........
    fileUploadService.save(uploadFile);
}
Second case: if you really want to use Hibernate's goodies, then there is no reason to autowire the Repository; simply implement those calls yourself.
import org.springframework.stereotype.Component;

@Component
public class FileUploadDao {

    @Autowired
    private SessionFactory sessionFactory;

    public Serializable save(UploadFile obj) {
        return sessionFactory.getCurrentSession().save(obj);
    }

    public UploadFile merge(UploadFile obj) {
        return (UploadFile) sessionFactory.getCurrentSession().merge(obj);
    }

    // ..... delete / update / or custom queries (SQL/JPQL/HQL) can be placed here
}
Your service simply exposes those methods. Check the difference: I am applying the @Transactional annotation on this layer (of course you could put it in the DAO layer instead, but as I said, it's a matter of design).
@Service
public class FileUploadService {

    @Autowired
    private FileUploadDao fileUploadDao;

    public FileUploadService() {
    }

    @Transactional
    public Serializable save(UploadFile uploadFile) {
        return fileUploadDao.save(uploadFile);
    }

    @Transactional
    public UploadFile merge(UploadFile uploadFile) {
        return fileUploadDao.merge(uploadFile);
    }

    // ....rest of the methods you want to expose, or combinations of multiple DAOs
}
Your controller remains the same, and that's the actual reason you need to have layers.
I'm trying to learn how to use Hibernate and Spring and ran into a problem. I wanted to see for myself how Propagation.NESTED works. Here is my code:
@Component
@Transactional
public class CompanyServiceImpl implements CompanyService {

    @Autowired
    private CompanyDao dao;

    ...

    @Override
    public void testNested(int id, String name) {
        User user = dao.getUser(id);
        user.setName(name);
        notValidNested(id, name + "new");
    }

    @Override
    @Transactional(propagation = Propagation.NESTED)
    public void notValidNested(int id, String name) {
        dao.getUser(id).setName(name);
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}

@Component
public class CompanyDaoImpl implements CompanyDao {

    @PersistenceContext
    private EntityManager em;

    ...

    @Override
    public User getUser(int id) {
        return em.find(User.class, id);
    }
}
And Spring's configuration
@Configuration
@ComponentScan(basePackages = {"util.spring.test", "service", "dao"})
@EnableAspectJAutoProxy
@EnableTransactionManagement
public class SpringConfiguration {

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        System.out.println("entityManagerFactory - initialization started");
        LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactoryBean.setPersistenceUnitName("ORM");
        entityManagerFactoryBean.getJpaPropertyMap().put(BeanValidationIntegrator.MODE_PROPERTY, ValidationMode.NONE);
        return entityManagerFactoryBean;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        System.out.println("transactionManager - initialization started");
        JpaTransactionManager transactionManager = new JpaTransactionManager(entityManagerFactory().getObject());
        transactionManager.setRollbackOnCommitFailure(true);
        return transactionManager;
    }
}
I have read a little about NESTED and thought this code would (let's assume I call companyService.testNested(7, "newName")) change the name of the User with id 7 to "newName". Sadly, the name doesn't change at all. If I replace TransactionAspectSupport.currentTransactionStatus().setRollbackOnly(); with throw new RuntimeException(); and add rollbackFor = RuntimeException.class to the annotation, the result is the same. I have read a little about propagation, but sadly I have no idea what's wrong with my code.
One possibility that comes to mind is that my driver doesn't support savepoints (which NESTED uses), but the same thing happens when I change NESTED to REQUIRES_NEW.
The problem is that you are calling the method from within the same class. Spring gets no opportunity to intercept the call and apply the @Transactional attributes. If you move it to a separate class, you should see the behavior you are looking for.
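A minimal sketch of that fix, moving the nested method into its own bean so the call crosses a proxy boundary (the NestedOperations class name is illustrative; note that NESTED also requires a transaction manager that allows savepoint-based nested transactions, e.g. via setNestedTransactionAllowed(true) on the JpaTransactionManager):

@Component
public class NestedOperations {

    @Autowired
    private CompanyDao dao;

    // Invoked through another bean's proxy, so the NESTED attribute is applied:
    // Spring creates a savepoint and rolls back to it on setRollbackOnly().
    @Transactional(propagation = Propagation.NESTED)
    public void notValidNested(int id, String name) {
        dao.getUser(id).setName(name);
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}

@Component
@Transactional
public class CompanyServiceImpl implements CompanyService {

    @Autowired
    private CompanyDao dao;

    @Autowired
    private NestedOperations nestedOperations;

    @Override
    public void testNested(int id, String name) {
        User user = dao.getUser(id);
        user.setName(name);                                // should survive: outer transaction commits
        nestedOperations.notValidNested(id, name + "new"); // rolled back to the savepoint
    }
}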
I think the issue is not with the transaction; it may be a case of a detached entity. Try calling entityManager.persist() or entityManager.merge() after changing the value.
I have seen a lot of examples of Spring Batch projects where either (a) a dataSource is defined, or (b) no dataSource is defined.
However, in my project, I would like my business logic to have access to a dataSource, but I want Spring Batch to NOT use the dataSource. Is this possible?
This guy has a similar problem: Spring boot + spring batch without DataSource
Generally, using Spring Batch without a database is not a good idea, since there can be concurrency issues depending on the kind of job you define. So at least an in-memory DB is strongly advised, especially if you plan to use the job in production.
Using Spring Batch with Spring Boot will initialize an in-memory datasource if you do not configure your own datasource(s).
Taking this into account, let me restate your question as follows: can my business logic use a different datasource than the one Spring Batch uses to update its BATCH tables?
Yes, it can. As a matter of fact, you can use as many datasources as you want inside your Spring Batch jobs. Just use by-name autowiring.
Here is how I do it:
I always use a configuration class that defines all the datasources my jobs have to use:
@Configuration
public class DatasourceConfiguration {

    @Bean
    @ConditionalOnMissingBean(name = "dataSource")
    public DataSource dataSource() {
        // create the datasource used by Spring Batch;
        // for instance, an in-memory datasource created with the
        // EmbeddedDatabaseFactory
        return ...;
    }

    @Bean
    @ConditionalOnMissingBean(name = "bl1datasource")
    public DataSource bl1datasource() {
        return ...; // your first datasource used in your business logic
    }

    @Bean
    @ConditionalOnMissingBean(name = "bl2datasource")
    public DataSource bl2datasource() {
        return ...; // your second datasource used in your business logic
    }
}
Three points to note:
Spring Batch looks for a datasource with the name "dataSource"; if you do not provide this EXACT name (note the uppercase 'S'), Spring Batch will try to autowire by type, and if it finds more than one instance of DataSource, it will throw an exception.
Put your datasource configuration in its own class. Do not put it in the same class as your job definitions. Spring needs to be able to instantiate the datasource bean named "dataSource" very early, when it loads the context, before it starts to instantiate your Job and Step beans. Spring will not be able to do this correctly if you put your datasource definitions in the same class as your job/step definitions.
Using @ConditionalOnMissingBean is not mandatory, but I have found it good practice. It makes it easy to swap the datasources for unit/integration tests: just provide an additional test configuration in the ContextConfiguration of your test which, for instance, overwrites "bl1datasource" with an in-memory datasource:
@Configuration
public class TestBL1DatasourceConfiguration {

    // overwriting bl1datasource with an in-memory datasource
    @Bean
    public DataSource bl1datasource() {
        return new EmbeddedDatabaseFactory().getDatabase();
    }
}
In order to use the businesslogic datasources, use injection by name:
@Component
public class PrepareRe1Re2BezStepCreatorComponent {

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private DataSource bl1datasource;

    @Autowired
    private DataSource bl2datasource;

    public Step createStep() throws Exception {
        SimpleStepBuilder<..., ...> builder =
                stepBuilderFactory.get("astep") //
                        .<..., ...> chunk(100) //
                        .reader(createReader(bl1datasource)) //
                        .writer(createWriter(bl2datasource)); //
        return builder.build();
    }
}
Furthermore, you probably want to consider using XA-Datasources if you'd like to work with several datasources.
Edited:
Since it seems that you really don't want to use a datasource, you have to implement your own BatchConfigurer (http://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/core/configuration/annotation/BatchConfigurer.html), as Michael Minella (the Spring Batch project lead) pointed out above.
You can use the code of org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer as a starting point for your own implementation. Simply remove all the datasource/transaction-manager code and keep the content of the if (dataSource == null) branch of the initialize method. This will initialize a map-based JobRepository and JobExplorer. But again, this is NOT a usable solution in a production environment, since it is not thread-safe.
Edited:
How to implement it:
Configuration class that defines the "businessDataSource":
@Configuration
public class DataSourceConfigurationSimple {

    DataSource embeddedDataSource;

    @Bean
    public DataSource myBusinessDataSource() {
        if (embeddedDataSource == null) {
            EmbeddedDatabaseFactory factory = new EmbeddedDatabaseFactory();
            embeddedDataSource = factory.getDatabase();
        }
        return embeddedDataSource;
    }
}
The implementation of a specific BatchConfigurer:
(a minimal map-based sketch follows; MapJobRepositoryFactoryBean, MapJobExplorerFactoryBean, and ResourcelessTransactionManager are the classes DefaultBatchConfigurer itself falls back to when no DataSource is present)
public class MyBatchConfigurer implements BatchConfigurer {

    private final ResourcelessTransactionManager transactionManager = new ResourcelessTransactionManager();
    private JobRepository jobRepository;
    private JobExplorer jobExplorer;
    private JobLauncher jobLauncher;

    @PostConstruct
    public void initialize() throws Exception {
        // Map-based (in-memory) stores; NOT thread-safe, so not for production.
        MapJobRepositoryFactoryBean repositoryFactory = new MapJobRepositoryFactoryBean(transactionManager);
        repositoryFactory.afterPropertiesSet();
        this.jobRepository = repositoryFactory.getObject();

        MapJobExplorerFactoryBean explorerFactory = new MapJobExplorerFactoryBean(repositoryFactory);
        explorerFactory.afterPropertiesSet();
        this.jobExplorer = explorerFactory.getObject();

        SimpleJobLauncher launcher = new SimpleJobLauncher();
        launcher.setJobRepository(jobRepository);
        launcher.afterPropertiesSet();
        this.jobLauncher = launcher;
    }

    @Override
    public JobRepository getJobRepository() { return jobRepository; }

    @Override
    public PlatformTransactionManager getTransactionManager() { return transactionManager; }

    @Override
    public JobLauncher getJobLauncher() { return jobLauncher; }

    @Override
    public JobExplorer getJobExplorer() { return jobExplorer; }
}
And finally the main configuration and launch class:
@SpringBootApplication
@Configuration
@EnableBatchProcessing
// Importing MyBatchConfigurer will install your BatchConfigurer instead of
// the SpringBatch default configurer.
@Import({DataSourceConfigurationSimple.class, MyBatchConfigurer.class})
public class SimpleTestJob {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public Job job() throws Exception {
        SimpleJobBuilder standardJob = this.jobs.get(JOB_NAME)
                .start(step1());
        return standardJob.build();
    }

    protected Step step1() throws Exception {
        TaskletStepBuilder standardStep1 = this.steps.get("SimpleTest_step1_Step")
                .tasklet(tasklet());
        return standardStep1.build();
    }

    protected Tasklet tasklet() {
        return (contribution, context) -> {
            System.out.println("tasklet called");
            return RepeatStatus.FINISHED;
        };
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(SimpleTestJob.class, args);
    }
}