How to create a table with Spring Data Cassandra? - java

I have created my own repository like this:
public interface MyRepository extends TypedIdCassandraRepository<MyEntity, String> {
}
So the question is: how do I automatically create the Cassandra table for it? Currently Spring injects MyRepository, which tries to insert the entity into a non-existent table.
Is there a way to create Cassandra tables (if they do not exist) during Spring container startup?
P.S. It would be very nice if there were just a boolean config property, without adding lines of XML or creating something like a BeanFactory, etc. :-)

Override the getSchemaAction property on the AbstractCassandraConfiguration class:
@Configuration
@EnableCassandraRepositories(basePackages = "com.example")
public class TestConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "test_config";
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.RECREATE_DROP_UNUSED;
    }

    @Bean
    public CassandraOperations cassandraOperations() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}

You can use this config property in application.properties:
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
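For a Spring Boot application, the surrounding connection settings live in the same file. A minimal sketch (the keyspace and contact-point values here are placeholders, not from the original answer):
spring.data.cassandra.keyspace-name=test_config
spring.data.cassandra.contact-points=127.0.0.1
spring.data.cassandra.port=9042
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS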

You'll also need to override the getEntityBasePackages() method in your AbstractCassandraConfiguration implementation. This will allow Spring to find any classes that you've annotated with @Table, and create the tables.
@Override
public String[] getEntityBasePackages() {
    return new String[]{"com.example"};
}
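For reference, a minimal sketch of an entity that such a scan would pick up (the class, key, and column names here are illustrative, not from the original question):
@Table("my_entity")
public class MyEntity {

    // simple partition key; real models may use @PrimaryKeyClass instead
    @PrimaryKey
    private String id;

    @Column("name")
    private String name;

    // getters and setters omitted
}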

You'll need to include the spring-data-cassandra dependency in your pom.xml file.
Configure your CassandraConfig class as below:
@Configuration
@PropertySource(value = { "classpath:Your .properties file here" })
@EnableCassandraRepositories(basePackages = { "base-package name of your Repositories" })
public class CassandraConfig {

    @Autowired
    private Environment env;

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(env.getProperty("contactpoints from your properties file"));
        cluster.setPort(Integer.parseInt(env.getProperty("ports from your properties file")));
        return cluster;
    }

    @Bean
    public CassandraConverter converter() throws ClassNotFoundException {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(env.getProperty("keyspace from your properties file"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.CREATE_IF_NOT_EXISTS);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }

    @Bean
    public CassandraMappingContext mappingContext() throws ClassNotFoundException {
        CassandraMappingContext mappingContext = new CassandraMappingContext();
        mappingContext.setInitialEntitySet(getInitialEntitySet());
        return mappingContext;
    }

    public String[] getEntityBasePackages() {
        return new String[]{"base-package name of all your entities annotated with @Table"};
    }

    protected Set<Class<?>> getInitialEntitySet() throws ClassNotFoundException {
        return CassandraEntityClassScanner.scan(getEntityBasePackages());
    }
}
This last getInitialEntitySet method might be optional; try without it too.
Make sure your keyspace, contact points, and port are in the .properties file, like:
cassandra.contactpoints=localhost,127.0.0.1
cassandra.port=9042
cassandra.keyspace=your_keyspace_name

Actually, after digging into the source code of spring-data-cassandra:3.1.9, you can check the implementation of
org.springframework.data.cassandra.config.SessionFactoryFactoryBean#performSchemaAction
which looks like this:
protected void performSchemaAction() throws Exception {

    boolean create = false;
    boolean drop = DEFAULT_DROP_TABLES;
    boolean dropUnused = DEFAULT_DROP_UNUSED_TABLES;
    boolean ifNotExists = DEFAULT_CREATE_IF_NOT_EXISTS;

    // note the intentional fall-through between the cases below
    switch (this.schemaAction) {
        case RECREATE_DROP_UNUSED:
            dropUnused = true;
        case RECREATE:
            drop = true;
        case CREATE_IF_NOT_EXISTS:
            ifNotExists = SchemaAction.CREATE_IF_NOT_EXISTS.equals(this.schemaAction);
        case CREATE:
            create = true;
        case NONE:
        default:
            // do nothing
    }

    if (create) {
        createTables(drop, dropUnused, ifNotExists);
    }
}
which means you have to assign CREATE to schemaAction if the table has never been created, and CREATE_IF_NOT_EXISTS does not work.
For more information, see: Why `spring-data-jpa` with `spring-data-cassandra` won't create cassandra tables automatically?
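In other words, for a first run against an empty keyspace you can switch the action to CREATE, e.g. in the AbstractCassandraConfiguration subclass shown earlier:
@Override
public SchemaAction getSchemaAction() {
    // creates tables at startup; fails if they already exist
    return SchemaAction.CREATE;
}
or, in Spring Boot, via spring.data.cassandra.schema-action=CREATE.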

Related

Spring Data JDBC: can't add custom converter for enum

I want to have an enum as a field of my entity.
My application looks like this:
Spring Boot version:
plugins {
    id 'org.springframework.boot' version '2.6.2' apply false
}
Repository:
@Repository
public interface MyEntityRepository extends PagingAndSortingRepository<MyEntity, UUID> {
    ...
}
Entity:
@Table("my_entity")
public class MyEntity {
    ...
    private FileType fileType;
    // get + set
}
Enum declaration:
public enum FileType {

    TYPE_1(1),
    TYPE_2(2);

    int databaseId;

    public static FileType byDatabaseId(Integer databaseId) {
        return Arrays.stream(values()).findFirst().orElse(null);
    }

    FileType(int databaseId) {
        this.databaseId = databaseId;
    }

    public int getDatabaseId() {
        return databaseId;
    }
}
My attempt:
I found the following answer and tried to follow it: https://stackoverflow.com/a/53296199/2674303
So I added this bean:
@Bean
public JdbcCustomConversions jdbcCustomConversions() {
    return new JdbcCustomConversions(asList(new DatabaseIdToFileTypeConverter(), new FileTypeToDatabaseIdConverter()));
}
Converters:
@WritingConverter
public class FileTypeToDatabaseIdConverter implements Converter<FileType, Integer> {
    @Override
    public Integer convert(FileType source) {
        return source.getDatabaseId();
    }
}

@ReadingConverter
public class DatabaseIdToFileTypeConverter implements Converter<Integer, FileType> {
    @Override
    public FileType convert(Integer databaseId) {
        return FileType.byDatabaseId(databaseId);
    }
}
But I see this error:
The bean 'jdbcCustomConversions', defined in class path resource
[org/springframework/boot/autoconfigure/data/jdbc/JdbcRepositoriesAutoConfiguration$SpringBootJdbcConfiguration.class],
could not be registered. A bean with that name has already been
defined in my.pack.Main and overriding is disabled.
I tried renaming the method jdbcCustomConversions() to myJdbcCustomConversions(). That avoided the error above, but the converter is not invoked during entity persistence, and I see another error: the application tries to save a String while the database column type is bigint.
20:39:10.689 DEBUG [main] o.s.jdbc.core.StatementCreatorUtils: JDBC getParameterType call failed - using fallback method instead: org.postgresql.util.PSQLException: ERROR: column "file_type" is of type bigint but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Position: 174
I also tried using the latest (at the time) version of Spring Boot:
id 'org.springframework.boot' version '2.6.2' apply false
But it didn't help.
What have I missed?
How can I map an enum to an integer column properly?
P.S.
I use the following code for testing:
@SpringBootApplication
@EnableJdbcAuditing
@EnableScheduling
public class Main {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext = SpringApplication.run(Main.class, args);
        MyEntityRepository repository = applicationContext.getBean(MyEntityRepository.class);
        MyEntity entity = new MyEntity();
        ...
        entity.setFileType(FileType.TYPE_2);
        repository.save(entity);
    }

    @Bean
    public ModelMapper modelMapper() {
        ModelMapper mapper = new ModelMapper();
        mapper.getConfiguration()
                .setMatchingStrategy(MatchingStrategies.STRICT)
                .setFieldMatchingEnabled(true)
                .setSkipNullEnabled(true)
                .setFieldAccessLevel(PRIVATE);
        return mapper;
    }

    @Bean
    public AbstractJdbcConfiguration jdbcConfiguration() {
        return new MySpringBootJdbcConfiguration();
    }

    @Configuration
    static class MySpringBootJdbcConfiguration extends AbstractJdbcConfiguration {
        @Override
        protected List<?> userConverters() {
            return asList(new DatabaseIdToFileTypeConverter(), new FileTypeToDatabaseIdConverter());
        }
    }
}
UPDATE
My code is:
@SpringBootApplication
@EnableJdbcAuditing
@EnableScheduling
public class Main {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext = SpringApplication.run(Main.class, args);
        MyEntityRepository repository = applicationContext.getBean(MyEntityRepository.class);
        MyEntity entity = new MyEntity();
        ...
        entity.setFileType(FileType.TYPE_2);
        repository.save(entity);
    }

    @Bean
    public ModelMapper modelMapper() {
        ModelMapper mapper = new ModelMapper();
        mapper.getConfiguration()
                .setMatchingStrategy(MatchingStrategies.STRICT)
                .setFieldMatchingEnabled(true)
                .setSkipNullEnabled(true)
                .setFieldAccessLevel(PRIVATE);
        return mapper;
    }

    @Bean
    public AbstractJdbcConfiguration jdbcConfiguration() {
        return new MySpringBootJdbcConfiguration();
    }

    @Configuration
    static class MySpringBootJdbcConfiguration extends AbstractJdbcConfiguration {

        @Override
        protected List<?> userConverters() {
            return asList(new DatabaseIdToFileTypeConverter(), new FileTypeToDatabaseIdConverter());
        }

        @Bean
        public JdbcConverter jdbcConverter(JdbcMappingContext mappingContext,
                NamedParameterJdbcOperations operations,
                @Lazy RelationResolver relationResolver,
                JdbcCustomConversions conversions,
                Dialect dialect) {
            JdbcArrayColumns arrayColumns = dialect instanceof JdbcDialect ? ((JdbcDialect) dialect).getArraySupport()
                    : JdbcArrayColumns.DefaultSupport.INSTANCE;
            DefaultJdbcTypeFactory jdbcTypeFactory = new DefaultJdbcTypeFactory(operations.getJdbcOperations(),
                    arrayColumns);
            return new MyJdbcConverter(
                    mappingContext,
                    relationResolver,
                    conversions,
                    jdbcTypeFactory,
                    dialect.getIdentifierProcessing()
            );
        }
    }

    static class MyJdbcConverter extends BasicJdbcConverter {

        MyJdbcConverter(
                MappingContext<? extends RelationalPersistentEntity<?>, ? extends RelationalPersistentProperty> context,
                RelationResolver relationResolver,
                CustomConversions conversions,
                JdbcTypeFactory typeFactory,
                IdentifierProcessing identifierProcessing) {
            super(context, relationResolver, conversions, typeFactory, identifierProcessing);
        }

        @Override
        public int getSqlType(RelationalPersistentProperty property) {
            if (FileType.class.equals(property.getActualType())) {
                return Types.BIGINT;
            } else {
                return super.getSqlType(property);
            }
        }

        @Override
        public Class<?> getColumnType(RelationalPersistentProperty property) {
            if (FileType.class.equals(property.getActualType())) {
                return Long.class;
            } else {
                return super.getColumnType(property);
            }
        }
    }
}
But I get this error:
Caused by: org.postgresql.util.PSQLException: Cannot convert an instance of java.lang.String to type long
at org.postgresql.jdbc.PgPreparedStatement.cannotCastException(PgPreparedStatement.java:925)
at org.postgresql.jdbc.PgPreparedStatement.castToLong(PgPreparedStatement.java:810)
at org.postgresql.jdbc.PgPreparedStatement.setObject(PgPreparedStatement.java:561)
at org.postgresql.jdbc.PgPreparedStatement.setObject(PgPreparedStatement.java:931)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.setObject(HikariProxyPreparedStatement.java)
at org.springframework.jdbc.core.StatementCreatorUtils.setValue(StatementCreatorUtils.java:414)
at org.springframework.jdbc.core.StatementCreatorUtils.setParameterValueInternal(StatementCreatorUtils.java:231)
at org.springframework.jdbc.core.StatementCreatorUtils.setParameterValue(StatementCreatorUtils.java:146)
at org.springframework.jdbc.core.PreparedStatementCreatorFactory$PreparedStatementCreatorImpl.setValues(PreparedStatementCreatorFactory.java:283)
at org.springframework.jdbc.core.PreparedStatementCreatorFactory$PreparedStatementCreatorImpl.createPreparedStatement(PreparedStatementCreatorFactory.java:241)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:649)
... 50 more
Caused by: java.lang.NumberFormatException: For input string: "TYPE_2"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.postgresql.jdbc.PgPreparedStatement.castToLong(PgPreparedStatement.java:792)
... 59 more
Try the following instead:
@Bean
public AbstractJdbcConfiguration jdbcConfiguration() {
    return new MySpringBootJdbcConfiguration();
}

@Configuration
static class MySpringBootJdbcConfiguration extends AbstractJdbcConfiguration {

    @Override
    protected List<?> userConverters() {
        return List.of(new DatabaseIdToFileTypeConverter(), new FileTypeToDatabaseIdConverter());
    }
}
Explanation:
Spring complains that the JdbcCustomConversions bean defined in the auto-configuration class is already defined (by your bean) and you don't have bean overriding enabled.
JdbcRepositoriesAutoConfiguration has changed a few times; in Spring Boot 2.6.2 it has:
@Configuration(proxyBeanMethods = false)
@ConditionalOnMissingBean(AbstractJdbcConfiguration.class)
static class SpringBootJdbcConfiguration extends AbstractJdbcConfiguration {
}
In turn, AbstractJdbcConfiguration has:
@Bean
public JdbcCustomConversions jdbcCustomConversions() {
    try {
        Dialect dialect = applicationContext.getBean(Dialect.class);
        SimpleTypeHolder simpleTypeHolder = dialect.simpleTypes().isEmpty() ? JdbcSimpleTypes.HOLDER
                : new SimpleTypeHolder(dialect.simpleTypes(), JdbcSimpleTypes.HOLDER);
        return new JdbcCustomConversions(
                CustomConversions.StoreConversions.of(simpleTypeHolder, storeConverters(dialect)), userConverters());
    } catch (NoSuchBeanDefinitionException exception) {
        LOG.warn("No dialect found. CustomConversions will be configured without dialect specific conversions.");
        return new JdbcCustomConversions();
    }
}
As you can see, JdbcCustomConversions is not conditional in any way, so defining your own caused a conflict. Fortunately, it provides an extension point, userConverters(), which can be overridden to provide your own converters.
Update
As discussed in the comments:

FileType.byDatabaseId is broken - it ignores its input parameter
as the column type in the db is BIGINT, your converters must convert from Long, not from Integer; this addresses read queries (see the sketch below)
for writes, there is an open bug: https://github.com/spring-projects/spring-data-jdbc/issues/629 - there is a hardcoded assumption that enums are converted to Strings, and only Enum -> String converters are checked
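Putting the first two points together, a corrected read-side sketch might look like this (the filter on databaseId is the piece missing from the original byDatabaseId):
@ReadingConverter
public class DatabaseIdToFileTypeConverter implements Converter<Long, FileType> {
    @Override
    public FileType convert(Long databaseId) {
        return FileType.byDatabaseId(databaseId.intValue());
    }
}
with the lookup actually using its parameter:
public static FileType byDatabaseId(Integer databaseId) {
    return Arrays.stream(values())
            .filter(type -> type.databaseId == databaseId)
            .findFirst()
            .orElse(null);
}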
Since we want to convert to Long, we need to amend BasicJdbcConverter by subclassing it and registering the subclassed converter as a @Bean.
You need to override two methods:
public int getSqlType(RelationalPersistentProperty property)
public Class<?> getColumnType(RelationalPersistentProperty property)
I hardcoded the enum type and the corresponding column types, but you may want to get more fancy with that.
@Bean
public AbstractJdbcConfiguration jdbcConfiguration() {
    return new MySpringBootJdbcConfiguration();
}

@Configuration
static class MySpringBootJdbcConfiguration extends AbstractJdbcConfiguration {

    @Override
    protected List<?> userConverters() {
        return List.of(new DatabaseIdToFileTypeConverter(), new FileTypeToDatabaseIdConverter());
    }

    @Bean
    public JdbcConverter jdbcConverter(JdbcMappingContext mappingContext,
            NamedParameterJdbcOperations operations,
            @Lazy RelationResolver relationResolver,
            JdbcCustomConversions conversions,
            Dialect dialect) {
        JdbcArrayColumns arrayColumns = dialect instanceof JdbcDialect ? ((JdbcDialect) dialect).getArraySupport()
                : JdbcArrayColumns.DefaultSupport.INSTANCE;
        DefaultJdbcTypeFactory jdbcTypeFactory = new DefaultJdbcTypeFactory(operations.getJdbcOperations(),
                arrayColumns);
        return new MyJdbcConverter(
                mappingContext,
                relationResolver,
                conversions,
                jdbcTypeFactory,
                dialect.getIdentifierProcessing()
        );
    }
}
static class MyJdbcConverter extends BasicJdbcConverter {

    MyJdbcConverter(
            MappingContext<? extends RelationalPersistentEntity<?>, ? extends RelationalPersistentProperty> context,
            RelationResolver relationResolver,
            CustomConversions conversions,
            JdbcTypeFactory typeFactory,
            IdentifierProcessing identifierProcessing) {
        super(context, relationResolver, conversions, typeFactory, identifierProcessing);
    }

    @Override
    public int getSqlType(RelationalPersistentProperty property) {
        if (FileType.class.equals(property.getActualType())) {
            return Types.BIGINT;
        } else {
            return super.getSqlType(property);
        }
    }

    @Override
    public Class<?> getColumnType(RelationalPersistentProperty property) {
        if (FileType.class.equals(property.getActualType())) {
            return Long.class;
        } else {
            return super.getColumnType(property);
        }
    }
}

Spring custom Scope with timed refresh of beans

I am working within an environment that changes credentials every several minutes. In order for beans that implement clients depending on these credentials to keep working, the beans need to be refreshed. I decided that a good approach would be implementing a custom scope for them.
After looking around a bit in the documentation, I found that the main method a scope needs to implement is the get method:
public class CyberArkScope implements Scope {

    private Map<String, Pair<LocalDateTime, Object>> scopedObjects = new ConcurrentHashMap<>();
    private Map<String, Runnable> destructionCallbacks = new ConcurrentHashMap<>();
    private Integer scopeRefresh;

    public CyberArkScope(Integer scopeRefresh) {
        this.scopeRefresh = scopeRefresh;
    }

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        if (!scopedObjects.containsKey(name) || scopedObjects.get(name).getKey()
                .isBefore(LocalDateTime.now().minusMinutes(scopeRefresh))) {
            scopedObjects.put(name, Pair.of(LocalDateTime.now(), objectFactory.getObject()));
        }
        return scopedObjects.get(name).getValue();
    }

    @Override
    public Object remove(String name) {
        destructionCallbacks.remove(name);
        return scopedObjects.remove(name);
    }

    @Override
    public void registerDestructionCallback(String name, Runnable runnable) {
        destructionCallbacks.put(name, runnable);
    }

    @Override
    public Object resolveContextualObject(String name) {
        return null;
    }

    @Override
    public String getConversationId() {
        return "CyberArk";
    }
}
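The CyberArkScopeConfig imported below is not shown in the question; a custom scope is typically registered with a CustomScopeConfigurer, roughly like this (the refresh interval of 10 minutes is an illustrative value):
@Configuration
public class CyberArkScopeConfig {

    @Bean
    public static CustomScopeConfigurer customScopeConfigurer() {
        CustomScopeConfigurer configurer = new CustomScopeConfigurer();
        // register under the name used in @Scope(scopeName = "CyberArk")
        configurer.addScope("CyberArk", new CyberArkScope(10));
        return configurer;
    }
}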
@Configuration
@Import(CyberArkScopeConfig.class)
public class TestConfig {

    @Bean
    @Scope(scopeName = "CyberArk")
    public String dateString() {
        return LocalDateTime.now().toString();
    }
}
@RestController
public class HelloWorld {

    @Autowired
    private String dateString;

    @RequestMapping("/")
    public String index() {
        return dateString;
    }
}
When I debug this implementation with a simple String scope autowired into a controller, I see that the get method is only called once at startup and never again, which means the bean is never refreshed. Is there something wrong with this behaviour, or is that how the get method is supposed to work?
It seems you also need to define the proxyMode, which injects an AOP proxy instead of a static reference to the string. Note that the bean class can't be final. This solved it:
@Configuration
@Import(CyberArkScopeConfig.class)
public class TestConfig {

    @Bean
    @Scope(scopeName = "CyberArk", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public NonFinalString dateString() {
        return new NonFinalString(LocalDateTime.now());
    }
}
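NonFinalString is not shown in the answer; any non-final wrapper class works, for example this minimal illustrative version:
public class NonFinalString {

    private final String value;

    public NonFinalString(LocalDateTime dateTime) {
        this.value = dateTime.toString();
    }

    @Override
    public String toString() {
        return value;
    }
}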

Can I change the default definition displayed by Swagger?

Can I change the default definition from 'default' to my own? I would like the page to load my definition, which in this case is just called 'swagger', instead of the 'default' one.
I am using Springfox and Spring Boot. This is my Swagger config class:
@Configuration
@EnableSwagger2WebMvc
@Import(SpringDataRestConfiguration.class)
public class SwaggerDocumentationConfig {

    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.openet.usage.trigger"))
                .paths(PathSelectors.any())
                .build();
    }

    private static Predicate<String> matchPathRegex(final String... pathRegexs) {
        return new Predicate<String>() {
            @Override
            public boolean apply(String input) {
                for (String pathRegex : pathRegexs) {
                    if (input.matches(pathRegex)) {
                        return true;
                    }
                }
                return false;
            }
        };
    }

    @Bean
    WebMvcConfigurer configurer() {
        return new WebMvcConfigurerAdapter() {
            @Override
            public void addResourceHandlers(ResourceHandlerRegistry registry) {
                registry.addResourceHandler("/config/swagger.json")
                        .addResourceLocations("classpath:/config");
                registry
                        .addResourceHandler("swagger-ui.html")
                        .addResourceLocations("classpath:/META-INF/resources/");
                registry
                        .addResourceHandler("/webjars/**")
                        .addResourceLocations("classpath:/META-INF/resources/webjars/");
            }
        };
    }
}
It is possible to change this behavior, but it looks more like a hack.
The SwaggerResourcesProvider is responsible for providing the info for the dropdown list. First, implement this interface. Second, add the @Primary annotation to your class so it becomes the main implementation, used instead of the default InMemorySwaggerResourcesProvider class. It still makes sense to reuse the definitions provided by InMemorySwaggerResourcesProvider, which is why it is injected.
The last part is to implement the overridden get method and filter the list down to what you want to display. This example displays only the one definition named swagger.
// other annotations
@Primary
public class SwaggerDocumentationConfig implements SwaggerResourcesProvider {

    private final InMemorySwaggerResourcesProvider resourcesProvider;

    @Inject
    public SwaggerDocumentationConfig(InMemorySwaggerResourcesProvider resourcesProvider) {
        this.resourcesProvider = resourcesProvider;
    }

    @Override
    public List<SwaggerResource> get() {
        return resourcesProvider.get().stream()
                .filter(r -> "swagger".equals(r.getName()))
                .collect(Collectors.toList());
    }

    // the rest of the configuration
}
I just did a redirect in my controller:
@RequestMapping(value = "/", method = RequestMethod.GET)
public void redirectRootToSwaggerDocs(HttpServletResponse response) throws IOException {
    response.sendRedirect("/my-api/swagger-ui.html?urls.primaryName=swagger");
}
The easiest way I found is just to make the groupName rank highly alphabetically, such as "1 swagger", "a swagger" or "-> swagger":
...
return new Docket(DocumentationType.OAS_30)
        .groupName("-> swagger");
...
Or just set a default group name:
...
return new Docket(DocumentationType.OAS_30)
        .groupName("<what u want>")
...

Dynamic datasource routing - DataSource router not initialized

I'm referring to this article, in which we can use AbstractRoutingDataSource from the Spring Framework to dynamically change the data source used by the application. I'm using MyBatis (3.3.0) with Spring (4.1.6.RELEASE). I want to switch to the backup database if an exception occurs while getting data from the main db. In this example, I have used HSQL and MySQL databases.
RoutingDataSource:
public class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.getTargetDataSource();
    }
}
DataSourceContextHolder:
public class DataSourceContextHolder {

    private static final ThreadLocal<DataSourceEnum> contextHolder = new ThreadLocal<DataSourceEnum>();

    public static void setTargetDataSource(DataSourceEnum targetDataSource) {
        contextHolder.set(targetDataSource);
    }

    public static DataSourceEnum getTargetDataSource() {
        return contextHolder.get();
    }

    public static void resetDefaultDataSource() {
        contextHolder.remove();
    }
}
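The DataSourceEnum used above is not shown in the question; a minimal version is simply:
public enum DataSourceEnum {
    HSQL,
    BACKUP
}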
ApplicationDataConfig:
@Configuration
@MapperScan(basePackages = "com.sample.mapper")
@ComponentScan("com.sample.config")
@PropertySource(value = {"classpath:app.properties"},
        ignoreResourceNotFound = true)
public class ApplicationDataConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
        PropertySourcesPlaceholderConfigurer configurer =
                new PropertySourcesPlaceholderConfigurer();
        return configurer;
    }

    @Bean
    public SqlSessionFactoryBean sqlSessionFactoryBean() throws Exception {
        SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        RoutingDataSource routingDataSource = new RoutingDataSource();
        routingDataSource.setDefaultTargetDataSource(dataSource1());
        Map<Object, Object> targetDataSource = new HashMap<Object, Object>();
        targetDataSource.put(DataSourceEnum.HSQL, dataSource1());
        targetDataSource.put(DataSourceEnum.BACKUP, dataSource2());
        routingDataSource.setTargetDataSources(targetDataSource);
        sessionFactory.setDataSource(routingDataSource);
        sessionFactory.setTypeAliasesPackage("com.sample.common.domain");
        sessionFactory.setMapperLocations(
                new PathMatchingResourcePatternResolver()
                        .getResources("classpath*:com/sample/mapper/**/*.xml"));
        return sessionFactory;
    }

    @Bean
    public DataSource dataSource1() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.HSQL).addScript(
                "classpath:database/app-hsqldb-schema.sql").addScript(
                "classpath:database/app-hsqldb-datascript.sql").build();
    }

    @Bean
    public DataSource dataSource2() {
        PooledDataSourceFactory pooledDataSourceFactory = new PooledDataSourceFactory();
        pooledDataSourceFactory.setProperties(jdbcProperties());
        return pooledDataSourceFactory.getDataSource();
    }

    @Bean
    protected Properties jdbcProperties() {
        // Get the data from the properties file
        Properties jdbcProperties = new Properties();
        jdbcProperties.setProperty("url", datasourceUrl);
        jdbcProperties.setProperty("driver", datasourceDriver);
        jdbcProperties.setProperty("username", datasourceUsername);
        jdbcProperties.setProperty("password", datasourcePassword);
        jdbcProperties.setProperty("poolMaximumIdleConnections", maxConnectionPoolSize);
        jdbcProperties.setProperty("poolMaximumActiveConnections", minConnectionPoolSize);
        return jdbcProperties;
    }
}
Client:
@Autowired
private ApplicationMapper appMapper;

public MyObject getObjectById(String id) {
    MyObject myObj = null;
    try {
        DataSourceContextHolder.setTargetDataSource(DataSourceEnum.HSQL);
        myObj = appMapper.getObjectById(id);
    } catch (Throwable e) {
        DataSourceContextHolder.setTargetDataSource(DataSourceEnum.BACKUP);
        myObj = appMapper.getObjectById(id);
    } finally {
        DataSourceContextHolder.resetDefaultDataSource();
    }
    return getObjectDetails(myObj);
}
I'm getting the following exception:
### Error querying database. Cause: java.lang.IllegalArgumentException: DataSource router not initialized
However, I'm able to get things working if I use only one db at a time, which means there is no issue with the data source configuration.
Can you try adding this last line (in the same order)?
targetDataSource.put(DataSourceEnum.HSQL, dataSource1());
targetDataSource.put(DataSourceEnum.BACKUP, dataSource2());
routingDataSource.setTargetDataSources(targetDataSource);
routingDataSource.afterPropertiesSet();
AbstractRoutingDataSource resolves its target data sources inside afterPropertiesSet(); because the RoutingDataSource here is constructed manually rather than as a Spring-managed bean, that callback is never invoked, which is exactly what the "DataSource router not initialized" error indicates.
I got the same issue and found a solution using Hibernate's SchemaExport class.
For each DataSourceEnum you can manually initialize the data source.
Here is my detailed answer to my own issue description.
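For reference, a rough sketch of such a manual schema export in the Hibernate 4 style (the dialect, URL, and entity class below are illustrative; the SchemaExport API differs between Hibernate versions):
// build a Configuration that points at the target data source
Configuration configuration = new Configuration();
configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
configuration.setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:backup");
configuration.addAnnotatedClass(MyObject.class);

// print the generated DDL and execute it against the database
SchemaExport schemaExport = new SchemaExport(configuration);
schemaExport.create(true, true);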

org.hibernate.HibernateException: createQuery is not valid without active transaction #scheduled

I am using a scheduled task to update my database like this:
public interface UserRatingManager {
    public void updateAllUsers();
}

@Service
public class DefaultUserRatingManager implements UserRatingManager {

    @Autowired
    UserRatingDAO userRatingDAO;

    @Override
    @Transactional("txName")
    public void updateAllUsers() {
        List<String> userIds = userRatingDAO.getAllUserIds();
        for (String userId : userIds) {
            updateUserRating(userId);
        }
    }
}

public interface UserRatingDAO extends GenericDAO<UserRating, String> {
    public void deleteAll();
    public List<String> getAllUserIds();
}

@Repository
public class HibernateUserRatingDAO extends BaseDAO<UserRating, String> implements UserRatingDAO {
    @Override
    public List<String> getAllUserIds() {
        List<String> result = new ArrayList<String>();
        Query q1 = getSession().createQuery("Select userId from UserRating");
        // the rest of the method is omitted in the question
    }
}
I configured the persistence like this:
@Configuration
@ComponentScan({ "com.estartup" })
@PropertySource("classpath:jdbc.properties")
@EnableTransactionManagement
@EnableScheduling
public class PersistenceConfig {

    @Autowired
    Environment env;

    @Scheduled(fixedRate = 5000)
    public void run() {
        userRatingManager().updateAllUsers();
    }

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource driverManagerDataSource = new DriverManagerDataSource(env.getProperty("connection.url"), env.getProperty("connection.username"), env.getProperty("connection.password"));
        driverManagerDataSource.setDriverClassName("com.mysql.jdbc.Driver");
        return driverManagerDataSource;
    }

    public PersistenceConfig() {
        super();
    }

    @Bean
    public UserRatingUpdate userRatingUpdate() {
        return new UserRatingUpdate();
    }

    @Bean
    public UserRatingManager userRatingManager() {
        return new DefaultUserRatingManager();
    }

    @Bean
    public LocalSessionFactoryBean runnableSessionFactory() {
        LocalSessionFactoryBean factoryBean = null;
        try {
            factoryBean = createBaseSessionFactory();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return factoryBean;
    }

    private LocalSessionFactoryBean createBaseSessionFactory() throws IOException {
        LocalSessionFactoryBean factoryBean;
        factoryBean = new LocalSessionFactoryBean();
        Properties pp = new Properties();
        pp.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
        pp.setProperty("hibernate.max_fetch_depth", "3");
        pp.setProperty("hibernate.show_sql", "false");
        factoryBean.setDataSource(dataSource());
        factoryBean.setPackagesToScan(new String[] { "com.estartup.*" });
        factoryBean.setHibernateProperties(pp);
        factoryBean.afterPropertiesSet();
        return factoryBean;
    }

    @Bean(name = "txName")
    public HibernateTransactionManager runnableTransactionManager() {
        HibernateTransactionManager htm = new HibernateTransactionManager(runnableSessionFactory().getObject());
        return htm;
    }
}
However, when I get to:
Query q1 = getSession().createQuery("Select userId from UserRating");
in the above HibernateUserRatingDAO I get an exception:
org.hibernate.HibernateException: createQuery is not valid without active transaction
at org.hibernate.context.internal.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:352)
at com.sun.proxy.$Proxy63.createQuery(Unknown Source)
at com.estartup.dao.impl.HibernateUserRatingDAO.getAllUserIds(HibernateUserRatingDAO.java:36)
How can I configure my scheduled tasks to run inside transactions?
EDITED:
Here is the code for BaseDAO:
@Repository
public class BaseDAO<T, ID extends Serializable> extends GenericDAOImpl<T, ID> {

    private static final Logger logger = LoggerFactory.getLogger(BaseDAO.class);

    @Autowired
    @Override
    public void setSessionFactory(SessionFactory sessionFactory) {
        super.setSessionFactory(sessionFactory);
    }

    public void setTopAndForUpdate(int top, Query query) {
        query.setLockOptions(LockOptions.UPGRADE);
        query.setFirstResult(0);
        query.setMaxResults(top);
    }
}
EDITED:
Enabling Spring transaction logging prints the following:
DEBUG [pool-1-thread-1] org.springframework.transaction.annotation.AnnotationTransactionAttributeSource - Adding transactional method 'updateAllUsers' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; 'txName'
What is happening in this case is that, since you are calling userRatingManager() inside the configuration (where the actual scheduled method lives), the proxy that Spring creates to handle the transaction management for UserRatingUpdate is not being used.
I propose you do the following:
public interface WhateverService {
    void executeScheduled();
}

@Service
public class WhateverServiceImpl implements WhateverService {

    private final UserRatingManager userRatingManager;

    @Autowired
    public WhateverServiceImpl(UserRatingManager userRatingManager) {
        this.userRatingManager = userRatingManager;
    }

    @Override
    @Scheduled(fixedRate = 5000)
    public void executeScheduled() {
        userRatingManager.updateAllUsers();
    }
}
Also change your transaction manager configuration code to:
#Bean(name = "txName")
#Autowired
public HibernateTransactionManager runnableTransactionManager(SessionFactory sessionFactory) {
HibernateTransactionManager htm = new HibernateTransactionManager();
htm.setSessionFactory(sessionFactory);
return htm;
}
and remove factoryBean.afterPropertiesSet(); from createBaseSessionFactory
As I already mentioned, I used your code and created a small sample that works for me. Judging by the classes used, I assumed you are using the Hibernate Generic DAO Framework. It's a standalone sample; the main class is Main. Running it, you can see the transaction-related DEBUG messages in the logs that show when a transaction is initiated and committed. You can compare my settings and jar versions with what you have and see if anything stands out.
Also, as I already suggested, you might want to look in the logs to see if proper transactional behavior is being used and compare that with the logs my sample creates.
I tried to replicate your problem, so I integrated it into my Hibernate examples on GitHub.
You can run my CompanySchedulerTest and see it working. This is what I did to run it:
I made sure the application context is aware of our scheduler:
<task:annotation-driven/>
The scheduler is defined in its own bean:
@Service
public class CompanyScheduler implements DisposableBean {

    private static final Logger LOG = LoggerFactory.getLogger(CompanyScheduler.class);

    @Autowired
    private CompanyManager companyManager;

    private volatile boolean enabled = true;

    @Override
    public void destroy() throws Exception {
        enabled = false;
    }

    @Scheduled(fixedRate = 100)
    public void run() {
        if (enabled) {
            LOG.info("Run scheduler");
            companyManager.updateAllUsers();
        }
    }
}
My JPA/Hibernate configs are in applicationContext-test.xml, and they are configured for JPA according to the Spring Framework recommendations, so you might want to double-check your Hibernate settings as well.
