I am trying to integrate the Elasticsearch JDBC driver "org.elasticsearch.xpack.sql.jdbc.EsDriver" from
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>x-pack-sql-jdbc</artifactId>
<version>7.10.0</version>
</dependency>
into my Spring Boot app using Hibernate.
In my spring configuration bean I have the following:
@Bean
@ConfigurationProperties(prefix = "db.elastic")
@Qualifier("elasticDataSource")
@Primary
public DataSource elasticDataSource() {
return DataSourceBuilder.create()
.build();
}
public LocalContainerEntityManagerFactoryBean elasticEntityManagerFactory(
EntityManagerFactoryBuilder builder) {
Map<String, Object> properties = new HashMap<>();
properties.put(AvailableSettings.HBM2DDL_AUTO, "none");
properties.put(AvailableSettings.HBM2DLL_CREATE_SCHEMAS, "false");
properties.put(AvailableSettings.DIALECT, org.elasticsearch.xpack.sql.jdbc.EsDriver.class.getName());
return builder
.dataSource(elasticDataSource())
.packages(Issuer.class)
.persistenceUnit("elastic")
.properties(properties)
.build();
}
However, when I run this code I get the following exception:
Caused by: org.hibernate.service.spi.ServiceException: Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:275)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:237)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214)
at org.hibernate.id.factory.internal.DefaultIdentifierGeneratorFactory.injectServices(DefaultIdentifierGeneratorFactory.java:152)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.injectDependencies(AbstractServiceRegistryImpl.java:286)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:243)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214)
at org.hibernate.boot.internal.InFlightMetadataCollectorImpl.<init>(InFlightMetadataCollectorImpl.java:179)
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:119)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:904)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:935)
at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:57)
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:390)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:377)
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.afterPropertiesSet(LocalContainerEntityManagerFactoryBean.java:341)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1837)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1774)
... 16 common frames omitted
Caused by: org.hibernate.HibernateException: Unable to construct requested dialect [org.elasticsearch.xpack.sql.jdbc.EsDriver]
at org.hibernate.engine.jdbc.dialect.internal.DialectFactoryImpl.constructDialect(DialectFactoryImpl.java:84)
at org.hibernate.engine.jdbc.dialect.internal.DialectFactoryImpl.buildDialect(DialectFactoryImpl.java:51)
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:137)
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:35)
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.initiateService(StandardServiceRegistryImpl.java:94)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:263)
... 33 common frames omitted
Caused by: java.lang.ClassCastException: org.elasticsearch.xpack.sql.jdbc.EsDriver cannot be cast to org.hibernate.dialect.Dialect
at org.hibernate.engine.jdbc.dialect.internal.DialectFactoryImpl.constructDialect(DialectFactoryImpl.java:74)
... 38 common frames omitted
I assume this is because the driver isn't compatible with Hibernate. Am I correct, or is there some other configuration that must be done to work around the problem?
There is also a commercially available JDBC driver here: https://www.cdata.com/drivers/elasticsearch/jdbc/
Has anyone got any experience with this driver and its compatibility with Hibernate?
I have no experience with the Elasticsearch driver; however, I can tell you why you are getting the error.
Your error is: Caused by: java.lang.ClassCastException: org.elasticsearch.xpack.sql.jdbc.EsDriver cannot be cast to org.hibernate.dialect.Dialect
This is because you have a problem in your properties:
properties.put(AvailableSettings.DIALECT, org.elasticsearch.xpack.sql.jdbc.EsDriver.class.getName());
EsDriver.class.getName() is not a Hibernate dialect; it is a JDBC driver class.
An example Hibernate dialect is org.hibernate.dialect.MySQL5Dialect.
Kindly read https://www.elastic.co/guide/en/elasticsearch/reference/current/sql-jdbc.html
Seeing that you are using Spring Boot, you might not even have to configure the datasource/entity manager manually; simply adding the dependency will auto-configure it.
There is no Hibernate dialect for Elasticsearch SQL, but you can try using org.hibernate.dialect.SQLServerDialect as suggested in https://www.cdata.com/kb/tech/elasticsearch-jdbc-hibernate.rst
In the end, you will probably have to override a few settings in the dialect to fit the SQL dialect that is supported by ES.
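A minimal sketch of what such an override could look like (the class name is hypothetical, and this assumes Hibernate 5.x on the classpath; which methods you actually need to override depends on the queries your app generates):

```java
import org.hibernate.dialect.SQLServerDialect;

// Hypothetical starting point for an Elasticsearch SQL dialect.
public class ElasticsearchDialect extends SQLServerDialect {

    // ES SQL rejects a bind parameter in LIMIT, so ask Hibernate to
    // inline the literal value instead of binding "?".
    @Override
    public boolean supportsVariableLimit() {
        return false;
    }

    // ES SQL has no sequences.
    @Override
    public boolean supportsSequences() {
        return false;
    }
}
```

You would then register this class with properties.put(AvailableSettings.DIALECT, ElasticsearchDialect.class.getName()) instead of the driver class.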
So I've found that org.hibernate.dialect.H2Dialect or org.hibernate.dialect.PostgreSQL9Dialect work, but only up to a point.
Firstly, you must also include escaped double quotes in entity names:
@Data
@Table(name = "\"mytable\"")
@Entity
public class MyTable {
}
Furthermore, I've found that setting maxRows doesn't work. For some reason this value isn't bound to the limit in the generated SQL:
18:14:29.320 [http-nio-8081-exec-1] DEBUG org.hibernate.SQL -
select
mytabl_.NAME as col_0_0_
from
"mytable" mytabl_
order by
mytabl_.NAME asc limit ?
18:14:29.651 [http-nio-8081-exec-1] WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Error: 0, SQLState: null
18:14:29.651 [http-nio-8081-exec-1] ERROR o.h.e.jdbc.spi.SqlExceptionHelper - line 1:108: mismatched input '?' expecting {'ALL', INTEGER_VALUE}
18:14:29.658 [http-nio-8081-exec-1] WARN g.e.SimpleDataFetcherExceptionHandler - Exception while fetching data (/Issuers) : line 1:108: mismatched input '?' expecting {'ALL', INTEGER_VALUE}
org.hibernate.JDBCException: line 1:108: mismatched input '?' expecting {'ALL', INTEGER_VALUE}
Anyone have any ideas as to why this might be?
Related
I am using Spring Boot 2.4.4 and Spring Data Cassandra dependency to connect to the Cassandra database. During the application startup, I am getting a DriverTimeout error (I am using VPN).
I have gone through all the Stack Overflow questions similar to this and none of them worked for me. I have cross-posted the same question on the Spring Boot official page here.
I used the configuration properties below -
spring.data.cassandra.contact-points=xxxxxx
spring.data.cassandra.username=xxxx
spring.data.cassandra.password=xxxxx
spring.data.cassandra.keyspace-name=xxxx
spring.data.cassandra.port=9042
spring.data.cassandra.schema-action=NONE
spring.data.cassandra.local-datacenter=mydc
spring.data.cassandra.connection.connect-timeout=PT10S
spring.data.cassandra.connection.init-query-timeout=PT20S
spring.data.cassandra.request.timeout=PT10S
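As an aside, values like PT10S are ISO-8601 durations, the same format java.time.Duration uses, and the format the PT2S in the driver's error message comes from. A quick sketch:

```java
import java.time.Duration;

public class DurationFormatDemo {
    public static void main(String[] args) {
        // "PT10S" is ISO-8601 for a period of 10 seconds
        System.out.println(Duration.parse("PT10S").getSeconds()); // 10
        // "PT2S", the driver's default request timeout, is 2 seconds
        System.out.println(Duration.parse("PT2S").toMillis());    // 2000
    }
}
```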
I also added DataStax properties in the application.properties to check if they can be picked up from there or not.
datastax-java-driver.basic.request.timeout = 10 seconds
datastax-java-driver.advanced.connection.init-query-timeout = 10 seconds
datastax-java-driver.advanced.control-connection.timeout = 10 seconds
Below is the configuration I used, as suggested in the post here -
@EnableCassandraRepositories
public class CassandraConfig {
@Bean
DriverConfigLoaderBuilderCustomizer cassandraDriverCustomizer() {
return (builder) -> builder.withDuration(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT,
Duration.ofSeconds(30));
}
}
But I still get the same error:
Caused by: com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S
I also tried different approaches, like creating a custom CqlSessionFactoryBean and providing all the DataStax properties programmatically to override the defaults -
@EnableCassandraRepositories
public class CassandraConfig extends AbstractCassandraConfiguration {
@Bean(name = "session")
@Primary
public CqlSessionFactoryBean cassandraSession() {
CqlSessionFactoryBean factory = new CqlSessionFactoryBean();
factory.setUsername(userName);
factory.setPassword(password);
factory.setPort(port);
factory.setKeyspaceName(keyspaceName);
factory.setContactPoints(contactPoints);
factory.setLocalDatacenter(dataCenter);
factory.setSessionBuilderConfigurer(getSessionBuilderConfigurer()); // my session builder configurer
return factory;
}
// And provided my own SessionBuilder Configurer like below
protected SessionBuilderConfigurer getSessionBuilderConfigurer() {
return new SessionBuilderConfigurer() {
@Override
public CqlSessionBuilder configure(CqlSessionBuilder cqlSessionBuilder) {
ProgrammaticDriverConfigLoaderBuilder config = DriverConfigLoader.programmaticBuilder()
.withDuration(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, Duration.ofSeconds(30))
.withBoolean(DefaultDriverOption.RECONNECT_ON_INIT, true)
.withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(30))
.withDuration(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, Duration.ofSeconds(20));
return cqlSessionBuilder.withAuthCredentials(userName, password).withConfigLoader(config.build());
}
};
}
}
It didn't work; same error. I also excluded the Cassandra auto-configuration classes as suggested here on Stack Overflow.
I also tried customizing the session builder like below to see if that would work -
@Bean
public CqlSessionBuilderCustomizer cqlSessionBuilderCustomizer() {
return cqlSessionBuilder -> cqlSessionBuilder.withAuthCredentials(userName, password)
.withConfigLoader(DriverConfigLoader.programmaticBuilder()
.withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofMillis(15000))
.withDuration(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, Duration.ofSeconds(30))
.withBoolean(DefaultDriverOption.RECONNECT_ON_INIT, true)
.withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(30))
.withDuration(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, Duration.ofSeconds(20)).build());
}
Still no luck.
Not only that, I also added an application.conf file on the classpath, as the DataStax documentation suggests. That file is definitely being parsed (after introducing a deliberate syntax error I could see it was being read), but it didn't work either.
application.conf:
datastax-java-driver {
basic.request.timeout = 10 seconds
advanced.connection.init-query-timeout = 10 seconds
advanced.control-connection.timeout = 10 seconds
}
I also switched my Spring Boot version to 2.5.0.M3 to see if the property files work there; they do not. I have pushed my project to my GitHub account.
Update
As per the comment, I am pasting my whole stack trace. Also, this does not happen all the time; sometimes it works and sometimes it does not. I need to override the timeout from PT2S to PT10S or so.
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraConverter' defined in class path resource [com/example/demo/CassandraConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.data.cassandra.core.convert.CassandraConverter]: Factory method 'cassandraConverter' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession' defined in class path resource [com/example/demo/CassandraConfig.class]: Invocation of init method failed; nested exception is com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:656) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:484) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1338) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1177) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:557) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:226) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:895) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878) ~[spring-context-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) ~[spring-context-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:758) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:750) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.3.0.RELEASE.jar:2.3.0.RELEASE]
at com.example.demo.SpringCassandraTestingApplication.main(SpringCassandraTestingApplication.java:13) [classes/:na]
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.data.cassandra.core.convert.CassandraConverter]: Factory method 'cassandraConverter' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession' defined in class path resource [com/example/demo/CassandraConfig.class]: Invocation of init method failed; nested exception is com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:651) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
... 19 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession' defined in class path resource [com/example/demo/CassandraConfig.class]: Invocation of init method failed; nested exception is com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1796) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:226) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:227) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveNamedBean(DefaultListableBeanFactory.java:1174) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveBean(DefaultListableBeanFactory.java:422) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:352) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:345) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.data.cassandra.config.AbstractSessionConfiguration.requireBeanOfType(AbstractSessionConfiguration.java:100) ~[spring-data-cassandra-3.0.0.RELEASE.jar:3.0.0.RELEASE]
at org.springframework.data.cassandra.config.AbstractSessionConfiguration.getRequiredSession(AbstractSessionConfiguration.java:200) ~[spring-data-cassandra-3.0.0.RELEASE.jar:3.0.0.RELEASE]
at org.springframework.data.cassandra.config.AbstractCassandraConfiguration.cassandraConverter(AbstractCassandraConfiguration.java:73) ~[spring-data-cassandra-3.0.0.RELEASE.jar:3.0.0.RELEASE]
at com.example.demo.CassandraConfig$$EnhancerBySpringCGLIB$$cec229ff.CGLIB$cassandraConverter$12(<generated>) ~[classes/:na]
at com.example.demo.CassandraConfig$$EnhancerBySpringCGLIB$$cec229ff$$FastClassBySpringCGLIB$$faa9c2c1.invoke(<generated>) ~[classes/:na]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) ~[spring-core-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at com.example.demo.CassandraConfig$$EnhancerBySpringCGLIB$$cec229ff.cassandraConverter(<generated>) ~[classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_275]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_275]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_275]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_275]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
... 20 common frames omitted
Caused by: com.datastax.oss.driver.api.core.DriverTimeoutException: query 'SELECT * FROM system_schema.tables' timed out after PT2S
at com.datastax.oss.driver.api.core.DriverTimeoutException.copy(DriverTimeoutException.java:34) ~[java-driver-core-4.6.1.jar:na]
at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149) ~[java-driver-core-4.6.1.jar:na]
at com.datastax.oss.driver.api.core.session.Session.refreshSchema(Session.java:140) ~[java-driver-core-4.6.1.jar:na]
at org.springframework.data.cassandra.config.CqlSessionFactoryBean.afterPropertiesSet(CqlSessionFactoryBean.java:437) ~[spring-data-cassandra-3.0.0.RELEASE.jar:3.0.0.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1855) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1792) ~[spring-beans-5.2.6.RELEASE.jar:5.2.6.RELEASE]
... 43 common frames omitted
I am answering my own question here to make this complete and let others know how I fixed this particular problem.
I am using Spring Boot 2.4.5, and I started facing this timeout issue when I upgraded to 2.3+.
Based on my experience with this issue, below is what I found.
Irrespective of the timeouts you provide in application.properties or application.conf (the DataStax notation), they all somehow get overridden by Spring Boot, or perhaps the default value from the DataStax driver is selected instead.
There is even an issue on the official Spring Boot project about this problem (check here), which was later fixed in the 2.5.0.M1 version.
My problem got fixed when I passed this as a VM argument:
$ java -Ddatastax-java-driver.basic.request.timeout="15 seconds" -jar application.jar
I passed other params as well, like advanced.control-connection.timeout, as suggested on a different forum, but that didn't work for me. Check the reference manual here for other config params.
I am only getting this error locally, so I passed this as a VM argument in Eclipse and then I didn't see the error any more.
Also, if I reduce this time to 7-8 seconds, I sometimes see the PT2S error again. It seems that exception message is hardcoded somewhere, irrespective of the timeout value you pass (that is my observation).
Update: Solution 2, which I figured out later; I see many people have answered this too.
The actual key that DataStax provides is given below, and this works:
@Bean
public DriverConfigLoaderBuilderCustomizer defaultProfile(){
return builder -> builder.withString(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT, "3 seconds").build();
}
Here is an expanded answer, building on the answer that provided the right fix.
Increase METADATA_SCHEMA_REQUEST_TIMEOUT, because the query stated in the problem is a metadata-schema query, which was not explicit:
@Override
protected SessionBuilderConfigurer getSessionBuilderConfigurer() {
return new SessionBuilderConfigurer() {
@Override
public CqlSessionBuilder configure(CqlSessionBuilder cqlSessionBuilder) {
logger.info("Configuring CqlSession Builder");
return cqlSessionBuilder
.withConfigLoader(DriverConfigLoader.programmaticBuilder()
// Resolves the timeout query 'SELECT * FROM system_schema.tables' timed out after PT2S
.withDuration(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT, Duration.ofMillis(60000))
.withDuration(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, Duration.ofMillis(60000))
.withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofMillis(15000))
.build());
}
};
}
The DriverTimeoutException gets thrown when the driver doesn't get a reply from the coordinator node. It uses the basic request timeout default of 2 seconds:
datastax-java-driver {
  basic.request {
    timeout = 2 seconds
  }
}
The fact that the timeout is 2 seconds means that none of your overrides are getting picked up, but I haven't quite figured out why yet.
More importantly, it's a different error from a read or write timeout exception, which occur when not enough replicas responded to satisfy the required consistency level; in either of those cases, the coordinator replies back to the driver with the exception.
In my experience, a DriverTimeoutException is caused by (a) unresponsive nodes, and/or (b) overloaded coordinator.
If the app is running an expensive query, that could be the reason the coordinator doesn't respond in time. In that case, your overrides not working is not the problem you need to solve: in Cassandra terms, 2 seconds is an eternity for app requests, so you need to make sure you're not overloading your cluster. That's the problem to solve. Cheers!
Try creating a bean as below:
@Bean
public DriverConfigLoaderBuilderCustomizer defaultProfile(){
return builder -> builder.withString(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT, "3 seconds").build();
}
or
@Bean
public DriverConfigLoaderBuilderCustomizer defaultProfile(){
return builder -> builder.withInt(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT, 3000).build();
}
You need to add this to the session builder configurer:
withDuration(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT, Duration.ofSeconds(XX))
So I see two possibilities here.
The most intriguing part of this question...
...is the SELECT statement:
SELECT * FROM system_schema.tables
The keyspace definition of system_schema is this:
CREATE KEYSPACE system_schema WITH replication = {'class': 'LocalStrategy'}
Was this changed to Simple or NetworkTopology replication? If so, that could specifically be causing the timeouts.
The system_schema keyspace was introduced in newer versions of Cassandra (2.2+, I think). Is it possible you're using an older version?
Basically, make sure the system_schema is set with its appropriate, default replication. Also, make sure to use a version of Spring Data Cassandra that is known to work with your version of Cassandra.
Also, I'd recommend trying this without Spring Data Cassandra. I'm curious to see if there's any difference between that and just using the pure DS Java Driver.
Increasing the request timeout does solve the problem.
But the main reason we got the timeout is that the default value of DefaultDriverOption.METADATA_SCHEMA_ENABLED is true.
Overriding the value to false will speed up startup.
Example in Kotlin below:
class CassandraConfiguration(
private val cassandraProperties: CassandraProperties,
) : AbstractReactiveCassandraConfiguration() {
...
...
override fun getDriverConfigLoaderBuilderConfigurer(): DriverConfigLoaderBuilderConfigurer? {
return DriverConfigLoaderBuilderConfigurer{ builder: ProgrammaticDriverConfigLoaderBuilder ->
builder
.withString(DefaultDriverOption.METADATA_SCHEMA_ENABLED, "false")
.build()
}
}
}
I am using "Spring Boot + Hibernate 4 + MySQL" for my application. I have a requirement that my Spring Boot app should be able to start even when the database is down. Currently it throws the exception below when I try to start the app without the DB being up.
I researched a lot and found out that this exception has to do with the hibernate.temp.use_jdbc_metadata_defaults property.
I tried setting it in the application.yml of my Spring Boot app, but this property's value is not being reflected at runtime.
Exception Stack Trace:
2014-05-25 04:09:43.193 INFO 59755 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {4.0.4.Final}
2014-05-25 04:09:43.250 WARN 59755 --- [ main] o.h.e.jdbc.internal.JdbcServicesImpl : HHH000342: Could not obtain connection to query metadata : Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
2014-05-25 04:09:43.263 INFO 59755 --- [ main] o.apache.catalina.core.StandardService : Stopping service Tomcat
Error starting ApplicationContext. To display the auto-configuration report enabled debug logging (start with --debug)
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaAutoConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1553)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:973)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:750)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:120)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:648)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:311)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:909)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:898)
at admin.Application.main(Application.java:36)
Caused by: org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set
at org.hibernate.engine.jdbc.dialect.internal.DialectFactoryImpl.determineDialect(DialectFactoryImpl.java:104)
at org.hibernate.engine.jdbc.dialect.internal.DialectFactoryImpl.buildDialect(DialectFactoryImpl.java:71)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.configure(JdbcServicesImpl.java:205)
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:89)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:206)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:178)
at org.hibernate.cfg.Configuration.buildTypeRegistrations(Configuration.java:1885)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1843)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:850)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:843)
at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.withTccl(ClassLoaderServiceImpl.java:399)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:842)
at org.hibernate.jpa.HibernatePersistenceProvider.createContainerEntityManagerFactory(HibernatePersistenceProvider.java:150)
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:336)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:318)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1612)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1549)
... 15 more
application.yml:
spring:
  jpa:
    show-sql: true
    hibernate:
      ddl-auto: none
      naming_strategy: org.hibernate.cfg.DefaultNamingStrategy
      temp:
        use_jdbc_metadata_defaults: false
It was indeed a tough nut to crack.
It took a lot of research, and actual debugging through Spring Boot, Spring, Hibernate, the Tomcat pool, etc., to get it done.
I think this will save a lot of time for people trying to achieve this type of requirement.
Below are the settings required to achieve the following:
Spring Boot apps will start fine even if the DB is down or there is no DB.
Apps will pick up connections on the fly as the DB comes up, which means there is no need to restart the web server or redeploy the apps.
There is no need to restart Tomcat or redeploy the apps if the DB goes down from a running state and comes up again.
application.yml:
spring:
  datasource:
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/schema
    username: root
    password: root
    continueOnError: true
    initialize: false
    initialSize: 0
    timeBetweenEvictionRunsMillis: 5000
    minEvictableIdleTimeMillis: 5000
    minIdle: 0
  jpa:
    show-sql: true
    hibernate:
      ddl-auto: none
      naming_strategy: org.hibernate.cfg.DefaultNamingStrategy
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5Dialect
        hbm2ddl:
          auto: none
        temp:
          use_jdbc_metadata_defaults: false
I am answering here and will close the issue that you cross-posted.
Any "native" property of the JPA implementation (Hibernate) can be set using the spring.jpa.properties prefix, as explained here.
I haven't looked much further into the actual issue here, but to answer this particular question, you can set that Hibernate key as follows:
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=false
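Applied to the Java configuration shown in the question, the same key can be set programmatically. This is only a sketch; note that hibernate.dialect must name a Hibernate Dialect subclass, whereas the question's snippet passes the JDBC driver class (EsDriver) instead:

```java
Map<String, Object> properties = new HashMap<>();
properties.put(AvailableSettings.HBM2DDL_AUTO, "none");
// Skip the JDBC metadata lookup that fails when no dialect can be resolved:
properties.put("hibernate.temp.use_jdbc_metadata_defaults", false);
// hibernate.dialect (AvailableSettings.DIALECT) must be a
// org.hibernate.dialect.Dialect subclass, NOT the JDBC driver class.
```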
Adding this alone worked for me:
spring.jpa.properties.hibernate.dialect: org.hibernate.dialect.Oracle10gDialect
Just replace the last part with your database dialect.
I used an "application.properties" file that includes the following lines:
app.sqlhost=192.168.10.11
app.sqlport=3306
app.sqldatabase=logs
spring.main.web-application-type=none
# Datasource
spring.datasource.url=jdbc:mysql://${app.sqlhost}:${app.sqlport}/${app.sqldatabase}
spring.datasource.username=user
spring.datasource.password=password
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.hbm2ddl.auto = none
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults = false
spring.datasource.continue-on-error=true
spring.datasource.initialization-mode=never
spring.datasource.hikari.connection-timeout=5000
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.max-lifetime=1800000
spring.datasource.hikari.initialization-fail-timeout= -1
spring.jpa.hibernate.use-new-id-generator-mappings=true
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=true
spring.output.ansi.enabled=always
Note, however, that you cannot use the @Transactional annotation at class level:
@Service
//@Transactional // do not use at class level; it would touch the Repository
@EnableAsync
@Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
public class LogService {
    // ...

    @Async
    @Transactional // you can use it at method level
    public void deleteLogs() {
        logRepository.deleteAllBy ...
    }
}
Adding the following config should work:
spring.jpa.database-platform: org.hibernate.dialect.MySQL5Dialect
I'm using Spring Data MongoDB (spring-boot-starter-data-mongodb from Spring Boot 1.5.2.RELEASE) and MongoDB 3.4.9, and have defined a repository that looks like this:
interface MyMongoDBRepository extends CrudRepository<MyDTO, String> {
Stream<MyDTO> findAllByCategory(String category);
}
I then have a service, MyService, that interacts with this repository:
@Service
class MyService {

    @Autowired
    MyMongoDBRepository repo;

    public void doStuff() {
        repo.findAllByCategory("category")
            .map(..)
            .filter(..)
            .forEach(..)
    }
}
There's quite a lot of data in the database, and sometimes this error occurs:
2018-01-01 18:16:56.631 ERROR 1 --- [ask-scheduler-6] o.s.integration.handler.LoggingHandler : org.springframework.dao.DataAccessResourceFailureException:
Query failed with error code -5 and error message 'Cursor 73973161000 not found on server <mongodb-server>' on server <mongodb-server>;
nested exception is com.mongodb.MongoCursorNotFoundException:
Query failed with error code -5 and error message 'Cursor 73973161000 not found on server <mongodb-server>' on server <mongodb-server>
at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:77)
at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2135)
at org.springframework.data.mongodb.core.MongoTemplate.access$1100(MongoTemplate.java:147)
at org.springframework.data.mongodb.core.MongoTemplate$CloseableIterableCursorAdapter.hasNext(MongoTemplate.java:2506)
at java.util.Iterator.forEachRemaining(Iterator.java:115)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at com.mycompany.MyService.doStuff(MyService.java:108)
at com.mycompany.AnotherService.doStuff(AnotherService.java:42)
at sun.reflect.GeneratedMethodAccessor2026.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:65)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748) Caused by: com.mongodb.MongoCursorNotFoundException: Query failed with error code -5 and error message 'Cursor 73973161000 not found on server <mongodb-server>' on server <mongodb-server>
at com.mongodb.operation.QueryHelper.translateCommandException(QueryHelper.java:27)
at com.mongodb.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:213)
at com.mongodb.operation.QueryBatchCursor.hasNext(QueryBatchCursor.java:103)
at com.mongodb.MongoBatchCursorAdapter.hasNext(MongoBatchCursorAdapter.java:46)
at com.mongodb.DBCursor.hasNext(DBCursor.java:145)
at org.springframework.data.mongodb.core.MongoTemplate$CloseableIterableCursorAdapter.hasNext(MongoTemplate.java:2504) ... 24 more
I've read in various places that, when using the vanilla MongoDB Java client, you can configure the MongoDB cursor to either have no timeout or use a batch size, which hopefully mitigates this.
If this is the way to go, how can I supply cursor options when returning a Stream from Spring Data MongoDB?
Your error occurs because you are processing the stream too slowly, so the cursor times out before you get to the next batch.
The batch size can be set on the Spring Data Query object, or on a repository method using the @Meta annotation. For example:
Query query = query(where("firstname").is("luke"))
.batchSize(100);
Or when using repositories:
@Meta(batchSize = 100)
List<Person> findByFirstname(String firstname);
See Spring Data MongoDB documentation for more details.
The cursor timeout can also be disabled on a per-query basis using the same mechanism, e.g. @Meta(flags = {CursorOption.NO_TIMEOUT}).
The cursor timeout cannot be changed on a per-query basis; it is a server configuration. You need to use the cursorTimeoutMillis server parameter to change it server-wide.
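For example, the server-wide cursor timeout could be raised when starting mongod (the value is in milliseconds; 1800000 here is just an illustration):

```shell
mongod --setParameter cursorTimeoutMillis=1800000
```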
Regarding the two options you mentioned:
Batch size: you cannot set the batch size through the Repository class, but you can do it using MongoTemplate. Something like this:
final DBCursor cursor = mongoTemplate
        .getCollection(collectionName)
        .find(queryBuilder.get(), projection)
        .batchSize(readBatchSize);
while (cursor.hasNext()) {
    // process cursor.next()
}
But to use MongoTemplate you need to create a Custom Repository.
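A minimal sketch of such a custom repository fragment, assuming the Spring Data MongoDB 1.x API used in this question; every name here (interface, collection, field) is made up for illustration:

```java
// Fragment interface that the main repository will also extend
public interface MyDTORepositoryCustom {
    void processAllByCategory(String category, java.util.function.Consumer<MyDTO> consumer);
}

// The "Impl" suffix is the Spring Data naming convention for fragment implementations
public class MyDTORepositoryImpl implements MyDTORepositoryCustom {

    private final MongoTemplate mongoTemplate;

    public MyDTORepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public void processAllByCategory(String category, java.util.function.Consumer<MyDTO> consumer) {
        DBCursor cursor = mongoTemplate
                .getCollection("myDTO")
                .find(new BasicDBObject("category", category))
                .batchSize(100); // small batches keep the cursor alive while processing
        try {
            while (cursor.hasNext()) {
                consumer.accept(mongoTemplate.getConverter().read(MyDTO.class, cursor.next()));
            }
        } finally {
            cursor.close(); // always release the server-side cursor
        }
    }
}
```

The application's repository would then extend both CrudRepository<MyDTO, String> and MyDTORepositoryCustom, and Spring Data wires the fragment in automatically.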
Regarding cursor timeout, you can do something like this:
@Configuration
public class MongoDbSettings {

    @Bean
    public MongoClientOptions mongoOptions() {
        return MongoClientOptions.builder().socketTimeout(5000).build();
    }
}
There are many other options (heartbeat, connection timeout) you can set for Mongo. You can put those values in your application.properties file and bind them with @Value in the class above instead of hardcoding them.
Unfortunately, Spring Boot doesn't provide a way to specify these in the application.properties file directly.
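A sketch of that binding; the property names (mongo.socket-timeout, mongo.connect-timeout) and their defaults are made up for illustration:

```java
@Configuration
public class MongoDbSettings {

    // Values come from application.properties, with fallback defaults after the colon
    @Value("${mongo.socket-timeout:5000}")
    private int socketTimeout;

    @Value("${mongo.connect-timeout:10000}")
    private int connectTimeout;

    @Bean
    public MongoClientOptions mongoOptions() {
        return MongoClientOptions.builder()
                .socketTimeout(socketTimeout)
                .connectTimeout(connectTimeout)
                .build();
    }
}
```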
You don't need to supply cursor options when returning a Stream from Spring Data MongoDB. The likely cause of this exception is how your service reads data from Mongo. Possible reasons:
You are sharing a single cursor across multiple threads
You are requesting too many elements at once
There is a load balancer in front of the Mongo server
See this Jira ticket's comments for some ideas and directions applicable to your application.
I have an application that runs locally with a bean called cacheManager in Application.java for Spring Boot:
@Bean(name = "cacheManager")
@Primary
public CacheManager getCacheManager() {
    return new EhCacheCacheManager();
}
Since it worked locally, I deployed it to a server, where apparently another application with a CacheManager is competing for its space,
because I get the following stacktrace:
Caused by: net.sf.ehcache.CacheException: Another unnamed CacheManager
already exists in the same VM. Please provide unique names for each
CacheManager in the config or do one of following:
1. Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary
2. Shutdown the earlier cacheManager before creating new one with same name. The source of the existing CacheManager is:
DefaultConfigurationSource [ ehcache.xml or ehcache-failsafe.xml ] at
net.sf.ehcache.CacheManager.assertNoCacheManagerExistsWithSameName(CacheManager.java:626)
at net.sf.ehcache.CacheManager.init(CacheManager.java:391) at
net.sf.ehcache.CacheManager.(CacheManager.java:269) at
org.springframework.cache.ehcache.EhCacheManagerUtils.buildCacheManager(EhCacheManagerUtils.java:54)
at
org.springframework.cache.ehcache.EhCacheCacheManager.afterPropertiesSet(EhCacheCacheManager.java:74)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1687)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1624)
... 32 common frames omitted
I attempted to use
@Bean(name = "cacheManager")
@Primary
public CacheManager getCacheManager() {
    return net.sf.ehcache.CacheManager.create();
}
but net.sf.ehcache.CacheManager.create() doesn't return a Spring CacheManager. I tried changing the return type to net.sf.ehcache.CacheManager, but then I get this locally:
Caused by: java.lang.IllegalStateException: No CacheResolver
specified, and no unique bean of type CacheManager found. Mark one as
primary (or give it the name 'cacheManager') or declare a specific
CacheManager to use, that serves as the default one. at
org.springframework.cache.interceptor.CacheAspectSupport.afterSingletonsInstantiated(CacheAspectSupport.java:212)
at
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:781)
at
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:866)
at
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:542)
at
org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
at
org.springframework.boot.SpringApplication.refresh(SpringApplication.java:737)
at
org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:370)
at
org.springframework.boot.SpringApplication.run(SpringApplication.java:314)
at
org.springframework.boot.web.support.SpringBootServletInitializer.run(SpringBootServletInitializer.java:151)
at
org.springframework.boot.web.support.SpringBootServletInitializer.createRootApplicationContext(SpringBootServletInitializer.java:131)
at
org.springframework.boot.web.support.SpringBootServletInitializer.onStartup(SpringBootServletInitializer.java:86)
at
org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:169)
at
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5156)
at
org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 42 more
I think converting between the two types is the answer, but it could also be some sly code move.
Suggestions?
Extra information: this is in a web service.
Unless you deploy an ehcache.xml configuration file for Ehcache, you get the default embedded configuration. This configuration does not name the CacheManager and, as the first exception indicates, you cannot have more than one unnamed CacheManager in a single JVM.
The easiest solution is to provide an ehcache.xml, not inside a package, so it will be picked up by your deployment.
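A minimal ehcache.xml that names the CacheManager might look like this; the name value and cache settings are arbitrary examples:

```xml
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd"
         name="myAppCacheManager">
    <!-- naming the manager avoids the "Another unnamed CacheManager" clash -->
    <defaultCache maxEntriesLocalHeap="1000"
                  timeToLiveSeconds="600"/>
</ehcache>
```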
The answer to my problem was to let Spring decide the cache manager, so all I needed to do was add @EnableCaching to my Application.java and then use @Cacheable on the methods I wanted to cache on the server.
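In other words, something along these lines (the service, cache name and method are placeholders, not from the original application):

```java
@SpringBootApplication
@EnableCaching // lets Spring Boot auto-configure a single CacheManager
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Service
class ReportService {

    @Cacheable("reports") // the result is cached per id argument
    public String loadReport(String id) {
        // ...expensive lookup here...
        return "report-" + id;
    }
}
```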
We are trying to set up Arquillian for our projects to run automated tests. We would like to use the Arquillian persistence extension to write tests against the persistence layer, seeding the database with the @UsingDataSet and/or @CreateSchema annotations.
Each of our application components has its own database user, which only has access to those tables/attributes the component needs. None of the components has the right to execute delete or DDL statements. So we need to switch between database users/datasources to seed/clean the schema before/after the tests and to execute the tests, like this:
Seed the database, drop and recreate sequences using datasource A
Run the test using datasource B
Clean the database using datasource A
It should be obvious that if we granted the needed delete/DDL rights to the component database user for the Arquillian tests, the test results would not be reliable by definition.
So how can we use different datasources, defined in the arquillian.xml, to seed/clean the database and to run the tests?
The lecturer of an Arquillian training course I attended mentioned that it should be no problem to define different datasources for seeding/cleaning and for the PersistenceContext of the EJBs. So I sat down to test this.
TL;DR: It's possible to just use two different datasources.
Here are my test setup and the results of my tests.
Local test setup
Database
As the database I installed an Oracle XE, since we use Oracle databases in my company. As the database users of the components don't have their own schemas but access the tables of the schema owner, I created three database users:
User "bish" is the owner of the schema "bish", which contains the empty table "Emp" I use in the test
User "readinguser", which got SELECT, INSERT, UPDATE privileges on the table "bish.Emp"
User "writinguser", which got SELECT, INSERT, UPDATE, DELETE privileges on the table "bish.Emp"
Application server
As an application server I used a WildFly 10.x and defined two data sources, one for each of my two test users:
<datasource jndi-name="java:/ReadingDS" pool-name="ReadingDS" enabled="true">
<connection-url>jdbc:oracle:thin:@localhost:1521:xe</connection-url>
<driver>oracle</driver>
<pool>
<min-pool-size>1</min-pool-size>
<max-pool-size>5</max-pool-size>
<prefill>true</prefill>
</pool>
<security>
<user-name>readinguser</user-name>
<password>oracle</password>
</security>
</datasource>
<datasource jndi-name="java:/WritingDS" pool-name="WritingDS" enabled="true">
<connection-url>jdbc:oracle:thin:@localhost:1521:xe</connection-url>
<driver>oracle</driver>
<pool>
<min-pool-size>1</min-pool-size>
<max-pool-size>5</max-pool-size>
<prefill>true</prefill>
</pool>
<security>
<user-name>writinguser</user-name>
<password>oracle</password>
</security>
</datasource>
Test application
Then I wrote a small application with an entity, an EJB, a persistence.xml, an arquillian.xml, a dataset and a test class.
Entity (only the table definition with explicit schema naming is shown):
@Entity
@Table(name = "Emp", schema = "bish")
public class Emp implements Serializable {
// Straight forward entity...
}
EJB with two methods for selecting and deleting all entries:
@Stateless
@Remote(IEmpService.class)
@LocalBean
public class EmpService implements IEmpService {

    @PersistenceContext
    private EntityManager em;
public void removeAllEmps() {
em.createQuery("DELETE FROM Emp").executeUpdate();
}
public List<Emp> getAllEmps() {
return em.createQuery("FROM Emp", Emp.class).getResultList();
}
}
Persistence unit inside persistence.xml to use the "ReadingDS" inside the EJB
<persistence-unit name="ReadingUnit" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:/ReadingDS</jta-data-source>
<shared-cache-mode>NONE</shared-cache-mode>
</persistence-unit>
arquillian.xml defining the "WritingDS" as the datasource used to seed/clean the table, plus the schema definition:
<extension qualifier="persistence">
<property name="defaultDataSeedStrategy">CLEAN_INSERT</property>
<property name="defaultCleanupStrategy">USED_ROWS_ONLY</property>
<property name="defaultDataSource">java:/WritingDS</property>
</extension>
<extension qualifier="persistence-dbunit">
<property name="schema">bish</property>
</extension>
Dataset "empBefore.xml" used in test class
<?xml version="1.0" encoding="UTF-8"?>
<dataset>
<EMP EMPNO="9998" ENAME="TEst" JOB="Eins" HIREDATE="1982-01-23" SAL="1300" DEPTNO="10"/>
<EMP EMPNO="9999" ENAME="Test" JOB="Zwei" MGR="9998" HIREDATE="1982-01-23" SAL="1300" DEPTNO="10"/>
</dataset>
Test class:
@RunWith(Arquillian.class)
public class DataSourceTest {

    @Deployment
    public static JavaArchive createDeployment() {
        // ...
    }

    @EJB
    EmpService testclass;

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @UsingDataSet("empBefore.xml")
    @Test
    public void GetAllEmps() {
        List<Emp> allEmps = testclass.getAllEmps();
        Assert.assertEquals(2, allEmps.size());
    }

    @UsingDataSet("empBefore.xml")
    @Test
    public void DeleteAllEmps() {
        thrown.expect(EJBException.class);
        thrown.expectCause(CoreMatchers.isA(PersistenceException.class));
        testclass.removeAllEmps();
    }
}
The test
I first executed the GetAllEmps test method to see whether the table is correctly seeded with the data of the dataset and whether the select method of the EJB works. On my first execution I got the following exception. (Sorry for posting so much text, but it's important, see below!)
19:15:51,553 WARN [com.arjuna.ats.arjuna] (default task-38) ARJUNA012140: Adding multiple last resources is disallowed. Trying to add LastResourceRecord(XAOnePhaseResource(LocalXAResourceImpl#666ebccc[connectionListener=11852abe connectionManager=3f58cd97 warned=false currentXid=< formatId=131077, gtrid_length=29, bqual_length=36, tx_uid=0:ffffc0a80002:d99c90f:59971e1c:4c, node_name=1, branch_uid=0:ffffc0a80002:d99c90f:59971e1c:50, subordinatenodename=null, eis_name=java:/ReadingDS > productName=Oracle productVersion=Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production jndiName=java:/ReadingDS])), but already have LastResourceRecord(XAOnePhaseResource(LocalXAResourceImpl#6027d87b[connectionListener=41a0034d connectionManager=329cdd5f warned=false currentXid=< formatId=131077, gtrid_length=29, bqual_length=36, tx_uid=0:ffffc0a80002:d99c90f:59971e1c:4c, node_name=1, branch_uid=0:ffffc0a80002:d99c90f:59971e1c:4e, subordinatenodename=null, eis_name=java:/WritingDS > productName=Oracle productVersion=Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production jndiName=java:/WritingDS]))
19:15:51,554 WARN [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (default task-38) SQL Error: 0, SQLState: null
19:15:51,554 ERROR [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (default task-38) javax.resource.ResourceException: IJ000457: Unchecked throwable in managedConnectionReconnected() cl=org.jboss.jca.core.connectionmanager.listener.TxConnectionListener#11852abe[state=NORMAL managed connection=org.jboss.jca.adapters.jdbc.local.LocalManagedConnection#7fc47256 connection handles=0 lastReturned=1503076551554 lastValidated=1503075869230 lastCheckedOut=1503076551553 trackByTx=false pool=org.jboss.jca.core.connectionmanager.pool.strategy.OnePool#6893c4c mcp=SemaphoreConcurrentLinkedQueueManagedConnectionPool#61e62cb9[pool=ReadingDS] xaResource=LocalXAResourceImpl#666ebccc[connectionListener=11852abe connectionManager=3f58cd97 warned=false currentXid=null productName=Oracle productVersion=Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production jndiName=java:/ReadingDS] txSync=null]
19:15:51,554 ERROR [org.jboss.as.ejb3.invocation] (default task-38) WFLYEJB0034: EJB Invocation failed on component EmpService for method public java.util.List de.test.EmpService.getAllEmps(): javax.ejb.EJBTransactionRolledbackException: org.hibernate.exception.GenericJDBCException: Unable to acquire JDBC Connection
Thanks to this SO question I could fix the problem by setting the following system property in my WildFly:
<system-properties>
<property name="com.arjuna.ats.arjuna.allowMultipleLastResources" value="true"/>
</system-properties>
The most important thing about this exception is the fact that WildFly tries to create two connections, one for each data source (see the highlighted JNDI names in the exception text). Before setting the system property I verified this by removing the @UsingDataSet annotation. After removing it, the test case failed because the assertion (Assert.assertEquals(2, allEmps.size());) failed, as there were zero rows in the table, which indicates that no second connection was created for seeding. So I set the system property, used the dataset and got a green bar.
The second test method tries to delete all entries in the set, which must fail with an exception because the user behind the ReadingDS datasource has no rights to delete rows in the table. This test was also successful. The full exception log was:
javax.ejb.EJBTransactionRolledbackException: org.hibernate.exception.SQLGrammarException: could not execute statement
[...]
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute statement
[...]
... 187 more
Caused by: org.hibernate.exception.SQLGrammarException: could not execute statement
[...]
... 217 more
Caused by: java.sql.SQLSyntaxErrorException: ORA-01031: insufficient privileges
[...]
... 226 more
As you can see, the delete statement fails because of insufficient privileges.
Conclusion
It's possible to use different data sources to seed/clean tables by defining one datasource in the arquillian.xml and another inside a persistence-unit for an EJB. Arquillian and the application server handle these different datasources correctly.
Currently this is not supported yet (we are working on it), but in version 2.0.0 (currently in alpha stage, available on Maven Central) there is a workaround: use the programmatic way instead of the declarative way (annotations). You can see an example here: https://github.com/arquillian/arquillian-extension-persistence/blob/2.0.0/arquillian-ape-sql/container/int-tests/src/test/java/org/arquillian/integration/ape/dsl/ApeDslIncontainerTest.java