Transaction isolation level -1 not supported - Java

I connected the Spring application to SmartBear ServiceV, where the virtual data source (Postgres) was created.
Driver class: "com.smartbear.servicev.jdbc.driver.JdbcVirtDriver"
Connection String (Local servicev virtual server url): "jdbc:servicev://localhost:10080"
application.properties:
spring.datasource.driver-class-name=com.smartbear.servicev.jdbc.driver.JdbcVirtDriver
spring.datasource.url=jdbc:servicev://localhost:10080
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
spring.jpa.show-sql=true
It shows the following error when executing SQL from the Spring Boot project (only for JPA and JdbcTemplate):
Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: Exception: org.postgresql.util.PSQLException: Transaction isolation level -1 not supported.
But it works properly with the following basic code:
@Bean
public DataSource getDataSource() {
    try {
        Class.forName("com.smartbear.servicev.jdbc.driver.JdbcVirtDriver");
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    }
    DriverManagerDataSource driverManager =
            new DriverManagerDataSource("jdbc:servicev://localhost:10080", "", "");
    return driverManager;
}

I ran into the same error and tracked it down to connecting to a newer version of PostgreSQL than the code expected.
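A plausible explanation for why only JPA/JdbcTemplate fail is that the pooled DataSource Spring Boot builds (HikariCP by default in Boot 2.x) propagates the driver-reported default transaction isolation, which the virtual driver returns as -1. Assuming the default Hikari pool is in use, one workaround sketch is to pin the isolation level explicitly in application.properties (the property key is a standard Spring Boot Hikari key; READ_COMMITTED is an assumption, pick whatever level your service expects):

```properties
# Force a concrete isolation level so the pool does not rely on the
# driver-reported default (which the virtual driver reports as -1).
spring.datasource.hikari.transaction-isolation=TRANSACTION_READ_COMMITTED
```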


How to create oracle connection in java application, deployed in JBoss Server? (WrappedConnectionJDK8 cannot be cast to oracle.jdbc.OracleConnection)

I want to create an Oracle connection. Currently I am passing a JDBC connection to create a STRUCT descriptor, and I am getting the exception below. To avoid this, I need to create a java.sql.Connection or an Oracle connection directly instead of getting one from the data source.
org.jboss.resource.adapter.jdbc.jdk8.WrappedConnectionJDK8 cannot be cast to oracle.jdbc.OracleConnection
I found a solution for JDK6, but it does not work for JDK8:
How to create oracle connection in Spring application deployed in JBoss Server? (WrappedConnectionJDK6 cannot be cast to oracle.jdbc.OracleConnection)
You should use the unwrap method to obtain your instance: dataSource.getConnection().unwrap(OracleConnection.class)
If you use an application server, you can configure a DataSource and then use simple code like:
public class JDBCConnection {

    @Resource(name = "jdbc/betting-offer-db")
    private DataSource dataSource;

    public void executeQuery() {
        logger.info("Reloading {}", getCacheNames());
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(getQuery())) {
            processStatement(stmt);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
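The unwrap call relies on the java.sql.Wrapper contract that every JDBC connection implements. A minimal, self-contained sketch of that contract follows; VendorConnection and WrappedConnection are hypothetical stand-ins for oracle.jdbc.OracleConnection and the JBoss wrapper, not real classes:

```java
import java.sql.SQLException;
import java.sql.Wrapper;

// Hypothetical vendor class standing in for oracle.jdbc.OracleConnection.
class VendorConnection {
}

// Stand-in for the app server's wrapped connection (e.g. WrappedConnectionJDK8).
class WrappedConnection implements Wrapper {
    private final VendorConnection delegate = new VendorConnection();

    @Override
    public <T> T unwrap(Class<T> iface) throws SQLException {
        if (iface.isInstance(delegate)) {
            return iface.cast(delegate); // hand out the underlying vendor object
        }
        throw new SQLException("Not a wrapper for " + iface.getName());
    }

    @Override
    public boolean isWrapperFor(Class<?> iface) {
        return iface.isInstance(delegate);
    }
}

public class UnwrapDemo {
    public static void main(String[] args) throws Exception {
        WrappedConnection wrapped = new WrappedConnection();
        // A direct cast to VendorConnection would fail at runtime;
        // unwrap() reaches the delegate instead.
        VendorConnection vendor = wrapped.unwrap(VendorConnection.class);
        System.out.println(vendor != null);
        System.out.println(wrapped.isWrapperFor(VendorConnection.class));
    }
}
```

This is why the wrapper cast fails but unwrap succeeds: the wrapper is not a subtype of the vendor class, it merely holds one.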

How to prevent CommunicationsException?

I am currently working with an app that uses two different DBs (different instances).
DB A is entirely under project A, but DB B is under another project. (I am managing both via gcloud App Engine.)
My problem:
DB B always gets disconnected if there are no requests for more than a few hours, with the error message below.
{"timestamp":1555464776769,"status":500,"error":"Internal Server Error","exception":"org.springframework.transaction.CannotCreateTransactionException","message":"Could not open JPA EntityManager for transaction; nested exception is javax.persistence.PersistenceException: com.mysql.cj.jdbc.exceptions.CommunicationsException: The last packet successfully received from the server was 43,738,243 milliseconds ago. The last packet sent successfully to the server was 43,738,243 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.","path":"/client/getAllCompany"}
To resolve this issue, I tried:
1) adding 'autoReconnect=true' in application.properties
api.datasource.url = jdbc:mysql://google/projectB?cloudSqlInstance=projectB:australia-southeast1:projectB&socketFactory=com.google.cloud.sql.mysql.SocketFactory&useSSL=false&autoReconnect=true
2) adding the config below in application.properties.
spring.datasource.tomcat.test-while-idle=true
spring.datasource.tomcat.time-between-eviction-runs-millis=3600000
spring.datasource.tomcat.min-evictable-idle-time-millis=7200000
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
(My project doesn't have a web.xml file.)
If I re-deploy the project, I can access data from DB B again.
How can I configure the app so that the connection to DB B is not killed?
Any advice is welcome. Thank you in advance.
HibernateConfig code for DB B:
@Bean(name = "apiDataSource")
@ConfigurationProperties(prefix = "api.datasource")
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}

@Bean(name = "apiEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean apiEntityManagerFactory(
        EntityManagerFactoryBuilder builder, @Qualifier("apiDataSource") DataSource dataSource
) {
    return builder.dataSource(dataSource).packages("com.workspez.api.entity").persistenceUnit("api").build();
}

@Bean(name = "apiTransactionManager")
public PlatformTransactionManager apiTransactionManager(
        @Qualifier("apiEntityManagerFactory") EntityManagerFactory apiEntityManagerFactory
) {
    return new JpaTransactionManager(apiEntityManagerFactory);
}

@Bean(name = "apiJdbc")
public NamedParameterJdbcTemplate apiJdbcTemplate() {
    return new NamedParameterJdbcTemplate(dataSource());
}
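Note that the spring.datasource.tomcat.* keys above only take effect when the Tomcat JDBC pool is actually in use; on Spring Boot 2.x the default pool is HikariCP, where the idiomatic fix is to retire connections before MySQL's wait_timeout does. A hedged sketch, assuming the default Hikari pool: the property names are standard Spring Boot Hikari keys, but the millisecond values are assumptions and must stay below your server's wait_timeout:

```properties
# Retire pooled connections well before MySQL's wait_timeout closes them
# server-side (the error showed ~43,738 seconds of idle time).
spring.datasource.hikari.max-lifetime=600000
# Evict idle connections sooner than the server would kill them.
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.minimum-idle=2
```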

How to create a schema in a Postgres DB before Liquibase starts to work?

I have a standalone application. It uses Java, Spring Boot, Postgres, and Liquibase.
I need to deploy my app, and Liquibase should create all tables, etc. But it should do so in a custom schema, not in public. Liquibase's own service tables (databasechangelog and databasechangeloglock) should be in the custom schema too. How can I create my schema in the DB before Liquibase starts to work? I must do it inside my app during deployment, in config or similar, without any manual intervention in the DB.
application.properties:
spring.datasource.jndi-name=java:/PostgresDS
spring.jpa.properties.hibernate.default_schema=my_schema
spring.jpa.show-sql = false
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
spring.datasource.continue-on-error=true
spring.datasource.sql-script-encoding=UTF-8
liquibase.change-log = classpath:liquibase/changelog-master.yaml
liquibase.default-schema = my_schema
UPD:
When Liquibase starts, it creates its two service tables (databasechangelog and databasechangeloglock) and then begins working. I want Liquibase to use liquibase.default-schema = my_schema, but that schema does not exist when Liquibase starts, so it fails with: exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: ERROR: schema "my_schema" does not exist
I want Liquibase to work in the custom schema, not in public:
liquibase.default-schema = my_schema
But before Liquibase can do that, the schema must be created, and Liquibase can't create it because it hasn't started yet, and to start it needs the schema.
A vicious circle.
I found a solution using my application.properties.
org.springframework.jdbc.datasource.init.ScriptUtils runs before Liquibase.
Logs:
14:11:14,760 INFO [org.springframework.jdbc.datasource.init.ScriptUtils]
(ServerService Thread Pool -- 300) Executing SQL script from URL
[vfs:/content/app.war/WEB-INF/classes/schema.sql]
14:11:14,761 INFO [org.springframework.jdbc.datasource.init.ScriptUtils]
(ServerService Thread Pool -- 300) Executed SQL script from URL
[vfs:/content/app.war/WEB-INF/classes/schema.sql] in 1 ms.
14:11:14,912 ERROR [stderr] (ServerService Thread Pool -- 300) INFO 9/27/18
2:11 PM: liquibase: Successfully acquired change log lock
14:11:15,292 ERROR [stderr] (ServerService Thread Pool -- 300) INFO 9/27/18
2:11 PM: liquibase: Reading from my_schema.databasechangelog
14:11:15,320 ERROR [stderr] (ServerService Thread Pool -- 300) INFO 9/27/18
2:11 PM: liquibase: Successfully released change log lock
I just put a schema.sql file containing CREATE SCHEMA IF NOT EXISTS my_schema; into the resources dir, and everything works properly.
Thanks all for the help.
Update: This works for Spring Boot 1.x. If you use Spring Boot 2, you should enable schema.sql in the properties file with spring.datasource.initialization-mode=always.
More info in Spring Boot - Loading Initial Data
Update 2: In Spring Boot 2.5.2 (maybe in earlier versions too) this solution no longer works, as @peterh wrote in a comment. Sad but true. The last version where I tried this solution and it worked was Spring Boot 2.0.9. The Spring Boot docs say script-based initialization was redesigned in Spring Boot 2.5.x.
Update 3: Some information on why they removed this feature: https://github.com/spring-projects/spring-boot/issues/22741
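For reference, on Spring Boot 2.x before 2.5 the approach described above boils down to a single file plus one property (my_schema as used in the question):

```sql
-- src/main/resources/schema.sql, run by Spring's ScriptUtils before Liquibase
CREATE SCHEMA IF NOT EXISTS my_schema;
```

together with spring.datasource.initialization-mode=always in application.properties.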
You can use the Spring Boot Pre-Liquibase module for this. It is exactly what it is meant for: it executes some SQL prior to executing Liquibase itself. Pre-Liquibase sets itself up in the Spring Boot auto-configuration chain so that it is guaranteed to always execute before Liquibase.
Step 1
Add the following Starter to your project:
<dependency>
    <groupId>net.lbruun.springboot</groupId>
    <artifactId>preliquibase-spring-boot-starter</artifactId>
    <version> ---latest-version--- </version>
</dependency>
Step 2
Add a SQL file to src/main/resources/preliquibase with a name of postgresql.sql and content like this:
CREATE SCHEMA IF NOT EXISTS ${spring.liquibase.default-schema};
The ${} syntax denotes a placeholder variable. Pre-Liquibase will resolve it from the properties in your Spring Environment.
Step 3
Set application properties like this:
spring.liquibase.default-schema=${my.db.schemaname}
spring.jpa.properties.hibernate.default_schema=${my.db.schemaname}
Now, in this example, the only thing left to decide is where the my.db.schemaname value comes from. That is your choice. The example project suggests it should come from an OS environment variable, in particular if you are deploying to a cloud.
Final words
WARNING: Pre-Liquibase is possibly way too flexible in that it allows you to execute any SQL code. Don't be tempted to put stuff in Pre-Liquibase files that rightfully belongs in a Liquibase ChangeSet. Honestly, the only usage I can think of for Pre-Liquibase is to set up a database "home" (meaning a schema or a catalog) where Liquibase db objects can live, so that instances of the same application can be separated by schema or catalog while residing on the same database server.
(Disclosure: I'm the author of Pre-Liquibase module)
To solve this, we need to run a SQL statement that creates the schema during Spring Boot initialization, at the point when the DataSource bean has already been initialized (so DB connections can easily be obtained) but before Liquibase runs.
By default, Spring Boot runs Liquibase by creating an InitializingBean named SpringLiquibase. This happens in LiquibaseAutoConfiguration.
Knowing this, we can use AbstractDependsOnBeanFactoryPostProcessor to make SpringLiquibase depend on our custom schema-creating bean (SchemaInitBean in the example below), which in turn depends on the DataSource. This arranges the correct execution order.
My application.properties:
db.schema=my_schema
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.open-in-view=false
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
spring.jpa.properties.hibernate.default_schema=${db.schema}
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=postgres
spring.liquibase.enabled=true
spring.liquibase.change-log=classpath:db/changelog/db.changelog-master.xml
spring.liquibase.defaultSchema=${db.schema}
Add the @Configuration class below to the project, for example in a package processed by component scan.
@Slf4j
@Configuration
@ConditionalOnClass({ SpringLiquibase.class, DatabaseChange.class })
@ConditionalOnProperty(prefix = "spring.liquibase", name = "enabled", matchIfMissing = true)
@AutoConfigureAfter({ DataSourceAutoConfiguration.class, HibernateJpaAutoConfiguration.class })
@Import({ SchemaInit.SpringLiquibaseDependsOnPostProcessor.class })
public class SchemaInit {

    @Component
    @ConditionalOnProperty(prefix = "spring.liquibase", name = "enabled", matchIfMissing = true)
    public static class SchemaInitBean implements InitializingBean {

        private final DataSource dataSource;
        private final String schemaName;

        @Autowired
        public SchemaInitBean(DataSource dataSource, @Value("${db.schema}") String schemaName) {
            this.dataSource = dataSource;
            this.schemaName = schemaName;
        }

        @Override
        public void afterPropertiesSet() {
            try (Connection conn = dataSource.getConnection();
                 Statement statement = conn.createStatement()) {
                log.info("Going to create DB schema '{}' if not exists.", schemaName);
                statement.execute("create schema if not exists " + schemaName);
            } catch (SQLException e) {
                throw new RuntimeException("Failed to create schema '" + schemaName + "'", e);
            }
        }
    }

    @ConditionalOnBean(SchemaInitBean.class)
    static class SpringLiquibaseDependsOnPostProcessor extends AbstractDependsOnBeanFactoryPostProcessor {

        SpringLiquibaseDependsOnPostProcessor() {
            // Configure the third-party SpringLiquibase bean to depend on our SchemaInitBean
            super(SpringLiquibase.class, SchemaInitBean.class);
        }
    }
}
This solution does not require external libraries like Spring Boot Pre-Liquibase and is not affected by the limitations on data.sql / schema.sql support. My main motivation for finding this solution was a requirement that the schema name must be a configurable property.
Putting everything in one class and using plain JDBC is for brevity.
A simpler solution based on Pavel D.'s answer.
It can also be used without Liquibase:
@Slf4j
@Component
public class SchemaConfig implements BeanPostProcessor {

    @Value("${db.schema}")
    private String schemaName;

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (!StringUtils.isEmpty(schemaName) && bean instanceof DataSource) {
            DataSource dataSource = (DataSource) bean;
            try (Connection conn = dataSource.getConnection();
                 Statement statement = conn.createStatement()) {
                log.info("Going to create DB schema '{}' if not exists.", schemaName);
                statement.execute("create schema if not exists " + schemaName);
            } catch (SQLException e) {
                throw new RuntimeException("Failed to create schema '" + schemaName + "'", e);
            }
        }
        return bean;
    }
}

How to Close HikariCP JNDI DataSource on Shutdown/Redeploy

I am using HikariCP 2.3.3 with Spring and Jetty 9 and am trying to resolve the fact that when I hot deploy a new war file, all of the Hikari database pool connections to MySQL are left open and idle. I am using a JNDI lookup in my spring applicationContext file to retrieve the datasource from a Jetty context file.
Since I cannot specify a destroy-method in the jndi-lookup like I can when defining a dataSource bean, I referred to this question: Should I close JNDI-obtained data source?, where it mentions you can attempt to close the datasource in the contextDestroyed() method of a ServletContextListener. In that case they were using Tomcat and c3p0, so I'm not sure how much the example relates.
I have tried the following in my contextDestroyed method:
InitialContext initial;
DataSource ds;
try
{
    initial = new InitialContext();
    ds = (DataSource) initial.lookup("jdbc/myDB");
    if (ds.getConnection() == null)
    {
        throw new RuntimeException("Failed to find the JNDI Datasource");
    }
    HikariDataSource hds = (HikariDataSource) ds;
    hds.close();
} catch (NamingException | SQLException ex)
{
    Logger.getLogger(SettingsInitializer.class.getName()).log(Level.SEVERE, null, ex);
}
But at HikariDataSource hds = (HikariDataSource) ds; I get the following exception: java.lang.ClassCastException: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
I have also tried the following after reading this issue on GitHub: Is it essential to call shutdown() on HikariDataSource?:
InitialContext initial;
DataSource ds;
try
{
    initial = new InitialContext();
    ds = (DataSource) initial.lookup("jdbc/myDB");
    ds.unwrap(HikariDataSource.class).close();
} catch (NamingException | SQLException ex)
{
    Logger.getLogger(SettingsInitializer.class.getName()).log(Level.SEVERE, null, ex);
}
But I get the following exception: java.sql.SQLException: Wrapped connection is not an instance of class com.zaxxer.hikari.HikariDataSource
at com.zaxxer.hikari.HikariDataSource.unwrap(HikariDataSource.java:177)
I feel like I'm close to a working solution but can't quite get it. What is the proper way to close a JNDI HikariCP data source, whether in contextDestroyed() or elsewhere?
I can't find where the 2.3.3 code lines up with the HikariDataSource.java:177 line number above. One suggestion is upgrading to the latest HikariCP version, 2.3.8.
While your code looks correct, I suspect that you are running into a classloader issue, whereby the HikariDataSource class loaded by the Jetty/Spring classloader and registered in JNDI is not loaded by the same classloader that loads HikariDataSource in your web app.
One quick way to check is to log/print both class instances like so:
...
ds = (DataSource) initial.lookup("jdbc/myDB");
logger.info("JNDI HikariDataSource : " + System.identityHashCode(ds.getClass()));
logger.info("Local HikariDataSource: " + System.identityHashCode(HikariDataSource.class));
...
If the two class objects have different hashCodes, they are not the same class. In which case you will have to investigate whether the JNDI instance is registered in the "global JNDI context". If it is, that datasource can be shared across web app instances, and it is not appropriate for your web app to unilaterally shut it down.
UPDATE: Sorry I missed it. Re-reading your question, my guess was correct. Your original error:
java.lang.ClassCastException: com.zaxxer.hikari.HikariDataSource cannot be cast to
com.zaxxer.hikari.HikariDataSource
is a clear indication that there are two classloaders that have loaded two separate instances of the HikariDataSource class. The first is the Jetty classloader (JNDI), and the second is your web application classloader.
This indicates that the pool is shared across web applications and you probably should not try to shut it down from your application context.
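The symptom of a class "not being castable to itself" is easy to reproduce without Jetty or Hikari: the same class file loaded by two unrelated classloaders yields two distinct Class objects. A self-contained sketch that loads its own class file twice (no app server involved):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Demonstrates why "HikariDataSource cannot be cast to HikariDataSource" can
// happen: two classloaders, two distinct Class objects with the same name.
public class TwoLoadersDemo {
    public static void main(String[] args) throws Exception {
        // Location of this demo's own compiled class file.
        URL[] cp = { TwoLoadersDemo.class.getProtectionDomain().getCodeSource().getLocation() };
        // parent = null, so neither loader delegates to the other.
        try (URLClassLoader a = new URLClassLoader(cp, null);
             URLClassLoader b = new URLClassLoader(cp, null)) {
            Class<?> c1 = a.loadClass("TwoLoadersDemo");
            Class<?> c2 = b.loadClass("TwoLoadersDemo");
            System.out.println(c1 == c2);                          // distinct Class objects
            System.out.println(c1.getName().equals(c2.getName())); // yet the same name
        }
    }
}
```

A cast from an instance of c1 to the type c2 would throw exactly the ClassCastException seen in the question.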

How to get a MySQL connection from a remote server in Spring JDBC?

I am using db4free's MySQL server for my spring-webmvc project. But the problem is that I can't get a connection to the server, and the exception is:
org.springframework.web.util.NestedServletException: Request processing failed;
nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException:
Must specify port after ':' in connection string
But I have specified the port correctly just after the ':'. Here is my Java configuration class:
@Configuration
@ComponentScan(basePackages = "org.ratajo.amaderbari")
@EnableWebMvc
public class MvcConfiguration extends WebMvcConfigurerAdapter {

    @Bean
    public DataSource getDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://http://www.db4free.net:3306/myDB");
        dataSource.setUsername("user");
        dataSource.setPassword("pass");
        return dataSource;
    }
}
Here is the sample program I am trying to execute:
MvcConfiguration mv = new MvcConfiguration();
JdbcTemplate jdbcTemplate = new JdbcTemplate(mv.getDataSource());
String sql = "CREATE TABLE `contact` (`contact_id` int(11) NOT NULL AUTO_INCREMENT, PRIMARY KEY (`contact_id`)) ENGINE=InnoDB AUTO_INCREMENT=25 DEFAULT CHARSET=utf8";
jdbcTemplate.execute(sql);
The URL looks wrong:
dataSource.setUrl("jdbc:mysql://http://www.db4free.net:3306/myDB");
should be something like
dataSource.setUrl("jdbc:mysql://www.db4free.net:3306/myDB");
otherwise it tries to use http as the hostname and //www.db4free.net as the port (which explains the error). But I would also double-check the hostname, as it looks odd to connect to a host 'www.something'.
OTOH, JDBC URLs are weird.
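The misparse is easy to see with java.net.URI: stripping the jdbc: prefix leaves an ordinary hierarchical URI, and in the broken URL everything up to the next '/' becomes the authority. This only mirrors the idea; the MySQL driver has its own parser:

```java
import java.net.URI;

public class JdbcUrlDemo {
    public static void main(String[] args) {
        // Drop the "jdbc:" prefix; the remainder is an ordinary hierarchical URI.
        URI bad  = URI.create("mysql://http://www.db4free.net:3306/myDB");
        URI good = URI.create("mysql://www.db4free.net:3306/myDB");
        // In the bad URL the authority is just "http:", so there is no usable
        // host or port; in the good URL host and port parse as intended.
        System.out.println(bad.getAuthority());
        System.out.println(good.getHost() + ":" + good.getPort());
    }
}
```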
