There is a dataSource configured in Spring as shown below.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="${prop_jdbc.driverClassName}"/>
<property name="url" value="${prop_jdbc.url}"/>
<property name="username" value="${prop_jdbc.username}"/>
<property name="password" value="${prop_jdbc.password}"/>
<property name="initialSize" value="2"/>
<property name="maxActive" value="5"/>
<property name="maxIdle" value="2"/>
<property name="poolPreparedStatements" value="true"/>
<property name="maxOpenPreparedStatements" value="-1"/>
<!-- property name="defaultAutoCommit">
<value>false</value>
</property-->
</bean>
Now, using a JdbcTemplate created from the above dataSource, I first execute a DROP TABLE; in the next statement I create the same table again, and in a third statement I immediately try to DROP it.
jdbcTemplate.update(dropSql, new Object[] {});
jdbcTemplate.update(createSql, new Object[] {});
jdbcTemplate.update(dropSql, new Object[] {});
EDIT after Brian's comments:
After the first statement, the table was dropped immediately, and the second statement also created it immediately, but the second time the DROP does not happen, and there is no error either.
Does JdbcTemplate execute a DROP immediately or periodically? This is hard to understand: using the same data source, why would the second DROP not happen when the first one worked two statements earlier?
DDL statements like CREATE and DROP are not transactional. Please share the actual DDL being executed. In the absence of the actual SQL to review, I would suggest that you use the execute method instead of the update method on JdbcTemplate.
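As a minimal sketch of that suggestion (reusing the dropSql/createSql variables from the question), the three calls would become:
// same statements as above, but issued via execute(), JdbcTemplate's
// entry point for arbitrary SQL such as DDL
jdbcTemplate.execute(dropSql);
jdbcTemplate.execute(createSql);
jdbcTemplate.execute(dropSql);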
What are you doing to drop the table between each failed attempt by your code?
Related
I have an application that uses MySQL fail-over via a connection URL:
jdbc:mysql://172.17.0.4:3306,172.17.0.3:3306/db_name?autoReconnect=true&failOverReadOnly=false
When the primary DB is down, the driver should move to the secondary connection and the application flow should work as expected.
The problem happens when moving to the secondary DB: it takes too long to switch and execute the queries, causing the flow to take much longer than expected.
I have already checked with the DB and there are no slow-query issues. I guess it is something to do with the fail-over and with checking connection states. So, any idea what might be causing this delay or how to resolve it?
I am also using c3p0 to manage the connection pools. I have already tried initialTimeout and maxReconnects, but no luck so far.
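One tuning worth trying, though it is an assumption on my part rather than a confirmed fix: bound the driver's TCP timeouts in the URL so that a dead primary is detected quickly instead of waiting for the OS-level connection timeout. The millisecond values below are illustrative only:
jdbc:mysql://172.17.0.4:3306,172.17.0.3:3306/db_name?autoReconnect=true&failOverReadOnly=false&connectTimeout=2000&socketTimeout=5000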
DataSource
<bean id="productionDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass" value="${jdbc.driver}"/>
<property name="jdbcUrl" value="${jdbc.url}"/>
<property name="user" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
<property name="description" value="integration_ds"/>
<!-- configuration pool via c3p0-->
<property name="acquireIncrement" value="${datasource.acquireIncrement}"/>
<property name="idleConnectionTestPeriod" value="${datasource.idleConnectionTestPeriod}"/>
<!-- seconds -->
<property name="maxPoolSize" value="${datasource.maxPoolSize}"/>
<property name="maxStatements" value="${datasource.maxStatements}"/>
<property name="maxStatementsPerConnection" value="${datasource.maxStatementsPerConnection}"/>
<property name="minPoolSize" value="${datasource.minPoolSize}"/>
<property name="initialPoolSize" value="${datasource.initialPoolSize}"/>
<property name="maxIdleTime" value="${datasource.maxIdleTime}"/>
<property name="acquireRetryAttempts" value="${datasource.acquireRetryAttempts}"/>
<property name="acquireRetryDelay" value="${datasource.acquireRetryDelay}"/>
<property name="breakAfterAcquireFailure" value="${datasource.breakAfterAcquireFailure}"/>
<property name="debugUnreturnedConnectionStackTraces" value="true"/>
</bean>
Properties
datasource.acquireIncrement=1
datasource.idleConnectionTestPeriod=1000
datasource.maxPoolSize=10
datasource.maxStatements=600
datasource.minPoolSize=5
datasource.initialPoolSize=5
datasource.maxIdleTime=7200
#datasource.acquireRetryAttempts=5
datasource.acquireRetryAttempts=1
#datasource.acquireRetryDelay=5000
#datasource.acquireRetryDelay=1000
datasource.acquireRetryDelay=100
datasource.breakAfterAcquireFailure=false
datasource.maxStatementsPerConnection=3
datasource.checkoutTimeout=100
DAO
private static final String findByAppIdHql = "select app from AppImpl app where app.appId = ?";
final Query query = sf.getCurrentSession().createQuery(findByAppIdHql).setString(0, appId);
query.setCacheable(true);
query.setCacheRegion("app_query_cache");
query.setCacheMode(CacheMode.NORMAL);
List<App> apps = query.list();
I am currently trying to use Hibernate-generated queries against a PostgreSQL database. It all works correctly except when I use setFirstResult() and setMaxResults() on my Hibernate queries.
Here is a sample of my Java code:
String queryString = "select h from History h";
Query onePageQuery = getEntityManager().createQuery(queryString)
        .setFirstResult(rowMin).setMaxResults(PAGE_SIZE);
onePageQuery.getResultList();
The generated query is the following:
select
*
from
( select
rownumber() over(
order by
history0_.DTMTC desc) as rownumber_,
history0_.ID as ID2_,
history0_.CCOUL as CCOUL2_,
history0_.CDIAM as CDIAM2_,
history0_.CODAV as CODAV2_,
history0_.COOPVM as COOPVM2_,
history0_.COOPVT as COOPVT2_,
history0_.CPOTA as CPOTA2_,
history0_.DTMTC as DTMTC2_,
history0_.TYCOUP as TYCOUP2_
from
F23VCM2D history0_
order by
history0_.DTMTC desc ) as temp_
where
rownumber_ <= ?
Before I used a PostgreSQL database, I was using a DB2 database. I have a jpa.xml file in which I declare that I am using a PostgreSQL database, as follows:
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="database" value="POSTGRESQL" />
<!-- <property name="database" value="DB2" /> -->
</bean>
</property>
I think the generated query is not adapted to PostgreSQL, because the error I get is (roughly translated from French):
Caused by: org.postgresql.util.PSQLException: ERROR: function rownumber() does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
My questions are:
1) Is the generated query the one that should be generated? I doubt it, because I tried to run it via SQL Developer and it failed.
2) Is there something obvious I didn't do in my Hibernate setup?
Modify the jpaVendorAdapter like this:
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="databasePlatform" value="org.hibernate.dialect.PostgreSQL95Dialect"/>
</bean>
</property>
in order to set the Hibernate dialect correctly.
The problem was on my side: there were two jpa.xml files.
I didn't know this because it's a legacy project in which I'm trying to change only the database connection. Hence, I was modifying the wrong jpa.xml file, so there was no impact on the generated queries.
Once I had removed the "DB2" line:
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="database" value="DB2" />
</bean>
</property>
and replaced it with the "POSTGRESQL" line in the right file, it worked and the query was generated successfully. So the following version of the jpaVendorAdapter did it for me:
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="database" value="POSTGRESQL" />
</bean>
</property>
Thank you for the other input; it helped me find the answer. There didn't seem to be a need for the dialect part, so I didn't add it in the end.
When I run my Spring Batch job to process more than 100 records, I get the following error:
"Listener refused the connection with the following error: ORA-12516, TNS:listener could not find available handler with matching protocol stack"
But when I run the batch to process fewer than 50 records, it works fine.
In the before-step of my reader, I query the DB to get records.
For example, if I get 100 records from the DB, then in a loop I extract a particular field from each record and use that field to query another table. So the second query runs 100 times inside the for loop.
In the logs, I can see the batch run for a while (querying some records inside the for loop) and then throw the error.
Please help me to solve this.
The Oracle database server's PROCESSES parameter has been configured too low. You can resolve it with the following steps:
1) Launch SQL*Plus.
2) Log on as SYSTEM.
3) Type the following command to check that the database is using an spfile:
show parameter spfile
4) Assuming it shows that you ARE using an spfile, type the following command:
alter system set processes=300 scope=spfile;
5) Obtain some downtime (nobody using the databases) and restart the Oracle database server (or just the relevant Oracle database). If you are using a pfile instead, you can simply edit the parameter in a text editor such as Notepad++.
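Put together, the SQL*Plus session looks roughly like this (300 is just an example value, and the restart commands require SYSDBA privileges):
-- check that an spfile is in use, then raise the server-wide limit
show parameter spfile
alter system set processes=300 scope=spfile;
-- the new value only takes effect after an instance restart:
shutdown immediate
startup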
Changed my data source bean from the following (DriverManagerDataSource opens a new physical connection for every request, which can exhaust the database's PROCESSES limit):
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="${database.driverClassName}" />
<property name="url" value="${database.url}" />
<property name="username" value="${database.username}" />
<property name="password" value="${database.password}" />
</bean>
to this:
<bean id="dataSource" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${database.driverClassName}" />
<property name="url" value="${database.url}" />
<property name="username" value="${database.username}" />
<property name="password" value="${database.password}" />
<property name="connectionProperties" value="initialSize=1,maxTotal=10" />
</bean>
I am using DbUnit to create database backups, which can be imported and exported. My application can use several database engines: MySQL, PostgreSQL, SQL Server, H2 and Oracle.
All of the above work fine with the following code:
// Connect to the database
conn = BackupManager.getInstance().getConnection();
IDatabaseConnection connection = new DatabaseConnection(conn);
InputSource xmlSource = new InputSource(new FileInputStream(new File(nameXML)));
FlatXmlProducer flatXmlProducer = new FlatXmlProducer(xmlSource);
flatXmlProducer.setColumnSensing(true);
DatabaseOperation.CLEAN_INSERT.execute(connection, new FlatXmlDataSet(flatXmlProducer));
But on Oracle I get this exception:
!ENTRY es.giro.girlabel.backup 1 0 2012-04-11 11:51:40.542
!MESSAGE Start import backup
org.dbunit.database.AmbiguousTableNameException: AQ$_SCHEDULES
at org.dbunit.dataset.OrderedTableNameMap.add(OrderedTableNameMap.java:198)
at org.dbunit.database.DatabaseDataSet.initialize(DatabaseDataSet.java:231)
at org.dbunit.database.DatabaseDataSet.getTableMetaData(DatabaseDataSet.java:281)
at org.dbunit.operation.DeleteAllOperation.execute(DeleteAllOperation.java:109)
at org.dbunit.operation.CompositeOperation.execute(CompositeOperation.java:79)
at es.giro.girlabel.backup.ImportBackup.createData(ImportBackup.java:39)
at es.giro.girlabel.backup.handlers.Import.execute(Import.java:45)
From the docs:
public class AmbiguousTableNameException extends DataSetException
This exception is thrown by IDataSet when multiple tables having the same name are accessible. This usually occurs when the database connection has access to multiple schemas containing identical table names.
Possible solutions:
1) Use a database connection credential that has access to only one database schema.
2) Specify a schema name to the DatabaseConnection or DatabaseDataSourceConnection constructor.
3) Enable qualified table name support (see the How-to documentation).
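For example, option 2 is a one-line change to the code from the question (the schema name here is a placeholder):
// restrict DbUnit to a single schema by naming it explicitly
IDatabaseConnection connection = new DatabaseConnection(conn, "MY_SCHEMA");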
For those who use Spring Test DBUnit: I struggled with this very annoying issue and ended up solving it by adding configuration for com.github.springtestdbunit.bean.DatabaseConfigBean and com.github.springtestdbunit.bean.DatabaseDataSourceConnectionFactoryBean.
This is my full Spring context for Spring Test DBUnit:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
<property name="url" value="jdbc:oracle:thin:#localhost:1521/XE" />
<property name="username" value="xxxx" />
<property name="password" value="xxxx" />
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource">
<ref bean="dataSource" />
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.OracleDialect</prop>
<prop key="hibernate.show_sql">true</prop>
</props>
</property>
<property name="annotatedClasses">
<list>
<value>xxx.example.domain.Person</value>
</list>
</property>
</bean>
<bean id="dbUnitDatabaseConfig" class="com.github.springtestdbunit.bean.DatabaseConfigBean">
<property name="skipOracleRecyclebinTables" value="true" />
<property name="qualifiedTableNames" value="true" />
<!-- <property name="caseSensitiveTableNames" value="true"/> -->
</bean>
<bean id="dbUnitDatabaseConnection"
class="com.github.springtestdbunit.bean.DatabaseDataSourceConnectionFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="databaseConfig" ref="dbUnitDatabaseConfig" />
<property name="schema" value="<your_schema_name>"/>
</bean>
Setting the database schema fixed it for me:
@Bean
public DatabaseDataSourceConnectionFactoryBean dbUnitDatabaseConnection(final DataSource dataSource){
final DatabaseDataSourceConnectionFactoryBean connectionFactory = new DatabaseDataSourceConnectionFactoryBean();
connectionFactory.setDataSource(dataSource);
connectionFactory.setSchema(DB_SCHEMA);
return connectionFactory;
}
I had the same AmbiguousTableNameException while executing DbUnit tests against an Oracle DB. It had been working fine and started throwing the error one day.
Root cause: the name used while calling a stored procedure had been changed to lower case by mistake. When changed back to upper case, it started working.
I could also solve this by setting the schema name on IDatabaseTester, like iDatabaseTester.setSchema("SCHEMANAMEINCAPS").
Also, please make sure your connection does not have access to multiple schemas containing the same table name.
You might encounter issues when importing data via Hibernate before DbUnit runs. Depending on the database you are using, the casing of table and column names can be important.
For example, in HSQL, table names must be declared in uppercase.
If you import data via Hibernate's import.sql, make sure the table names are also uppercase there (a one-line example follows the list below), otherwise you'll end up with the following problem:
1) Hibernate creates the tables in lower case.
2) DBUnit reads the table names from the DB in lower case.
3) DBUnit tries to import its datasets using upper-case table names.
4) You end up in a mess, with the ambiguous-name exception.
Remember to also check whether multiple versions of the tables were created during a previous run (both upper and lower case); in that case you need to clean those up too.
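For instance, an import.sql line would then use the upper-case table name (the table and column names here are placeholders):
INSERT INTO PERSON (ID, NAME) VALUES (1, 'Alice');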
I am using Spring Batch's JdbcPagingItemReader to process entries in my database. There is a timestamp column in the table I am querying, and on the next run I want the JdbcPagingItemReader to process only the items where timestamp > "last successful job execution".
I think this should be a fairly common use case, but somehow I can't figure out how to configure it. Thanks for your help!
JdbcPagingItemReader has its own custom restart logic. It searches for the last retrieved value, which maps to a unique index field, and restarts the job from there.
From the JavaDocs:
On restart it uses the last sort key value to locate the first page to read (so it doesn't matter if the successfully processed items have been removed or modified).
As you can see, your timestamp field would not make any significant difference.
Update after reading the comment:
OK, then how about dynamically creating the where clause for your PagingQueryProvider?
<bean id="itemReader" class="org.spr...JdbcPagingItemReader">
<property name="dataSource" ref="dataSource"/>
<property name="queryProvider">
<bean class="org.spr...SqlPagingQueryProviderFactoryBean">
<property name="selectClause" value="select id, name, credit"/>
<property name="fromClause" value="from customer"/>
<property name="whereClause">
<bean class="your.company.WhereClauseFactorybean" />
<property />
<property name="sortKey" value="id"/>
</bean>
</property>
<property name="parameterValues">
<map>
<entry key="status" value="NEW"/>
</map>
</property>
<property name="pageSize" value="1000"/>
<property name="rowMapper" ref="customerMapper"/>
</bean>
Now implement WhereClauseFactorybean as a FactoryBean that uses a JdbcTemplate to find the last timestamp, and returns something like where timestamp > <your time stamp>, or null if no timestamp is found.
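A minimal sketch of such a factory bean follows; the bookkeeping table and column names are illustrative assumptions, not part of the original answer:

import java.sql.Timestamp;
import org.springframework.beans.factory.FactoryBean;
import org.springframework.jdbc.core.JdbcTemplate;

public class WhereClauseFactorybean implements FactoryBean<String> {

    private JdbcTemplate jdbcTemplate;

    public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public String getObject() {
        // "batch_run_log" and its columns are assumed names for illustration
        Timestamp last = jdbcTemplate.queryForObject(
                "select max(end_time) from batch_run_log where status = 'COMPLETED'",
                Timestamp.class);
        // a null where clause means: read everything on the first run
        return (last == null) ? null : "where timestamp > '" + last + "'";
    }

    @Override
    public Class<String> getObjectType() {
        return String.class;
    }

    @Override
    public boolean isSingleton() {
        // re-evaluate the clause every time the bean is requested
        return false;
    }
}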
Reference:
Spring Batch:
JdbcPagingItemReader
Spring: FactoryBean
Spring: JdbcTemplate
Update after reading more comments:
Then I guess you will have to implement a custom StepExecutionListener, inject the AbstractSqlPagingQueryProvider into it and set the where clause in the beforeStep(StepExecution) method.
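A rough sketch of that listener approach, reusing the same hypothetical bookkeeping query as the factory-bean sketch above (class name and wiring are assumptions):

import java.sql.Timestamp;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.database.support.AbstractSqlPagingQueryProvider;
import org.springframework.jdbc.core.JdbcTemplate;

public class LastRunWhereClauseListener implements StepExecutionListener {

    private final AbstractSqlPagingQueryProvider queryProvider;
    private final JdbcTemplate jdbcTemplate;

    public LastRunWhereClauseListener(AbstractSqlPagingQueryProvider queryProvider,
                                      JdbcTemplate jdbcTemplate) {
        this.queryProvider = queryProvider;
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // same assumed bookkeeping table as in the factory-bean sketch
        Timestamp last = jdbcTemplate.queryForObject(
                "select max(end_time) from batch_run_log where status = 'COMPLETED'",
                Timestamp.class);
        if (last != null) {
            queryProvider.setWhereClause("where timestamp > '" + last + "'");
        }
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // no special exit-status handling needed here
        return stepExecution.getExitStatus();
    }
}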