Call stored procedure on MySQL slave using ReplicationDriver - Java

I have a Spring application that currently executes some queries utilizing stored procedures. The configuration is something like this:
Datasource:
<bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.mysql.jdbc.ReplicationDriver"/>
<property name="url" value="jdbc:mysql:replication://master,slave1,slave2/db?allowMultiQueries=true"/>
<property name="username" value="${db.dbusername}"/>
<property name="password" value="${db.dbpassword}"/>
<property name="defaultReadOnly" value="true"/>
</bean>
<bean id="jdbcDeviceDAO" class="dao.jdbc.JdbcDeviceDAO">
<property name="dataSource" ref="dataSource"/>
</bean>
DAO:
public class JdbcDeviceDAO implements DeviceDAO {
// ...
public void setDataSource(DataSource dataSource) {
this.jdbcTemplate = new JdbcTemplate(dataSource);
this.procGetCombinedDeviceRouting = new SimpleJdbcCall(jdbcTemplate)
.withProcedureName("get_combined_device_routing");
// ...
}
public CombinedDeviceRouting getCombinedDeviceRouting(String deviceName, String deviceNameType) {
SqlParameterSource in = createParameters(deviceName, deviceNameType);
Map<String, Object> results = this.procGetCombinedDeviceRouting.execute(in);
return extractResults(results);
}
Now when I call getCombinedDeviceRouting(...) it fails with the following exception:
org.springframework.dao.TransientDataAccessResourceException: CallableStatementCallback; SQL [{call get_combined_device_routing()}]; Connection is read-only. Queries leading to data modification are not allowed; nested exception is java.sql.SQLException: Connection is read-only. Queries leading to data modification are not allowed
I know the connection is read-only, and I need it to be that way so the queries are load-balanced between slave hosts. But the stored procedure is actually read-only; it's just a lot of SELECT statements. In fact, I tried adding READS SQL DATA to its definition, but it didn't work.
Finally I came to the point of reading the MySQL connector's code, and I found this:
protected boolean checkReadOnlySafeStatement() throws SQLException {
synchronized (checkClosed().getConnectionMutex()) {
return this.firstCharOfStmt == 'S' || !this.connection.isReadOnly();
}
}
It sounds naive, but is the connector checking whether my statement is read-only by just matching the first character with 'S'?
If this is the case, it seems like there's no way of calling a stored procedure on a slave host, because the statement starts with 'C' (CALL ...).
Does anyone know if there's a workaround for this problem? Or maybe I'm wrong assuming this first character check?

It appears as though this is a bug in the driver. I had a look at the code to see if there is an easy extension point, but it looks like you'd have to extend a lot of classes to affect this behaviour :(
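To see why CALL is rejected, the driver's heuristic can be mimicked in isolation. This is a hypothetical standalone sketch, not the driver's actual class (the real check also upper-cases the first non-whitespace character), but it shows the behaviour: on a read-only connection, any statement not starting with 'S' is refused.

```java
public class ReadOnlyCheckDemo {

    // Mimics Connector/J's checkReadOnlySafeStatement(): on a read-only
    // connection, only statements whose first character is 'S' pass.
    static boolean isReadOnlySafe(String sql, boolean connectionReadOnly) {
        char first = sql.trim().charAt(0);
        return first == 'S' || !connectionReadOnly;
    }

    public static void main(String[] args) {
        // SELECT passes the check on a read-only (slave) connection...
        System.out.println(isReadOnlySafe("SELECT 1", true));                            // true
        // ...but CALL does not, even if the procedure only reads data.
        System.out.println(isReadOnlySafe("CALL get_combined_device_routing()", true));  // false
        // On a writable (master) connection everything passes.
        System.out.println(isReadOnlySafe("CALL get_combined_device_routing()", false)); // true
    }
}
```

This matches the exception seen above: the CALL statement starts with 'C', so the read-only guard trips regardless of what the procedure actually does.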

Related

Reading value from file into applicationContext.xml file

I have a Spring-based web application, and in my application context XML file I have defined a bean which has all the parameters to connect to the database. As part of this bean, for one of the parameters, I have a password key, as shown in the example below, and I want the value to come from a /vault/password file. This /vault/password file is not part of the project/application; it is present on the host machine by default.
What is the syntax in an applicationContext.xml bean definition to read a value from a file outside of the application context?
<bean class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close" id="dataSource">
<property name="url" value="jdbc:postgresql://postgres:5432/" />
<property name="username" value="postgres" />
<property name="password" value="/vault/password" />
</bean>
Something like this is probably your best bet:
How to correctly override BasicDataSource for Spring and Hibernate
PROBLEM:
Now I need to provide a custom data source based on the server environment
(not config), for which I need to calculate the driverClassName and url
fields based on some condition.
SOLUTION:
Create a factory (since you only need to customize the creation phase
of the object, you don't need to control its whole lifetime).
public class MyDataSourceFactory {
public DataSource createDataSource() {
BasicDataSource target = new BasicDataSource();
if (condition) {
target.setDriverClassName("com.mysql.jdbc.Driver");
target.setUrl("jdbc:mysql://localhost/test?relaxAutoCommit=true");
} else { ... }
return target;
}
}
In your case, your customization would do some I/O to set target.password.
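A minimal sketch of that I/O, assuming the password sits alone in a file such as /vault/password (the path and the readPassword helper name are illustrative, not from the question):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MyDataSourceFactory {

    // Reads the whole file and strips surrounding whitespace/newlines,
    // which editors and provisioning tools often append to secret files.
    static String readPassword(Path file) throws IOException {
        return new String(Files.readAllBytes(file)).trim();
    }

    public static void main(String[] args) throws IOException {
        // Write a fake vault file just for this demo.
        Path vault = Files.createTempFile("vault-", ".pwd");
        Files.write(vault, "s3cret\n".getBytes());

        String password = readPassword(vault);
        System.out.println(password); // s3cret
        // Inside createDataSource() you would then call something like:
        //   target.setPassword(readPassword(Paths.get("/vault/password")));
    }
}
```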

C3P0 connection pooling - connections not being released

I have a web application running under Tomcat 7 using Spring with c3p0 as the connection pool manager. I have also used DBCP, with the same result.
I initiate a long-running single-threaded process which makes a large number of database calls using jdbcTemplate.update(), etc., in various DAOs. As each of these updates is simple and independent, no transaction manager is being used.
For some reason, I am running out of connections. What appears to be happening is that each DAO is holding onto its own connection and not returning it to the pool.
Is this normal behaviour? I had expected that the connection was tied to the jdbcTemplate.update() call and released back to the pool as soon as it had finished.
...
In the context file...
<bean id="enquiryDataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="${enquiry.drivername}"/>
<property name="url" value="${enquiry.jdbc}"/>
<property name="username" value="${enquiry.username}"/>
<property name="password" value="${enquiry.password}"/>
<property name="maxWait" value="30000"/>
<property name="maxActive" value="50"/>
</bean>
In a typical DAO constructor...
@Autowired
public XXXCountryDao(@Qualifier("enquiryDataSource") DataSource dataSource,
@Qualifier("sqlUpdaterFactoryImpl") SqlUpdaterFactory sqlUpdaterFactory, @Qualifier("sqlFormatterFactoryImpl") SqlFormatterFactory sqlFormatterFactory) {
super("Country", dataSource, sqlUpdaterFactory, sqlFormatterFactory);
// ...other constructor stuff
}
All DAOs inherit from...
public abstract class AbstractFileProcessorDao<ImportRecType, QueryRecType> extends JdbcDaoSupport {
// ...
}
In a typical DAO method...
protected boolean runUpdateToSqlDatabase(Map<String, Object> values, Map<String, Object> whereValues) {
if (values.isEmpty())
return true;
String sql = updateUpdaterServer.getSql(values, whereValues);
if (logger.isDebugEnabled())
logger.debug("Server SQL -> " + sql);
getJdbcTemplate().update(sql);
return false;
}
Please check your application for "rogue" calls to DataSource#getConnection (you can use your IDE to search for method references). Connection leaks are usually caused by obtaining a connection which is then never closed via Connection#close.
When working with Spring's JdbcTemplate all JDBC resource handling (opening / closing connections, statements, result sets) is done automatically. But with legacy code you never know.
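When a connection is fetched manually, the safe and leaking patterns can be contrasted with a self-contained sketch. The counting "pool" below is a hypothetical stand-in (a reflective proxy), not DBCP; it just demonstrates that try-with-resources always returns the connection while a bare getConnection() does not:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicInteger;

public class LeakDemo {
    static final AtomicInteger open = new AtomicInteger();

    // A stand-in "pool" handing out proxied Connections that track close().
    static Connection borrow() {
        open.incrementAndGet();
        InvocationHandler h = (proxy, method, args) -> {
            if (method.getName().equals("close")) { open.decrementAndGet(); return null; }
            throw new UnsupportedOperationException(method.getName());
        };
        return (Connection) Proxy.newProxyInstance(
                LeakDemo.class.getClassLoader(), new Class<?>[]{Connection.class}, h);
    }

    public static void main(String[] args) throws Exception {
        // Leak-free: try-with-resources always closes, even on exceptions.
        try (Connection c = borrow()) { /* work with the connection here */ }
        System.out.println("open after try-with-resources = " + open.get()); // 0

        // Leak: obtaining a connection and never calling close().
        Connection leaked = borrow();
        System.out.println("open after leak = " + open.get()); // 1
    }
}
```

JdbcTemplate does the equivalent of the first pattern internally, which is why connections "escape" only when code bypasses it.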

Refresh DataSource using Spring+dbcp

I'm using Spring with DBCP and need to refresh my DataSource when some configuration in the operating environment changes, without restarting the whole application.
If I were not using DBCP, I could force this refresh by closing the currently open DataSource and starting a new instance.
Using DBCP+Spring, I can't do that.
Does anybody know if it is possible?
I don't think there is such support in plain DBCP, mostly because database connection properties very rarely change during the lifetime of an application. You will also have to consider the transition period, when some connections served by the old data source are still open while others are already served by the new (refreshed) one.
Decorator/proxy approach
I would suggest writing a custom implementation of DataSource leveraging the Decorator/Proxy design pattern. Your implementation would simply delegate to the target data source (created by DBCP), most of the time doing nothing more. But when you call some sort of refresh() method, your decorator closes the previously created data source and creates a new one with fresh configuration. Remember about multi-threading!
@Service
public class RefreshableDataSource implements DataSource {
private AtomicReference<DataSource> target = new AtomicReference<DataSource>();
@PostConstruct
public void refresh() {
target.set(createDsManuallyUsingSomeExternalConfigurationSource());
}
@Override
public Connection getConnection() throws SQLException {
return target.get().getConnection();
}
@Override
public Connection getConnection(String username, String password) throws SQLException {
return target.get().getConnection(username, password);
}
//Rest of DataSource methods
}
The createDsManuallyUsingSomeExternalConfigurationSource() method might look like this:
private DataSource createDsManuallyUsingSomeExternalConfigurationSource() {
BasicDataSource ds = new org.apache.commons.dbcp.BasicDataSource();
ds.setDriverClassName("org.h2.Driver");
ds.setUrl(/*New database URL*/);
ds.setUsername(/*New username*/);
ds.setPassword(/*New password*/);
return ds;
}
This is a rough equivalent of Spring bean:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="org.h2.Driver" />
<property name="url" value="jdbc:h2:mem:" />
<property name="username" value="sa" />
<property name="password" value="" />
</bean>
You can't just inject such a target bean into your proxy/decorator RefreshableDataSource as you want data source configuration to be dynamic/refreshable, while Spring only allows you to inject static properties. This means that it is your responsibility to create an instance of target BasicDataSource, but as you can see, it is nothing scary.
Actually, I have a second thought: Spring SpEL AFAIK allows you to call other beans' methods from XML configuration. But this is a very wide topic.
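The decorator above boils down to an atomic swap of the wrapped target. A stripped-down, runnable sketch of just that swap (a plain string stands in for the real DataSource; the class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

public class RefreshableHolder {
    // Stand-in for the wrapped DataSource; here just a JDBC URL string.
    private final AtomicReference<String> target = new AtomicReference<>();

    // Swap the target atomically; in the real decorator this would close
    // the old data source and install a freshly configured one.
    void refresh(String newUrl) { target.set(newUrl); }

    String current() { return target.get(); }

    public static void main(String[] args) {
        RefreshableHolder h = new RefreshableHolder();
        h.refresh("jdbc:h2:mem:old");
        System.out.println(h.current());
        h.refresh("jdbc:h2:mem:new"); // callers immediately see the new target
        System.out.println(h.current());
    }
}
```

Because every getConnection() call goes through target.get(), in-flight callers keep their old connections while new callers get the refreshed pool, which is exactly the transition period mentioned above.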
JNDI approach
Another approach might be to use JNDI to fetch the DataSource and use hot-deployment (this works with JBoss and its *-ds.xml files).

Hibernate creates too many connections using #Transactional, how to prevent this?

I'm fairly new to Hibernate and PostgreSQL, but so far it's going well, although I'm running into a problem now that I can't solve. I'm getting an error while filling the database on the very first operation (which is one transaction inserting or updating 1000 rows in the database). The error is:
SQL Error: 0, SQLState: 53300
FATAL: sorry, too many clients already
Exception in thread "main" org.hibernate.exception.GenericJDBCException: Cannot open connection
This is the important code:
@Repository
public class PDBFinderDAO extends GenericDAO<PDBEntry> implements IPDBFinderDAO {
@Override
@Transactional
public void updatePDBEntry(Set<PDBEntry> pdbEntrySet) {
for (PDBEntry pdbEntry : pdbEntrySet) {
getCurrentSession().saveOrUpdate(pdbEntry);
}
}
}
getCurrentSession() is extended from GenericDAO and calls sessionFactory.getCurrentSession().
This is my Hibernate configuration:
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<!-- Database connection settings -->
<property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
<property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property>
<property name="hibernate.connection.url">jdbc:postgresql://localhost/PDBeter</property>
<property name="hibernate.connection.username">xxxx</property>
<property name="hibernate.connection.password">xxxx</property>
<!-- Create or update the database schema on startup -->
<property name="hbm2ddl.auto">create</property>
<!-- Use the C3P0 connection pool provider -->
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="hibernate.c3p0.idle_test_period">300</property>
<!-- Disable the second-level cache -->
<property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
<!-- Batch size -->
<property name="hibernate.jdbc.batch_size">50</property>
<!-- this makes sure the more efficient new id generators are being used,
though these are not backwards compatible with some older databases -->
<property name="hibernate.id.new_generator_mappings">true</property>
<!-- Echo all executed SQL to stdout -->
<!--
<property name="hibernate.show_sql">true</property>
-->
<property name="format_sql">true</property>
<property name="use_sql_comments">true</property>
</session-factory>
</hibernate-configuration>
This is my Spring configuration:
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.springframework.org/schema/beans"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.0.xsd">
<context:component-scan base-package="nl.ru.cmbi.pdbeter" />
<!-- Transaction Manager -->
<bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<tx:annotation-driven />
<!-- Session Factory -->
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="configLocation" value="hibernate.cfg.xml" />
<property name="packagesToScan" value="nl.ru.cmbi.pdbeter.core.model.domain" />
</bean>
<!-- Task Executor -->
<task:annotation-driven />
</beans>
I'm not really sure what is going wrong; this should only open one connection each time and close it afterwards. Isn't that what @Transactional is supposed to do? Also, is there an easy way to check how many connections are open at a certain time, so that I can check how many were open before and after the error?
EDIT: when I check the database nothing has been added, so it can't even make one connection, what is going wrong here?
EDIT: I'm sorry, but I already solved it myself. It was a very stupid mistake: there was a very small query using criteria that was executed 1000 times as well, but it was executed before the transaction, causing it to be executed in 1000 separate transactions/sessions/connections (I think; correct me if I'm wrong!)
EDIT: ok, turns out that didn't solve it at all, because I needed that small query to see if something was already in the database, and if so, get that object from the database so I could update its fields/columns/whatever you want to call it.
This is the method in the GenericDAO:
@Override
public PDBEntry findByAccessionCode(String accessionCode) {
return (PDBEntry) createCriteria(Restrictions.eq("accessionCode", accessionCode)).uniqueResult();
}
There is a function that builds the mapped objects that isn't in the DAO, since it converts a raw datafile into the database object, so I wanted to keep that out of database operations and only put the saveOrUpdate() within the database module. The problem I have now is that the findByAccessionCode() is being called a 1000 times during the conversion of the raw datafile to the database objects, because I need to check whether a certain piece of data is already present in the database, and if so, get the object from the database instead of making a new one.
Now how would I execute that query 1000 times inside one connection in this context? I tried making the conversion method that converts the 1000 files @Transactional, but that didn't work.
Here's the conversion method:
private void updatePDBSet(Set<RawPDBEntry> RawPDBEntrySet) {
Set<PDBEntry> pdbEntrySet = new LinkedHashSet<PDBEntry>();
for (RawPDBEntry pdb : RawPDBEntrySet) {
PDBEntry pdbEntry = pdbEntryDAO.findByAccessionCode(pdb.id);
if (pdbEntry == null) {
pdbEntry = new PDBEntry(pdb.id, pdb.header.date);
}
pdbEntry.setHeader(pdb.header.header);
ExpMethod expMethod = new ExpMethod.Builder(pdbEntry, pdb.expMethod.expMethod.toString()).build();
if (pdb.expMethod.resolution != null) {
expMethod.setResolution(pdb.expMethod.resolution);
}
if (pdb.expMethod.rFactor != null) {
expMethod.setRFactor(pdb.expMethod.rFactor.rFactor);
if (pdb.expMethod.rFactor.freeR != null) {
expMethod.setFreeR(pdb.expMethod.rFactor.freeR);
}
}
if (pdb.hetGroups != null) {
for (PFHetId hetId : pdb.hetGroups.hetIdList) {
HetGroup hetGroup = new HetGroup(pdbEntry, hetId.hetId);
if (hetId.nAtom != null) {
hetGroup.setNAtom(hetId.nAtom);
}
if (hetId.name != null) {
hetGroup.setName(hetId.name);
}
}
}
for (PFChain chain : pdb.chainList) {
new Chain(pdbEntry, chain.chain);
}
pdbEntrySet.add(pdbEntry);
}
pdbFinderDAO.updatePDBEntry(pdbEntrySet);
}
(The pdbFinderDAO.updatePDBEntry(pdbEntrySet) was where I originally thought the problem originated)
EDIT: First of all sorry that I created this new post, I really thought I found the answer, but I'll just continue in this post for further edits.
Ok, now I put all 1000 findByAccessionCode criteria inside the DAO by sending a Set of the raw data files to the DAO, so it can retrieve the ids there, find them in the database, get the ones it can find, and add them to a HashMap where the database object is mapped with the reference to the raw data file as key (so I know which raw data belongs to which database entry). I made this function @Transactional like so:
@Override
@Transactional
public Map<RawPDBEntry, PDBEntry> getRawPDBEntryToPDBEntryMap(Set<RawPDBEntry> rawPDBEntrySet) {
Map<RawPDBEntry, PDBEntry> RawPDBEntryToPDBEntryMap = new HashMap<RawPDBEntry, PDBEntry>();
for (RawPDBEntry pdb : rawPDBEntrySet) {
RawPDBEntryToPDBEntryMap.put(pdb, (PDBEntry) createCriteria(Restrictions.eq("accessionCode", pdb.id)).uniqueResult());
}
return RawPDBEntryToPDBEntryMap;
}
Still, no success... I get the exact same error, but it does tell me it's the criteria that causes it. Why can't I execute all these 1000 queries within the same connection?
EDIT: Yet another update: I tried adding all the queries 1 by 1, and this worked, slowly, but it worked. I did this on an empty database. Next I tried the same thing but now the database already contained the stuff from the first try, and I got the following error:
Exception in thread "main" org.hibernate.HibernateException: Illegal attempt to associate a collection with two open sessions
I'm guessing this has something to do with the fact that I'm getting the objects (that are already in the database and so have to be updated) within the DAO, then sending references back to the conversion method, then changing their fields, then sending them back to the DAO to make them persistent. Although after googling a bit I found people that had problems with collections in their POJO's with the annotation:
@OneToMany(mappedBy = "pdbEntry", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
where the cascading caused the problem. Do I have to remove all of these cascades and hardcode the saveOrUpdate() operations for all the different mapped objects? Or has this nothing to do with that error?
And finally: I'm still no closer to figuring out how to do this for 1000 objects at a time.
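For the batching itself, one hedged sketch: instead of 1000 single-row lookups, the accession codes could be grouped into IN-clause chunks and fetched a few hundred at a time (this assumes Hibernate's Restrictions.in criterion is available; the helper class below is illustrative, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

public class InClauseBatcher {
    // Split ids into chunks so each "WHERE accessionCode IN (...)" query
    // stays well under the database's parameter limits.
    static <T> List<List<T>> chunks(List<T> ids, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += size)
            out.add(ids.subList(i, Math.min(i + size, ids.size())));
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 10; i++) ids.add(i);
        List<List<Integer>> batches = chunks(ids, 4);
        System.out.println(batches.size()); // 3 batches: 4 + 4 + 2
        // Each batch would feed one criteria query inside the DAO, e.g.:
        //   createCriteria(Restrictions.in("accessionCode", batch)).list()
    }
}
```

That turns 1000 round trips into a handful, all inside one @Transactional DAO method.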
Solved the problem, it had to do with a bad setup of Spring, which caused the #Transactional to not be recognized. I fixed that, and then the error went away.

How to set a default query timeout with JPA and Hibernate?

I am doing some big queries on my database with Hibernate and I sometimes hit timeouts. I would like to avoid setting the timeout manually on every Query or Criteria.
Is there any property I can give to my Hibernate configuration that would set an acceptable default for all queries I run?
If not, how can I set a default timeout value on Hibernate queries?
JPA 2 defines the javax.persistence.query.timeout hint to specify default timeout in milliseconds. Hibernate 3.5 (currently still in beta) will support this hint.
See also https://hibernate.atlassian.net/browse/HHH-4662
JDBC has a mechanism for this, the query timeout: you can invoke the setQueryTimeout method of a java.sql.Statement object to enable it.
Hibernate cannot do this in a unified way.
If your application retrieves JDBC connections via a javax.sql.DataSource, the problem can be solved easily.
We can create a DataSource wrapper that proxies each Connection and calls setQueryTimeout on every Statement it creates.
The example code is easy to read; I use some Spring util classes to help.
public class QueryTimeoutConfiguredDataSource extends DelegatingDataSource {
private int queryTimeout;
public QueryTimeoutConfiguredDataSource(DataSource dataSource) {
super(dataSource);
}
// override this method to proxy created connection
@Override
public Connection getConnection() throws SQLException {
return proxyWithQueryTimeout(super.getConnection());
}
// override this method to proxy created connection
@Override
public Connection getConnection(String username, String password) throws SQLException {
return proxyWithQueryTimeout(super.getConnection(username, password));
}
private Connection proxyWithQueryTimeout(final Connection connection) {
return proxy(connection, new InvocationHandler() {
//All the Statement instances are created here, we can do something
//If the return is instance of Statement object, we set query timeout to it
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
Object object = method.invoke(connection, args);
if (object instanceof Statement) {
((Statement) object).setQueryTimeout(queryTimeout);
}
return object;
}
});
}
private Connection proxy(Connection connection, InvocationHandler invocationHandler) {
return (Connection) Proxy.newProxyInstance(
connection.getClass().getClassLoader(),
ClassUtils.getAllInterfaces(connection),
invocationHandler);
}
public void setQueryTimeout(int queryTimeout) {
this.queryTimeout = queryTimeout;
}
}
Now we can use this QueryTimeoutConfiguredDataSource to wrap your existing DataSource and set the query timeout for every Statement transparently!
Spring config file:
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource">
<bean class="com.stackoverflow.QueryTimeoutConfiguredDataSource">
<constructor-arg ref="dataSource"/>
<property name="queryTimeout" value="1" />
</bean>
</property>
</bean>
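To convince yourself the interception really applies the timeout, the core of proxyWithQueryTimeout() can be exercised with stub JDBC objects. This is a standalone sketch mirroring the answer's logic; the fake Connection/Statement stubs are hypothetical test doubles, not real drivers:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.Statement;

public class TimeoutProxyDemo {
    static int recordedTimeout = -1;

    // Fake Statement that records setQueryTimeout calls.
    static Statement fakeStatement() {
        return (Statement) Proxy.newProxyInstance(
            TimeoutProxyDemo.class.getClassLoader(), new Class<?>[]{Statement.class},
            (p, m, a) -> {
                if (m.getName().equals("setQueryTimeout")) { recordedTimeout = (Integer) a[0]; }
                return null;
            });
    }

    // Fake Connection whose createStatement() hands out the fake Statement.
    static Connection fakeConnection() {
        return (Connection) Proxy.newProxyInstance(
            TimeoutProxyDemo.class.getClassLoader(), new Class<?>[]{Connection.class},
            (p, m, a) -> m.getName().equals("createStatement") ? fakeStatement() : null);
    }

    // Same idea as proxyWithQueryTimeout(): intercept Statement creation
    // and apply the timeout before handing the Statement to the caller.
    static Connection withTimeout(Connection target, int seconds) {
        return (Connection) Proxy.newProxyInstance(
            target.getClass().getClassLoader(), new Class<?>[]{Connection.class},
            (p, m, a) -> {
                Object result = m.invoke(target, a);
                if (result instanceof Statement) ((Statement) result).setQueryTimeout(seconds);
                return result;
            });
    }

    public static void main(String[] args) throws Exception {
        Connection c = withTimeout(fakeConnection(), 5);
        c.createStatement();
        System.out.println("timeout applied = " + recordedTimeout); // 5
    }
}
```

Every code path that creates a Statement through the wrapped Connection gets the timeout, which is exactly why the wrapper works transparently under Hibernate.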
Here are a few ways:
Use a factory or base class method to create all queries and set the timeout before returning the Query object
Create your own version of org.hibernate.loader.Loader and set the timeout in doQuery
Use AOP, e.g. Spring, to return a proxy for Session; add advice to it that wraps the createQuery method and sets the timeout on the Query object before returning it
Yes, you can do that.
As I explained in this article, all you need to do is to pass the JPA query hint as a global property:
<property
name="javax.persistence.query.timeout"
value="1000"
/>
Now, when executing a JPQL query that will time out after 1 second:
List<Post> posts = entityManager
.createQuery(
"select p " +
"from Post p " +
"where function('1 >= ALL ( SELECT 1 FROM pg_locks, pg_sleep(2) ) --',) is ''", Post.class)
.getResultList();
Hibernate will throw a query timeout exception:
SELECT p.id AS id1_0_,
p.title AS title2_0_
FROM post p
WHERE 1 >= ALL (
SELECT 1
FROM pg_locks, pg_sleep(2)
) --()=''
-- SQL Error: 0, SQLState: 57014
-- ERROR: canceling statement due to user request
For more details about setting a timeout interval for Hibernate queries, check out this article.
For setting a global timeout value at the query level, add the below to the config file.
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource"></property>
<property name="queryTimeout" value="60"></property>
</bean>
For setting a global timeout value at the transaction (INSERT/UPDATE) level, add the below to the config file.
<bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="myEmf" />
<property name="dataSource" ref="dataSource" />
<property name="defaultTimeout" value="60" />
<property name="jpaDialect">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
</property>
</bean>
