I am using the Tomcat JDBC connection pool (org.apache.tomcat.jdbc.pool.DataSource) to connect to my PostgreSQL database from a Spring configuration file, as shown below. I have a new requirement to configure two databases as a failover mechanism: when one database is down, the application should automatically switch over to the other.
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource"
destroy-method="close">
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost/dbname?user=postgres" />
<property name="username" value="postgres" />
<property name="password" value="postgres" />
<property name="maxActive" value="5" />
<property name="maxIdle" value="5" />
<property name="minIdle" value="2" />
<property name="initialSize" value="2" />
</bean>
Can anyone suggest how this can be achieved using the Spring configuration file?
The normal way this is done is with virtual IP addresses (possibly with forwarding), activity checks, a shoot-the-other-node-in-the-head approach, and proper failover. Spring is exactly the wrong place to solve this if you want to avoid problems such as data loss.
A few recommendations.
repmgr from 2ndquadrant will manage a lot of the process for you.
Use identical hardware and OS and streaming replication.
Use virtual IP addresses and the like, and use a heartbeat mechanism to trigger failover via repmgr.
From this perspective your Spring app then doesn't need reconfiguring; it keeps pointing at the same address (see the sketch below).
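To make that concrete, here is a minimal sketch of the Spring side, assuming a hypothetical floating hostname db-vip.example.com that your failover tooling moves to the promoted standby; connection validation is enabled so connections broken by a failover are discarded from the pool:
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource"
destroy-method="close">
<!-- db-vip.example.com is a placeholder for the virtual IP / DNS name managed outside the application -->
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://db-vip.example.com/dbname" />
<property name="username" value="postgres" />
<property name="password" value="postgres" />
<!-- validate connections on borrow so stale ones left over from a failover are dropped -->
<property name="testOnBorrow" value="true" />
<property name="validationQuery" value="SELECT 1" />
<property name="maxActive" value="5" />
<property name="maxIdle" value="5" />
<property name="minIdle" value="2" />
<property name="initialSize" value="2" />
</bean>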
Related
Our application has multiple modules, and each module uses its own schema in the same MySQL database. I now need a different connection pool configuration for each module because of their different database resource consumption: one module may have 20 active connections at a given point in time, while another may need at most 1. I have searched here and on other forums and couldn't find a solution. Note that this is not about multi-tenancy or multiple databases; all schemas are in the same database.
Here's the config we have:
<bean id="dataSource" class="our.own.package.RoutingDataSource"> <!-- RoutingDataSource extends spring AbstractDataSource -->
<property name="master" ref="masterDS"/>
</bean>
<bean id="abstractDataSource" abstract="true">
<property name="driverClass" value="com.mysql.jdbc.Driver" />
<property name="initialPoolSize" value="#initial.pool.size#" />
<property name="minPoolSize" value="#min.pool.size#" />
<property name="maxPoolSize" value="#max.pool.size#" /> <!-- I want to have different configs for each module in our application -->
</bean>
<bean id="masterDS" class="com.mchange.v2.c3p0.ComboPooledDataSource" parent="abstractDataSource">
<property name="jdbcUrl" value="jdbc:mysql://#host#/" />
<property name="user" value="#user#" />
<property name="password" value="#pwd#" />
<property name="dataSourceName" value="#dbName#" />
</bean>
So now I have two questions:
1) Is it possible to have different connection pool configurations for one datasource in Spring?
2) If I have to go with the multiple datasource way(one datasource for one module), is implementing Spring's AbstractRoutingDataSource the correct way to go?
Thank you!
Ad 1. Your data source is in fact a connection pool, so what you are asking for is multiple pools on top of another pool. You can do it, but you will face many other problems.
Ad 2. Yes, definitely. You already have a RoutingDataSource, so it should be an implementation of Spring's AbstractRoutingDataSource. You probably already have logic there to determine the current data source routing key, which is used to do the lookup (see the sketch below).
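As a rough sketch of how the wiring could look, assuming the routing key is the module name and each module gets its own c3p0 pool (the moduleA/moduleB bean names and pool sizes are made up for illustration):
<bean id="moduleADS" class="com.mchange.v2.c3p0.ComboPooledDataSource" parent="abstractDataSource">
<!-- hypothetical low-traffic module: small pool; user/password omitted for brevity -->
<property name="jdbcUrl" value="jdbc:mysql://#host#/" />
<property name="maxPoolSize" value="1" />
</bean>
<bean id="moduleBDS" class="com.mchange.v2.c3p0.ComboPooledDataSource" parent="abstractDataSource">
<!-- hypothetical busy module: larger pool -->
<property name="jdbcUrl" value="jdbc:mysql://#host#/" />
<property name="maxPoolSize" value="20" />
</bean>
<bean id="dataSource" class="our.own.package.RoutingDataSource"> <!-- extends AbstractRoutingDataSource -->
<!-- determineCurrentLookupKey() in RoutingDataSource should return "moduleA" or "moduleB" -->
<property name="targetDataSources">
<map key-type="java.lang.String">
<entry key="moduleA" value-ref="moduleADS" />
<entry key="moduleB" value-ref="moduleBDS" />
</map>
</property>
<property name="defaultTargetDataSource" ref="masterDS" />
</bean>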
I want to use Spring to connect to my local PostgreSQL database. I don't know whether it is possible, because I didn't find any tutorials for it. So, is it possible? If yes, please tell me where I can find a good tutorial. If not, how can I do it? I know I can do it via the PostgreSQL JDBC driver directly, but I want to do it the way it is done in a real company.
Of course you can. The database vendor is immaterial. Java hides database details using JDBC.
Here is a Spring tutorial that shows you how to do it in 15 minutes or less.
First you need to create a Spring project from https://start.spring.io/ and add PostgreSQL to its dependencies; you will then see it appear in your pom.xml file. Then enter the details of the PostgreSQL database you want to connect to in the application.yml file, for example as sketched below.
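A minimal sketch of such an application.yml, assuming a local database named mydb and placeholder credentials (these are the standard Spring Boot spring.datasource properties):
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/mydb   # placeholder database name
    username: postgres                           # placeholder credentials
    password: secret
    driver-class-name: org.postgresql.Driver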
Here is my example.
applicationContext.xml:
<!-- property placeholder configuration -->
<bean id="propertyConfigurer"
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:config/database.properties</value>
</list>
</property>
</bean>
<!-- PostgreSQL datasource -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="${jdbc.driverClassName}" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
</bean>
<!-- ibatis client -->
<bean id="sqlMapClient" class="org.springframework.orm.ibatis.SqlMapClientFactoryBean">
<property name="configLocation" value="classpath:config/SqlMapConfig.xml" />
<property name="dataSource" ref="dataSource" />
</bean>
I'm using Spring's DefaultMessageListenerContainer (DMLC) for my application with the settings below. I'm seeing strange behavior: if I send 1000 messages to the listener queue, all but about 10 reach the DMLC very quickly, while those ~10 get stuck on the server. On further analysis I found that acknowledgements are not sent back for those 10, which is why I can still see them on the server; after a few minutes the acks are sent back, but very slowly.
Following up on this, I tried cacheConsumers=false on the CachingConnectionFactory and everything works fine, but that causes frequent binds/unbinds to the MQ server and creates a huge number of consumer objects in the JVM. Does anyone have a solution to this issue that keeps cacheConsumers=true?
<bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="cachingjmsQueueConnectionFactory" />
<property name="destination" ref="queueDestination" />
<property name="messageListener" ref="queueDestination" />
<property name="concurrency" value="10-10" />
<property name="cacheLevel" value="1" />
<property name="transactionManager" ref="dbTransactionManager" />
<property name="sessionTransacted" value="true" />
</bean>
<bean id="cachingjmsQueueConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="jmsQueueConnectionFactory" />
<property name="reconnectOnException" value="true" />
<property name="cacheConsumers" value="true" />
<property name="cacheProducers" value="true" />
<property name="sessionCacheSize" value="1" />
</bean>
You can set cacheConsumers to false on the CachingConnectionFactory and also change the cacheLevel to level 3 (CACHE_CONSUMER) on the DefaultMessageListenerContainer. This way the consumer is cached at the DMLC level, and the issue with stuck messages should be resolved without the frequent binds/unbinds.
cacheConsumers should be set to false, and you should let the DefaultMessageListenerContainer control the caching, because it is preferable to have the listener container handle appropriate caching within its lifecycle. The following note in the Spring documentation (http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/listener/DefaultMessageListenerContainer.html) discusses this:
Note: Don't use Spring's CachingConnectionFactory in combination with
dynamic scaling. Ideally, don't use it with a message listener
container at all, since it is generally preferable to let the listener
container itself handle appropriate caching within its lifecycle.
Also, stopping and restarting a listener container will only work with
an independent, locally cached Connection - not with an externally
cached one.
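A sketch of what that change could look like against the configuration above (only the caching-related properties differ; whether an explicit CACHE_CONSUMER level interacts cleanly with your external transaction manager is something to verify in your environment):
<bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="cachingjmsQueueConnectionFactory" />
<property name="destination" ref="queueDestination" />
<property name="messageListener" ref="queueDestination" />
<property name="concurrency" value="10-10" />
<!-- cache the JMS consumer in the container itself; CACHE_CONSUMER is the same as cacheLevel 3 -->
<property name="cacheLevelName" value="CACHE_CONSUMER" />
<property name="transactionManager" ref="dbTransactionManager" />
<property name="sessionTransacted" value="true" />
</bean>
<bean id="cachingjmsQueueConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="jmsQueueConnectionFactory" />
<property name="reconnectOnException" value="true" />
<!-- let the listener container, not the connection factory, cache consumers -->
<property name="cacheConsumers" value="false" />
<property name="cacheProducers" value="true" />
<property name="sessionCacheSize" value="1" />
</bean>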
I'm trying to find the best way to create a dataSource in Spring for connecting to a Google Cloud SQL instance.
I'm currently using:
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.mysql.jdbc.GoogleDriver" />
<property name="url" value="jdbc:google:mysql://myappid:instanceId/mydb?user=myuser" />
<property name="username" value="myuser" />
<property name="password" value="mypassword" />
</bean>
However, I'm a little concerned about using the DriverManagerDataSource provided by Spring, as its documentation says it creates a new connection for every call.
Before migrating over to App Engine I was using a connection pool called BoneCP - however it uses classes that are restricted by App Engine. Is there a connection pool or some other data source class that is recommended to be used with Google Cloud SQL?
Try c3p0 or commons-dbcp. They both implement javax.sql.DataSource, which is whitelisted by App Engine.
Example using commons-dbcp:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.mysql.jdbc.GoogleDriver" />
<property name="url" value="jdbc:google:mysql://myappid:instanceId/mydb?user=myuser" />
<property name="username" value="myuser" />
<property name="password" value="mypassword" />
<property name="validationQuery" value="SELECT 1"/>
</bean>
I am trying to set up an Amazon EC2 instance with Tomcat and MySQL. Both are up and running on the same instance. My confusion is: what JDBC URL do I have to use to connect to my database on the same instance?
<bean id="masterDataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName">
<value>com.mysql.jdbc.Driver</value>
</property>
<property name="url">
<value>WHAT TO ADD HERE</value>
</property>
.....
Add them like this:
<property name="url" value="jdbc:mysql://localhost/_dbName" />
<property name="username" value="your username" />
<property name="password" value="your password" />
Try the following (note that if you put the URL in a Spring XML attribute, the & must be escaped as &amp;):
jdbc:mysql://localhost/database_name?user=your_username&password=your_greatsqlpw
or
jdbc:mysql://127.0.0.1/database_name?user=your_username&password=your_greatsqlpw
As long as your server is secure, you should not be overly concerned about security here, since these connections stay internal to the server.