I'm trying to configure Hazelcast 2.5.1 instances manually through its programmatic API, but I'm finding that it behaves differently when doing -- supposedly -- the same thing.
So, my first approach is rather rudimentary, which is:
String confString = "<hazelcast><network><port auto-increment=\"true\">10555</port><join><multicast enabled=\"false\" /><tcp-ip enabled=\"true\"><interface>127.0.0.1</interface></tcp-ip></join><ssl enabled=\"false\" /></network></hazelcast>";
Config config = new InMemoryXmlConfig(confString);
Hazelcast.newHazelcastInstance(config);
This will work and starting different instances will allow them to join the cluster. For readability purposes, here's the XML I'm building in memory:
<hazelcast>
<network>
<port auto-increment="true">10555</port>
<join>
<multicast enabled="false" />
<tcp-ip enabled="true">
<interface>127.0.0.1</interface>
</tcp-ip>
</join>
<ssl enabled="false" />
</network>
</hazelcast>
Starting different instances of this will make them join the cluster, which is the behavior that I want.
However, when I try to do this programmatically, Hazelcast won't allow new instances to join and complains with the following error:
Jul 09, 2015 9:39:33 AM com.hazelcast.impl.Node
WARNING: [127.0.0.1]:10556 [dev] Config seed port is 10555 and cluster size is 1. Some of the ports seem occupied!
This is the code that is supposed to do the same thing programmatically:
Config config = new Config();
config.setInstanceName("HazelcastService");
config.getNetworkConfig().setPortAutoIncrement(true);
config.getNetworkConfig().setPort(10555);
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true);
config.getNetworkConfig().getInterfaces().addInterface("127.0.0.1");
config.getNetworkConfig().getInterfaces().setEnabled(true);
SSLConfig sslConfig = new SSLConfig();
sslConfig.setEnabled(false);
config.getNetworkConfig().setSSLConfig(sslConfig);
Hazelcast.newHazelcastInstance(config);
What am I missing?
The interfaces you added in the Java code are not the same as the ones you added in the XML.
This is what you actually set in the Java code: http://docs.hazelcast.org/docs/2.5/manual/html-single/#ConfigSpecifyInterfaces
For your configuration to work, you should add this:
config.getNetworkConfig().getJoin().getTcpIpConfig().addMember("127.0.0.1");
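For reference, here is a minimal sketch of the corrected programmatic configuration put together (the port and loopback address are simply carried over from the XML above; adjust them to your setup):

import com.hazelcast.config.Config;
import com.hazelcast.config.NetworkConfig;
import com.hazelcast.core.Hazelcast;

public class HazelcastTcpIpJoinSketch {
    public static void main(String[] args) {
        Config config = new Config();
        NetworkConfig network = config.getNetworkConfig();
        network.setPort(10555);
        network.setPortAutoIncrement(true);
        // disable multicast, enable the TCP/IP joiner
        network.getJoin().getMulticastConfig().setEnabled(false);
        network.getJoin().getTcpIpConfig().setEnabled(true);
        // equivalent of <interface>127.0.0.1</interface> inside <tcp-ip>
        network.getJoin().getTcpIpConfig().addMember("127.0.0.1");
        Hazelcast.newHazelcastInstance(config);
    }
}

Starting this class several times on the same machine should now behave like the XML version: each instance claims the next auto-incremented port and joins the same cluster.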
I have 8 different processes distributed across 6 different servers with the following TCP/TCPPING protocol configuration:
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
<TCP
bind_port="${jgroups.tcp.bind_port:16484}"
bind_addr="${jgroups.tcp.bind_addr:127.0.0.1}"
recv_buf_size="20M"
send_buf_size="20M"
max_bundle_size="64K"
sock_conn_timeout="300"
use_fork_join_pool="true"
thread_pool.min_threads="10"
thread_pool.max_threads="100"
thread_pool.keep_alive_time="30000" />
<TCPPING
async_discovery="true"
initial_hosts="${jgroups.tcpping.initial_hosts:127.0.0.1[16484]}"
port_range="5" />
<MERGE3 min_interval="10000" max_interval="30000" />
<FD_SOCK get_cache_timeout="10000"
cache_max_elements="300"
cache_max_age="60000"
suspect_msg_interval="10000"
num_tries="10"
sock_conn_timeout="10000"/>
<FD timeout="10000" max_tries="10" />
<VERIFY_SUSPECT timeout="10000" num_msgs="5"/>
<BARRIER />
<pbcast.NAKACK2
max_rebroadcast_timeout="5000"
use_mcast_xmit="false"
discard_delivered_msgs="true" />
<UNICAST3 />
<pbcast.STABLE
stability_delay="1000"
desired_avg_gossip="50000"
max_bytes="4M" />
<AUTH
auth_class="com.qfs.distribution.security.impl.CustomAuthToken"
auth_value="distribution_password"
token_hash="SHA" />
<pbcast.GMS
print_local_addr="true"
join_timeout="10000"
leave_timeout="10000"
merge_timeout="10000"
num_prev_mbrs="200"
view_ack_collection_timeout="10000"/>
</config>
The cluster keeps splitting into subgroups and then merging again and again, which results in high memory usage. I can also see a lot of "suspect" warnings in the logs, resulting from the periodic heartbeats sent by all other cluster members. Am I missing something?
EDIT
After enabling GC logs, nothing suspicious appeared to me. On the other hand, I've noticed these JGroups logs appearing a lot:
WARN: lonlx21440_FrtbQueryCube_QUERY_29302: I was suspected by woklxp00330_Sba-master_DATA_36219; ignoring the SUSPECT message and sending back a HEARTBEAT_ACK
DEBUG: lonlx21440_FrtbQueryCube_QUERY_29302: closing expired connection for redlxp00599_Sba-master_DATA_18899 (121206 ms old) in send_table
DEBUG: I (redlxp00599_Sba-master_DATA_18899) will be the merge leader
DEBUG: redlxp00599_Sba-master_DATA_18899: heartbeat missing from lonlx21503_Sba-master_DATA_2175 (number=1)
DEBUG: redlxp00599_Sba-master_DATA_18899: suspecting [lonlx21440_FrtbQueryCube_QUERY_29302]
DEBUG: lonlx21440_FrtbQueryCube_QUERY_29302: removed woklxp00330_Sba-master_DATA_36219 from xmit_table (not member anymore)
and this one
2020-08-31 16:35:34.715 [ForkJoinPool-3-worker-11] org.jgroups.protocols.pbcast.GMS:116
WARN: lonlx21440_FrtbQueryCube_QUERY_29302: failed to collect all ACKs (expected=6) for view [redlxp00599_Sba-master_DATA_18899|104] after 2000ms, missing 6 ACKs from (6) lonlx21503_Sba-master_DATA_2175, lonlx11179_DRC-master_DATA_15999, lonlx11184_Rrao-master_DATA_31760, lonlx11179_Rrao-master_DATA_25194, woklxp00330_Sba-master_DATA_36219, lonlx11184_DRC-master_DATA_49264
I still can't figure out where the instability comes from.
Thanks
The instability is not due to the TCPPING protocol - it belongs to the discovery protocol family, and its purpose is to find new members, not to kick them out of the cluster.
You use both FD_SOCK and FD to detect whether members have left, and then VERIFY_SUSPECT to confirm that a node is not reachable. These settings look pretty normal.
The first thing to check is your GC logs. If you experience stop-the-world pauses longer than, say, 15 seconds, chances are that the cluster disconnects because nodes are unresponsive during GC.
If your GC logs are fine, increase the logging level for FD, FD_SOCK and VERIFY_SUSPECT to TRACE and see what's going on, for example as sketched below.
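One way to raise those levels programmatically, as a rough sketch: the failure-detection protocols are looked up on the channel's stack and switched to TRACE before connecting. The config file name and cluster name are placeholders, and findProtocol/setLevel should be double-checked against your JGroups version (you can of course also raise the level for org.jgroups.protocols.FD etc. in your logging framework instead):

import org.jgroups.JChannel;
import org.jgroups.protocols.FD;
import org.jgroups.protocols.FD_SOCK;
import org.jgroups.protocols.VERIFY_SUSPECT;
import org.jgroups.stack.ProtocolStack;

public class TraceFailureDetection {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel("jgroups-tcp.xml"); // placeholder: your stack file
        ProtocolStack stack = channel.getProtocolStack();
        // switch only the failure-detection protocols to TRACE
        stack.findProtocol(FD.class).setLevel("trace");
        stack.findProtocol(FD_SOCK.class).setLevel("trace");
        stack.findProtocol(VERIFY_SUSPECT.class).setLevel("trace");
        channel.connect("my-cluster"); // placeholder cluster name
    }
}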
I am trying to create a simple test that:
Activates a full server embedded instance (Embedded Server and Distributed Configuration)
Creates an initial test database in document mode during the first run (Create a Database)
Opens the test database (Open a Database)
Insert a sample record
Fetch the sample record
Add another node and repeat
I can roughly understand the steps individually but I am having some difficulty piecing together a simple test case. For example, the API documentation assumes a remote connection. I am not sure whether that is the applicable method here, and if so, what URL I should specify.
Once I have completed steps 1, 2 and 3 correctly, I should be able to just refer to the API documentation for steps 4 and 5.
As a novice user, I find it difficult to interpret the documentation in context. Any help or clarification would be appreciated.
I am trying to run this test as a jUnit test. Here is what I have so far:
public class TestOrientDb {
private static final Logger log = Logger.getLogger(TestOrientDb.class);
@Test
public void testFullEmbeddedServer() throws Exception {
log.debug("connecting to database server...");
String orientdbHome = new File("src/test/resources").getAbsolutePath(); //Set OrientDB home to current directory
log.debug("the orientdb home: " + orientdbHome);
System.setProperty("ORIENTDB_HOME", orientdbHome);
OServer server = OServerMain.create();
URL configUrl = this.getClass().getResource("/orientdb-config.xml");
server.startup(configUrl.openStream());
server.activate();
//HOW DO I CREATE A DATABASE HERE?
//HOW DO I OPEN MY DATABASE TO USE THE API LIKE THIS: http://orientdb.com/docs/last/Document-Database.html
//SHOULD I PAUSE THE THREAD TO KEEP THE SERVER ACTIVE?
log.debug("shutting down orientdb...");
server.shutdown();
}
}
Here is orientdb-config.xml:
<orient-server>
<users>
<user name="root" password="password" resources="*"/>
</users>
<properties>
<entry value="/etc/kwcn/databases" name="server.database.path"/>
<entry name="log.console.level" value="fine"/>
</properties>
<handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
<parameters>
<!-- NODE-NAME. IF NOT SET IS AUTO GENERATED THE FIRST TIME THE SERVER RUN -->
<!-- <parameter name="nodeName" value="europe1" /> -->
<parameter name="enabled" value="true"/>
<parameter name="configuration.db.default" value="${ORIENTDB_HOME}/orientdb-config.json"/>
<parameter name="configuration.hazelcast" value="${ORIENTDB_HOME}/hazelcast.xml"/>
</parameters>
</handler>
</orient-server>
Here is hazelcast.xml:
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.0.xsd"
xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<group>
<name>orientdb</name>
<password>orientdb</password>
</group>
<network>
<port auto-increment="true">2434</port>
<join>
<multicast enabled="true">
<multicast-group>235.1.1.1</multicast-group>
<multicast-port>2434</multicast-port>
</multicast>
</join>
</network>
<executor-service>
<pool-size>16</pool-size>
</executor-service>
</hazelcast>
Here is orientdb-config.json:
{ "autoDeploy": true, "hotAlignment": false, "executionMode": "asynchronous", "readQuorum": 1, "writeQuorum": 2, "failureAvailableNodesLessQuorum": false, "readYourWrites": true, "servers": { "*": "master" }, "clusters": { "internal": { }, "index": { }, "*": { "servers": [ "<NEW_NODE>" ] } } }
Here is the output:
2016-02-07 16:02:17:867 INFO  OrientDB auto-config DISKCACHE=10,695MB (heap=3,641MB os=16,384MB disk=71,698MB) [orientechnologies]
2016-02-07 16:02:18:016 INFO  Loading configuration from input stream [OServerConfigurationLoaderXml]
2016-02-07 16:02:18:127 INFO  OrientDB Server v2.2.0-beta is starting up... [OServer]
2016-02-07 16:02:18:133 INFO  Databases directory: /etc/kwcn/databases [OServer]
2016-02-07 16:02:18:133 WARNI Network configuration was not found [OServer]
2016-02-07 16:02:18:133 WARNI Found ORIENTDB_ROOT_PASSWORD variable, using this value as root's password [OServer]
2016-02-07 16:02:18:523 INFO  OrientDB Server is active v2.2.0-beta. [OServer]
2016-02-07 16:02:18:523 INFO  OrientDB Server is shutting down... [OServer]
2016-02-07 16:02:18:523 INFO  Shutting down plugins: [OServerPluginManager]
DEBUG [ kwcn.TestOrientDb]: shutting down orientdb...
2016-02-07 16:02:18:524 INFO  Shutting down databases: [OServer]
2016-02-07 16:02:18:565 INFO  OrientDB Engine shutdown complete [Orient]
2016-02-07 16:02:18:566 INFO  OrientDB Server shutdown complete
I suggest you take a look at
https://github.com/orientechnologies/orientdb/blob/2.1.x/distributed/src/test/java/com/orientechnologies/orient/server/distributed/AbstractServerClusterTest.java
it's the base class of OrientDB distributed tests. Its class hierarchy seems quite complex, but in the end it just instantiates multiple servers and delegates to subclasses to test operations against them.
You can also check
https://github.com/orientechnologies/orientdb/blob/2.1.x/distributed/src/test/java/com/orientechnologies/orient/server/distributed/HATest.java
that is one of its subclasses. Actually, you could just copy or extend it and implement your own logic in the executeTest() method.
About your questions:
HOW DO I CREATE A DATABASE HERE?
As a normal plocal db:
new ODatabaseDocumentTx("plocal:...").create()
or
new OrientGraph("plocal:...")
//HOW DO I OPEN MY DATABASE TO USE THE API LIKE THIS:
same as above:
new ODatabaseDocumentTx("plocal:...").open("admin", "admin");
//SHOULD I PAUSE THE THREAD TO KEEP THE SERVER ACTIVE?
There is no need to pause the thread; the server creates some non-daemon threads, so it will remain active. Just make sure that someone invokes server.shutdown() at the end of the tests (even from another thread).
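To tie the steps from the question together, here is a rough, untested sketch based on the startup code in the question and the plocal API above. The database path, the "Person" class, the field values, and the admin/admin credentials are placeholders:

import java.util.List;

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;
import com.orientechnologies.orient.core.sql.query.OSQLSynchQuery;
import com.orientechnologies.orient.server.OServer;
import com.orientechnologies.orient.server.OServerMain;

public class EmbeddedOrientDbSketch {
    public static void main(String[] args) throws Exception {
        // 1. start the embedded server exactly as in the question
        OServer server = OServerMain.create();
        server.startup(EmbeddedOrientDbSketch.class.getResourceAsStream("/orientdb-config.xml"));
        server.activate();

        // 2./3. create the database on the first run, open it afterwards
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/databases/testdb"); // placeholder path
        if (!db.exists()) {
            db.create();
        } else {
            db.open("admin", "admin");
        }

        // 4. insert a sample record
        ODocument doc = new ODocument("Person");
        doc.field("name", "John");
        doc.save();

        // 5. fetch the sample record back
        List<ODocument> result = db.query(
                new OSQLSynchQuery<ODocument>("select from Person where name = 'John'"));
        System.out.println(result);

        db.close();
        server.shutdown();
    }
}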
We are using Oracle as the database for our Web application. The application runs well most of the time, but we get this "No more data to read from socket" error.
Caused by: java.sql.SQLRecoverableException: No more data to read from socket
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1142)
at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:1099)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:288)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:863)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3620)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:93)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:93)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1869)
at org.hibernate.loader.Loader.doQuery(Loader.java:718)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:270)
at org.hibernate.loader.Loader.doList(Loader.java:2449)
... 63 more
We use Spring and Hibernate, and I have the following for the datasource in my application context file.
<bean class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close" id="dataSource">
<property name="driverClassName" value="${database.driverClassName}" />
<property name="url" value="${database.url}" />
<property name="username" value="${database.username}" />
<property name="password" value="${database.password}" />
<property name="defaultAutoCommit" value="false" />
<property name="initialSize" value="10" />
<property name="maxActive" value="30" />
<property name="validationQuery" value="select 1 from dual" />
<property name="testOnBorrow" value="true" />
<property name="testOnReturn" value="true" />
<property name="poolPreparedStatements" value="true" />
<property name="removeAbandoned" value="true" />
<property name="logAbandoned" value="true" />
</bean>
I am not sure whether this is because of application errors, database errors or network errors.
We see the following on the oracle logs
Thu Oct 20 10:29:44 2011
Errors in file d:\oracle\diag\rdbms\ads\ads\trace\ads_ora_3836.trc (incident=31653):
ORA-03137: TTC protocol internal error : [12333] [4] [195] [3] [] [] [] []
Incident details in: d:\oracle\diag\rdbms\ads\ads\incident\incdir_31653\ads_ora_3836_i31653.trc
Thu Oct 20 10:29:45 2011
Trace dumping is performing id=[cdmp_20111020102945]
Thu Oct 20 10:29:49 2011
Sweep [inc][31653]: completed
Sweep [inc2][31653]: completed
Thu Oct 20 10:34:20 2011
Errors in file d:\oracle\diag\rdbms\ads\ads\trace\ads_ora_860.trc (incident=31645):
ORA-03137: TTC protocol internal error : [12333] [4] [195] [3] [] [] [] []
Incident details in: d:\oracle\diag\rdbms\ads\ads\incident\incdir_31645\ads_ora_860_i31645.trc
Thu Oct 20 10:34:21 2011
Oracle Version : 11.2.0.1.0
For errors like this you should involve Oracle Support. Unfortunately you do not mention which exact Oracle release you are using. The error can be related to optimizer bind peeking; depending on the Oracle version, different workarounds apply.
You have two ways to address this:
upgrade to 11.2
set the Oracle parameter _optim_peek_user_binds = false
Of course, underscore parameters should only be set if advised by Oracle Support.
We were facing the same problem; we resolved it by increasing the initialSize and maxActive sizes of the connection pool.
You can check this link
Maybe this helps someone.
Another case: if you are passing date parameters to a parameterized SQL statement, make sure you send java.sql.Timestamp and not java.util.Date. Otherwise you get:
java.sql.SQLRecoverableException: No more data to read from socket
Example statement:
In our Java code, we are using org.apache.commons.dbutils and we have the following (query is a QueryRunner):
final String sqlStatement = "select x from person where date_of_birth between ? and ?";
java.util.Date dtFrom = new Date(); //<-- this will fail
java.util.Date dtTo = new Date(); //<-- this will fail
Object[] params = new Object[]{ dtFrom , dtTo };
final List mapList = (List) query.query(conn, sqlStatement, new MapListHandler(),params);
The above was failing until we changed the date parameters to java.sql.Timestamp:
java.sql.Timestamp tFrom = new java.sql.Timestamp (dtFrom.getTime()); //<-- this is OK
java.sql.Timestamp tTo = new java.sql.Timestamp(dtTo.getTime()); //<-- this is OK
Object[] params = new Object[]{ tFrom , tTo };
final List mapList = (List) query.query(conn, sqlStatement, new MapListHandler(),params);
This is a very low-level exception, which is ORA-17410.
It may happen for several reasons:
A temporary problem with networking.
Wrong JDBC driver version.
Some issues with a special data structure (on database side).
Database bug.
In my case, it was a bug we hit on the database, which needs to be patched.
Try two things:
In $ORACLE_HOME/network/admin/tnsnames.ora on the Oracle server, change server=dedicated to server=shared to allow more than one connection at a time. Then restart Oracle.
If you are using Java, this might help you: in java/jdk1.6.0_31/jre/lib/security/java.security, change securerandom.source=file:/dev/urandom to securerandom.source=file:///dev/urandom
I had the same problem. I was able to solve the problem from application side, under the following scenario:
JDK8, spring framework 4.2.4.RELEASE, apache tomcat 7.0.63, Oracle Database 11g Enterprise Edition 11.2.0.4.0
I used the Apache tomcat-jdbc database connection pool:
You can take the following configuration parameters as a reference:
<Resource name="jdbc/exampleDB"
auth="Container"
type="javax.sql.DataSource"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
testWhileIdle="true"
testOnBorrow="true"
testOnReturn="false"
validationQuery="SELECT 1 FROM DUAL"
validationInterval="30000"
timeBetweenEvictionRunsMillis="30000"
maxActive="100"
minIdle="10"
maxWait="10000"
initialSize="10"
removeAbandonedTimeout="60"
removeAbandoned="true"
logAbandoned="true"
minEvictableIdleTimeMillis="30000"
jmxEnabled="true"
jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;
org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer"
username="your-username"
password="your-password"
driverClassName="oracle.jdbc.driver.OracleDriver"
url="jdbc:oracle:thin:@localhost:1521:xe"/>
This configuration was sufficient to fix the error and works fine for me in the scenario mentioned above.
For more details about setting up Apache tomcat-jdbc, see: https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
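If you prefer to build the pool in code rather than as a container Resource, here is the same configuration expressed with the tomcat-jdbc PoolProperties API; it mirrors the settings above, and the URL, username, and password are placeholders:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class TomcatJdbcPoolSketch {
    public static DataSource createDataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:oracle:thin:@localhost:1521:xe");    // placeholder
        p.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        p.setUsername("your-username");                     // placeholder
        p.setPassword("your-password");                     // placeholder
        p.setTestWhileIdle(true);
        p.setTestOnBorrow(true);
        p.setTestOnReturn(false);
        p.setValidationQuery("SELECT 1 FROM DUAL");
        p.setValidationInterval(30000);
        p.setTimeBetweenEvictionRunsMillis(30000);
        p.setMaxActive(100);
        p.setMinIdle(10);
        p.setMaxWait(10000);
        p.setInitialSize(10);
        p.setRemoveAbandonedTimeout(60);
        p.setRemoveAbandoned(true);
        p.setLogAbandoned(true);
        p.setMinEvictableIdleTimeMillis(30000);
        p.setJmxEnabled(true);
        p.setJdbcInterceptors("org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"
                + "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer");
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}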
Downgrading the JRE from 7 to 6 fixed this issue for me.
Yes, as @ggkmath said, sometimes a good old restart is exactly what you need. Like when "contact the author and have him rewrite the app, meanwhile wait" is not an option.
This happens when an application is not written (yet) in a way that it can handle restarts of the underlying database.
In our case, we had a query that loads multiple items with select * from x where something in (...).
The IN part was extremely long for a benchmark test (17 MB as query text). The query was valid, but the text was too long. Shortening the query solved the problem.
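The answer above doesn't show code, but one common way to shorten such a query is to split the id list into fixed-size chunks and run one parameterized query per chunk. This is only an illustrative sketch; the table name, column names, and chunk size are made up:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ChunkedInQuery {
    // runs "select ... where something in (...)" in chunks of at most 1000 ids
    static List<String> loadInChunks(Connection conn, List<Long> ids) throws SQLException {
        List<String> results = new ArrayList<>();
        int chunkSize = 1000;
        for (int from = 0; from < ids.size(); from += chunkSize) {
            List<Long> chunk = ids.subList(from, Math.min(from + chunkSize, ids.size()));
            String placeholders = String.join(",", Collections.nCopies(chunk.size(), "?"));
            String sql = "select name from x where something in (" + placeholders + ")";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < chunk.size(); i++) {
                    ps.setLong(i + 1, chunk.get(i));
                }
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        results.add(rs.getString(1));
                    }
                }
            }
        }
        return results;
    }
}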
I got this error, then restarted my GlassFish server, which held the connection pools between my client app and the database, and the error went away. So try restarting your application server, if applicable.
I seemed to fix my instance by removing the parameter placeholders from a parameterized query.
For some reason the placeholders had been working fine, then they stopped working and I got the error/bug.
As a workaround, I substituted literals for my placeholders and it started working.
Remove this
where
SOME_VAR = :1
Use this
where
SOME_VAR = 'Value'
It seemed to be an issue with a view. The JDBC query was using a view; I took a guess, recompiled the view, and the error was gone.
In my case the error occurred when running a simple query like this in SQL Developer:
select count(1) from table1 inner join table2 on table1.id = table2.id_table1;
I solved the error like this:
select
/*+OPT_PARAM('_index_join_enabled' 'false') */
count(1) from table1 inner join table2 on table1.id = table2.id_table1;
I am trying to implement a distributed system in Infinispan and I want to get the key associated with the local node. I was trying to implement this by utilizing the KeyAffinityService, but I get a NullPointerException. I was hoping someone could help me figure out my mistake.
Code snippet
// Create the affinity service to find the Key for the manager
KeyAffinityService keyAffinityService = KeyAffinityServiceFactory.newLocalKeyAffinityService(
cache,
(KeyGenerator)new RndKeyGenerator(),
Executors.newSingleThreadExecutor(),
100);
The cache is created as follows:
EmbeddedCacheManager manager = new DefaultCacheManager();
try{
manager = new DefaultCacheManager("democluster.xml");
}catch(IOException e){}
Cache<Integer, String> cache = manager.getCache();
Xml file
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
xmlns="urn:infinispan:config:5.1">
<global>
<transport clusterName="demoCluster"/>
<globalJmxStatistics enabled="true"/>
</global>
<namedCache name="clusteredCache">
<clustering mode="distributed">
<hash numOwners="1" >
<groups enabled="true"/>
</hash>
</clustering>
</namedCache>
</infinispan>
Error:
Exception in thread "main" java.lang.NullPointerException
at org.infinispan.affinity.KeyAffinityServiceFactory.newLocalKeyAffinityService(KeyAffinityServiceFactory.java:95)
at org.infinispan.affinity.KeyAffinityServiceFactory.newLocalKeyAffinityService(KeyAffinityServiceFactory.java:104)
at SimpleCache.start(SimpleCache.java:46)
at SimpleCache.main(SimpleCache.java:96)
I was wondering if anyone had encountered anything similar or might have any ideas regarding this problem.
You are using the wrong cache; you should be doing:
Cache<Integer, String> cache = manager.getCache("clusteredCache");
KeyAffinityService only works with a distributed cache, and the default cache is local-only (because you don't have a <default> element in your configuration).
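Putting the two points together, a rough sketch of the corrected setup could look like the following. It reuses democluster.xml and the clusteredCache name from the question; getKeyForAddress and manager.getAddress are the standard Infinispan key-affinity calls, but verify them against your 5.1 API:

import java.util.concurrent.Executors;

import org.infinispan.Cache;
import org.infinispan.affinity.KeyAffinityService;
import org.infinispan.affinity.KeyAffinityServiceFactory;
import org.infinispan.affinity.RndKeyGenerator;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.remoting.transport.Address;

public class KeyAffinitySketch {
    public static void main(String[] args) throws Exception {
        EmbeddedCacheManager manager = new DefaultCacheManager("democluster.xml");
        // use the distributed cache from the XML, not the local-only default cache
        Cache<Object, String> cache = manager.getCache("clusteredCache");

        KeyAffinityService<Object> keyAffinityService =
                KeyAffinityServiceFactory.newLocalKeyAffinityService(
                        cache,
                        new RndKeyGenerator(),
                        Executors.newSingleThreadExecutor(),
                        100);

        // obtain a key that maps to this node and store something under it
        Address local = manager.getAddress();
        Object localKey = keyAffinityService.getKeyForAddress(local);
        cache.put(localKey, "stored on the local node");

        keyAffinityService.stop();
        manager.stop();
    }
}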
I need to configure JBoss for MySQL clustering with databases on two different machines (i.e. different IPs).
An active-active configuration of the databases is desired, with:
Both db to be updated simultaneously
Loadbalancing
Failover handling - to switch to the other db in case of failure of the 1st db
How do I configure the mysql-ds.xml file to achieve all of this? Will it alone solve my problem, or do other configuration changes need to be made?
So far I have tried the following, but without much success.
code sample 1 -
<local-tx-datasource>
<jndi-name>/abc</jndi-name>
<connection-url>jdbc:mysql:loadbalance://ip1:portno1,ip2:portno2/dbname?loadBalanceBlacklistTimeout=5000</connection-url>
<driver-class>com.mysql.jdbc.Driver</driver-class>
<user-name>def</user-name>
<password>defpassword</password>
<exception-sorter-class-name>path to exception sorter class</exception-sorter-class-name>
</local-tx-datasource>
code sample 2 -
<local-tx-datasource>
<jndi-name>/abc</jndi-name>
<connection-url>jdbc:mysql:loadbalance://ip1:portno1,ip2:portno2/dbname?loadBalanceBlacklistTimeout=5000</connection-url>
<url-delimiter>|</url-delimiter>
<autoReconnect>true</autoReconnect>
<failOverReadOnly>false</failOverReadOnly>
<maxReconnects>0</maxReconnects>
<initialTimeout>15</initialTimeout>
<idle-timeout-minutes>0</idle-timeout-minutes>
<connection-property name="readOnly">false</connection-property>
<min-pool-size>5</min-pool-size>
<max-pool-size>20</max-pool-size>
<driver-class>com.mysql.jdbc.Driver</driver-class>
<user-name>def</user-name>
<password>defpassword</password>
<exception-sorter-class-name>path to exception sorter class</exception-sorter-class-name>
</local-tx-datasource>
What more is required?
Thanks
You can achieve this using only the mysql-ds.xml datasource file configuration. Here is what you need to do.
Note: you will have to use mysql-connector-java-5.1.13-bin.jar or above for the database driver. Earlier versions have a bug (http://bugs.mysql.com/bug.php?id=31053) that will cause many issues, so avoid them.
I am providing a snippet for mysql-ds.xml file.
<local-tx-datasource>
<jndi-name>DATA_SOURCE_NAME</jndi-name>
<connection-url>jdbc:mysql:loadbalance://IP1:port1,IP2:port2</connection-url>
<user-name>username</user-name>
<password>password</password>
<driver-class>org.gjt.mm.mysql.Driver</driver-class>
<check-valid-connection-sql>select count(*) from your_table_name</check-valid-connection-sql>
</local-tx-datasource>
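Once the datasource is deployed, your application just looks it up via JNDI and uses it like any other DataSource; the load balancing and failover between ip1 and ip2 are handled inside the MySQL Connector/J driver by the jdbc:mysql:loadbalance: URL. A small usage sketch (the java:/ prefix is the usual binding for a local-tx-datasource on older JBoss releases, but verify it for your version, and DATA_SOURCE_NAME is the placeholder from the snippet above):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookupSketch {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // assumption: local-tx datasources are bound under java:/ in JBoss
        DataSource ds = (DataSource) ctx.lookup("java:/DATA_SOURCE_NAME");
        try (Connection conn = ds.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select 1")) {
            while (rs.next()) {
                System.out.println("connected through the load-balanced URL: " + rs.getInt(1));
            }
        }
    }
}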