How to get a database connection through a WebSphere ORB call? - java

I've got a WebSphere 6.1 cluster environment composed of two nodes with two app servers each. Call them NodeA, containing Server1 (2809) and Server2 (2810), and NodeB, containing Server3 (2811) and Server4 (2812). I also created a cluster-scope datasource with the JNDI name local_db.
Right now I want to get a database connection in a Java client through a WAS ORB call against the environment above. The relevant part of the Java code looks like this:
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
env.put(Context.PROVIDER_URL, "iiop://localhost:2809");
Context initialContext = new InitialContext(env);
javax.sql.DataSource ds = (DataSource) initialContext.lookup("local_db");
Connection cn = ds.getConnection();
If this client code runs, will the connection request be load-balanced across the four connection pools of all four application servers?
Moreover, if my Java client gets a database connection successfully and then runs a big SQL query with a large result set, which WAS application server pays the memory cost: only Server1, because of port 2809 used above, or the target server that returned the database connection?
BTW, if I put two server members in the PROVIDER_URL, such as iiop://localhost:2809, localhost:2810, does that mean load balancing or failover?
Please explain, and correct me if I've misunderstood anything!
Thanks

Let me start with the easy ones and proceed to the rest.
Having two provider URLs implies failover. If the client can't connect to the first naming server, it tries the second, and continues down the list. Note that the failover applies to the connection to the naming server, not to the resource itself.
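As a hedged illustration of such a bootstrap failover list (reusing the hosts and ports from the question; WebSphere also accepts this in corbaloc form, though check the syntax against your version's documentation):

Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
// Two naming endpoints: the second is tried only if the first naming
// server is unreachable (failover, not load balancing)
env.put(Context.PROVIDER_URL, "corbaloc::localhost:2809,:localhost:2810");
Context ctx = new InitialContext(env);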
The lookup is done on the server you connect to. The name local_db resolves to the datasource (and its connection pool) configured on that server. You will only ever work with Server1 (since you are connecting to its naming service) and will get connections from the datasource hosted on that server.
You will never get a connection from the other servers. In other words, there is no load balancing (one request using a connection from Server1, another using a connection from Server2, and so on), which I believe is what you mean by load balancing in your question.
HTH

A DataSource is neither remotable nor serializable. Therefore, if you look up local_db from the server, the lookup returns a javax.naming.Reference that the client uses to create a new DataSource instance with the same configuration as the one on the server. That also means that once the lookup has completed, no JDBC method ever sends a request to the WebSphere server.

Related

Java application connecting to MariaDB servers using Pacemaker

I'm trying to test MariaDB, Galera, and Corosync/Pacemaker to understand clustering with high availability, using CentOS 7 servers. The cluster size I am using for testing is three servers, mostly to avoid quorum issues. My tests and application are written in Java.
I have the clustering part down; it works in an active-active configuration and runs just fine. I also have HA set up using Pacemaker and Corosync. I've run many tests failing the slaves or bringing them up mid-run, and I've written to all three at run time regardless of whether it was the master connection or one of the child connections. But when I test the master going down at run time (to simulate a power outage, server crash, or whatever in the data center), the application immediately stops running: I get a java.net.SocketException and the application closes, with the other two connections being shut down successfully. I've used both the kill and stop commands in the terminal to test it (just on the off chance it would work).
JDBC URL String
Below is the part of the code that connects the application to the cluster. It connects correctly and works until I take the first master down; the other two going down does not affect it.
public void connections() {
    try {
        bigConnec = DriverManager.getConnection(
                "jdbc:mariadb:sequential:failover:loadbalance://"
                + "10.32.18.90,10.32.18.91,10.32.18.92/" + DB
                + "?autoReconnect=true&failOverReadOnly=false"
                + "&retriesAllDown=120",
                "root", "PASS");
        bigConnec.setAutoCommit(false);
    } catch (SQLException e) {
        System.err.println("Unable to connect to any one of the three servers!\n" + e);
        System.exit(1);
    }
    ...
}
There are three other connections made to each individual server so I can pull information from them more easily; that is what the ellipsis indicates. The servers exchange which one is the "primary" node, but the application will not connect to the next node in the list.
I feel like the issue is in the way I have my URL set up, because everything works outside of the cluster. Also, nothing happens during testing when I shut down the child nodes; at most I get a warning that the connection to either or both of them was lost. Is there a way to configure the URL to allow automatic failover to the next available node in the string, or do I have to go about it some other way: individual connection URLs and objects (or an array of Connection objects), black magic and pixie dust that only SysAdmins know, or some other way I have yet to try?
What I Have Tried
How to make MaxScale High Available with Corosync/Pacemaker (MariaDB Article)
Failover and High availability with MariaDB Connector/J (MariaDB Documentation)
What is the right MariaDB Galera jdbc URL properties for loadbalance (Stack Overflow)
HA Proxy Configuration with MariaDB Galera cluster (Stack Overflow)
Configuring Server Failover (MySQL Documentation)
Advanced Load-balancing and Failover Configuration (MySQL Documentation)
TL;DR
Problem: the Java application does not automatically fail over despite the failover and sequential flags in the JDBC URL (see JDBC URL String above). Everything works with Corosync and Pacemaker, but I cannot get the Java application to move its primary connection to the next available node when the current one goes down.
Question: Is the issue in the URL? And as a follow-up: if so, would it be better to use three separate connections and take the first valid one, or is there something I can do to let the application automatically roll over to the next available connection in the current URL?
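For reference, a minimal sketch of the "three separate connections, first valid one" fallback mentioned above (node IPs and credentials are taken from the question; this is an illustration of the idea, not a recommendation):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class FirstReachableNode {
    static final String[] HOSTS = {"10.32.18.90", "10.32.18.91", "10.32.18.92"};

    static Connection firstAvailable(String db, String user, String pass) throws SQLException {
        SQLException last = null;
        for (String host : HOSTS) {
            try {
                // Plain single-host URL; failover is handled by this loop
                return DriverManager.getConnection(
                        "jdbc:mariadb://" + host + "/" + db, user, pass);
            } catch (SQLException e) {
                last = e; // node unreachable, try the next one
            }
        }
        throw last; // all three nodes are down
    }
}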
Software/Equipment
MariaDB 10.1.24
corosync 2.4.0
Pacemaker 1.1.15
CentOS 7
Java 8 / Eclipse Neon.3 (4.6.3)
MariaDB Connector/J 2.0.1
If there is any more information you need, please do tell me in the comments and I'll update this as soon as I can!

Datasource Microsoft JDBC Driver for SQL Server (AlwaysOn Availability Groups)

I have a question about connecting from a Java application, using Microsoft JDBC Driver 4.0, to a SQL Server 2014 instance with AlwaysOn Availability Groups set up for high availability.
With this setup, we connect to an availability group listener (specified in the DB connection string instead of any particular instance), so that DB failover etc. is handled gracefully by the listener, which tries to connect to the next available instance behind the scenes if the current primary in the AG cluster goes down.
The question(s) I have:
In the data source configured on the Java EE application server side (we use WebSphere), what happens to the connections already pooled by the data source?
When a database goes down, the AG listener will reconnect to the next available DB on the database side. But will the listener, through the JDBC driver, also send an event to the data source on the app server so that the already-pooled connections are discarded and new ones created, so that transactions on the application side won't fail (beyond possibly failing briefly until failover completes and new connections are created)? Or does the Java application only find out after requesting a connection from the data source?
WebSphere Application Server is able to cope with bad connections and removes them from the pool. Exactly when this happens depends on some configurable options and on how fully the Microsoft JDBC driver takes advantage of the javax.sql.ConnectionEventListener API to send notifications to the application server. In the ideal case, where a JDBC driver sends the connectionErrorOccurred event immediately for all connections, WebSphere Application Server responds by removing all of these connections from the pool and by marking any connection that is currently in use as bad, so that it does not get returned to the pool once the application closes the handle. Lacking this, WebSphere Application Server discovers the first bad connection upon next use by the application, either via a connectionErrorOccurred event sent by the JDBC driver at that time or, lacking that, by inspecting the SQLState/error code of an exception for known indicators of bad connections. WebSphere Application Server then purges bad connections from the pool according to the configured Purge Policy. There are 3 options:
Purge Policy of Entire Pool - all connections are removed from the pool, and in-use connections are marked as bad so that they are not pooled.
Purge Policy of Failing Connection Only - only the specific connection on which the error actually occurred is removed from the pool, or marked as bad and not returned to the pool.
Purge Policy of Validate All Connections - all connections are tested for validity (the Connection.isValid API), and connections found to be bad are removed from the pool, or marked as bad and not returned to the pool. Connections found to be valid remain in the pool and continue to be used.
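For illustration, here is a hedged sketch of the ConnectionEventListener notification path described above. A real application server does this wiring internally; the method below only shows how a pool learns that a physical connection has gone bad:

import java.sql.SQLException;
import javax.sql.ConnectionEvent;
import javax.sql.ConnectionEventListener;
import javax.sql.ConnectionPoolDataSource;
import javax.sql.PooledConnection;

class PoolListenerSketch {
    static void watch(ConnectionPoolDataSource cpds) throws SQLException {
        PooledConnection pc = cpds.getPooledConnection();
        pc.addConnectionEventListener(new ConnectionEventListener() {
            @Override
            public void connectionClosed(ConnectionEvent event) {
                // application closed its handle; the physical connection
                // can go back to the pool
            }
            @Override
            public void connectionErrorOccurred(ConnectionEvent event) {
                // fatal error reported by the driver; with Purge Policy
                // "Entire Pool" the server would now discard every
                // pooled connection
                System.err.println("Bad connection: " + event.getSQLException());
            }
        });
    }
}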
I'm not sure from your description whether you are using WebSphere Application Server traditional or Liberty. If traditional, there is an additional option to pre-test connections as they are handed out of the pool, but be aware that turning this on can have performance implications.
That said, regardless of any of the above, your application always needs to be able to handle errors caused by bad connections (even if the connection pool is cleared, a connection can go bad while in use) and respond by requesting a new connection and retrying the operation in a new transaction.
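A minimal sketch of that retry pattern (the data source and the unit of work are placeholders):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

class RetrySketch {
    static void runWithRetry(DataSource ds) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection conn = ds.getConnection()) {
                conn.setAutoCommit(false);
                // ... the actual unit of work goes here ...
                conn.commit();
                return;
            } catch (SQLException e) {
                if (attempt >= 2) {
                    throw e; // second failure: give up rather than loop forever
                }
                // the first failure may just be a stale pooled connection;
                // the next getConnection() should hand back a fresh one
            }
        }
    }
}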
Version 4 of the Microsoft SQL Server JDBC driver is old and doesn't know anything about the AlwaysOn feature.
Any data source connection pool can be configured to check the status of a connection before handing it to the client; if the connection cannot be used, the pool creates a new one. That's true of all vendors and versions, and I believe it's the best you can do.

Connection Pooling and Oracle Session

Before I start with my question, I'd like to clarify that I am a DB developer with a limited understanding of the Java/J2EE side.
Ours is a web application (n-tier, with an app server/web server), and we use connection pooling to manage connections to the database. My limited understanding of connection pooling is: the app server manages connections for applications, lets an application take a connection from the pool, and returns the connection to the pool once the application is done with it.
Let's say I follow these steps:
1. I log in to the application.
2. The application requests a connection from the connection pool to authenticate me.
3. Once authentication is done, the app server returns the connection to the pool.
4. I browse to a page where I have to do some CRUD operation, say updating some data on the page.
5. The app server again requests a connection from the pool.
6. The application processes the data using that connection.
Here is my problem statement:
Let's say I have to capture audit information using triggers (on the tables undergoing the update), and one of the attributes I need to capture is the username of the logged-in user.
I set a global package variable at login (steps 1-3) that stores the logged-in username, and my trigger reads that package variable for the username. Since the connection is not going to remain the same (the pool manages connections), will my global package variable still be available when the trigger executes?
And what happens to the variable (it obviously depends on the answer to the first question) when multiple users are logged in and accessing the application?
I tried to look around but have not been able to get clear answer to my doubts.
Pardon me, if my question is not clear. Let me know and I can edit to provide more information.
In short, a package variable is not reliable here: package state lives in the database session, and with a connection pool the trigger may fire in a different session than the one that set the variable, while several application users share the same sessions. Instead, you can use the CLIENT_IDENTIFIER attribute to preserve the actual user who logged in to the application.
Please find below more information from Oracle documentation:
Support for Application User Models by Using Client Identifiers
Many applications use session pooling to set up a number of sessions to be reused by multiple application users. Users authenticate themselves to a middle-tier application, which uses a single identity to log in to the database and maintains all the user connections. In this model, application users are users who are authenticated to the middle tier of an application, but who are not known to the database. Oracle Database supports use of a CLIENT_IDENTIFIER attribute that acts like an application user proxy for these types of applications.
In this model, the middle tier passes a client identifier to the database upon the session establishment. The client identifier could actually be anything that represents a client connecting to the middle tier, for example, a cookie or an IP address. The client identifier, representing the application user, is available in user session information and can also be accessed with an application context (by using the USERENV naming context). In this way, applications can set up and reuse sessions, while still being able to keep track of the application user in the session. Applications can reset the client identifier and thus reuse the session for a different user, enabling high performance.
You can set CLIENT_IDENTIFIER from Java with a snippet like the following (here generalized to take the username as a bind parameter rather than a hard-coded literal):
public Connection prepare(Connection conn, String userName) throws SQLException {
    // Tag the pooled session with the real application user; triggers can
    // then read it via SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER')
    try (CallableStatement cs = conn.prepareCall("{ call DBMS_SESSION.SET_IDENTIFIER(?) }")) {
        cs.setString(1, userName);
        cs.execute();
    }
    return conn;
}
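Hypothetical usage, assuming a connection is borrowed from the pool per request (dataSource and loggedInUserName are placeholders):
// After borrowing a connection for the current request:
Connection conn = prepare(dataSource.getConnection(), loggedInUserName);
// ... perform the CRUD work; audit triggers can read the user via
// SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER') ...
// Before the connection goes back to the pool, the identifier can be
// cleared with DBMS_SESSION.CLEAR_IDENTIFIER so the next borrower
// does not inherit it.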

Simultaneous access to Database in Web application

How can I know the average or exact number of users accessing the database simultaneously in my Java EE web application? I would like to check whether the connection pool settings I configured in the GlassFish application server are suitable for my web application; I need to set the maximum number of connections in the pool correctly. Recently my application ran out of connections and threw exceptions when client requests for the DB expired.
There are multiple ways.
The first and easiest is to ask your DBAs - they can tell you exactly how many connections are active at a given time from your web server, or for the connection pool's user ID.
If you want some excitement, you can use the JMX management extensions provided by GlassFish. Listing 6 on this page gives an example of how to write a JMX-based snippet to monitor a connection pool.
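In the absence of that listing, here is a hedged, generic sketch of attaching over JMX and listing pool-related MBeans (GlassFish's default JMX port is 8686; the exact connection-pool object names vary across versions, so this just filters by name):

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

class PoolMBeanLister {
    public static void main(String[] args) throws Exception {
        // Host and port are assumptions; adjust for your installation
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // Print every MBean whose name mentions "pool"
            Set<ObjectName> names = mbsc.queryNames(new ObjectName("*:*"), null);
            for (ObjectName name : names) {
                if (name.toString().toLowerCase().contains("pool")) {
                    System.out.println(name);
                }
            }
        }
    }
}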
Finally, you must make sure that all connections are closed explicitly by a connection.close() call in your application; in some cases you need to close the ResultSet as well. For example:
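A minimal sketch of that close-everything discipline using try-with-resources (the table and query are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

class CloseSketch {
    static int countRows(DataSource ds) throws SQLException {
        // try-with-resources closes the ResultSet, Statement, and
        // Connection in reverse order, even if an exception is thrown
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM some_table");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}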
The next step is throttling your HTTP thread pool to avoid too much concurrent access if your DB connections take a long time to close.

How to save time when opening JDBC MYSQL connections

I have a very short Java application that just opens a connection to a remote MySQL database, reads some data, prints it, and exits. The most time-consuming part of the application is the database connection.
Currently I have only a single thread, and my only concern is to save the time of opening the connection.
I thought of several ways to make it faster, but it turned out they do not help:
Connection pooling - doesn't help, because the pool lives only during a single run of the application. When the application is terminated the pool is gone, and when I re-run the application I have to re-open all the connections in the pool.
mysql-proxy - connects only to the local server: mysql-proxy for a remote MySQL server
TCP/IP server - I thought of keeping a local TCP/IP server that holds a persistent open connection and hands it to a TCP/IP client on request. However, Connection objects cannot be serialized, so I have no way to pass the Connection object from the server to the client.
Any other option?
Generally, connecting to a DB is one of the most time-consuming operations. If the application is to be started and stopped, there is little you can do.
Using connection pooling in a web server, and having your app talk to that web server using JSON, might be an option.
You said you have a very short application, so your third option might work if you put the database logic into your "option 3" TCP/IP server and just forward the results to the connecting client. This is a typical application-server pattern.
Another thing you should consider is the network lookup overhead (https://stackoverflow.com/q/3641155/1055715), which Marc B mentioned in his comment.
It turns out the best solution is to use mysql-proxy with a script that handles connection pooling (a combination of my first two options). I found one such script here:
http://forge.mysql.com/tools/tool.php?id=151
It was probably written for an older version of mysql-proxy, so I had to fix it (if anyone needs the fixed version - write me).
It works like a charm: I run the exact same application as before, and the only change is the connection string - instead of connecting to "qa-srv:3308" (the remote server), I connect to "127.0.0.1:4040" (the proxy server).
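In code, the change amounts to something like this (database name, user, and password are placeholders):

// Before: direct connection to the remote MySQL server
// Connection cn = DriverManager.getConnection(
//         "jdbc:mysql://qa-srv:3308/mydb", user, pass);

// After: connect through the local mysql-proxy, which keeps pooled
// connections to qa-srv open between application runs
Connection cn = DriverManager.getConnection(
        "jdbc:mysql://127.0.0.1:4040/mydb", user, pass);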
