Opened LDAP connections from a connection pool after server restart (UnboundID) - java

I'm using the UnboundID LDAP SDK for Java to query Active Directory over LDAP.
Currently I create a new connection for every query, so I'd like to change my code to use a connection pool. I've found some examples of how to create a connection pool and how to get and return a connection (link). I plan to do this using Spring components on a Mule ESB server, but I'm worried about connections left open after my server or the AD server restarts, or after communication problems.
How can I:
prevent connections from being left open after my server restarts
reconnect or recreate connections after AD communication problems
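One way to address both concerns within the UnboundID SDK itself is to give the pool a health check that runs on checkout and in the background, and to let the pool retry operations that fail on stale connections. A minimal sketch, with hypothetical host, credentials, and DNs:

```java
import com.unboundid.ldap.sdk.GetEntryLDAPConnectionPoolHealthCheck;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionPool;
import com.unboundid.ldap.sdk.LDAPException;

public final class AdPoolExample {
    public static LDAPConnectionPool createPool() throws LDAPException {
        // Hypothetical AD host, service account, and base DN.
        LDAPConnection connection = new LDAPConnection("ad.example.com", 389,
                "CN=svc-ldap,OU=Service Accounts,DC=example,DC=com", "secret");
        LDAPConnectionPool pool = new LDAPConnectionPool(connection, 1, 10);

        // Verify pooled connections by fetching a known entry, both on
        // checkout and periodically in the background, so connections that
        // died during a restart are discarded instead of handed out.
        pool.setHealthCheck(new GetEntryLDAPConnectionPoolHealthCheck(
                "DC=example,DC=com", // entry to fetch
                5000L,               // max response time in ms
                false,               // invoke on create
                true,                // invoke on checkout
                false,               // invoke on release
                true,                // invoke for background checks
                true));              // invoke on exception
        pool.setHealthCheckIntervalMillis(30_000L);

        // Transparently retry an operation once on a newly created
        // connection if it fails because a pooled connection went stale.
        pool.setRetryFailedOperationsDueToInvalidConnections(true);
        return pool;
    }
}
```

With validate-on-checkout plus background checks, connections invalidated by a restart of either server are replaced rather than surfaced to your code as errors.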

Related

MySQL sleeping connection build up

I have a Java application which accesses a MySQL database. When testing locally, all the connections are closed properly and don't clog up the allowed number of active connections, but when I moved this over to a server the connections stopped closing, and they build up until there are too many connections.
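A common cause of this symptom is a connection that is opened but never closed on an exception path, which local testing may never hit. A minimal sketch (hypothetical URL and query) using try-with-resources, which closes the ResultSet, Statement, and Connection on every path, including exceptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class QueryExample {
    static int countUsers() throws SQLException {
        String url = "jdbc:mysql://db.example.com:3306/app"; // hypothetical
        // Resources declared here are closed automatically in reverse
        // order, even when an exception is thrown, so nothing leaks.
        try (Connection con = DriverManager.getConnection(url, "user", "secret");
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM users");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```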

Disconnecting a single client disconnects many other clients

I'm testing a Diffusion solution in our pre-production environment. The solution gives anonymous clients 10 minutes of free access before they have to authenticate or be disconnected. This works fine in development and early testing, but in pre-production, when one client is disconnected we see many simultaneous disconnections of other clients without cause. With logging set to FINEST, the log file says:
2016-03-21 11:57:36.557|DEBUG|Diffusion: InboundThreadPool Thread_4||NIOBufferedChannel#52e2a219[connected local=/10.0.4.1:8080 remote=/10.0.1.99:58673] : Closed(UNEXPECTED_ERROR) Unexpected error EOF|com.pushtechnology.diffusion.io.message.MessageChannelException
2016-03-21 11:57:36.558|DEBUG|Diffusion: InboundThreadPool Thread_4||Java Client 50328FF242799CD4-000000000000015A AWAITING_RECONNECTION#10.0.1.99: State changed from CONNECTED to AWAITING_RECONNECTION.|com.pushtechnology.diffusion.clients.impl.ClientImpl
2016-03-21 11:57:36.558|DEBUG|Diffusion: InboundThreadPool Thread_4||Java Client 50328FF242799CD4-000000000000015A AWAITING_RECONNECTION#10.0.1.99: CONNECTION_LOST keeping alive for 60000 ms.|com.pushtechnology.diffusion.clients.impl.ClientImpl
The affected clients are always browsers, not smartphones. Often older browsers such as IE9.
I'm guessing that your pre-production environment has a load balancer which is set to use connection pooling. Versions of IE prior to v10 do not support WebSockets, so those clients fall back to XHR long polling. Your smartphone clients will be using WebSockets, so they are unaffected.
The manual has this to say in the section "Considerations when using load balancers":
Do not use connection pooling for connections between the load balancer and the Diffusion server. If multiple client connections are multiplexed through a single server-side connection, this can cause client connections to be prematurely closed.
In Diffusion, a client is associated with a single TCP/HTTP connection for the lifetime of that connection. If a Diffusion server closes a client, the connection is also closed. Diffusion makes no distinction between a single client connection and a multiplexed connection, so when a client sharing a multiplexed connection closes, the connection between the load balancer and Diffusion is closed, and subsequently all of the client-side connections multiplexed through that server-side connection are closed.
To illustrate the problem: when a Diffusion server has a direct connection to each member of its audience of Alice, Bob, and Charlie, closing Bob's connection is straightforward.
When a connection-pooling middlebox (a proxy or load balancer) enters the mix, closing Bob's connection results in disconnection for Alice and Charlie as well.
So, whereas connection pooling is a good idea for regular HTTP servers, it is problematic for Diffusion servers serving an audience of XHR polling clients when the server needs to disconnect individual clients.

Datasource Microsoft JDBC Driver for SQL Server (AlwaysOn Availability Groups)

I have a question about connecting from a Java application, using the Microsoft JDBC Driver 4.0, to a SQL Server 2014 instance with AlwaysOn Availability Groups set up for high availability.
With this setup, we connect to an availability group listener (specified in the DB connection string instead of any particular instance), so that DB failover is handled gracefully by the listener, which connects to the next available instance behind the scenes if the current primary in the AG cluster goes down.
My questions are:
In the data source configured on the Java EE application server side (we use WebSphere), what happens to the connections already pooled by the data source?
When a database goes down, the AG listener reconnects to the next available DB on the database side. Will the listener, through the JDBC driver, also send an event to the data source on the app server so that already-pooled connections are discarded and new ones created, so that transactions on the application side won't fail (beyond a short window until failover completes and new connections are created)? Or does the Java application only find out after requesting a connection from the data source?
WebSphere Application Server is able to cope with bad connections and removes them from the pool. Exactly when this happens depends on some configurable options and on how fully the Microsoft JDBC driver takes advantage of the javax.sql.ConnectionEventListener API to send notifications to the application server (there is a sketch of this listener API after the list below).
In the ideal case, where a JDBC driver sends the connectionErrorOccurred event immediately for all connections, WebSphere Application Server responds by removing all of these connections from the pool and by marking any connection that is currently in use as bad, so that it does not get returned to the pool once the application closes the handle. Lacking this, WebSphere Application Server will discover the first bad connection upon next use by the application. It is discovered either by a connectionErrorOccurred event sent by the JDBC driver at that time or, lacking that, by inspecting the SQLState/error code of an exception for known indicators of bad connections. WebSphere Application Server then purges bad connections from the pool according to the configured Purge Policy. There are 3 options:
Purge Policy of Entire Pool - all connections are removed from the pool, and in-use connections are marked as bad so that they are not pooled.
Purge Policy of Failing Connection Only - only the specific connection on which the error actually occurred is removed from the pool or marked as bad and not returned to the pool.
Purge Policy of Validate All Connections - all connections are tested for validity (the Connection.isValid API), and connections found to be bad are removed from the pool or marked as bad and not returned to the pool. Connections found to be valid remain in the pool and continue to be used.
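To make the notification mechanism concrete, here is a minimal sketch of the javax.sql.ConnectionEventListener API mentioned above, registered the way an application server's pool manager would register it; the ConnectionPoolDataSource is whatever the vendor provides and is left abstract here:

```java
import java.sql.SQLException;
import javax.sql.ConnectionEvent;
import javax.sql.ConnectionEventListener;
import javax.sql.ConnectionPoolDataSource;
import javax.sql.PooledConnection;

public final class ListenerExample {
    static PooledConnection register(ConnectionPoolDataSource cpds) throws SQLException {
        PooledConnection pc = cpds.getPooledConnection();
        pc.addConnectionEventListener(new ConnectionEventListener() {
            @Override
            public void connectionClosed(ConnectionEvent event) {
                // The application closed its handle; the physical
                // connection can go back into the pool for reuse.
            }

            @Override
            public void connectionErrorOccurred(ConnectionEvent event) {
                // The driver reports a fatal error; the pool should discard
                // this physical connection (and, under the "Entire Pool"
                // purge policy, the other pooled connections as well).
                System.err.println("Fatal error: " + event.getSQLException());
            }
        });
        return pc;
    }
}
```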
I'm not sure from your description whether you are using WebSphere Application Server traditional or Liberty. If traditional, there is an additional option to pretest connections as they are handed out of the pool, but be aware that turning this on can have performance implications.
That said, regardless of any of the above, your application always needs to be able to handle errors caused by bad connections (even if the connection pool is cleared, connections can go bad while in use) and respond by requesting a new connection and retrying the operation in a new transaction, as sketched below.
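A minimal sketch of that retry pattern, assuming a container-managed DataSource (looked up via JNDI in a real application) and a hypothetical SQL statement:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public final class RetryExample {
    static void updateWithRetry(DataSource dataSource, String sql) throws SQLException {
        SQLException last = null;
        for (int attempt = 0; attempt < 2; attempt++) {
            try (Connection con = dataSource.getConnection()) {
                con.setAutoCommit(false);
                try (Statement stmt = con.createStatement()) {
                    stmt.executeUpdate(sql);
                    con.commit();
                    return; // success, no retry needed
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            } catch (SQLException e) {
                // The first failure may just mean the pooled connection was
                // stale; loop once more to get a fresh connection and a new
                // transaction before giving up.
                last = e;
            }
        }
        throw last;
    }
}
```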
Version 4 of that SQL Server JDBC driver is old and doesn't know anything about the AlwaysOn feature.
Any data source connection pool can be configured to check the status of a connection before doling it out to the client. If the connection cannot be used, the pool will create a new one. That's true of all vendors and versions. I believe that's the best you can do.
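For illustration, here is what that validate-on-borrow configuration looks like in HikariCP (a standalone pool rather than a WebSphere data source, and with a hypothetical AG listener URL):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public final class PoolConfigExample {
    static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        // Point the URL at the AG listener, not a specific instance.
        config.setJdbcUrl("jdbc:sqlserver://ag-listener:1433;databaseName=mydb");
        config.setUsername("app");
        config.setPassword("secret");
        // With a JDBC4 driver, HikariCP validates connections via
        // Connection.isValid() before handing them out; for older
        // drivers, configure an explicit test query instead:
        // config.setConnectionTestQuery("SELECT 1");
        return new HikariDataSource(config);
    }
}
```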

Will Oracle database connection pool recover from the database server restart?

If the server running the Oracle database reboots, this presumably invalidates all previous JDBC connections in the connection pool (which runs as part of an application on another server).
Some connection pools can handle this, others require manual re-initialization; about Oracle's I do not know. Would an Oracle connection pool (versions 11g and up) be able to detect such a situation and recover, or do I need to check for this myself and reinitialize?
Ideally, I would also like to know what happens to a connection that is in use, having been borrowed from the pool. Would such a connection proxy be able to use the rebooted server?
I have read http://docs.oracle.com/cd/B10501_01/java.920/a96654/connpoca.htm without particular success.
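For what it's worth, Oracle's Universal Connection Pool (UCP) can be configured to validate connections as they are borrowed, which covers the restart case for connections sitting idle in the pool; a connection already borrowed and in use at reboot time will still fail and has to be replaced by the application. A sketch with hypothetical connection details:

```java
import java.sql.SQLException;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public final class UcpExample {
    static PoolDataSource createPool() throws SQLException {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//db.example.com:1521/ORCL"); // hypothetical
        pds.setUser("scott");
        pds.setPassword("tiger");
        // Test each connection as it is handed out; connections killed by
        // a database restart are discarded and replaced transparently.
        pds.setValidateConnectionOnBorrow(true);
        return pds;
    }
}
```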

Java app and hosted MySQL

I have a Java application and I have to connect to a MySQL DB hosted at aruba.it. When I try to connect, aruba.it refuses the connection. How can I solve this?
To start, I assume that you're trying to run this Java application locally, or at least on a different machine than where the MySQL DB runs, and that you got a SQLException: Connection refused.
To fix this particular problem, all routers and firewalls in the complete network path between the client (where the Java application runs) and the server (where the MySQL DB runs) need to be configured to allow/forward the port that the DB uses, which is 3306 by default. If this port is blocked, you cannot reach the DB from outside.
Another solution is to rework the Java application into a web application, upload it to the host, and run it over HTTP. You'd normally use JSP/Servlet for this.
Apart from network, router, and firewall issues, the reason can be that remote access to the MySQL database server is disabled by default for security reasons. Usually the DB is hosted on the same server or on a trusted server. If you run the Java application from your desktop, you need to configure MySQL to accept these connections. See this manual for details on how to do it.
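A small diagnostic sketch (hypothetical host and credentials): the message of the SQLException usually tells you which of the two causes above you are hitting:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class ConnectTest {
    public static void main(String[] args) {
        String url = "jdbc:mysql://mysql.aruba.it:3306/mydb"; // hypothetical
        try (Connection con = DriverManager.getConnection(url, "user", "secret")) {
            System.out.println("Connected: " + con.getMetaData().getDatabaseProductVersion());
        } catch (SQLException e) {
            // "Communications link failure" / "Connection refused" usually
            // means a firewall or MySQL's bind-address blocks port 3306;
            // "Access denied for user ..." means the MySQL account does not
            // permit connections from your host.
            System.err.println(e.getMessage());
        }
    }
}
```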
