I have two SQL database connections for which health checks are automatically added by Dropwizard. But when the application loses connection to one of them, the /healthcheck endpoint hangs indefinitely, whereas I would want it to time out after a few seconds.
I've already set the maxWaitForConnection setting, and I've also experimented with the various checkConnectionOn.. settings, but nothing helped.
UPDATE: The health check correctly fails if the server actively refuses the connection, but it hangs indefinitely otherwise, for instance on a network issue.
Is it possible to have SQL health checks time out after a specified interval, whatever the underlying problem is?
If the JDBC timeout settings aren't working, you could always just wrap the DB check in a Future and limit how long it can run:
protected Result check() throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<Void> future = executor.submit(new DbConnectionChecker());
    try {
        future.get(SECONDS_THRESHOLD, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
        future.cancel(true); // ask the stuck checker thread to stop
        return Result.unhealthy("DB timed out");
    } finally {
        executor.shutdownNow(); // release the executor's thread on every path
    }
    return Result.healthy();
}
Where DbConnectionChecker is something like:
static class DbConnectionChecker implements Callable<Void> {
    public Void call() throws Exception {
        // Check your DB connection as you normally would
        return null;
    }
}
Would be nice to figure out why the JDBC connection settings aren't working as expected, though.
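For a self-contained illustration of the wrap-in-a-Future idea, here is a runnable sketch. The helper method and its names are made up for demonstration; the hung connection is simulated with a long sleep:

```java
import java.util.concurrent.*;

public class TimeoutCheckDemo {
    // Hypothetical helper mirroring the health check above: runs a check
    // and converts a timeout into an "unhealthy" result.
    static String checkWithTimeout(Callable<Void> check, long seconds) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<Void> future = executor.submit(check);
        try {
            future.get(seconds, TimeUnit.SECONDS);
            return "healthy";
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the stuck check
            return "unhealthy: timed out";
        } catch (Exception e) {
            return "unhealthy: " + e.getMessage();
        } finally {
            executor.shutdownNow(); // always release the executor's thread
        }
    }

    public static void main(String[] args) {
        // A check that hangs (simulated by a long sleep) vs. one that returns promptly
        System.out.println(checkWithTimeout(() -> { Thread.sleep(10_000); return null; }, 1));
        System.out.println(checkWithTimeout(() -> null, 1));
    }
}
```

Note the finally block: without it, a timed-out check would leak the single-thread executor.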
I tried to set a 1-second time limit for my SQL query in Java, using the approach from "How to timeout a thread":
public class App {
public static void main(String[] args) throws Exception {
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> future = executor.submit(new Task());
try {
System.out.println("Started..");
System.out.println(future.get(1, TimeUnit.SECONDS));
System.out.println("Finished!");
} catch (TimeoutException e) {
future.cancel(true);
System.out.println("Terminated!");
}
executor.shutdownNow();
}
}
class Task implements Callable<String> {
    @Override
    public String call() throws Exception {
        // some code to do the query via SQL Server JDBC, assuming it takes 10 seconds
        ResultSet result = statement.executeQuery();
        // some code to print the query result
        return "Done";
    }
}
However, I found that though it prints 'Terminated!' after 1 second, the program keeps running and prints the query result after 10 seconds. Why doesn't this work, and how can I fix it?
shutdownNow doesn't actually stop a thread; it merely sends a signal (an interrupt) that the thread can act upon. Stopping a thread in Java is tricky, because while you can just kill it (with Thread.stop), you really shouldn't: you have no idea what state the thread is in and what it will leave behind.
You can find more information in the documentation.
Calling cancel on a future does not guarantee that the job will be cancelled. It depends on the method checking periodically for interrupts, and then aborting if an interrupt is detected. Statement.execute() does not do that.
In your case, given you are executing a SQL statement, there is a method in the Statement class (setQueryTimeout) which achieves what you appear to be after without over-engineering timeouts by other means.
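A minimal sketch of that approach; it assumes the caller supplies an open Connection and a query, so it needs a live database and is illustration only:

```java
import java.sql.*;

public class QueryTimeoutSketch {
    // Runs a query with a per-statement timeout; the driver cancels the
    // statement and throws SQLTimeoutException once the limit is exceeded.
    static ResultSet queryWithTimeout(Connection conn, String sql, int seconds) throws SQLException {
        Statement stmt = conn.createStatement();
        stmt.setQueryTimeout(seconds); // note: seconds, not milliseconds
        return stmt.executeQuery(sql);
    }
}
```

Because the cancellation is done by the JDBC driver itself, no extra threads or Futures are needed.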
Another way you can approach this is with the Thread.sleep() method. I often use it when I want my program to simply pause for a short or long period of time. The parameter is in milliseconds, so a value of 2000 corresponds to two seconds. For example:
public static void main(String[] args) throws InterruptedException // required for Thread.sleep()
{
System.out.println("Hi there.");
Thread.sleep(2000); // Wait two seconds before running the next line of code
System.out.println("Goodbye.");
}
This is quite basic, but can be used for more than just strings. Hope this helps.
I am trying to implement a cached thread pool in Java to read data coming in off of a bus.
Everything works great... so long as I have data coming in on that bus.
If I leave the data cable disconnected, the program starts generating endless threads from the pool. I intermittently check /proc/{PID}/fd and it goes from 8 to 100+ in about an hour. Eventually the system crashes.
I am submitting a timeout value for these threads (30 seconds), and they do trigger the TimeoutException catch, which I see in my log file. Shouldn't these threads be ending when they time out?
Some code:
private static ExecutorService executorService = Executors.newCachedThreadPool();
(I later supply a callable)
long[] rawFrame;
Future<long[]> future = executorService.submit(callable);
try {
rawFrame = future.get(timeout, timeUnit); // timeout is 30 seconds
// Do some things with the raw frame and store as parsedFrame
return parsedFrame;
} catch (TimeoutException e) {
LOGGER.error("Bus timeout. Check connection");
return null;
} finally {
future.cancel(true);
}
I am just learning how to do concurrent processing in Java, but as far as I know these tasks should be timing out; instead it appears they just sit there waiting for data on a bus that isn't sending any.
What am I missing?
EDIT: Thanks for helping me narrow it down.
Here is my callable
private class BusCallable implements Callable<long[]> {
    private long messageId;
    private boolean readAnyFrame = false;

    public BusCallable() {
        super();
        readAnyFrame = true;
    }

    public BusCallable(long id) {
        super();
        this.messageId = id;
        readAnyFrame = false;
    }

    @Override
    public long[] call() throws Exception {
        if (readAnyFrame) {
            return read(busInterface);
        }
        return readFrame(busInterface, messageId);
    }
}
Calling Future.get() with a timeout times out the get(), not the thread (i.e. whatever is in the callable's call() method). If your callable doesn't end, it will remain in the service, and callables will accumulate if you keep adding more. It's inside the callable itself that you should put a timeout mechanism in place.
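To make the timeout actually stop the work, the callable has to cooperate with cancellation. A runnable sketch of that pattern (the blocking bus read is simulated with Thread.sleep, which is interruptible; the names are made up):

```java
import java.util.concurrent.*;

public class InterruptibleTask {
    // A worker that honours interruption: Thread.sleep() throws
    // InterruptedException when future.cancel(true) interrupts the worker thread.
    static String runWithTimeout(long workMillis, long timeoutSeconds) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(() -> {
            try {
                Thread.sleep(workMillis); // stands in for a blocking bus read
                return "data";
            } catch (InterruptedException e) {
                return "interrupted"; // cooperative exit path
            }
        });
        try {
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // delivers the interrupt to the worker
            return "timed out";
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithTimeout(10_000, 1)); // worker is interrupted and exits
        System.out.println(runWithTimeout(0, 2));
    }
}
```

If the underlying read is native and not interruptible, cancel(true) will not help, and the read itself needs its own timeout (e.g. a socket read timeout).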
I have a RMI client testing if a RMI server is running and reachable.
At the moment I perform every few seconds this test:
try {
rMIinstance.ping();
} catch (RemoteException e) {
getInstanceRegister().removeInstance(rMIinstance);
}
(ping() is a simple dummy RMI call.)
If the instance is offline, I get approximately one minute later a
java.net.ConnectException: Connection timed out
exception, showing me that the server is offline.
However, the code hangs for one minute, which is far too long for us. (I don't want to change the timeout setting.)
Is there a method to perform this test faster?
You could interrupt the thread from a timer. It's a bit hacky and will throw InterruptedException instead of RemoteException, but it should work.
Timer timer = new Timer(true);
try {
    timer.schedule(new InterruptTimerTask(Thread.currentThread()), howLongDoYouWantToWait);
    rMIinstance.ping();
} catch (RemoteException | InterruptedException e) {
    getInstanceRegister().removeInstance(rMIinstance);
} finally {
    timer.cancel(); // cancel on every path so a late interrupt can't hit unrelated code
}
And the TimerTask implementation:
private static class InterruptTimerTask extends TimerTask {
    private Thread thread;

    public InterruptTimerTask(Thread thread) {
        this.thread = thread;
    }

    @Override
    public void run() {
        thread.interrupt();
    }
}
Inspired by @NeplatnyUdaj's answer, I found this solution:
ExecutorService executor = Executors.newSingleThreadExecutor();
try {
    Future<String> future = executor.submit(new Task(rMIinstance));
    System.out.println("Result: " + future.get(3, TimeUnit.SECONDS));
} catch (ExecutionException | InterruptedException | TimeoutException e) {
    // an ExecutionException here wraps the RemoteException thrown by ping()
    getInstanceRegister().removeInstance(rMIinstance);
} finally {
    executor.shutdownNow();
}
And this task:
class Task implements Callable<String> {
    DatabaseAbstract rMIinstance;

    public Task(DatabaseAbstract rMIinstance) {
        this.rMIinstance = rMIinstance;
    }

    @Override
    public String call() throws Exception {
        rMIinstance.ping();
        return "OK";
    }
}
The proposed solutions that interrupt the thread making the RMI call might not work, depending on whether that thread is at a point where it can be interrupted. Ordinary in-progress RMI calls aren't interruptible.
Try setting the system property java.rmi.server.disableHttp to true. The long connection timeout may be occurring because RMI is failing over to its HTTP proxying mechanism. This mechanism is described -- albeit very briefly -- in the class documentation for RMISocketFactory. (The HTTP proxying mechanism has been deprecated in JDK 8).
Set the system property sun.rmi.transport.proxy.connectTimeout to the desired connect timeout in milliseconds. I would also set sun.rmi.transport.tcp.responseTimeout.
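These properties must be set before the first RMI call, e.g. at the top of main. A tiny sketch (the timeout values are arbitrary; the sun.* property names are JDK-internal and may change between releases):

```java
public class RmiTimeouts {
    public static void main(String[] args) {
        // Property names from the answer above; values are in milliseconds.
        System.setProperty("java.rmi.server.disableHttp", "true");
        System.setProperty("sun.rmi.transport.proxy.connectTimeout", "3000");
        System.setProperty("sun.rmi.transport.tcp.responseTimeout", "3000");
        System.out.println(System.getProperty("sun.rmi.transport.tcp.responseTimeout"));
    }
}
```

The same properties can also be passed on the command line with -D flags, which avoids any ordering concerns in code.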
The answers suggesting interrupting the thread rely on platform-specific behaviour of java.net, whose behaviour when interrupted is undefined.
I am using Connection Pool (snaq.db.ConnectionPool) in my application. The connection pool is initialized like:
String dburl = propertyUtil.getProperty("dburl");
String dbuserName = propertyUtil.getProperty("dbuserName");
String dbpassword = propertyUtil.getProperty("dbpassword");
String dbclass = propertyUtil.getProperty("dbclass");
String dbpoolName = propertyUtil.getProperty("dbpoolName");
int dbminPool = Integer.parseInt(propertyUtil.getProperty("dbminPool"));
int dbmaxPool = Integer.parseInt(propertyUtil.getProperty("dbmaxPool"));
int dbmaxSize = Integer.parseInt(propertyUtil.getProperty("dbmaxSize"));
long dbidletimeout = Long.parseLong(propertyUtil.getProperty("dbidletimeout"));
Class.forName(dbclass).newInstance();
ConnectionPool moPool = new ConnectionPool(dbpoolName, dbminPool, dbmaxPool, dbmaxSize,
dbidletimeout, dburl, dbuserName, dbpassword);
DB Pool values used are:
dbminPool=5
dbmaxPool=30
dbmaxSize=30
dbclass=org.postgresql.Driver
dbidletimeout=25
My application was leaking connections somewhere (a connection was not being released), due to which the connection pool was getting exhausted. I have fixed that code for now.
Shouldn't the connections be closed after the idle timeout period? If that is not a correct assumption, is there any way to close the open idle connections anyway (through Java code only)?
The timeout variable does not seem to correspond to how long a connection has been idle, but to how long the pool can wait before returning a new connection or throwing an exception (I had a look at this source code; I don't know if it is up to date). I think it would be rather difficult to keep track of "idle" connections, because what does "idle" really mean in this case? You might want to hold on to a connection for later use. So I would say that the only safe way for the connection pool to know that you are done with a connection is to call close() on it.
If you are worried about the development team forgetting to call close() in their code, there is a technique which I describe below and I have used in the past (in my case we wanted to keep track of unclosed InputStreams but the concept is the same).
Disclaimer:
I assume that the connections are only used during a single request and do not span consecutive requests. In the latter case you can't use the solution below.
Your connection pool implementation seems to already use similar techniques with the ones I describe below (i.e. it already wraps the connections) so I cannot possibly know if this will work for your case or not. I have not tested the code below, I just use it to describe the concept.
Please use that only in your development environment. In production you should feel confident that your code is tested and that it behaves correctly.
Having said the above, the main idea is this: We have a central place (the connection pool) from where we acquire resources (connections) and we want to keep track if those resources are released by our code. We can use a web Filter that uses a ThreadLocal object that keeps track of the connections used during the request. I named this class TrackingFilter and the object that keeps track of the resources is the Tracker class.
public class TrackingFilter implements Filter {
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
Tracker.start();
try {
chain.doFilter(request, response);
} finally {
Tracker.stop();
}
}
...
}
For the Tracker to be able to keep track of the connections, it needs to be notified every time a connection is acquired with getConnection() and every time a connection is closed with a close() call. To be able to do that in a way that is transparent to the rest of the code we need to wrap the ConnectionPool and the returned Connection objects. Your code should return the new TrackingConnectionPool instead of the original pool (I assume the way to access the connection pool is at a single place). This new pool will wrap in turn, every Connection it provides, as a TrackableConnection. The TrackableConnection is the object that knows how to notify our Tracker when created and when closed.
When you call Tracker.stop() at the end of the request it will report any connections for which close() has not been called yet. Since this is a per request operation you will identify only the faulty operations (i.e. during your "Create new product" functionality) and then hopefully you will be able to track down those queries that leave open connections and fix them.
Below you can find code and comments for the TrackingConnectionPool, TrackableConnection and the Tracker class. The delegate methods were left out for brevity. I hope that helps.
Note: For the wrappers use an automated IDE feature (like Eclipse's "Generate delegate methods") otherwise it would be a time-consuming and error prone task.
//------------- Pool Creation
ConnectionPool original = new ConnectionPool(dbpoolName, ...);
TrackingConnectionPool trackingCP = new TrackingConnectionPool(original);
// ... or without creating the ConnectionPool yourself
TrackingConnectionPool trackingCP = new TrackingConnectionPool(dbpoolName, ...);
// store the reference to the trackingCP instead of the original
//------------- TrackingConnectionPool
public class TrackingConnectionPool extends ConnectionPool {
private ConnectionPool originalPool; // reference to the original pool
// Wrap all available ConnectionPool constructors like this
public TrackingConnectionPool(String dbpoolName, ...) {
originalPool = new ConnectionPool(dbpoolName, ...);
}
// ... or use this convenient constructor after you create a pool manually
public TrackingConnectionPool(ConnectionPool pool) {
this.originalPool = pool;
}
@Override
public Connection getConnection() throws SQLException {
Connection con = originalPool.getConnection();
return new TrackableConnection(con); // wrap the connections with our own wrapper
}
@Override
public Connection getConnection(long timeout) throws SQLException {
Connection con = originalPool.getConnection(timeout);
return new TrackableConnection(con); // wrap the connections with our own wrapper
}
// for all the rest public methods of ConnectionPool and its parent just delegate to the original
@Override
public void setCaching(boolean b) {
originalPool.setCaching(b);
}
...
}
//------------- TrackableConnection
public class TrackableConnection implements Connection, Tracker.Trackable {
private Connection originalConnection;
private boolean released = false;
public TrackableConnection(Connection con) {
this.originalConnection = con;
Tracker.resourceAquired(this); // notify the tracker that this resource is acquired
}
// Trackable interface
@Override
public boolean isReleased() {
return this.released;
}
// Note: this method will be called by Tracker class (if needed). Do not invoke manually
@Override
public void release() {
if (!released) {
try {
// attempt to close the connection
originalConnection.close();
this.released = true;
} catch(SQLException e) {
throw new RuntimeException(e);
}
}
}
// Connection interface
@Override
public void close() throws SQLException {
originalConnection.close();
this.released = true;
Tracker.resourceReleased(this); // notify tracker that this resource is "released"
}
// rest of the methods just delegate to the original connection
@Override
public Statement createStatement() throws SQLException {
return originalConnection.createStatement();
}
....
}
//------------- Tracker
public class Tracker {
// Create a single object per thread
private static final ThreadLocal<Tracker> _tracker = new ThreadLocal<Tracker>() {
@Override
protected Tracker initialValue() {
return new Tracker();
};
};
public interface Trackable {
boolean isReleased();
void release();
}
// Stores all the resources that are used during the thread.
// When a resource is used a call should be made to resourceAquired()
// Similarly when we are done with the resource a call should be made to resourceReleased()
private Map<Trackable, Trackable> monitoredResources = new HashMap<Trackable, Trackable>();
// Call this at the start of each thread. It is important to clear the map
// because you can't know if the server reuses this thread
public static void start() {
Tracker monitor = _tracker.get();
monitor.monitoredResources.clear();
}
// Call this at the end of each thread. If all resources have been released
// the map should be empty. If it isn't then someone, somewhere forgot to release the resource
// A warning is issued and the resource is released.
public static void stop() {
Tracker monitor = _tracker.get();
if ( !monitor.monitoredResources.isEmpty() ) {
// there are resources that have not been released. Issue a warning and release each one of them
for (Iterator<Trackable> it = monitor.monitoredResources.keySet().iterator(); it.hasNext();) {
Trackable resource = it.next();
if (!resource.isReleased()) {
System.out.println("WARNING: resource " + resource + " has not been released. Releasing it now.");
resource.release();
} else {
System.out.println("Trackable " + resource
+ " is released but is still under monitoring. Perhaps you forgot to call resourceReleased()?");
}
}
monitor.monitoredResources.clear();
}
}
// Call this when a new resource is acquired, i.e. you get a connection from the pool
public static void resourceAquired(Trackable resource) {
Tracker monitor = _tracker.get();
monitor.monitoredResources.put(resource, resource);
}
// Call this when the resource is released
public static void resourceReleased(Trackable resource) {
Tracker monitor = _tracker.get();
monitor.monitoredResources.remove(resource);
}
}
You don't have your full code posted so I assume you are not closing your connections. You STILL need to close the connection object obtained from the pool as you would if you were not using a pool. Closing the connection makes it available for the pool to reissue to another caller. If you fail to do this, you will eventually consume all available connections from your pool. A pool's stale connection scavenger is not the best place to clean up your connections. Like your momma told you, put your things away when you are done with them.
Connection conn = null;
try {
    conn = moPool.getConnection(timeout);
    if (conn != null) {
        // do something
    }
} catch (Exception e) {
    // deal with me
} finally {
    try {
        if (conn != null)
            conn.close();
    } catch (Exception e) {
        // maybe deal with me
    }
}
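On Java 7 and later, try-with-resources gives the same guarantee with less ceremony: close() runs on every exit path, including exceptions. A self-contained sketch with a stand-in connection class (FakeConnection is invented purely to make the mechanism visible):

```java
public class AutoCloseDemo {
    // Stand-in for a pooled JDBC connection; real Connection also implements AutoCloseable.
    static class FakeConnection implements AutoCloseable {
        static boolean closed = false;

        @Override
        public void close() {
            closed = true; // called automatically when the try block exits
        }
    }

    public static void main(String[] args) {
        try (FakeConnection conn = new FakeConnection()) {
            // use the connection; close() runs even if this block throws
        }
        System.out.println("closed = " + FakeConnection.closed);
    }
}
```

With a real pool you would write try (Connection conn = moPool.getConnection(timeout)) { ... } and drop the finally block entirely.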
The whole point of connection pooling is to let the pool handle all such things for you.
Code that closes the pool's open idle connections will not help in your case.
Think of the connection pool as maintaining two maps, one for IDLE and one for IN-USE connections:
IN-USE: if a connection object is being referenced by the application, the pool puts it into the in-use map.
IDLE: if a connection object is not being referenced by the application, or has been closed, the pool puts it into the idle map.
Your pool was exhausted because you were not closing connections. Not closing connections caused every connection to end up in the in-use map.
Since the idle map had no entries available, the pool was forced to create more connections.
In this way all your connections got marked as IN-USE.
Your pool had no open idle connections that you could close through code.
The pool is in no position to close any connection even when a timeout occurs, because nothing is idle.
You did your best when you fixed the connection leak in your code.
You can force a release of the pool and recreate it, but you will have to be careful, because existing in-use connections might be disrupted mid-task.
In most connection pools, the idle timeout is the maximum time a connection sits idle in the pool (waiting to be requested), not how long it is in use (checked out from the pool).
Some connection pools also have timeout settings for how long a connection is allowed to be in use (eg DBCP has removeAbandonedTimeout, c3p0 has unreturnedConnectionTimeout), and if those are enabled and the timeout has expired, they will be forcefully revoked from the user and either returned to the pool or really closed.
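As an illustration of the DBCP option, a hedged configuration sketch using commons-dbcp2's BasicDataSource (the URL is a placeholder, the 60-second value is arbitrary, and the dbcp2 jar is required):

```java
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfigSketch {
    // Configure the pool to reclaim connections that have been checked out too long.
    static BasicDataSource configure() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost/mydb"); // placeholder URL
        ds.setRemoveAbandonedOnBorrow(true); // look for abandoned connections when borrowing
        ds.setRemoveAbandonedTimeout(60);    // seconds in use before forced reclaim
        return ds;
    }
}
```

This is a safety net for leaks rather than a fix; the reclaimed connection is yanked out from under whichever code still holds it.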
log4jdbc can be used to ease connection-leak troubleshooting by means of its jdbc.connection logger.
This technique doesn't require any modification of the code.
I am new to using wait and notify, and I am having trouble testing my code. Below is my implementation (NOTE: I have not included the full implementation):
public class PoolImp {
    private Vector<Connection> connections; // for now, a maximum of 1 connection

    public synchronized Connection getconnection() throws InterruptedException {
        if (connections.size() == 1) {
            this.wait();
        }
        return newConnection(); // also adds to connections
    }

    public synchronized void removeconnection() {
        connections.clear();
        this.notify();
    }
}
Below is my test method: conn_1 gets the first connection. conn_2 goes into wait as only maximum of 1 connection is allowed.
I want to test this in such a way that when I call removeconnection, conn_2 gets notified and gets the released connection.
Testing :
@Test
public void testGetConnections() throws SQLException
{
PoolImpl cp = new PoolImpl();
Connection conn_1 = null;
Connection conn_2 = null;
conn_1 = cp.getConnection();
conn_2 = cp.getConnection();
cp.removeConnection(conn_1);
}
}
In order to test waiting and notifications, you need multiple threads. Otherwise, the waiting thread will block, and never get to the notifying code, because it is on the same thread.
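A minimal two-thread sketch of the wait/notify handshake being described (all names are made up; note the while loop, which guards against spurious wakeups):

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean released = false;
    static volatile boolean wokeUp = false;

    public static void main(String[] args) throws InterruptedException {
        // Waiter thread: blocks until another thread flips the flag and notifies.
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!released) { // re-check the condition after every wakeup
                    try {
                        lock.wait();
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                wokeUp = true;
            }
        });
        waiter.start();

        Thread.sleep(100); // crude: give the waiter time to reach wait()
        synchronized (lock) {
            released = true;
            lock.notify(); // wakes the waiter on the other thread
        }
        waiter.join(1000);
        System.out.println("wokeUp = " + wokeUp);
    }
}
```

Because the waiter checks the released flag inside the loop, the handshake is correct even if notify() happens before the waiter ever reaches wait().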
P.S. Implementing connection pools is not an easy undertaking. I would not even bother, since you can use ready-made ones.
Everyone is right: you should use a ready-made class for your connection pool. But if you insist, I've fixed the code for you:
public class PoolImp {
    private Vector<Connection> connections; // for now, a maximum of 1 connection

    public synchronized Connection getconnection() throws InterruptedException {
        while (connections.isEmpty()) {
            this.wait();
        }
        return newConnection();
    }

    public synchronized void removeconnection(Connection c) {
        connections.add(c);
        this.notify();
    }
}
Replacing the if block with a while loop is an improvement but will not solve the real problem here. It will simply force another check on the size of the collection after a notify was issued, to ensure the validity of the claim made while issuing a notify().
As was pointed out earlier, you need multiple client threads to simulate this. Your test thread is blocked when you call
conn_2 = cp.getConnection();
Now, it never gets a chance to issue this call, as it will wait indefinitely (unless it is interrupted):
cp.removeConnection(conn_1);