I am creating a SpannerSingleton to stay connected for the duration of the app's life.
I'm interested in connection durability... if there is a session/connection issue how can I recreate a session?
One idea was to spawn a new connection increasing the setMaxSessions to a higher number if more than 90% of the pool is exhausted. Like the opposite of exponential backoff? But where / how can I do that? I could not find anything in the client library that would let me monitor the pool status or client count.
I went with the Bill Pugh singleton pattern because it seemed like a good choice...
Here is what I have:
public class SpannerSingleton {
    private static Spanner spanner;
    private static SpannerOptions options;
    private static SessionPoolOptions sessionPoolOps = SessionPoolOptions
            .newBuilder()
            .setMaxSessions(1000)    // 1000 concurrent queries
            .setMinSessions(100)     // keep 100 alive
            .setMaxIdleSessions(100) // how many to keep from being idle and closed
            .build();

    private SpannerSingleton() {
        try {
            options = SpannerOptions
                    .newBuilder()
                    .setSessionPoolOption(sessionPoolOps)
                    .build();
            spanner = options.getService();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static class SingletonHelper {
        private static final Spanner CONNECTION = new SpannerSingleton().spanner;
    }

    public static synchronized Spanner getSpanner() {
        return SingletonHelper.CONNECTION;
    }
}
I use a factory pattern to create the DatabaseClient:
public class SpannerFactory {
    private static Spanner spanner = SpannerSingleton.getSpanner();

    public static DatabaseClient getConnection(String instance) {
        if (Util.isEmpty(instance)) return null;
        DatabaseId dbId = null; // local variable, so concurrent callers don't race on a shared field
        if ("mickey".equalsIgnoreCase(instance)) {
            dbId = DatabaseId.of(spanner.getOptions().getProjectId(), "instance1", "mickey");
        }
        if ("mouse".equalsIgnoreCase(instance)) {
            dbId = DatabaseId.of(spanner.getOptions().getProjectId(), "instance1", "mouse");
        }
        return spanner.getDatabaseClient(dbId);
    }
}
What I would like to add is something that checks the connection pool to see how close to starved we are and then recreates sessions as needed... I might be overthinking it, but what might happen if the connection is disrupted?
The client library should take care of maintaining a healthy session pool; the user shouldn't have to worry about sessions/connections explicitly.
As documented in the Java client, if you set MaxSessions correctly, the client will take care of maintaining that many sessions.
At a high level, the flow is:
if (currentSessions < maxSessions) {
    if (!idleSessions.empty())
        use an idle session;
    else
        create a new session;
} else {
    block or fail based on the action chosen in ActionOnExhaustion;
}
If you want to avoid the small overhead of CreateSession as part of request processing, one recommended option is to keep minSessions and maxSessions the same as your concurrent TPS requirement, so that all of those sessions are ready to use from the start.
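For example, a minimal sketch of that configuration (the pool size of 500 is illustrative; setFailIfPoolExhausted() selects the fail-fast ActionOnExhaustion in the Java client, and blocking is the default):
SessionPoolOptions pool = SessionPoolOptions
        .newBuilder()
        .setMinSessions(500)      // match your expected concurrent TPS
        .setMaxSessions(500)      // min == max: all sessions are created up front
        .setFailIfPoolExhausted() // fail fast instead of blocking when all 500 are in use
        .build();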
For additional details such as session monitoring and keeping idle sessions alive, please refer to the documentation at https://cloud.google.com/spanner/docs/sessions
I have an application which establishes connections to multiple ActiveMQ queues, and I need to introduce a health check endpoint (i.e. /health) that can be used to test queue connectivity. In order to do this I am using the AbstractHealthIndicator provided by the Spring Boot actuator.
The main problem is that if multiple health checks are defined as separate health indicators, they run sequentially. In case of an issue / timeout with some of the connections (5 in total), the overall time to check every queue significantly increases.
To overcome this problem I created only one health indicator by extending AbstractHealthIndicator and running the checks for each queue within it in parallel, using an ExecutorService and futures.
MQ health check class implementation:
public class HealthCheck extends AbstractHealthCheck implements Callable<Health> {
    private String queueName;

    public HealthCheck(JmsTemplate jmsTemplate, String queueName) {
        super(jmsTemplate);
        this.queueName = queueName;
    }

    public String getQueueName() {
        return queueName;
    }

    @Override
    public Health call() throws JMSException {
        Health.Builder healthBuilder = new Health.Builder();
        healthBuilder.withDetail("QueueName", getQueueName());
        ConnectionFactory connectionFactory = super.getJmsTemplate().getConnectionFactory();
        try (Connection connection = connectionFactory.createConnection()) {
            connection.start();
            healthBuilder.up();
        } catch (Exception e) {
            healthBuilder.down();
        }
        return healthBuilder.build();
    }
}
Health check indicator implementation which will be used by Spring Actuator:
public class HealthChecker extends AbstractHealthIndicator {
    private final List<AbstractHealthCheck> healthCheckList;
    private List<Future<Health>> futureListHealth;
    private ExecutorService executorService = Executors.newFixedThreadPool(5);

    public HealthChecker(final List<AbstractHealthCheck> healthCheckList) {
        this.healthCheckList = healthCheckList;
    }

    @Override
    protected synchronized void doHealthCheck(final Builder builder) throws Exception {
        futureListHealth = new ArrayList<>();
        Map<String, Object> compositeHealthCheckDetails = new HashMap<>();
        builder.up();
        futureListHealth = executorService.invokeAll(healthCheckList, 5, TimeUnit.SECONDS);
        futureListHealth.forEach(future -> {
            try {
                Health futureHealthCheckResult = future.get();
                compositeHealthCheckDetails.put(futureHealthCheckResult.getDetails().get("QueueName").toString(), futureHealthCheckResult);
            } catch (CancellationException | ExecutionException | InterruptedException ex) {
                compositeHealthCheckDetails.put(/* how to get queue details ? */, new Health.Builder().down().withDetail("error", ex.getMessage()).build());
                builder.down();
                System.out.println("Cancellation Exception Occurred");
            }
        });
        builder.withDetails(compositeHealthCheckDetails);
        builder.build();
    }
}
The main challenge occurs when some of the checks time out and future.get() for them throws a CancellationException.
The ActiveMQ / JMS connectionFactory.createConnection() doesn't handle InterruptedException, and as a result I cannot handle the cancellation scenario within the HealthCheck class itself and I lose the Health details of the timed-out health check (the queue name, for instance).
I am trying to understand the best way to handle this scenario.
Questions I have:
Within the CancellationException catch section, what is the best way to obtain the queue name details for the future that was cancelled?
Currently, for each timed-out health check I recreate a Health.Builder with the down state and exception details, as shown below. Is there an alternative / better way of doing it?
compositeHealthCheckDetails.put(/* how to get queue details ? */, new Health.Builder().down().withDetail("error", ex.getMessage()).build());
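One approach that seems to fit here (my suggestion, not from the original post): invokeAll() is documented to return its futures in the same sequential order as the task list, so you can iterate by index and recover the queue name from the corresponding health check even when the future was cancelled. A sketch of the loop in doHealthCheck, assuming getQueueName() is available on AbstractHealthCheck (or that you cast to HealthCheck):
List<Future<Health>> futures = executorService.invokeAll(healthCheckList, 5, TimeUnit.SECONDS);
for (int i = 0; i < futures.size(); i++) {
    // futures.get(i) corresponds to healthCheckList.get(i), per the invokeAll contract
    String queueName = healthCheckList.get(i).getQueueName();
    try {
        compositeHealthCheckDetails.put(queueName, futures.get(i).get());
    } catch (CancellationException | ExecutionException | InterruptedException ex) {
        // getMessage() can be null for a CancellationException, hence String.valueOf
        compositeHealthCheckDetails.put(queueName,
                new Health.Builder().down().withDetail("error", String.valueOf(ex.getMessage())).build());
        builder.down();
    }
}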
I want to compute, per request, the time between the beginning of the request and the time the connection is established (TCP connection establishment).
I asked this question before, but it was ambiguous.
I had a look at the proposed answer; maybe I misunderstood, but my understanding is that Connection.Listener works on a global basis (overall connect time).
So I tried an approach based on SocketAddressResolver:
private static class CustomSocketAddressResolver implements SocketAddressResolver {
    private StopWatch stopWatch = new StopWatch();
    private SocketAddressResolver adaptee;

    public CustomSocketAddressResolver(SocketAddressResolver adaptee) {
        this.adaptee = adaptee;
    }

    public long getConnectTime() {
        return stopWatch.getTime();
    }

    @Override
    public void resolve(String host, int port, Promise<List<InetSocketAddress>> promise) {
        stopWatch.reset();
        stopWatch.start();
        adaptee.resolve(host, port, new Promise<List<InetSocketAddress>>() {
            @Override
            public void succeeded(List<InetSocketAddress> result) {
                // address resolution finished; stop timing
                stopWatch.stop();
                promise.succeeded(result);
            }

            @Override
            public void failed(Throwable x) {
                stopWatch.stop();
                promise.failed(x);
            }
        });
    }
}
Then I can get the connect time like this:
((CustomSocketAddressResolver) httpClient.getSocketAddressResolver()).getConnectTime()
It works, but is there a better way?
I want to compute per request the time between the beginning of request and time connection is established (TCP connection establishment).
Perhaps there is a misunderstanding here.
HTTP/1.1 and HTTP/2 use persistent TCP connections.
A TCP connection is created, kept open, and reused for many requests.
For example, a TCP connection is created and then reused for, say, 1067 requests, and then still kept open.
It would not make much sense to compute the time between the beginning of the 1067th request and the time the connection was first established (or I would love to hear a use case for that).
I know of cases where connections remain open for days.
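If what you are really after is how long an individual request waited before it could be sent, a sketch along these lines may work (my suggestion, assuming Jetty 9's Request fluent listener API; the "queuedAt" attribute name is made up for the example). Note that onRequestBegin fires once a connection has been acquired, new or pooled, so this measures queue-plus-connect wait, not pure TCP connect time:
httpClient.newRequest("http://localhost:8080/")
        .onRequestQueued(request -> request.attribute("queuedAt", System.nanoTime()))
        .onRequestBegin(request -> {
            long queuedAt = (Long) request.getAttributes().get("queuedAt");
            // elapsed time between queueing and the request actually being sent
            System.out.println("wait before send: " + (System.nanoTime() - queuedAt) + " ns");
        })
        .send(result -> { /* response handling omitted */ });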
I am using the class below to send data to our messaging queue over a socket, either synchronously or asynchronously, as shown below. It depends on the requirement whether I call the synchronous or asynchronous method to send data on a socket. Most of the time we will send data asynchronously, but sometimes I may need to send data synchronously.
sendAsync - sends data asynchronously without blocking the thread that is sending. If an acknowledgment is not received, it will retry from the background thread that is started in the SendToQueue constructor.
send - sends data synchronously on a socket. It internally calls the doSendAsync method and then waits for a particular timeout period; if an acknowledgment is not received, the entry is removed from the cache bucket so that we don't retry again.
So the only difference between the two methods above is: for the async case I need to retry at all costs if an acknowledgment is not received, but for the sync case I don't need to retry at all, and that's why I store more state in the PendingMessage class.
ResponsePoller is a class that receives the acknowledgment for data that was sent to our messaging queue on a particular socket and then calls the handleAckReceived method below to remove the address so that we don't retry after receiving the acknowledgment. If an acknowledgment is received, the socket is live; otherwise it is dead.
public class SendToQueue {
    private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(2);
    private final Cache<Long, PendingMessage> cache = CacheBuilder.newBuilder()
            .maximumSize(1000000)
            .concurrencyLevel(100)
            .build();

    private static class PendingMessage {
        private final long _address;
        private final byte[] _encodedRecords;
        private final boolean _retryEnabled;
        private final Object _monitor = new Object();
        private long _sendTimeMillis;
        private volatile boolean _acknowledged;

        public PendingMessage(long address, byte[] encodedRecords, boolean retryEnabled) {
            _address = address;
            _sendTimeMillis = System.currentTimeMillis();
            _encodedRecords = encodedRecords;
            _retryEnabled = retryEnabled;
        }

        public synchronized boolean hasExpired() {
            return System.currentTimeMillis() - _sendTimeMillis > 500L;
        }

        public synchronized void markResent() {
            _sendTimeMillis = System.currentTimeMillis();
        }

        public boolean shouldRetry() {
            return _retryEnabled && !_acknowledged;
        }

        public boolean waitForAck() {
            try {
                synchronized (_monitor) {
                    _monitor.wait(500L);
                }
                return _acknowledged;
            } catch (InterruptedException ie) {
                return false;
            }
        }

        public void ackReceived() {
            _acknowledged = true;
            synchronized (_monitor) {
                _monitor.notifyAll();
            }
        }

        public long getAddress() {
            return _address;
        }

        public byte[] getEncodedRecords() {
            return _encodedRecords;
        }
    }

    private static class Holder {
        private static final SendToQueue INSTANCE = new SendToQueue();
    }

    public static SendToQueue getInstance() {
        return Holder.INSTANCE;
    }

    private void handleRetries() {
        List<PendingMessage> messages = new ArrayList<>(cache.asMap().values());
        for (PendingMessage m : messages) {
            if (m.hasExpired()) {
                if (m.shouldRetry()) {
                    m.markResent();
                    doSendAsync(m, Optional.<Socket>absent());
                } else {
                    cache.invalidate(m.getAddress());
                }
            }
        }
    }

    private SendToQueue() {
        executorService.submit(new ResponsePoller()); // another thread which receives acknowledgments
                                                      // and then deletes entries from the cache
                                                      // accordingly.
        executorService.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                handleRetries();
            }
        }, 0, 1, TimeUnit.SECONDS);
    }

    public boolean sendAsync(final long address, final byte[] encodedRecords) {
        PendingMessage m = new PendingMessage(address, encodedRecords, true);
        cache.put(address, m);
        return doSendAsync(m, Optional.<Socket>absent());
    }

    private boolean doSendAsync(final PendingMessage pendingMessage, final Optional<Socket> socket) {
        Optional<Socket> actualSocket = socket;
        if (!actualSocket.isPresent()) {
            SocketHolder liveSocket = SocketManager.getInstance().getSocket();
            actualSocket = Optional.of(liveSocket.getSocket());
        }
        ZMsg msg = new ZMsg();
        msg.add(pendingMessage.getEncodedRecords());
        try {
            return msg.send(actualSocket.get());
        } finally {
            msg.destroy();
        }
    }

    public boolean send(final long address, final byte[] encodedRecords) {
        return send(address, encodedRecords, Optional.<Socket>absent());
    }

    public boolean send(final long address, final byte[] encodedRecords,
            final Optional<Socket> socket) {
        PendingMessage m = new PendingMessage(address, encodedRecords, false);
        cache.put(address, m);
        try {
            if (doSendAsync(m, socket)) {
                return m.waitForAck();
            }
            return false;
        } finally {
            cache.invalidate(address);
        }
    }

    // called by the acknowledgment thread in the "ResponsePoller" class
    public void handleAckReceived(final long address) {
        PendingMessage m = cache.getIfPresent(address);
        if (m != null) {
            m.ackReceived();
            cache.invalidate(address);
        }
    }
}
As I am sending data on a socket: if I get the acknowledgment back for that data, the socket is alive, but if the data is not acknowledged, the socket may be dead (though I will keep retrying to send the data).
So with my design above (or a better one, if there is any), how can I figure out whether a socket is dead or live, given that an acknowledgment either was or was not received on it? Based on that I need to release the socket back into its pool (as alive or dead) by calling one of the methods below, for both the sync and async cases.
I also need a configurable count so that if an acknowledgment is not received on a particular socket x times (where x is a number > 0, default 2), only then is the socket marked dead. What is the best and most efficient way to do this?
SocketManager.getInstance().releaseSocket(socket, SocketState.LIVE);
SocketManager.getInstance().releaseSocket(socket, SocketState.DEAD);
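One possible sketch (my suggestion, not from the original post): keep a per-socket counter of consecutive unacknowledged sends, reset it on every ack, and release the socket as DEAD only once the counter reaches the configurable threshold. SocketManager, SocketState and the release calls are the ones from the question; the tracker class itself is hypothetical, and Socket stands for the ZeroMQ socket type used above:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SocketHealthTracker {
    private final int maxFailures; // e.g. 2: consecutive missed acks before a socket is marked dead
    private final ConcurrentMap<Socket, AtomicInteger> failures = new ConcurrentHashMap<>();

    public SocketHealthTracker(int maxFailures) {
        this.maxFailures = maxFailures;
    }

    // call when an ack arrives for data sent on this socket
    public void onAck(Socket socket) {
        failures.remove(socket); // any ack proves the socket is alive again
        SocketManager.getInstance().releaseSocket(socket, SocketState.LIVE);
    }

    // call when the wait/retry for an ack on this socket times out
    public void onAckTimeout(Socket socket) {
        int count = failures.computeIfAbsent(socket, s -> new AtomicInteger()).incrementAndGet();
        SocketState state = (count >= maxFailures) ? SocketState.DEAD : SocketState.LIVE;
        if (state == SocketState.DEAD) {
            failures.remove(socket); // forget the counter once the socket is retired
        }
        SocketManager.getInstance().releaseSocket(socket, state);
    }
}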
Say you are at home: you have a cable plugged into your laptop and router, and behind the router there is a cable modem. If you turn off your router, your laptop will know it - no voltage. If you turn off your modem... that gets tricky. You simply can't know that. One potential symptom is "no route to host", but even if you are connected, it can be any other issue. Some protocols, like ssh, have a ping built in, so they have a keep-alive for the connection: even if your app is doing nothing, every interval there is a ping-pong between client and server, so you know whether the other side is alive.
If you have full control over the protocol, a keep-alive is one of the options, as sketched below.
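For illustration, a minimal keep-alive sketch (entirely hypothetical: the PING/PONG strings and the sendRequest and onPeerSuspected helpers are placeholders for your protocol's own calls; the idea is just a scheduled probe with a deadline):
ScheduledExecutorService heartbeat = Executors.newSingleThreadScheduledExecutor();
heartbeat.scheduleAtFixedRate(() -> {
    try {
        // send a PING and wait up to 1 second for the PONG;
        // sendRequest(...) is a placeholder for your protocol's request/reply call
        String reply = sendRequest("PING", 1, TimeUnit.SECONDS);
        if (!"PONG".equals(reply)) {
            onPeerSuspected(); // placeholder: mark the connection as suspect / reconnect
        }
    } catch (Exception e) {
        onPeerSuspected();
    }
}, 0, 5, TimeUnit.SECONDS);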
Your client is at one end, but in general it is really hard for two parties to be sure that they are in agreement. The Byzantine generals problem describes a general network model where each node is not aware of any other and can trust only what it is aware of.
In general I would not write a distributed system myself. I'm using Hystrix for that. The link is to their configuration, so you can see how big it is. You can track whether the server is working or not. Also, when it comes back, you can prepare a policy to figure that out: not flooding it, cancelling messages that are outdated, and much more - graphs, stats, integration with other solutions. There is a big community using it and solving problems. That is a much better option than doing it yourself.
I'm not sure if you have only this one service to talk to, or whether Hystrix is a good fit for you. In Java, people tend to use layers and frameworks when they deal with such problems... Hope it helps.
I am using a connection pool (snaq.db.ConnectionPool) in my application. The connection pool is initialized like:
String dburl = propertyUtil.getProperty("dburl");
String dbuserName = propertyUtil.getProperty("dbuserName");
String dbpassword = propertyUtil.getProperty("dbpassword");
String dbclass = propertyUtil.getProperty("dbclass");
String dbpoolName = propertyUtil.getProperty("dbpoolName");
int dbminPool = Integer.parseInt(propertyUtil.getProperty("dbminPool"));
int dbmaxPool = Integer.parseInt(propertyUtil.getProperty("dbmaxPool"));
int dbmaxSize = Integer.parseInt(propertyUtil.getProperty("dbmaxSize"));
long dbidletimeout = Long.parseLong(propertyUtil.getProperty("dbidletimeout"));
Class.forName(dbclass).newInstance();
ConnectionPool moPool = new ConnectionPool(dbpoolName, dbminPool, dbmaxPool, dbmaxSize,
dbidletimeout, dburl, dbuserName, dbpassword);
DB Pool values used are:
dbminPool=5
dbmaxPool=30
dbmaxSize=30
dbclass=org.postgresql.Driver
dbidletimeout=25
My application was leaking connections somewhere (a connection was not released), and because of this the connection pool was getting exhausted. I have fixed that code for now.
Shouldn't the connections be closed after the idle timeout period? If that is not a correct assumption, is there any way to close the open idle connections anyway (through Java code only)?
The timeout variable does not seem to correspond to the time the connection has been idle, but to how much time the pool can wait to return a new connection or throw an exception (I had a look at this source code; I don't know if it is up to date). I think it would be rather difficult to keep track of "idle" connections, because what does "idle" really mean in this case? You might want to get a connection for later use. So I would say that the only safe way for the connection pool to know that you are done with a connection is to call close() on it.
If you are worried about the development team forgetting to call close() in their code, there is a technique I describe below, which I have used in the past (in my case we wanted to keep track of unclosed InputStreams, but the concept is the same).
Disclaimer:
I assume that the connections are only used during a single request and do not span consecutive requests. In the latter case you can't use the solution below.
Your connection pool implementation seems to already use techniques similar to the ones I describe below (i.e. it already wraps the connections), so I cannot possibly know whether this will work for your case or not. I have not tested the code below; I just use it to describe the concept.
Please use this only in your development environment. In production you should feel confident that your code is tested and behaves correctly.
Having said the above, the main idea is this: we have a central place (the connection pool) from where we acquire resources (connections), and we want to keep track of whether those resources are released by our code. We can use a web Filter that uses a ThreadLocal object to keep track of the connections used during the request. I named this class TrackingFilter, and the object that keeps track of the resources is the Tracker class.
public class TrackingFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        Tracker.start();
        try {
            chain.doFilter(request, response);
        } finally {
            Tracker.stop();
        }
    }
    ...
}
For the Tracker to be able to keep track of the connections, it needs to be notified every time a connection is acquired with getConnection() and every time a connection is closed with a close() call. To do that in a way that is transparent to the rest of the code, we need to wrap the ConnectionPool and the returned Connection objects. Your code should return the new TrackingConnectionPool instead of the original pool (I assume the connection pool is accessed from a single place). This new pool will wrap, in turn, every Connection it provides as a TrackableConnection. The TrackableConnection is the object that knows how to notify our Tracker when created and when closed.
When you call Tracker.stop() at the end of the request, it will report any connections for which close() has not been called yet. Since this is a per-request operation, you will identify only the faulty operations (i.e. during your "Create new product" functionality) and then hopefully you will be able to track down the queries that leave open connections and fix them.
Below you can find code and comments for the TrackingConnectionPool, TrackableConnection and the Tracker class. The delegate methods were left out for brevity. I hope this helps.
Note: For the wrappers, use an automated IDE feature (like Eclipse's "Generate delegate methods"); otherwise it would be a time-consuming and error-prone task.
//------------- Pool creation
ConnectionPool original = new ConnectionPool(dbpoolName, ...);
TrackingConnectionPool trackingCP = new TrackingConnectionPool(original);
// ... or without creating the ConnectionPool yourself
TrackingConnectionPool trackingCP = new TrackingConnectionPool(dbpoolName, ...);
// store the reference to trackingCP instead of the original
//------------- TrackingConnectionPool
public class TrackingConnectionPool extends ConnectionPool {
    private ConnectionPool originalPool; // reference to the original pool

    // Wrap all available ConnectionPool constructors like this
    public TrackingConnectionPool(String dbpoolName, ...) {
        originalPool = new ConnectionPool(dbpoolName, ...);
    }

    // ... or use this convenient constructor after you create a pool manually
    public TrackingConnectionPool(ConnectionPool pool) {
        this.originalPool = pool;
    }

    @Override
    public Connection getConnection() throws SQLException {
        Connection con = originalPool.getConnection();
        return new TrackableConnection(con); // wrap the connections with our own wrapper
    }

    @Override
    public Connection getConnection(long timeout) throws SQLException {
        Connection con = originalPool.getConnection(timeout);
        return new TrackableConnection(con); // wrap the connections with our own wrapper
    }

    // for all the rest of the public methods of ConnectionPool and its parent, just delegate to the original
    @Override
    public void setCaching(boolean b) {
        originalPool.setCaching(b);
    }
    ...
}
//------------- TrackableConnection
public class TrackableConnection implements Connection, Tracker.Trackable {
    private Connection originalConnection;
    private boolean released = false;

    public TrackableConnection(Connection con) {
        this.originalConnection = con;
        Tracker.resourceAquired(this); // notify tracker that this resource has been acquired
    }

    // Trackable interface
    @Override
    public boolean isReleased() {
        return this.released;
    }

    // Note: this method will be called by the Tracker class (if needed). Do not invoke manually.
    @Override
    public void release() {
        if (!released) {
            try {
                // attempt to close the connection
                originalConnection.close();
                this.released = true;
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }
    }

    // Connection interface
    @Override
    public void close() throws SQLException {
        originalConnection.close();
        this.released = true;
        Tracker.resourceReleased(this); // notify tracker that this resource has been "released"
    }

    // the rest of the methods just delegate to the original connection
    @Override
    public Statement createStatement() throws SQLException {
        return originalConnection.createStatement();
    }
    ....
}
//------------- Tracker
public class Tracker {
    // Create a single object per thread
    private static final ThreadLocal<Tracker> _tracker = new ThreadLocal<Tracker>() {
        @Override
        protected Tracker initialValue() {
            return new Tracker();
        };
    };

    public interface Trackable {
        boolean isReleased();
        void release();
    }

    // Stores all the resources that are used during the thread.
    // When a resource is used, a call should be made to resourceAquired().
    // Similarly, when we are done with the resource, a call should be made to resourceReleased().
    private Map<Trackable, Trackable> monitoredResources = new HashMap<Trackable, Trackable>();

    // Call this at the start of each thread. It is important to clear the map
    // because you can't know if the server reuses this thread.
    public static void start() {
        Tracker monitor = _tracker.get();
        monitor.monitoredResources.clear();
    }

    // Call this at the end of each thread. If all resources have been released,
    // the map should be empty. If it isn't, then someone, somewhere forgot to release a resource.
    // A warning is issued and the resource is released.
    public static void stop() {
        Tracker monitor = _tracker.get();
        if (!monitor.monitoredResources.isEmpty()) {
            // there are resources that have not been released. Issue a warning and release each one of them
            for (Iterator<Trackable> it = monitor.monitoredResources.keySet().iterator(); it.hasNext();) {
                Trackable resource = it.next();
                if (!resource.isReleased()) {
                    System.out.println("WARNING: resource " + resource + " has not been released. Releasing it now.");
                    resource.release();
                } else {
                    System.out.println("Trackable " + resource
                            + " is released but is still under monitoring. Perhaps you forgot to call resourceReleased()?");
                }
            }
            monitor.monitoredResources.clear();
        }
    }

    // Call this when a new resource is acquired, i.e. you get a connection from the pool
    public static void resourceAquired(Trackable resource) {
        Tracker monitor = _tracker.get();
        monitor.monitoredResources.put(resource, resource);
    }

    // Call this when the resource is released
    public static void resourceReleased(Trackable resource) {
        Tracker monitor = _tracker.get();
        monitor.monitoredResources.remove(resource);
    }
}
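To wire the TrackingFilter in, a registration sketch (assuming a Servlet 3.0+ container; the URL pattern is illustrative):
import javax.servlet.annotation.WebFilter;

// Registers the filter for every request, so Tracker.start()/stop() bracket each one
@WebFilter(urlPatterns = "/*")
public class TrackingFilter implements Filter {
    // ... doFilter as shown above
}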
You don't have your full code posted, so I assume you are not closing your connections. You STILL need to close the connection object obtained from the pool, as you would if you were not using a pool. Closing the connection makes it available for the pool to reissue to another caller. If you fail to do this, you will eventually consume all available connections from your pool. A pool's stale-connection scavenger is not the best place to clean up your connections. Like your momma told you, put your things away when you are done with them.
Connection conn = null;
try {
    conn = moPool.getConnection(timeout);
    if (conn != null) {
        // do something
    }
} catch (Exception e) {
    // deal with me
} finally {
    try {
        if (conn != null)
            conn.close();
    } catch (Exception e) {
        // maybe deal with me
    }
}
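On Java 7+, a try-with-resources version is shorter and closes the connection even when an exception is thrown (java.sql.Connection is AutoCloseable); a sketch under that assumption:
try (Connection conn = moPool.getConnection(timeout)) {
    if (conn != null) {
        // do something
    }
} catch (Exception e) {
    // deal with me
}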
The whole point of connection pooling is to let the pool handle all such things for you.
Having code to close the open idle connections of a Java pool will not help in your case.
Think of the connection pool as maintaining maps of IDLE and IN-USE connections:
IN-USE: if a connection object is being referenced by the application, it is put into the in-use map by the pool.
IDLE: if a connection object is not being referenced by the application, or has been closed, it is put into the idle map by the pool.
Your pool was exhausted because you were not closing connections. Not closing connections caused all idle connections to end up in the in-use map.
Since the idle map had no entries available, the pool was forced to create more connections.
In this way all your connections got marked as IN-USE.
Your pool does not have any open idle connections which you could close from code.
The pool is not in a position to close any connection even if a timeout occurs, because nothing is idle.
You did your best when you fixed the connection leakage in your code.
You could force the release of the pool and recreate it, but you would have to be careful, because existing in-use connections might be affected in their tasks.
In most connection pools, the idle timeout is the maximum time a connection is idle in the connection pool (waiting to be requested), not how long it may be in use (checked out from the connection pool).
Some connection pools also have timeout settings for how long a connection is allowed to be in use (e.g. DBCP has removeAbandonedTimeout, c3p0 has unreturnedConnectionTimeout), and if those are enabled and the timeout has expired, connections will be forcefully revoked from the user and either returned to the pool or really closed.
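For example, with Commons DBCP 2 the abandoned-connection settings look roughly like this (a sketch; the URL, credentials and the 60-second timeout are illustrative):
import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setUrl("jdbc:postgresql://localhost:5432/mydb");
ds.setUsername("user");
ds.setPassword("secret");
ds.setRemoveAbandonedOnBorrow(true); // reclaim leaked connections when borrowing
ds.setRemoveAbandonedTimeout(60);    // consider a connection abandoned after 60s in use
ds.setLogAbandoned(true);            // log the stack trace of the code that borrowed it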
log4jdbc can be used to help troubleshoot connection leaks by means of its jdbc.connection logger.
This technique doesn't require any modification of the code.
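The setup, as far as I recall (treat the exact driver class and URL prefix as assumptions to verify against the log4jdbc docs), is to load the spy driver and prefix the normal JDBC URL; the jdbc.connection logger then reports each connection open/close with a running count:
Class.forName("net.sf.log4jdbc.DriverSpy");                    // log4jdbc wraps the real driver
String url = "jdbc:log4jdbc:postgresql://localhost:5432/mydb"; // prefix the normal URL
// then enable the "jdbc.connection" logger at INFO/DEBUG in your logging configuration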
I am new to using wait and notify, and I am having trouble testing my code. Below is my implementation (NOTE: I have not included the full implementation):
public class PoolImp {
    private Vector<Connection> connections; // For now, a maximum of 1 connection

    public synchronized Connection getConnection() throws InterruptedException {
        if (connections.size() == 1) {
            this.wait();
        }
        return newConnection(); // also adds to connections
    }

    public synchronized void removeConnection(Connection c) {
        connections.remove(c);
        this.notify();
    }
}
Below is my test method: conn_1 gets the first connection; conn_2 goes into a wait, as only a maximum of 1 connection is allowed.
I want to test this in such a way that when I call removeConnection, conn_2 gets notified and gets the released connection.
Testing:
@Test
public void testGetConnections() throws Exception {
    PoolImp cp = new PoolImp();
    Connection conn_1 = null;
    Connection conn_2 = null;
    conn_1 = cp.getConnection();
    conn_2 = cp.getConnection();
    cp.removeConnection(conn_1);
}
In order to test waiting and notification, you need multiple threads. Otherwise the waiting thread will block and never get to the notifying code, because it is on the same thread.
P.S. Implementing connection pools is not an easy undertaking. I would not even bother, since you can use ready-made ones.
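A sketch of what such a test could look like (my illustration: a second thread blocks in getConnection() until the main thread releases conn_1):
@Test
public void testGetConnections() throws Exception {
    PoolImp cp = new PoolImp();
    Connection conn_1 = cp.getConnection();

    Thread waiter = new Thread(() -> {
        try {
            Connection conn_2 = cp.getConnection(); // blocks until a connection is released
            System.out.println("got released connection: " + conn_2);
        } catch (InterruptedException ignored) {
        }
    });
    waiter.start();

    Thread.sleep(100);           // crude way to let the waiter reach wait()
    cp.removeConnection(conn_1); // should notify the waiter
    waiter.join(1000);           // fail the test if the waiter never wakes up
}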
Everyone is right: you should use a ready-made class for your connection pool. But if you insist, I've fixed the code for you:
public class PoolImp {
    private Vector<Connection> connections; // For now, a maximum of 1 connection

    public synchronized Connection getConnection() throws InterruptedException {
        while (connections.isEmpty()) {
            this.wait();
        }
        return newConnection();
    }

    public synchronized void removeConnection(Connection c) {
        connections.add(c);
        this.notify();
    }
}
Replacing the if block with a while loop is an improvement, but it will not solve the real problem here. It simply forces another check on the size of the collection after a notify is issued, to ensure the validity of the claim made when issuing the notify().
As was pointed out earlier, you need multiple client threads to simulate this. Your test thread is blocked when you call
conn_2 = cp.getConnection();
Now it never gets a chance to issue this call, as it will wait indefinitely (unless it is interrupted):
cp.removeConnection(conn_1);