Risks of using Apache CloseableHttpClient in a Singleton - java

I'm using Apache Http Client 4.5.3 and currently refactoring some code.
I have a singleton Util with several methods whose responsibility is to hit an API with GETs, POSTs, PATCHes, etc. Previously, we had been using an HttpClientBuilder to construct a new CloseableHttpClient object for every call of every method. Roughly, the architecture of the singleton is something like this:
import com.google.gson.Gson
import org.apache.http.client.methods.{HttpGet, HttpPost}
import org.apache.http.entity.StringEntity
import org.apache.http.impl.client.{CloseableHttpClient, HttpClientBuilder}
import org.apache.http.util.EntityUtils
import org.json4s.DefaultFormats
import org.json4s.jackson.JsonMethods.parse
object ApiCallerUtil {
  case class StoreableObj(name: String, id: Long)
  case class ResponseKey(key: Long)

  def getKeyCall(param: String): ResponseKey = {
    implicit val formats = DefaultFormats
    val get = new HttpGet("http://www.someUrl.com/api/?value=" + param)
    val client: CloseableHttpClient = HttpClientBuilder.create().build()
    val response = client.execute(get)
    try {
      val entity = response.getEntity
      val entityStr = EntityUtils.toString(entity)
      parse(entityStr).extract[ResponseKey]
    } finally {
      response.close()
      client.close()
    }
  }

  def postNewObjCall(param: String, i: Long): Boolean = {
    val post = new HttpPost("http://www.someUrl.com/api/createNewObj")
    val client = HttpClientBuilder.create().build()
    post.setHeader("Content-type", "application/json")
    val pollAsJson = new Gson().toJson(StoreableObj(param, i))
    post.setEntity(new StringEntity(pollAsJson))
    val response = client.execute(post)
    try {
      response.getStatusLine.getStatusCode < 300
    } finally {
      response.close()
      client.close()
    }
  }

  //... and so on
}
Notes about how this is used: we have many classes all over our system that use this singleton Util to make calls to the API. The singleton goes through short periods of heavy use, where several classes hit the same calls with high frequency (up to thousands of times within a several-minute period), and also periods where it is hit a few times over a long stretch (once or twice an hour), or not at all for hours at a time. Also, all the URLs it hits start with the same base URL (e.g. www.someUrl.com/api/).
But I'm wondering if it would make sense to implement it so that val client = HttpClientBuilder.create().build() is called once, as a private, in-object-only val. That way it is created only once, upon instantiation of the object. Here's where I pause; the Apache documentation does say these two things:
1.2.1. [Closeable]HttpClient implementations are expected to be thread safe. It is recommended that the same instance of this class is reused for multiple request executions.
1.2.2. When an [Closeable]HttpClient instance is no longer needed and is about to go out of scope it is important to shut down its connection manager to ensure that all connections kept alive by the manager get closed and system resources allocated by those connections are released.
I've read through most of the documentation, but don't have a solid answer to the following:
Are there any risks to having the CloseableHttpClient instance as a private global variable? I'm worried something may get shut down if it goes stale and that I'd have to reinstantiate it after a period of time, or that in the cases of heavy use it would become a bottleneck. Per #1.2.2 above, the variable will "never" go out of scope, since it's a singleton object. But since I'm only building the client and passing it the HttpRequest objects as I go, not holding a connection to the API outside of each request, it seems it shouldn't matter.
Due to the nature of how this ApiCallerUtil singleton is used, would it be wise to make use of their HttpClientConnectionManager or PoolingHttpClientConnectionManager to maintain persistent connections to www.someUrl.com/api/? Would the performance increase be worth it? So far, the current implementation doesn't seem to have any drastic performance drawbacks.
Thanks for any feedback!

To the first question: there are none (based on my 15+ years of experience with HttpClient).
As to the second: this really depends on various factors (overhead of the TLS session handshake, and so on). I imagine one would really want to ensure that a series of related requests gets executed over a persistent connection. During an extended period of inactivity one may want to evict all idle connections from the connection pool. This has the added benefit of reducing the chance of running into a stale (half-closed) connection problem.
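For illustration, here is a minimal sketch (in Java, against HttpClient 4.5) of the shared-client setup, combining a pooling connection manager with idle-connection eviction; the pool sizes are assumptions to tune for your load:

import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public final class ApiHttpClient {
    private static final PoolingHttpClientConnectionManager POOL =
            new PoolingHttpClientConnectionManager();
    static {
        POOL.setMaxTotal(50);           // assumption: size to your peak concurrency
        POOL.setDefaultMaxPerRoute(50); // every request targets the same host
    }

    // One shared, thread-safe client for the whole application
    public static final CloseableHttpClient CLIENT = HttpClients.custom()
            .setConnectionManager(POOL)
            .evictExpiredConnections()                  // drop connections past their keep-alive
            .evictIdleConnections(30, TimeUnit.SECONDS) // evict idle ones to dodge half-closed sockets
            .build();

    private ApiHttpClient() {}
}

Translating this into the Scala object above is mechanical; the point is that the client (and its connection manager) is built once and reused, never closed per request.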

Related

WebFlux WebClient retry with a different URL

I am using WebClient for the REST call, and what I need is: if the primary URL fails for the nth time, do the next retry against a secondary URL. Please find below sample code for the logic I am using. But it seems we cannot change the URL once the client is created, and even if I change the URL it's not taking effect; requests are still fired at the initial URL.
ClientHttpConnector connector; // initiate
WebClient webClient = WebClient.builder().clientConnector(connector).build();
WebClient.RequestBodyUriSpec client = webClient.post();
client.uri("http://primaryUrl/").body(BodyInserters.fromObject("hi")).retrieve()
        .bodyToMono(String.class)
        .retryWhen(Retry.anyOf(Exception.class)
                .exponentialBackoff(Duration.ofSeconds(2), Duration.ofSeconds(10))
                .doOnRetry(x -> {
                    if (x.iteration() == 2) {
                        client.uri("http://fail_over_url/"); // this does not work
                    }
                })
                .retryMax(2))
        .subscribe(WebClientTest::logCompletion, WebClientTest::handleError);
Is there any way to change the URL at the middle of re-try cycle ?
But it seems we cannot change the URL once the client is created
You cannot - it's immutable.
even if I change the URL it's not taking effect
You're not actually changing the URL. Take a look at the uri() method - it's returning a new instance with a URI set. Since you're not doing anything with that new instance, nothing happens (as expected.)
The route I'd probably suggest is to create a separate method to form & return your basic WebClient publisher:
private Mono<String> fromUrl(String url) {
    return WebClient.builder().clientConnector(connector).build()
            .post()
            .uri(url)
            .body(BodyInserters.fromValue("hi"))
            .retrieve()
            .bodyToMono(String.class);
}
...and then do something like:
fromUrl("https://httpstat.us/400").retryWhen(Retry.backoff(2, Duration.ofSeconds(1)))
.onErrorResume(t -> Exceptions.isRetryExhausted(t), t -> fromUrl("https://httpstat.us/500").retryWhen(Retry.backoff(5, Duration.ofSeconds(1))))
.onErrorResume(t -> Exceptions.isRetryExhausted(t), t -> fromUrl("https://httpstat.us/200").retryWhen(Retry.backoff(7, Duration.ofSeconds(1))))
...which will try /400 3 times (the initial attempt plus 2 retries), then /500 up to 6 times, then /200 up to 8 times (but unless it's down, that last one will of course return on the first try.)
Note that the above example uses the latest version of reactor-core, which has the retry functionality built in, rather than the retry functionality in reactor-addons. Translating it to the reactor-addons API should be reasonably straightforward.
This isn't strictly changing the URL within the same retry cycle, but instead chains requests together with configurable retries per request. This allows you to set different retry strategies on different URLs, which is advantageous if you don't necessarily want the retry to "carry on" from its previous point (it could make sense to reset the backoff to one second for a fresh URL, for example.)

Http Websocket as Akka Stream Source

I'd like to listen on a websocket using akka streams. That is, I'd like to treat it as nothing but a Source.
However, all official examples treat the websocket connection as a Flow.
My current approach is using the websocketClientFlow in combination with a Source.maybe. This eventually results in the upstream failing due to a TcpIdleTimeoutException, when there are no new Messages being sent down the stream.
Therefore, my question is twofold:
Is there a way – which I obviously missed – to treat a websocket as just a Source?
If using the Flow is the only option, how does one handle the TcpIdleTimeoutException properly? The exception cannot be handled by providing a stream supervision strategy. Restarting the source with a RestartSource doesn't help either, because the source is not the problem.
Update
So I tried two different approaches, setting the idle timeout to 1 second for convenience
application.conf
akka.http.client.idle-timeout = 1s
Using keepAlive (as suggested by Stefano)
Source.<Message>maybe()
        .keepAlive(Duration.apply(1, "second"), () -> (Message) TextMessage.create("keepalive"))
        .viaMat(Http.get(system).webSocketClientFlow(WebSocketRequest.create(websocketUri)), Keep.right())
{ ... }
When doing this, the Upstream still fails with a TcpIdleTimeoutException.
Using RestartFlow
However, I found out about this approach, using a RestartFlow:
final Flow<Message, Message, NotUsed> restartWebsocketFlow = RestartFlow.withBackoff(
        Duration.apply(3, TimeUnit.SECONDS),
        Duration.apply(30, TimeUnit.SECONDS),
        0.2,
        () -> createWebsocketFlow(system, websocketUri)
);

Source.<Message>maybe()
        .viaMat(restartWebsocketFlow, Keep.right()) // One can treat this part of the resulting graph as a `Source<Message, NotUsed>`
{ ... }

(...)

private Flow<Message, Message, CompletionStage<WebSocketUpgradeResponse>> createWebsocketFlow(final ActorSystem system, final String websocketUri) {
    return Http.get(system).webSocketClientFlow(WebSocketRequest.create(websocketUri));
}
This works in that I can treat the websocket as a Source (although artificially, as explained by Stefano) and keep the TCP connection alive by restarting the websocketClientFlow whenever an exception occurs.
This doesn't feel like the optimal solution though.
No. A WebSocket is a bidirectional channel, and Akka HTTP therefore models it as a Flow. If in your specific case you care only about one side of the channel, it's up to you to form a Flow with a "muted" side, by using either Flow.fromSinkAndSource(Sink.ignore, mySource) or Flow.fromSinkAndSource(mySink, Source.maybe), depending on the case.
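Concretely, for the Source-only case in the question, the muting can look like this (Java DSL; a sketch reusing the question's createWebsocketFlow helper, so system and websocketUri are assumed to exist):

private Source<Message, NotUsed> websocketSource(final ActorSystem system, final String websocketUri) {
    return Source.<Message>maybe() // the "muted" outgoing side: never emits a message
            .viaMat(createWebsocketFlow(system, websocketUri), Keep.none());
}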
As per the documentation:
Inactive WebSocket connections will be dropped according to the idle-timeout settings. In case you need to keep inactive connections alive, you can either tweak your idle-timeout or inject ‘keep-alive’ messages regularly.
There is an ad-hoc combinator to inject keep-alive messages, see the example below and this Akka cookbook recipe. NB: this should happen on the client side.
src.keepAlive(1.second, () => TextMessage.Strict("ping"))
I hope I understand your question correctly. Are you looking for asSourceOf?
path("measurements") {
entity(asSourceOf[Measurement]) { measurements =>
// measurement has type Source[Measurement, NotUsed]
...
}
}

Spring and background thread execution

I have a Spring Boot 1.3.5 web application (running on Tomcat 8). One of its features is to contact a third-party API through REST and launch many lengthy jobs (from 1 to maybe around 30 depending on the user input, each with its own REST call in a for loop). I have all this logic in a controller, called using a POST with some parameters.
What I need is to launch a background task after each job has been acknowledged by the API. The task would be passed a parameter (the job ID) and would periodically (~30 s) poll another API to fetch the job output (these jobs may take from several seconds up to an hour, and fetching a job's output takes about 3-4 seconds plus parsing a long string) and do some business logic based on its status (updating a DB record for now).
However, I'm not sure which TaskExecutor to use, if any, or whether I should use Java's Future structures for this. I might benefit from a thread pool which runs only X threads in parallel and queues the others, so as not to overload the server. Is there an example I can take to learn from and start off?
Sample of my existing code:
@RequestMapping(value={"/job/launch"}, method={RequestMethod.POST})
public ResponseEntity<String> runJob(HttpServletRequest req) {
    for (int deployments = 1; deployments <= deployments_required; deployments++) {
        httpPost.setEntity(new StringEntity(jsonInput));
        CloseableHttpResponse response = httpclient.execute(httpPost);
        HttpEntity entity = response.getEntity();
        responseString = EntityUtils.toString(entity, "UTF-8");
        JsonObject jsonObject = new JsonParser().parse(responseString).getAsJsonObject();
        if (response.getStatusLine().getStatusCode() != 200) {
            resultsNotOk.add(new ResponseEntity<String>(jsonObject.get("message").getAsString(),
                    HttpStatus.INTERNAL_SERVER_ERROR));
            continue;
        }
        String deploymentId = jsonObject.get("id").getAsString();
        // Start background task to keep checking the job every few seconds
        // and find created instance IP addresses
        start_checking_execution(deploymentId);
    }
}
(Yes, this code may be better placed in a Service, but it was originally built as is, so I haven't moved it yet. It may be a good time to do so now.)
I would say this is a job for Spring Batch.
You can define a Reader/Processor (to convert source read objects to target write objects)/Writer to implement the logic.
You can use JobOperator to get job state. See job status transitions
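If full Spring Batch is more machinery than you need, the periodic polling described in the question can also be done with a plain ScheduledExecutorService. A minimal sketch, where pollJobOutput and the pool size are assumptions, not part of the original code:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class JobPoller {
    // bounded pool: at most 10 polls run in parallel, the rest queue
    private final ScheduledExecutorService pool = Executors.newScheduledThreadPool(10);

    public void startCheckingExecution(String deploymentId) {
        AtomicReference<ScheduledFuture<?>> handle = new AtomicReference<>();
        // first poll after 30 s, then 30 s after each previous poll finishes
        handle.set(pool.scheduleWithFixedDelay(() -> {
            if (pollJobOutput(deploymentId)) { // hypothetical: REST poll + DB update
                handle.get().cancel(false);    // job reached a terminal state: stop polling
            }
        }, 30, 30, TimeUnit.SECONDS));
    }

    private boolean pollJobOutput(String deploymentId) {
        // hypothetical: GET the job status, parse the long output string,
        // update the DB record; return true when the job is finished
        return false;
    }
}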

Are there major scaling limits with play framework and jdbc blocking io calls

I am using the Play Framework (2.4) for Java and connecting it to Postgres. The Play Framework is being used as a RESTful service and all it is doing is inserts, updates, reads and deletes using JDBC. On this Play page https://www.playframework.com/documentation/2.3.x/JavaAsync it states clearly that JDBC is blocking and that Play has few threads. For the people who know about this: how limiting could this be, and is there some way I can work around it? My specific app can have a few hundred database calls per second. I will have all the hardware and extra servers, but do not know how Play can handle this or scale to handle this in the code. My code in Play looks like this:
public static Result myprofile() {
    DynamicForm requestData = Form.form().bindFromRequest();
    Integer id = Integer.parseInt(requestData.get("id"));
    try {
        JSONObject jo = null;
        Connection conn = DB.getConnection();
        ResultSet rs;
        JSONArray ja = new JSONArray();
        PreparedStatement ps = conn.prepareStatement("SELECT p.fullname as fullname, s.post as post, to_char(s.created_on, 'MON DD,YYYY') as created_on, s.last_reply as last_reply, s.id as id, s.comments as comments, s.state as state, s.city as city, s.id as id FROM profiles as p INNER JOIN streams as s ON (s.profile_id=p.id) WHERE s.profile_id=? order by created_on desc");
        ps.setInt(1, id);
        rs = ps.executeQuery();
        while (rs.next()) {
            jo = new JSONObject();
            jo.put("fullname", rs.getString("fullname"));
            jo.put("post", rs.getString("post"));
            jo.put("city", rs.getString("city"));
            jo.put("state", rs.getString("state"));
            jo.put("comments", rs.getInt("comments"));
            jo.put("id", rs.getInt("id"));
            jo.put("last_reply", difference(rs.getInt("last_reply"), rs.getString("created_on")));
            ja.put(jo);
        }
        JSONObject mainObj = new JSONObject();
        mainObj.put("myprofile", ja);
        String total = mainObj.toString();
        System.err.println(total);
        conn.close();
        return ok(total);
    } catch (Exception e) {
        e.getMessage();
    }
    return ok();
}
I also know that I can try to wrap that in a futures promise; however, the blocking still occurs. As stated before, I will have all the servers and the other stuff taken care of, but would the Play Framework be able to scale to hundreds of requests per second using JDBC? I am asking and learning now to avoid serious mistakes later on.
Play can absolutely handle this load.
The documentation states that blocking code should be avoided inside controller methods - the default configuration is tuned for them to have asynchronous execution. If you stick some blocking calls in there, your controller will now be waiting for that call to finish before it can process another incoming request - this is bad.
You can’t magically turn synchronous IO into asynchronous by wrapping it in a Promise. If you can’t change the application’s architecture to avoid blocking operations, at some point that operation will have to be executed, and that thread is going to block. So in addition to enclosing the operation in a Promise, it’s necessary to configure it to run in a separate execution context that has been configured with enough threads to deal with the expected concurrency. See Understanding Play thread pools for more information.
https://www.playframework.com/documentation/2.4.x/JavaAsync#Make-controllers-asynchronous
I believe you are aware of this but I wanted to point out the bolded section. Your database has a limited number of threads that are available for applications to make calls on - it may be helpful to track this number down, create a new execution context that is tuned for these threads, and assign that new execution context to a promise that wraps your database call.
Check out this post about application tuning for Play; it should give you an idea of what this looks like. I believe he is using Akka Actors, something that might be out of scope for you, but the idea for thread tuning is the same:
Play 2 is optimized out-of-the-box for HTTP requests which don’t contain blocking calls (i.e. asynchronous). Most database-driven apps in Java use synchronous calls via JDBC so Play 2 needs a bit of extra configuration to tune Akka for these types of requests.
http://www.jamesward.com/2012/06/25/optimizing-play-2-for-database-driven-apps
If you try to execute a massive number of requests on the database without tuning the thread pools, you run the risk of starving the rest of your application of threads, which will halt your application. For the load you are expecting, the default tuning might be OK, but it is worth performing some additional investigation.
Getting started with thread tuning:
https://www.playframework.com/documentation/2.4.x/ThreadPools
You should update your controller to return a Promise, and with Play 2.4 there is also no reason to make it static anymore. https://www.playframework.com/documentation/2.4.x/Migration24#Routing
Define an execution context in the application.conf with name "jdbc-execution-context"
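A sketch of what that dispatcher definition might look like in application.conf (the pool sizes are assumptions; match them to your JDBC connection pool):

jdbc-execution-context {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min = 10
    core-pool-size-max = 10
  }
}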
// reference to the context
ExecutionContext jdbcExecutionContext = Akka.system().dispatchers()
        .lookup("jdbc-execution-context");

return promise(() -> {
    // db call
}, jdbcExecutionContext)
        .map(callResult -> ok(callResult));

Is it a good practice to use JMS Temporary Queue for synchronous use?

If we use JMS request/reply mechanism using "Temporary Queue", will that code be scalable?
As of now, we don't know whether we will be supporting 100 requests per second or thousands of requests per second.
The code below is what I am thinking of implementing. It makes use of JMS in a 'synchronous' fashion. The key part is where the 'Consumer' gets created to point to a 'Temporary Queue' that was created for this session. I just can't figure out whether using such temporary queues is a scalable design.
destination = session.createQueue("queue:///Q1");
producer = session.createProducer(destination);
tempDestination = session.createTemporaryQueue();
consumer = session.createConsumer(tempDestination);
long uniqueNumber = System.currentTimeMillis() % 1000;
TextMessage message = session
.createTextMessage("SimpleRequestor: Your lucky number today is " + uniqueNumber);
// Set the JMSReplyTo
message.setJMSReplyTo(tempDestination);
// Start the connection
connection.start();
// And, send the request
producer.send(message);
System.out.println("Sent message:\n" + message);
// Now, receive the reply
Message receivedMessage = consumer.receive(15000); // in ms or 15 seconds
System.out.println("\nReceived message:\n" + receivedMessage);
Update:
I came across another pattern; see this blog.
The idea is to use 'regular' queues for both send and receive. However, for 'synchronous' calls, in order to get the desired response (i.e. matching the request), you create a Consumer that listens to the receive queue using a 'selector'.
Steps:
// 1. Create Send and Receive Queue.
// 2. Create a msg with a specific ID
final String correlationId = UUID.randomUUID().toString();
final TextMessage textMessage = session.createTextMessage( msg );
textMessage.setJMSCorrelationID( correlationId );
// 3. Start a consumer that receives using a 'Selector'.
consumer = session.createConsumer( replyQueue, "JMSCorrelationID = '" + correlationId + "'" );
So the difference in this pattern is that we don't create a new temp queue for each new request.
Instead, all responses come to a single queue, but each request-thread uses a 'selector' to make sure it receives only the response that it cares about.
I think the downside here is that you have to use a 'selector'. I don't know yet whether that is less or more preferred than the earlier pattern. Thoughts?
Regarding the update in your post - selectors are very efficient if performed on the message headers, like you are doing with the Correlation ID. Spring Integration also internally does this for implementing a JMS Outbound gateway.
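For reference, a minimal sketch of the selector-based round trip (assuming session, requestQueue and replyQueue already exist, and producer was created on the request queue):

String correlationId = UUID.randomUUID().toString();
TextMessage request = session.createTextMessage(msg);
request.setJMSCorrelationID(correlationId);
request.setJMSReplyTo(replyQueue);       // shared reply queue, no temp queue
producer.send(request);

// only the reply whose correlation ID matches this request is delivered here
MessageConsumer consumer = session.createConsumer(
        replyQueue, "JMSCorrelationID = '" + correlationId + "'");
Message reply = consumer.receive(15000); // null on timeout, as in the first snippet
consumer.close();                        // release the selector registration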
Interestingly, the scalability of this may actually be the opposite of what the other responses have described.
WebSphere MQ saves and reuses dynamic queue objects where possible. So, although use of a dynamic queue is not free, it does scale well because as queues are freed up, all that WMQ needs to do is pass the handle to the next thread that requests a new queue instance. In a busy QMgr, the number of dynamic queues will remain relatively static while the handles get passed from thread to thread. Strictly speaking it isn't quite as fast as reusing a single queue, but it isn't bad.
On the other hand, even though indexing on CORRELID is fast, performance is inverse to the number of messages in the index. It also makes a difference if the queue depth begins to build. When the app does a GET with WAIT on an empty queue there is no delay. But on a deep queue, the QMgr has to search the index of existing messages to determine that the reply message isn't among them. In your example, that's the difference between searching an empty index versus a large index 1,000s of times per second.
The result is that 1000 dynamic queues with one message each may actually be faster than a single queue with 1000 threads getting by CORRELID, depending on the characteristics of the app and of the load. I would recommend testing this at scale before committing to a particular design.
Using selector on correlation ID on a shared queue will scale very well with multiple consumers.
1000 requests / s will however be a lot. You may want to divide the load a bit between different instances if the performance turns out to be a problem.
You might want to elaborate on the request vs client numbers. If the number of clients is < 10 and will stay rather static, and the request volume is very high, the most resilient and fast solution might be static reply queues for each client.
Creating temporary queues isn't free. After all, it is allocating resources on the broker(s). Having said that, if you have an unknown (beforehand) potentially unbounded number of clients (multiple JVMs, multiple concurrent threads per JVM, etc.) you may not have a choice. Pre-allocating client queues and assigning them to clients would get out of hand fast.
Certainly what you've sketched is the simplest possible solution. And if you can get real numbers for transaction volume and it scales enough, fine.
Before I'd look at avoiding temporary queues, I'd look more at limiting the number of clients and making the clients long lived. That is to say create a client pool on the client side, and have the clients in the pool create the temporary queue, session, connection, etc. on startup, reuse them on subsequent requests, and tear them down on shutdown. Then the tuning problem become one of max/min size on the pool, what the idle time is to prune the pool, and what the behavior is (fail vs block) when the pool is maxed. Unless you're creating an arbitrarily large number of transient JVMs (in which case you've got bigger scaling issues just from JVM startup overhead), that ought to scale as well as anything. After all, at that point the resources you are allocating reflect the actual usage of the system. There really is no opportunity to use less than that.
The thing to avoid is gratuitously creating and destroying a large number of queues, sessions, connections, etc. Design the server side to allow streaming from the get-go. Then pool if/when you need to. Like as not, for anything non-trivial, you will need to.
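To make the pooling idea concrete, here is a minimal sketch using Apache Commons Pool 2, where Requester is a hypothetical wrapper owning the connection, session, temporary queue and consumer described above:

import javax.jms.Message;
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class RequesterPool {
    private final GenericObjectPool<Requester> pool;

    public RequesterPool() {
        pool = new GenericObjectPool<>(new BasePooledObjectFactory<Requester>() {
            @Override
            public Requester create() throws Exception {
                return new Requester(); // hypothetical: opens connection, session, temp queue
            }

            @Override
            public PooledObject<Requester> wrap(Requester r) {
                return new DefaultPooledObject<>(r);
            }
        });
        pool.setMaxTotal(20);             // the max-size tuning knob mentioned above
        pool.setMinIdle(2);               // keep a couple warm across idle periods
        pool.setBlockWhenExhausted(true); // block (vs fail) when the pool is maxed
    }

    public Message send(Message request) throws Exception {
        Requester r = pool.borrowObject(); // reuses a live client, or creates one
        try {
            return r.makeRequest(request); // hypothetical: send + receive reply on its temp queue
        } finally {
            pool.returnObject(r);
        }
    }
}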
Using temporary queues costs you the creation of replyTo producers every time: instead of using a cached producer for a static replyTo queue, the responder has to call the more costly createProducer for each reply, which impacts performance in a highly concurrent invocation environment.
I've been facing the same problem and decided to pool connections myself inside a stateless bean. One client connection has one tempQueue and lives inside a JMSMessageExchanger object (which contains the connectionFactory, Queue and tempQueue), which is bound to one bean instance. I've tested it in JSE/EE environments, but I'm not really sure about the Glassfish JMS pool behaviour.
Will it actually close JMS connections obtained "by hand" after the bean method ends? Am I doing something terribly wrong?
Also, I've turned off transactions in the client bean (TransactionAttributeType.NOT_SUPPORTED) to send request messages immediately to the request queue.
package net.sf.selibs.utils.amq;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import lombok.Getter;
import lombok.Setter;
import net.sf.selibs.utils.misc.UHelper;
public class JMSMessageExchanger {

    @Setter
    @Getter
    protected long timeout = 60 * 1000;

    public JMSMessageExchanger(ConnectionFactory cf) {
        this.cf = cf;
    }

    public JMSMessageExchanger(ConnectionFactory cf, Queue queue) {
        this.cf = cf;
        this.queue = queue;
    }

    // work
    protected ConnectionFactory cf;
    protected Queue queue;
    protected TemporaryQueue tempQueue;
    protected Connection connection;
    protected Session session;
    protected MessageProducer producer;
    protected MessageConsumer consumer;
    // status
    protected boolean started = false;
    protected int mid = 0;

    public Message makeRequest(RequestProducer producer) throws Exception {
        try {
            if (!this.started) {
                this.init();
                this.tempQueue = this.session.createTemporaryQueue();
                this.consumer = this.session.createConsumer(tempQueue);
            }
            // send request
            Message requestM = producer.produce(this.session);
            mid++;
            requestM.setJMSCorrelationID(String.valueOf(mid));
            requestM.setJMSReplyTo(this.tempQueue);
            this.producer.send(this.queue, requestM);
            // get response
            while (true) {
                Message responseM = this.consumer.receive(this.timeout);
                if (responseM == null) {
                    return null;
                }
                int midResp = Integer.parseInt(responseM.getJMSCorrelationID());
                if (mid == midResp) {
                    return responseM;
                } else {
                    // just skip the other message
                }
            }
        } catch (Exception ex) {
            this.close();
            throw ex;
        }
    }

    public void makeResponse(ResponseProducer producer) throws Exception {
        try {
            if (!this.started) {
                this.init();
            }
            Message response = producer.produce(this.session);
            response.setJMSCorrelationID(producer.getRequest().getJMSCorrelationID());
            this.producer.send(producer.getRequest().getJMSReplyTo(), response);
        } catch (Exception ex) {
            this.close();
            throw ex;
        }
    }

    protected void init() throws Exception {
        this.connection = cf.createConnection();
        this.session = this.connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        this.producer = this.session.createProducer(null);
        this.producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        this.connection.start();
        this.started = true;
    }

    public void close() {
        UHelper.close(producer);
        UHelper.close(consumer);
        UHelper.close(session);
        UHelper.close(connection);
        this.started = false;
    }
}
The same class is used in the client (a stateless bean) and the server (@MessageDriven).
RequestProducer and ResponseProducer are interfaces:
package net.sf.selibs.utils.amq;
import javax.jms.Message;
import javax.jms.Session;
public interface RequestProducer {
    Message produce(Session session) throws Exception;
}
package net.sf.selibs.utils.amq;
import javax.jms.Message;
public interface ResponseProducer extends RequestProducer {
    void setRequest(Message request);
    Message getRequest();
}
Also, I've read the ActiveMQ article about implementing request-response over AMQ:
http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
Maybe I'm too late, but I spent some hours this week getting synchronous request/reply working within JMS. What about extending the QueueRequestor with a timeout? I did, and at least testing on one single machine (running broker, requestor and replier) showed that this solution outperforms the discussed ones. On the other hand, it depends on using a QueueConnection, and that means you may be forced to open multiple connections.
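A minimal sketch of what such a timeout-capable requestor could look like (an assumption of the approach, not the poster's actual code):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;

// Like javax.jms.QueueRequestor, but request() takes a timeout instead of blocking forever
public class TimedQueueRequestor implements AutoCloseable {
    private final TemporaryQueue replyQueue;
    private final MessageProducer producer;
    private final MessageConsumer consumer;

    public TimedQueueRequestor(Session session, Queue queue) throws JMSException {
        this.replyQueue = session.createTemporaryQueue();
        this.producer = session.createProducer(queue);
        this.consumer = session.createConsumer(replyQueue);
    }

    public Message request(Message message, long timeoutMillis) throws JMSException {
        message.setJMSReplyTo(replyQueue);
        producer.send(message);
        return consumer.receive(timeoutMillis); // null if no reply arrived in time
    }

    @Override
    public void close() throws JMSException {
        consumer.close();
        producer.close();
        replyQueue.delete();
    }
}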
