I am developing a jax-ws webservice that pushes messages asynchronously to the subscribed consumers using one-way operation.
Unfortunately, with each notification the server awaits an HTTP 202 response confirmation, which blocks the thread for a fraction of a second. This is affecting the performance of the system and I am looking for a way around it.
Is there any way to execute a web-service one-way call and ignore the HTTP response status?
Ok, so after spending a lot of time on this I have found two solutions:
1) Using Apache HttpComponents, which provides an asynchronous HTTP client with a nice API that lets us build the HTTP exchange from scratch (a sketch is at the end of this answer).
2) A more web-service-oriented solution based on the Apache CXF platform (which ships an async HTTP client implementation) - first we need to set the global Bus property:
Bus bus = BusFactory.getDefaultBus();
bus.setProperty(AsyncHTTPConduit.USE_ASYNC, Boolean.TRUE);
Then, we use custom interceptor to set the message exchange property to asynchronous:
final class SkipWaitInterceptor extends AbstractSoapInterceptor {

    SkipWaitInterceptor() {
        super(Phase.SETUP);
    }

    @Override
    public void handleMessage(final SoapMessage message) throws Fault {
        // mark the exchange as asynchronous so the conduit does not block waiting for the HTTP response
        message.getExchange().setSynchronous(false);
    }
}
Finally, we register the interceptor on our asynchronous Endpoint:
org.apache.cxf.endpoint.Client client =
org.apache.cxf.frontend.ClientProxy.getClient(this.notificationConsumer);
org.apache.cxf.endpoint.Endpoint cxfEndpoint = client.getEndpoint();
cxfEndpoint.getOutInterceptors().add(new SkipWaitInterceptor());
That's all, one-way operation responses no longer block the communication.
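For completeness, here is roughly what option 1) could look like with the HttpComponents async client. This is only a sketch: the class name, the consumer URL and the soapEnvelope variable are illustrative, not taken from my actual code.
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.concurrent.FutureCallback;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

class AsyncNotifier {
    // created once and reused for all notifications
    private final CloseableHttpAsyncClient client = HttpAsyncClients.createDefault();

    AsyncNotifier() {
        client.start();
    }

    void pushNotification(String soapEnvelope) {
        HttpPost post = new HttpPost("http://consumer.example.com/notify"); // hypothetical consumer endpoint
        post.setEntity(new StringEntity(soapEnvelope, ContentType.TEXT_XML));
        client.execute(post, new FutureCallback<HttpResponse>() {
            @Override
            public void completed(HttpResponse response) {
                // the 202 arrives here on an I/O thread; the sending thread never blocked
            }
            @Override
            public void failed(Exception ex) {
                // log and carry on
            }
            @Override
            public void cancelled() {
            }
        });
    }
}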
Can somebody please clarify which of these message communication patterns:
Point-To-Point
Request-Reply
Publish/Subscribe
... is used in the generated Vert.x Service Proxy classes for a RESTful CRUD app (which has 4 HttpServerVerticles that communicate with a DatabaseVerticle, all of them deployed by a MainVerticle)?
Thank you in advance.
I presume it's Request-Reply, since it sends an HTTP request and receives an HTTP response, and "Vert.x in Action" states (in chapter 3.1.4):
If you need message consumers to get back to the entity that sent the event then go for request-reply.
Any help/advice is greatly appreciated.
TL;DR: Request-Reply
If you look at the docs for service proxies (https://vertx.io/docs/vertx-service-proxy/java/) you can see right at the beginning that they save you from writing the following "boilerplate" code:
JsonObject message = new JsonObject();
message.put("collection", "mycollection")
       .put("document", new JsonObject().put("name", "tim"));
DeliveryOptions options = new DeliveryOptions().addHeader("action", "save");
vertx.eventBus().request("database-service-address", message, options, res2 -> {
    if (res2.succeeded()) {
        // done
    } else {
        // failure
    }
});
Also from the same link:
A service is described with a Java interface containing methods following the async pattern. Under the hood, messages are sent on the event bus to invoke the service and get the response back. But for ease of use, it generates a proxy that you can invoke directly (using the API from the service interface).
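So the generated proxy is just hiding that same request-reply exchange behind a typed interface. A rough sketch of what that looks like (the DatabaseService interface name and the address are illustrative, not taken from your app):
import io.vertx.codegen.annotations.ProxyGen;
import io.vertx.core.AsyncResult;
import io.vertx.core.Handler;
import io.vertx.core.json.JsonObject;
import io.vertx.serviceproxy.ServiceProxyBuilder;

@ProxyGen
public interface DatabaseService {
    // each method follows the async pattern; the generated proxy turns the call
    // into an event-bus request, and the generated handler sends the reply back
    void save(String collection, JsonObject document, Handler<AsyncResult<Void>> resultHandler);
}

// caller side: still request-reply over the event bus, just wrapped in a typed API
DatabaseService service = new ServiceProxyBuilder(vertx)
        .setAddress("database-service-address")
        .build(DatabaseService.class);
service.save("mycollection", new JsonObject().put("name", "tim"), res -> {
    if (res.succeeded()) {
        // done
    } else {
        // failure
    }
});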
I have a requirement where some of the STOMP websocket connections need to be handled synchronously.
Meaning I have a client (Spring) subscribed to a topic ("/topic").
I have a server (Spring) that defines the broker ("/topic") and also defines handlers ("/app/hello").
Now, is it possible for the client to send a request to /app/hello and then wait for a response before sending the next request to /app/hello?
How do I return a value from my server (the STOMP spec mentions RECEIPT frames, but I don't think those can be controlled manually)?
How do I wait for the value on my client after a send?
To connect a Java client to a websocket endpoint you can use Tyrus, the reference implementation of JSR 356 (Java API for WebSocket).
Basically you will need to implement a client endpoint (javax.websocket.Endpoint) and a message handler (javax.websocket.MessageHandler). In the endpoint you register the message handler with the current session on open:
public class ClientEndpoint extends Endpoint {
    ...
    @Override
    public void onOpen(final Session aSession, final EndpointConfig aConfig) {
        aSession.addMessageHandler(yourMessageHandler);
    }
}
To connect to the server endpoint you can use the ClientManager:
final ClientManager clientManager = ClientManager.createClient();
clientManager.connectToServer(clientEndpoint, config, uriToServerEndpoint);
The message handler's onMessage method will be invoked if the server endpoint sends something to the topic.
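A minimal sketch of such a handler (yourMessageHandler above is just a placeholder for something like this, using javax.websocket.MessageHandler):
MessageHandler.Whole<String> yourMessageHandler = new MessageHandler.Whole<String>() {
    @Override
    public void onMessage(String message) {
        // called once per complete text frame the server pushes to this session
        System.out.println("Received: " + message);
    }
};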
Depending on your needs you can either choose to implement the mentioned interfaces or use the corresponding annotations.
UPDATE:
The STOMP website lists several implementations of the STOMP protocol. For Java there are Gozirra and Stampy. I have no experience with these frameworks, but the examples are pretty straightforward.
I have a hub-and-spoke architecture where a GET request comes into the hub and is routed to one of the spokes for processing. On the hub I also put the request in a map with a UUID so that I can return the proper response when I get the data back from processing. The spokes are identical and are used to balance the load. I then need to pass the information back to the hub from the spoke and return the proper response.
I would like to do the messaging using JMS.
What is the best combination of integration patterns to accomplish this?
You already have Request/Reply within Vert.x, so you can achieve this behavior with about 20 lines of code:
public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);
    router.get("/").handler((request) -> {
        // When hub receives request, it dispatches it to one of the Spokes
        String requestUUID = UUID.randomUUID().toString();
        vertx.eventBus().send("processMessage", requestUUID, (spokeResponse) -> {
            if (spokeResponse.succeeded()) {
                request.response().end("Request " + requestUUID + ":" + spokeResponse.result().body().toString());
            }
            // Handle errors
        });
    });
    // We create two Spokes
    vertx.deployVerticle(new SpokeVerticle());
    vertx.deployVerticle(new SpokeVerticle());
    // This is your Hub
    vertx.createHttpServer().requestHandler(router::accept).listen(8888);
}
And here's what Spoke looks like:
/**
 * Static only for the sake of example
 */
static class SpokeVerticle extends AbstractVerticle {

    private String id;

    @Override
    public void start() {
        this.id = UUID.randomUUID().toString();
        vertx.eventBus().consumer("processMessage", (request) -> {
            // Do something smart
            // Reply
            request.reply("I'm Spoke " + id + " and my reply is 42");
        });
    }
}
Try accessing it in your browser at http://localhost:8888/
You should see that a new request ID is generated every time, while only one of the two Spokes answers each request.
Well, if I understand your design correctly, this seems to be a request/reply scenario, since the spoke is actually returning some response. If it didn't, it would be publish/subscribe.
You can use ActiveMQ for jms and request/reply. See here:
http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
As for the details, it all depends on your requirements: will the response be sent fairly immediately, or is it a long-running process?
If it is a long-running process you can avoid request/reply and use a fire-and-forget scenario.
Basically, the hub fires a message on a queue which is listened to by one of the spoke components. Once the backend processing is done, the spoke returns the response to a queue monitored by the hub. You can correlate the request and response via some correlationId. During the request part, you save the correlationId in a cache to match against the response. In a request/reply scenario this is done for you by the infrastructure, but don't use it for long-running processes.
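A minimal sketch of that correlation with plain JMS (the queue names, the pendingRequests cache and the session/producer/consumer objects are all illustrative, not a complete implementation):
// hub side: fire and forget, remembering the correlation id
String correlationId = UUID.randomUUID().toString();
TextMessage request = session.createTextMessage(payload);
request.setJMSCorrelationID(correlationId);
pendingRequests.put(correlationId, clientContext); // cache whatever you need to answer the original GET later
producer.send(session.createQueue("spoke.requests"), request);

// hub side: a listener on the response queue matches replies back to waiting callers
responseConsumer.setMessageListener(message -> {
    try {
        Object context = pendingRequests.remove(message.getJMSCorrelationID());
        // complete the original HTTP response for that context with the spoke's result
    } catch (JMSException e) {
        // handle/log
    }
});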
To summarise:
Use ActiveMQ for your message processing with JMS.
Use Camel for the REST bits.
Use request/reply if you are sure you expect a response fairly rapidly.
Use fire and forget if you expect the response to take a long time but have to match the message correlationIds.
If you wish to use Camel with JMS, then you should use Request-Reply EIP, and as far as examples go, you have a pretty good one provided via Camel's official examples - it may be a bit old but it's still very much valid.
You can safely ignore the example's Spring-based Camel configuration; its route definitions alone provide sufficient information:
public class SpokeRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("jms:queue:spoke")
            .process(e -> {
                Object result = ...; // do some processing
                e.getIn().setBody(result); // publish the result
                // Camel will automatically reply if it finds ReplyTo and CorrelationId headers
            });
    }
}
Then all the hub needs to do is invoke:
ProducerTemplate camelTemplate = camelContext.createProducerTemplate();
Object response = camelTemplate.sendBody("jms:queue:spoke", ExchangePattern.InOut, input);
How can I send a message to an endpoint without waiting for that endpoint's route to process it (that is, my route should just dispatch the message and finish)?
Using wireTap or multicast is what you're after. A direct: endpoint will modify the Exchange for the next step no matter what ExchangePattern is specified. You can see this with the following failing test:
public class StackOverflowTest extends CamelTestSupport {

    private static final String DIRECT_INPUT = "direct:input";
    private static final String DIRECT_NO_RETURN = "direct:no.return";
    private static final String MOCK_OUTPUT = "mock:output";
    private static final String FIRST_STRING = "FIRST";
    private static final String SECOND_STRING = "SECOND";

    @NotNull
    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from(DIRECT_INPUT)
                    .to(ExchangePattern.InOnly, DIRECT_NO_RETURN)
                    .to(MOCK_OUTPUT)
                    .end();

                from(DIRECT_NO_RETURN)
                    .bean(new CreateNewString())
                    .end();
            }
        };
    }

    @Test
    public void testShouldNotModifyMessage() throws JsonProcessingException, InterruptedException {
        final MockEndpoint myMockEndpoint = getMockEndpoint(MOCK_OUTPUT);
        myMockEndpoint.expectedBodiesReceived(FIRST_STRING);
        template.sendBody(DIRECT_INPUT, FIRST_STRING);
        assertMockEndpointsSatisfied();
    }

    public static class CreateNewString {
        @NotNull
        public String handle(@NotNull Object anObject) {
            return SECOND_STRING;
        }
    }
}
Now if you change the above to a wireTap:
from(DIRECT_INPUT)
    .wireTap(DIRECT_NO_RETURN)
    .to(MOCK_OUTPUT)
    .end();
you'll see it works as expected. You can also use multicast:
from(DIRECT_INPUT)
    .multicast()
    .to(DIRECT_NO_RETURN)
    .to(MOCK_OUTPUT)
    .end();
wireTap(endpoint) is the answer.
You can use a ProducerTemplate's asyncSendBody() method to send an InOnly message to an endpoint...
template.asyncSendBody("direct:myInOnlyEndpoint", "myMessage");
See http://camel.apache.org/async.html for some more details.
That might depend on what endpoints etc. you are using, but one common option is to put a seda endpoint in between.
from("foo:bar")
.bean(processingBean)
.to("seda:asyncProcess") // Async send
.bean(moreProcessingBean)
from("seda:asyncProcess")
.to("final:endpoint"); // could be some syncrhonous endpoint that takes time to send to. http://server/heavyProcessingService or what not.
The seda endpoint behaves like a queue (first in, first out). If you dispatch several events to a seda endpoint faster than the route can finish processing them, they will stack up and wait for processing, which is a nice behaviour.
You can use inOnly in your route to only send your message to an endpoint without waiting for a response. For more details see the request reply documentation or the event message documentation
from("direct:testInOnly").inOnly("mock:result");
https://people.apache.org/~dkulp/camel/async.html
Both for InOnly and InOut you can send sync or async. It seems strange that you can send InOnly but async, but at least here it explains that you can wait for Camel to accept the exchange and then fire and forget.
The Async Client API
Camel provides the Async Client API in the ProducerTemplate where we have added about 10 new methods to Camel 2.0. We have listed the most important in the table below:
setExecutorService (returns void) - Is used to set the Java ExecutorService. Camel will by default provide a ScheduledExecutorService with 5 threads in the pool.
asyncSend (returns Future) - Is used to send an async exchange to a Camel Endpoint. Camel will immediately return control to the caller thread after the task has been submitted to the executor service. This allows you to do other work while Camel processes the exchange in the other async thread.
asyncSendBody (returns Future) - As above, but for sending a body only. This is a request-only messaging style, so no reply is expected. Uses the InOnly exchange pattern.
asyncRequestBody (returns Future) - As above, but for sending a body only. This is a Request Reply messaging style, so a reply is expected. Uses the InOut exchange pattern.
extractFutureBody (returns T) - Is used to get the result from the asynchronous thread using the Java Concurrency Future handle.
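A quick usage sketch of asyncRequestBody and extractFutureBody (the endpoint URI is made up):
ProducerTemplate template = camelContext.createProducerTemplate();
// submit the exchange; Camel routes it on another thread and control returns immediately
Future<Object> future = template.asyncRequestBody("direct:someSlowEndpoint", "Hello");
// ... do other work here ...
// block only at the point where the reply is actually needed
String reply = template.extractFutureBody(future, String.class);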
The Async Client API with callbacks
In addition to the Client API from above Camel provides a variation that uses callbacks when the message Exchange is done.
asyncCallback (returns Future) - In addition, a callback is passed in as a parameter using the org.apache.camel.spi.Synchronization callback. The callback is invoked when the message exchange is done.
asyncCallbackSendBody (returns Future) - As above, but for sending a body only. This is a request-only messaging style, so no reply is expected. Uses the InOnly exchange pattern.
asyncCallbackRequestBody (returns Future) - As above, but for sending a body only. This is a Request Reply messaging style, so a reply is expected. Uses the InOut exchange pattern.
These methods also return the Future handle in case you need it. The difference is that they invoke the callback as well when the Exchange is done being routed.
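For example, a sketch with asyncCallbackSendBody (the endpoint URI is made up; SynchronizationAdapter is the convenience base class, found in org.apache.camel.impl in Camel 2.x):
template.asyncCallbackSendBody("direct:fireAndForget", "Hello", new SynchronizationAdapter() {
    @Override
    public void onComplete(Exchange exchange) {
        // invoked once the exchange has been routed successfully
    }

    @Override
    public void onFailure(Exchange exchange) {
        // invoked if routing the exchange failed
    }
});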
The Future API
The java.util.concurrent.Future API has, among others, the following methods:
isDone (returns boolean) - Returns whether the task is done or not. Will return true even if the task failed due to a thrown exception.
get() (returns Object) - Gets the response of the task. If an exception was thrown, a java.util.concurrent.ExecutionException is thrown with the caused exception.
I'm writing a simple Google Web Toolkit service which acts as a proxy; it basically exists to allow the client to make a POST to a different server. The client essentially uses this service to request an HTTP call. The service has only one asynchronous method, called ajax(), which should just forward the server's response. My code for implementing the call looks like this:
class ProxyServiceImpl extends RemoteServiceServlet implements ProxyService {
    @Override
    public Response ajax(String data) {
        RequestBuilder rb = /*make a request builder*/
        RequestCallback rc = new RequestCallback() {
            @Override
            public void onResponseReceived(Response response) {
                /* Forward this response back to the client as the return value of
                   the ajax method... somehow... */
            }
        };
        rb.sendRequest(data, rc);
        return /* The response above... except I can't */;
    }
}
You can see the basic form of my problem, of course. The ajax() method is used asynchronously, but GWT decides to be smart and hide that from the dumb old developer, so they can just write normal Java code without callbacks. GWT services basically just do magic instead of accepting a callback parameter.
The trouble arises, then, because GWT is hiding the callback object from me. I'm trying to make my own asynchronous call from the service implementation, but I can't, because GWT services assume that you behave synchronously in service implementations. How can I work around this and make an asynchronous call from my service method implementation?
You are mixing up client and server side code. In ProxyServiceImpl, you CANNOT use RequestBuilder. RequestBuilder is a client side class which will only execute in the browser.
A server-to-server HTTP call is always synchronous. Instead of using RequestBuilder, you should make use of a library like HttpClient, get the result and then send it back to the client. That would solve the problem you are facing.
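A rough sketch of what that could look like with Apache HttpClient (the target URL is made up, and the return type is changed to String since GWT's Response is also a client-side class; your ProxyService interface would have to change accordingly):
import java.io.IOException;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.BasicResponseHandler;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

class ProxyServiceImpl extends RemoteServiceServlet implements ProxyService {
    @Override
    public String ajax(String data) {
        try (CloseableHttpClient http = HttpClients.createDefault()) {
            HttpPost post = new HttpPost("http://other-server.example.com/endpoint"); // made-up target
            post.setEntity(new StringEntity(data, ContentType.APPLICATION_JSON));
            // blocks until the remote server answers, then the body goes back to the GWT client
            return http.execute(post, new BasicResponseHandler());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}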
But I should add: you DO NOT want to build a proxy at the application level. You could just as well use an HTTP proxy such as Apache's mod_proxy.