I'm designing a REST API where some operations are propagated to AMQP queues. When a message is processed (successfully or with an error), the client must be notified. My first thought was to boot a lightweight embedded HTTP server when the client library is initialized, so the message-processing mechanism can emit an HTTP request to that server reporting how the operation went. Any other/better ideas for how to implement this?
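For what it's worth, the embedded-server idea can be sketched with the JDK's built-in `com.sun.net.httpserver`, no framework required. Everything here (`NotificationServer`, the `/result` path) is my own illustration, not part of any existing library: the client library starts this server, and the message processor POSTs each outcome to it.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch: the client library boots this server, and the message
// processor POSTs the outcome of each AMQP message to it.
public class NotificationServer {
    private final BlockingQueue<String> outcomes = new LinkedBlockingQueue<>();
    private HttpServer server;

    public int start() throws IOException {
        server = HttpServer.create(new InetSocketAddress(0), 0); // ephemeral port
        server.createContext("/result", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                outcomes.add(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
            exchange.sendResponseHeaders(204, -1); // 204 No Content, no body
            exchange.close();
        });
        server.start();
        return server.getAddress().getPort();
    }

    // Blocks until the processor reports an outcome.
    public String awaitOutcome() throws InterruptedException {
        return outcomes.take();
    }

    public void stop() {
        server.stop(0);
    }
}
```

The main drawbacks of this approach are firewalls/NAT between processor and client, and the client having to be reachable at all; a reply queue per client on the broker side avoids both.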
I'm currently trying to get two Java implementations of Camel to fire-and-forget Kafka messages concurrently.
I got the first implementation working by using a ProducerTemplate and sending an Exchange to a direct: Camel route that sends to Kafka via template.asyncSend(). This let me send messages concurrently because, as I understand it, asyncSend() means I'm not waiting for a response; I can fire and forget messages.
Now in the REST implementation, I've configured the HTTP route to go to a direct: route using rest(), and from there the direct route sets the exchange pattern to InOnly and goes to a SEDA route, which handles sending the exchange message to Kafka. However, when I run a concurrency test using the Rest DSL, I get a RejectedExecutionException from DefaultAsyncProcessorAwaitManager saying it is waiting on an asynchronous callback for the exchange ID.
How can I make the Rest DSL just fire and forget? In the ProducerTemplate version I don't run into this error, since asyncSend() returns a Future but I never invoke the method that retrieves the callback/result from it. I tried using an async processor in the from(seda).process(asyncProcessor).to(kafka) step, but I'm stuck on how else to achieve this.
TL;DR: Can I make the Camel REST route not wait for the callback and just fire and forget?
EDIT: In either implementation, can I receive acknowledgement that the message was successfully sent to Kafka, without waiting to see whether it was processed on the consumer side? I still haven't been able to wrap my head around this: I want concurrent users to fire-and-forget messages (send asynchronously) yet still receive acknowledgement that the message is in Kafka. Are these fundamentally contradictory? Would wanting acks make this synchronous? And why are we getting blocked on an asynchronous callback in the first place?
Thank you!
I first tried changing the rest route from from(direct).to(kafka) to from(direct).setExchangePattern(InOnly).to(seda) plus from(seda).to(kafka). Then I tried using an async processor instead of a regular processor. Both times I expected the concurrency test to let me send a bunch of requests at once without running into this error about waiting for an async callback.
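The fire-and-forget semantics the ProducerTemplate version relies on can be pictured in plain Java, independent of Camel: asyncSend() hands back a Future, and as long as nothing ever calls get() on it, the caller never blocks. `FireAndForget` below is my own illustration of that pattern, not Camel API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch of fire-and-forget: submit work, get a Future back, never join it.
public class FireAndForget {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Analogous to template.asyncSend(): returns immediately with a Future.
    // The caller is free to ignore it, which is exactly "fire and forget".
    public Future<?> send(String message, Consumer<String> sink) {
        return pool.submit(() -> sink.accept(message));
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The error in the Rest DSL case suggests the HTTP consumer still considers itself a party to the exchange; the pattern above only works when the submitting side genuinely gives up its claim on the result.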
I use a microservice architecture in my project, and for interservice communication I use the NATS message queue. I wrote a gateway that handles all HTTP requests and puts them on a queue. All endpoint services are subscribed to this queue.
At the endpoint services I use Xitrum, which is based on Netty IO. When I get a request from the queue, I deserialize it into a FullHttpRequest. But I don't know how to hand it to my Netty server so it can be handled according to the business logic (without using an external HTTP client that sends it to localhost, for example).
Is there any way to send a FullHttpRequest instance to a Netty server (listening on localhost:8000) using the Netty API? Or maybe there is another solution; what is the common approach?
Please see the Netty examples, which have everything you need:
https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/http/snoop
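The snoop example shows the handler side; the key point for the question is that a deserialized request can be run through the same handler chain in-process, without a localhost socket (in Netty 4 this is what `io.netty.channel.embedded.EmbeddedChannel` and its `writeInbound()` do). Here is a framework-free sketch of that idea, with made-up `Request`/`Response` stand-ins for `FullHttpRequest`/`FullHttpResponse`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch: instead of POSTing the deserialized request back to localhost,
// run it through the same chain of handlers the server pipeline would use.
public class InProcessDispatch {
    // Stand-ins for FullHttpRequest / FullHttpResponse.
    public record Request(String uri, String body) {}
    public record Response(int status, String body) {}

    private final List<Function<Request, Request>> inbound = new ArrayList<>();
    private final Function<Request, Response> businessLogic;

    public InProcessDispatch(Function<Request, Response> businessLogic) {
        this.businessLogic = businessLogic;
    }

    // Analogous to adding a ChannelInboundHandler to the pipeline.
    public void addHandler(Function<Request, Request> handler) {
        inbound.add(handler);
    }

    // Equivalent of writing the message "inbound": no socket involved.
    public Response dispatch(Request req) {
        for (Function<Request, Request> h : inbound) {
            req = h.apply(req);
        }
        return businessLogic.apply(req);
    }
}
```

Whether Xitrum exposes its pipeline in a way that allows this is something I can't confirm; the general Netty approach is to feed the message to the channel pipeline directly rather than round-trip through the network stack.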
Can an asynchronous web service be achieved with the Java Spring-WS framework, like how it's explained here?
Basically, when a client sends a request to the server for the first time, the web service on the server will call back the client whenever it has information based on that request. That means the server may reply more than once to the initial request from the client.
Suggested approach, based on my experience:
Asynchronous web services are generally implemented in the following model:
CLIENT SUBMITS REQUEST -> SERVER RETURNS 202 ACCEPTED (polling/job URL in header) -> CLIENT KEEPS POLLING THE JOB URL -> SERVER RETURNS 200 OK FOR THE JOB URL, WITH THE JOB RESPONSE IN THE BODY.
You may need to define a few response bodies for a job in progress. When the client polls the server and the server is still processing the request, the body should contain an IN PROGRESS message in a form agreed with the client. Once the server has finished processing, the desired response should be available in the body.
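A minimal in-memory sketch of the server side of this model (the `JobStore`, `Status`, and `Job` names are mine, not from any framework): submit returns a job id immediately, which is what the 202 response would carry, and polls return IN_PROGRESS until the worker finishes.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the 202-Accepted / poll-the-job-URL model.
public class JobStore {
    public enum Status { IN_PROGRESS, DONE }
    public record Job(Status status, String result) {}

    private final Map<String, Job> jobs = new ConcurrentHashMap<>();
    private final ExecutorService workers = Executors.newCachedThreadPool();

    // Returns the job id the 202 response's Location header would point at.
    public String submit(Callable<String> work) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, new Job(Status.IN_PROGRESS, null));
        workers.submit(() -> {
            try {
                jobs.put(id, new Job(Status.DONE, work.call()));
            } catch (Exception e) {
                jobs.put(id, new Job(Status.DONE, "error: " + e.getMessage()));
            }
        });
        return id;
    }

    // What a GET on the job URL would return: IN_PROGRESS or the final body.
    public Job poll(String id) {
        return jobs.get(id);
    }
}
```

A real service would persist the job table and expire finished jobs; the flow itself is exactly the polling model described above.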
Hope it helps!
I'm using JAX-WS with Spring. The client is a JMS consumer application that calls the server to do some additional processing, including sending an email. One situation I have failed to handle is a message arriving at the consumer while the "server" application is restarting. Right now the client just times out and the message is never fully processed. Any thoughts?
Set up a dead-letter queue in which you place messages/web service requests that fail to be processed for some reason. You can then develop a scheduled service that polls the dead-letter queue at an interval and retries sending the messages.
Be sure to set up your client to time out gracefully (see this answer for details on timeout configuration), and use a persistent store (file/DB) for your dead-letter queue.
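A stdlib sketch of that retry loop (`DeadLetterRetrier` and its names are illustrative; a real setup would use the broker's dead-letter queue and a persistent store): failed sends land in a queue, and a scheduled task periodically retries them, backing off until the next tick if the target is still down.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;

// Sketch: messages whose send fails go to a dead-letter queue; a scheduled
// task polls the queue and retries, re-queueing on repeated failure.
public class DeadLetterRetrier {
    private final BlockingQueue<String> deadLetters = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Predicate<String> sender; // true if the send succeeded

    public DeadLetterRetrier(Predicate<String> sender) {
        this.sender = sender;
    }

    public void deadLetter(String message) {
        deadLetters.add(message);
    }

    public void start(long intervalMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            String msg;
            while ((msg = deadLetters.poll()) != null) {
                if (!sender.test(msg)) {
                    deadLetters.add(msg); // still failing: keep for next round
                    break;                // back off until the next tick
                }
            }
        }, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```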
One of our products implements the following one-way web service structure:
Server <--- SOAP over JMS (queue) --- Middleware <--- SOAP over HTTP --- Client
In this model, clients send SOAP messages over HTTP to our middleware (Progress SonicMQ). The messages get pushed into JMS queues by SonicMQ and our server fetches them from there. However, as you can see, the server does not send a response to the client (asynchronous JMS).
We would like to add a response channel to this model. The often-suggested solution is to create a temporary replyTo queue on the fly in the middleware, allowing the server to send its response to that queue. The client then fetches the response and the replyTo queue is closed. This sounds convenient enough, but unfortunately our clients operate over plain HTTP and not over JMS, so they cannot easily set up replyTo queues.
One approach to achieving a response channel in such a hybrid HTTP/JMS SOAP model would be to configure the middleware to open a replyTo queue on each successful SOAP receive, append the replyTo-queue and sender information to the SOAP message, and push the message to the queue, where it would be fetched by the server. After receiving and processing the message, the server could send a response to the indicated replyTo queue in the middleware. Finally, the middleware would send the response (SOAP) over HTTP back to the original client, using the data that was inserted into the SOAP message when the request was first received.
While probably possible, this sounds kind of hacky. So the question is: are there any cleaner ways of achieving such a request/response model in our case? The server end has been implemented in Java.
The solution:
Progress SonicMQ supports a "Content Reply Send" HTTP acceptor, which makes it easy to send a JMS reply. The Content Reply Send acceptor works in the following way:
Acceptor receives the HTTP message a client sent
Acceptor creates a temporary JMS queue
Acceptor builds up a JMS message, containing the HTTP body, and adds the temporary queue's identification to the newly created JMS message
Acceptor pushes the JMS message into its destination queue (not the temporary queue)
Acceptor starts consuming the temporary reply-To queue
When the client fetches the message from the original destination queue, the message contains the reply-To queue's identification
Client consumes message
Client sends reply to the reply-To queue
Acceptor receives message from the queue
Acceptor sends the reply as an HTTP response to the client that originally sent the HTTP message
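The acceptor flow above is the classic temporary reply-to pattern. A JMS-free sketch of it (all names here are my own; a stand-in `BlockingQueue` plays the temporary queue that `JMSReplyTo` would normally name):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

// Sketch of the temporary reply-to queue pattern the acceptor implements:
// each request carries its own reply queue, and the consumer answers into it.
public class ReplyToPattern {
    // Stand-in for a JMS message with a JMSReplyTo header.
    public record Message(String body, BlockingQueue<String> replyTo) {}

    private final BlockingQueue<Message> destination = new LinkedBlockingQueue<>();

    // Acceptor side: create the temporary reply queue, enqueue the request,
    // then block waiting for the reply (this is the synchronous HTTP leg).
    public String sendAndAwaitReply(String body, long timeoutMillis)
            throws InterruptedException {
        BlockingQueue<String> replyQueue = new SynchronousQueue<>(); // "temporary queue"
        destination.add(new Message(body, replyQueue));
        String reply = replyQueue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        return reply != null ? reply : "timeout"; // Sonic reports a timeout to the client
    }

    // Consumer ("server") side: take a message, reply into its reply-to queue.
    public void serveOne(Function<String, String> handler) throws InterruptedException {
        Message m = destination.take();
        m.replyTo().put(handler.apply(m.body()));
    }
}
```

Real JMS makes this simpler still: the temporary queue is created with createTemporaryQueue() and travels in the JMSReplyTo header, so no payload surgery is needed.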
Should the consumer (the "server" in our case) fail to send a reply, causing a timeout, Sonic's HTTP acceptor sends an HTTP message to the client indicating the timeout. This is a very standard feature in SonicMQ; I suppose it exists in other products as well.
This allows using standard SOAP over JMS (see skaffman's answer) at the "server" end and avoids any custom programming in the middleware.
I still see some problems in the JMS model though, but this is definitely an improvement.
Update 2009-11-05:
After researching this issue further, it turns out my suspicion about HTTP <--> middleware <--> JMS was justified.
There are a few critical problems with this model. A synchronous-asynchronous model with middleware simply isn't convenient. Either have both ends implement a JMS connection (which should rock) or go with HTTP at both ends. Mixing them results only in headaches. Of the two, SOAP-over-HTTP is simpler and better supported than SOAP-over-JMS.
Once more: if you are designing this kind of a system... DON'T.
I don't think your suggested solution is a hack at all; I think it's the right solution. You have the client-middle layer using a synchronous protocol, and the middle-server layer using an asynchronous one, to which you have to add a reply path in order to satisfy the synchronous semantics. That's what middleware is for. Remember that JMS provides explicit support for temporary reply-to queues, so you won't need to mess with the payload at all.
A more left-field possibility is to leverage the fact that SOAP 1.2 was designed with JMS in mind, so you could use a web service layer between the middleware and server layers that does SOAP-over-JMS. That means you can keep SOAP end-to-end, with the middleware changing only the transport.
The only web service stack I know of that supports JMS transport is Spring Web Services, where the process and development are documented here. This would also give you the opportunity to port your SOAP layer to Spring-WS, which kicks ass :)
Why not add a link to a page that lets users check when a response is ready, a la a FedEx tracking ID? Give your users the tracking ID when they send the request.
This would fit into the HTTP request/response idiom, and your users would still know that the request is "fire and forget".