Play Framework SSE Closing Chunked Response - java

I'm trying to implement a Server-Sent Events (SSE) server in Play Framework 1.2.5.
How can I know if the client called EventSource.close() (or closed its browser window, for example)? This is a simplified piece of the server code I'm using:
public class SSE extends Controller {
    public static void updater() {
        response.contentType = "text/event-stream";
        response.encoding = "UTF-8";
        response.status = 200;
        response.chunked = true;
        while (true) {
            Promise<String> promise = Producer.getNextMessage();
            String msg = await(promise);
            response.writeChunk("data: " + msg + "\n\n");
        }
    }
}
Producer should deal with queuing and Promise objects and produce the output, but I need to know when to stop it (i.e., stop filling its queue). I would expect response.writeChunk() to throw an exception if the output stream is closed, but none is thrown.
There's a similar example that does not deal with SSE, only with plain chunked responses, at http://www.playframework.com/documentation/1.2.5/asynchronous#HTTPresponsestreaming

Since play.mvc.Controller doesn't let me know whether the output stream was closed during execution, I solved the problem through the Producer itself:
In Producer.getNextMessage(), the current time is remembered.
In Producer.putMessage(String), the time since the last 'get' is checked. If it's greater than some threshold, we can consider the SSE channel closed.
There's also the class play.libs.F.EventStream, which can be useful within the Producer.
Plus, Producer might not be the right name here, since it's really more of a dispatching queue...
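A minimal sketch of that timeout idea, outside of Play (the class name, threshold handling, and synchronous getNextMessage() are all illustrative; in the real controller the 'get' side would hand back a Promise):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the approach described above: the dispatcher remembers when the
// consumer last polled; if a new message arrives long after the last poll,
// the SSE channel is presumed closed and the producer should stop.
public class DispatchQueue {
    private final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
    private final long thresholdMillis;
    private volatile long lastPollMillis;

    public DispatchQueue(long thresholdMillis) {
        this.thresholdMillis = thresholdMillis;
        this.lastPollMillis = System.currentTimeMillis();
    }

    // Called from the controller loop; in real code this would return a Promise.
    public String getNextMessage() {
        lastPollMillis = System.currentTimeMillis();
        return queue.poll();
    }

    // Returns false when the consumer stopped polling, i.e. the channel
    // is considered closed and the caller should stop filling the queue.
    public boolean putMessage(String msg) {
        if (System.currentTimeMillis() - lastPollMillis > thresholdMillis) {
            return false;
        }
        queue.offer(msg);
        return true;
    }
}
```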


Is there any way to wait for a JMS message to be dequeued in a unit test?

I'm writing a Spring Boot based project where I have some synchronous (e.g., REST API calls) and asynchronous (JMS) pieces of code (the broker I use is a dockerized instance of ActiveMQ, in case there's some kind of trick/workaround).
One of the problems I'm currently struggling with: my application receives a REST API call (I'll call it "a sync call"), does some processing, and then sends a JMS message to a queue (async), which is then handled and processed (let's say I have a heavy load to perform, which is why I want it to be async).
Everything works fine when running the application; async messages are enqueued and dequeued as expected.
When I'm writing tests (and I'm testing the whole service, which includes the sync and async calls in rapid succession), the test code is too fast and the message is still waiting to be dequeued (we're talking about milliseconds, but that's the problem).
Basically, as soon as I receive the response from the API call, the message is still in the queue, so if, for example, I make a query to check for the object's existence: ka-boom, the test fails because (obviously) it doesn't find the object (which is probably still being processed and created in the meantime).
Is there any way, or any pattern, I can use to make my test wait for that async message to be dequeued? I can attach code to my implementation if needed; it's a bachelor's degree thesis project.
One obvious solution I'm temporarily using is adding a hundred-millisecond sleep between the method call and the assert section (hoping everything is done and persisted), but honestly I dislike this solution because it seems so non-deterministic to me. Also, creating a latch shared between production code and tests doesn't sound good to me either.
Here's the code I use as an entry point to all the mess I explained above:
public TransferResponseDTO transfer(Long userId, TransferRequestDTO transferRequestDTO) {
    //Preconditions.checkArgument(transferRequestDTO.amount.compareTo(BigDecimal.ZERO) < 0);
    Preconditions.checkArgument(userHelper.existsById(userId));
    Preconditions.checkArgument(walletHelper.existsByUserIdAndSymbol(userId, transferRequestDTO.symbol));
    TransferMessage message = new TransferMessage();
    message.userId = userId;
    message.symbol = transferRequestDTO.symbol;
    message.destination = transferRequestDTO.destination;
    message.amount = transferRequestDTO.amount;
    messageService.send(message);
    TransferResponseDTO response = new TransferResponseDTO();
    response.status = PENDING;
    return response;
}
And here's the code that receives the message (although you wouldn't need it):
public void handle(TransferMessage transferMessage) {
    Wallet source = walletHelper.findByUserIdAndSymbol(transferMessage.userId, transferMessage.symbol);
    Wallet destination = walletHelper.findById(transferMessage.destination);
    try {
        walletHelper.withdraw(source, transferMessage.amount);
    } catch (InsufficientBalanceException ex) {
        String u = userHelper.findEmailByUserId(transferMessage.userId);
        EmailMessage email = new EmailMessage();
        email.subject = "Insufficient Balance in your account";
        email.to = u;
        email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been DECLINED due to insufficient balance.";
        messageService.send(email);
    }
    walletHelper.deposit(destination, transferMessage.amount);
    String u = userHelper.findEmailByUserId(transferMessage.userId);
    EmailMessage email = new EmailMessage();
    email.subject = "Transfer executed";
    email.to = u;
    email.text = "Your transfer of " + transferMessage.amount + " " + transferMessage.symbol + " has been ACCEPTED.";
    messageService.send(email);
}
I'm sorry if the code seems "a lil sketchy or wrong"; it's a primordial implementation.
I'm willing to write a utility to share with you all if that's the case, but, as you've probably noticed, I'm low on ideas right now.
I'm an ActiveMQ developer working mainly on ActiveMQ Artemis (the next-gen broker from ActiveMQ). We run into this kind of problem all the time in our test-suite given the asynchronous nature of the broker, and we developed a little utility class that automates & simplifies basic polling operations.
For example, starting a broker is asynchronous so it's common for our tests to include an assertion to ensure the broker is started before proceeding. Using old-school Java 6 syntax it would look something like this:
Wait.assertTrue(new Condition() {
    @Override
    public boolean isSatisfied() throws Exception {
        return server.isActive();
    }
});
Using a Java 8 lambda would look like this:
Wait.assertTrue(() -> server.isActive());
Or using a Java 8 method reference:
Wait.assertTrue(server::isActive);
The utility is quite flexible as the Condition you use can test anything you want as long as it ultimately returns a boolean. Furthermore, it is deterministic unlike using Thread.sleep() (as you noted) and it keeps testing code separate from the "product" code.
In your case you can check whether the "object" created by your JMS process can be found. If it's not found, the utility keeps checking until either the object is found or the timeout elapses.
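The real utility ships with the broker's test-support code; a minimal sketch of the same polling idea (the default timeout and poll interval here are illustrative) could look like:

```java
import java.util.function.BooleanSupplier;

// Polls a condition until it holds or a deadline passes; throws AssertionError
// on timeout. Deterministic in the sense that it proceeds as soon as the
// condition holds, unlike a fixed Thread.sleep().
public final class Wait {
    public static void assertTrue(BooleanSupplier condition) {
        assertTrue(condition, 5_000, 50);
    }

    public static void assertTrue(BooleanSupplier condition, long timeoutMillis, long intervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return; // condition met: the async work is observably done
            }
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw new AssertionError("Condition not met within " + timeoutMillis + " ms");
    }
}
```

A hypothetical test for the transfer flow above would then replace the sleep with something like `Wait.assertTrue(() -> transferRepository.findByUserId(userId).isPresent());` (repository name assumed for illustration).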

Convert a for loop to a Multi-threaded chunk

I have the following for loop in a function which I intend to parallelize, but I'm not sure whether the overhead of multiple threads will outweigh the benefit of concurrency.
All I need is to send different log files to corresponding receivers. For the time being, let's say the number of receivers won't be more than 10. Instead of sending the log files back to back, is it more efficient to send them all in parallel?
for (int i = 0; i < receiversList.size(); i++) {
    String receiverURL = serverURL + receiversList.get(i);
    HttpPost method = new HttpPost(receiverURL);
    String logPath = logFilesPath + logFilesList.get(i);
    messagesList = readMsg(logPath);
    for (String message : messagesList) {
        StringEntity entity = new StringEntity(message);
        log.info("Sending message:");
        log.info(message + "\n");
        method.setEntity(entity);
        if (receiverURL.startsWith("https")) {
            processAuthentication(method, username, password);
        }
        httpClient.execute(method).getEntity().getContent().close();
    }
    Thread.sleep(500); // waiting time for the message to be sent
}
Also, please tell me how I can make it parallel if it's going to help. Should I do it manually or use an ExecutorService?
All I need is to send different log files to corresponding receivers. For the time being lets say number of receivers won't be more than 10. Instead of sending log files back to back, is it more efficient if I send them all parallel?
There are a lot of questions to be asked before we can determine if doing this in parallel will buy you anything. You mentioned "receivers" but are you really talking about different receiving servers on different web addresses or are all threads sending their log files to the same server? If it is the latter then chances are you will get very little improvement in speed with concurrency. A single thread should be able to fill the network pipeline just fine.
Also, you probably would get no speed up if the messages are small. Only large messages would take any time and give you any true savings if they were sent in parallel.
I'm most familiar with the ExecutorService classes. You could do something like:
ExecutorService threadPool = Executors.newFixedThreadPool(10);
...
threadPool.submit(new Runnable() {
    // you could create your own Runnable class if each one needs its own httpClient
    public void run() {
        StringEntity entity = new StringEntity(message);
        ...
        // we assume that the client is some sort of pooling client
        httpClient.execute(method).getEntity().getContent().close();
    }
});
This is especially useful if you want to queue up these messages and send them in a background thread so you don't slow down your program. You could submit the messages to the threadPool and keep moving, or you could put them in a BlockingQueue<String> and have a thread take from the BlockingQueue and call httpClient.execute(...).
More implementation details from this good ExecutorService tutorial.
Lastly, how about putting all of your messages into one entity and dividing the messages on the server? That would be the most efficient, although you might not control the server handler code.
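A sketch of the BlockingQueue variant mentioned above; the injected Consumer stands in for the real httpClient.execute(...) call, which is an assumption about how sending would be wired in:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Background sender: producers enqueue messages and move on; one worker
// thread drains the queue and performs the (slow) send. A poison-pill
// message is used to stop the worker cleanly on shutdown.
public class BackgroundSender {
    private static final String POISON = "\u0000STOP";
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Thread worker;

    public BackgroundSender(Consumer<String> send) {
        worker = new Thread(() -> {
            try {
                for (String msg = queue.take(); !msg.equals(POISON); msg = queue.take()) {
                    send.accept(msg); // the actual HTTP POST would happen here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
    }

    public void enqueue(String message) {
        queue.add(message); // returns immediately; caller is not slowed down
    }

    public void shutdown() throws InterruptedException {
        queue.add(POISON);
        worker.join();
    }
}
```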
Hello. ExecutorService is certainly an option. You have 4 ways to do it in Java:
Using Threads (exposes too many details; easy to make mistakes)
ExecutorService, as you have already mentioned (it comes from Java 6). Here is a tutorial demonstrating ExecutorService: http://tutorials.jenkov.com/java-util-concurrent/executorservice.html
The ForkJoin framework (comes from Java 7)
Parallel streams (come from Java 8); below is a solution using parallel streams
Going for a higher-level API will spare you some errors you might otherwise make.
// an index-based parallel stream keeps each receiver paired with its log file
IntStream.range(0, receiversList.size()).parallel().forEach(i -> {
    String receiverURL = serverURL + receiversList.get(i);
    HttpPost method = new HttpPost(receiverURL);
    String logPath = logFilesPath + logFilesList.get(i);
    for (String message : readMsg(logPath)) {
        StringEntity entity = new StringEntity(message);
        log.info("Sending message:");
        log.info(message + "\n");
        method.setEntity(entity);
        if (receiverURL.startsWith("https")) {
            processAuthentication(method, username, password);
        }
        httpClient.execute(method).getEntity().getContent().close();
    }
});

Java - can two threads on client side use the same input stream from server?

I'm working on a Java client/server application with a pretty specific set of rules as to how I have to develop it. The server creates a ClientHandler instance that has input and output streams to the client socket, and any input and output between them is triggered by events in the client GUI.
I have now added in functionality server-side that will send out periodic updates to all connected clients (done by storing each created PrintWriter object from the ClientHandlers in an ArrayList<PrintWriter>). I need an equivalent mechanism client-side to process these messages, and have been told this needs to happen in a second client-side thread whose run() method uses a do...while(true) loop until the client disconnects.
This all makes sense to me so far. What I'm struggling with is the fact that the two threads will have to share the one input stream and essentially 'ignore' any messages that aren't of the type they handle. In my head, it should look something like this:
Assuming that the server sends a boolean of value true with a message-to-all, and one of value false with a message to an individual client...
Existing Client Thread
// method called from actionPerformed(ActionEvent e)
// handles server response to bid request
public void receiveResponse() {
    // thread should only process to-specific-client messages
    if (networkInput.nextBoolean() == false) {
        // process server response...
    }
}
Second Client-side Thread
// should handle all messages sent to all clients
public void run() {
    do {
        if (networkInput.nextBoolean() == true) {
            // process broadcasted message...
        }
    } while (true);
}
As they need to use the same input stream, I would obviously be adding some synchronized, wait/notify calls, but generally, is what I'm looking to do here possible? Or will the two threads trying to read in from the same input stream interfere with each other too much?
Please let me know what you think!
Thanks,
Mark
You can do it, though it will be complicated to test and get right. How much is "too much" depends on you. A simpler solution is to have a reader thread pass messages to the two worker threads.
ExecutorService thread1 = Executors.newSingleThreadExecutor();
ExecutorService thread2 = Executors.newSingleThreadExecutor();

while (running) {
    Message message = input.readMessage();
    if (message.isTypeOne())
        thread1.submit(() -> process(message));
    else if (message.isTypeTwo())
        thread2.submit(() -> process(message));
    else {
        // do something else.
    }
}
thread1.shutdown();
thread2.shutdown();

Multiple Private WebSocket Messages with Spring

I am using Spring @RequestMapping for synchronous REST services consuming and producing JSON. I now want to add asynchronous responses where the client sends a list of ids and the server sends back the details as it gets them, to only that one client.
I've been searching for a while and have not found what I'm looking for. I have seen two different approaches for Spring. The most common is a message-broker approach, where it appears that everybody gets every message by subscribing to a queue or topic. This is VERY unacceptable since this is private data. I also have a finite number of data points to return. The other approach is a Callable, AsyncResult, or DeferredResult. This appears to keep the data private, but I want to send multiple responses.
I have seen something similar to what I want, but it uses Jersey SSE on the server. I would like to stick with Spring.
This is what I currently have using pseudo code.
@RequestMapping(value = BASE_PATH + "/balances", method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public GetAccountBalancesResponse getAccountBalances(@RequestBody GetAccountBalancesRequest request) {
    GetAccountBalancesResponse ret = new GetAccountBalancesResponse();
    ret.setBalances(synchronousService.getBalances(request.getIds()));
    return ret;
}
This is what I am looking to do. It is rather rough since I have no clue about the details. Once I figure out the sending I will work on the asynchronous part, but I'd take any suggestions.
@RequestMapping(value = BASE_PATH + "/balances", method = RequestMethod.POST, consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public ???<BalanceDetails> getAccountBalances(@RequestBody GetAccountBalancesRequest request) {
    final ???<BalanceDetails> ret = new ???<>();
    new Thread(new Runnable() {
        public void run() {
            List<Future<BalanceDetails>> futures = asynchronousService.getBalances(request.getIds());
            while (!stillWaiting(futures)) {
                // probably use something like a Condition to block until there are some details
                ret.send(getFinishedDetails(futures));
            }
            ret.close();
        }
    }).start();
    return ret;
}
Thanks, Wes.
It doesn't work like this: you are using plain Spring controllers, which are intended to be processed in a single thread that possibly blocks until the full request is computed. You don't create threads inside controllers, or at least not in that way.
If the computation lasts really long and you want to give your users a visual feedback, these are the steps:
optimize the procedure :) use indices, caching, whatever
if that still isn't enough, the computation still lasts forever, and users demand feedback, you have two options:
poll with JavaScript and show visual feedback (easier). Basically, you submit the task to a thread pool and return immediately, and there's another controller method that reads the current state of the computation and returns it to the user. This method is called by JavaScript every 10 seconds or so.
use a backchannel (server push, WebSocket). This is not so easy, because you have to implement both the client and the server part. There are libraries and protocols that will make this only a handful of lines of code, but if you have never tried it before, you'll spend some time understanding the setup; plus, debugging WebSockets is not as easy as regular HTTP because of the tooling.
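The polling option boils down to bookkeeping: submit the work to a pool, hand back a task id immediately, and let a second controller method read the state. A plain-Java sketch of that state keeping (no Spring wiring; all names here are hypothetical):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// The submit endpoint would call submit() and return the id at once;
// the status endpoint would call status(id). JavaScript polls the status
// endpoint until it sees DONE.
public class TaskTracker {
    public enum Status { RUNNING, DONE }

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Status> statuses = new ConcurrentHashMap<>();

    public String submit(Runnable longComputation) {
        String id = UUID.randomUUID().toString();
        statuses.put(id, Status.RUNNING); // visible to pollers immediately
        pool.submit(() -> {
            longComputation.run();
            statuses.put(id, Status.DONE);
        });
        return id; // returned to the client right away
    }

    public Status status(String id) {
        return statuses.get(id);
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```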
Took a lot of digging, but it looks like Spring Web 4.2 does support server-sent events. I was using Spring Boot 1.2.7, which uses Spring Web 4.1.7. Switching to Spring Boot 1.3.0.RC1 adds the SseEmitter.
Here is my pseudo code.
@RequestMapping(value = BASE_PATH + "/getAccountBalances", method = RequestMethod.GET)
public SseEmitter getAccountBalances(@QueryParam("accountId") Integer[] accountIds) {
    final SseEmitter emitter = new SseEmitter();
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                for (int xx = 0; xx < accountIds.length; xx++) {
                    Thread.sleep(2000L + rand.nextInt(2000));
                    BalanceDetails balance = new BalanceDetails();
                    ...
                    emitter.send(SseEmitter.event().name("accountBalance").id(String.valueOf(accountIds[xx]))
                            .data(balance, MediaType.APPLICATION_JSON));
                }
                emitter.send(SseEmitter.event().name("complete").data("complete"));
                emitter.complete();
            } catch (Exception ee) {
                ee.printStackTrace();
                emitter.completeWithError(ee);
            }
        }
    }).start();
    return emitter;
}
Still working out closing the channel gracefully and parsing the JSON object using Jersey EventSource, but it's a lot better than a message bus.
Also, spawning a new thread and using a sleep are just for the POC; I wouldn't need either, since we already have an asynchronous process to access a slow back-end system.
Wes.

Async NIO: Same client sending multiple messages to Server

Regarding Java NIO2.
Suppose we have the following to listen to client requests...
asyncServerSocketChannel.accept(null, new CompletionHandler<AsynchronousSocketChannel, Object>() {
    @Override
    public void completed(final AsynchronousSocketChannel asyncSocketChannel, Object attachment) {
        // Run the completion handler on another thread so that
        // we don't block another channel from being accepted.
        executer.submit(new Runnable() {
            public void run() {
                handle(asyncSocketChannel);
            }
        });
        // accept another connection
        asyncServerSocketChannel.accept(null, this);
    }

    @Override
    public void failed(Throwable exc, Object attachment) {
        // TODO Auto-generated method stub
    }
});
This code will accept a client connection, process it, and then accept another.
To communicate with the server, the client opens an AsynchronousSocketChannel and fires the message.
The completion handler's completed() method is then invoked.
However, this means that if the client wants to send another message on the same AsynchronousSocketChannel instance, it can't.
It has to create another instance, which I believe means another TCP connection, which is a performance hit.
Any ideas how to get around this?
Or, to put the question another way, any ideas how to make the same asyncSocketChannel receive multiple CompletionHandler completed() events?
edit:
My handling code is like this...
public void handle(AsynchronousSocketChannel asyncSocketChannel) {
    ByteBuffer readBuffer = ByteBuffer.allocate(100);
    try {
        // read a message from the client, timeout after 10 seconds
        Future<Integer> futureReadResult = asyncSocketChannel.read(readBuffer);
        futureReadResult.get(10, TimeUnit.SECONDS);
        String receivedMessage = new String(readBuffer.array());
        // some logic based on the message here...
        // after the logic, a return message to the client
        ByteBuffer returnMessage = ByteBuffer.wrap((RESPONSE_FINISHED_REQUEST + " " + client
                + ", " + RESPONSE_COUNTER_EQUALS + value).getBytes());
        Future<Integer> futureWriteResult = asyncSocketChannel.write(returnMessage);
        futureWriteResult.get(10, TimeUnit.SECONDS);
    } ...
So that's it: my server reads a message from the async channel and returns an answer.
The client blocks until it gets the answer, but that's OK; I don't care if the client blocks.
When this is finished, the client tries to send another message on the same async channel and it doesn't work.
There are two phases of a connection and two different kinds of completion handlers.
The first phase is to handle a connection request; this is what you have programmed (BTW, as Jonas said, there's no need to use another executor). The second phase (which can be repeated multiple times) is to issue an I/O request and handle its completion. For this, you have to supply a memory buffer holding the data to read or write, and you did not show any code for this. When you implement the second phase, you'll see that there is no such problem as you wrote: "if the client wants to send another message on the same AsyncSocket instance it can't".
One problem with NIO2 is that, on the one hand, the programmer has to avoid multiple simultaneous async operations of the same kind (accept, read, or write) on the same channel (or else an error occurs), and on the other hand, the programmer has to avoid blocking waits in handlers. This problem is solved in the df4j-nio2 subproject of the df4j actor framework, where both AsyncServerSocketChannel and AsyncSocketChannel are represented as actors. (df4j is developed by me.)
First, you should not use an executor like you do in the completed() method; the completed() method is already run on a worker thread.
In your completed() method for accept(...), you should call asyncSocketChannel.read(...) to read the data. The client can then just send another message on the same socket. This message will be handled with a new call to completed(), perhaps by another worker thread on your server.
