How to design a publish-subscribe pattern properly in gRPC? - java

I'm trying to implement a pub-sub pattern using gRPC, but I'm a bit confused about how to do it properly.
my proto: rpc call (google.protobuf.Empty) returns (stream Data);
client:
try {
    asynStub.call(Empty.getDefaultInstance(), new StreamObserver<Data>() {
        @Override
        public void onNext(Data value) {
            // process the data
        }

        @Override
        public void onError(Throwable t) {
        }

        @Override
        public void onCompleted() {
        }
    });
} catch (StatusRuntimeException e) {
    LOG.warn("RPC failed: {}", e.getStatus());
}
Thread.currentThread().join();
server service:
public class Sender extends DataServiceGrpc.DataServiceImplBase implements Runnable {
    private final BlockingQueue<Data> queue;
    private static final HashSet<StreamObserver<Data>> observers = new LinkedHashSet<>();

    public Sender(BlockingQueue<Data> queue) {
        this.queue = queue;
    }

    @Override
    public void data(Empty request, StreamObserver<Data> responseObserver) {
        observers.add(responseObserver);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // wait for the next element
                Data data = queue.take();
                // send it to every subscribed observer
                observers.forEach(o -> o.onNext(data));
            } catch (InterruptedException e) {
                LOG.error("error: ", e);
                Thread.currentThread().interrupt();
            }
        }
    }
}
How do I remove clients from the global observers set properly? How do I receive some sort of signal when the connection drops?
How do I manage client-server reconnections? How do I force the client to reconnect when the connection drops?
Thanks in advance!

In the implementation of your service:
@Override
public void data(Empty request, StreamObserver<Data> responseObserver) {
    observers.add(responseObserver);
}
You need to get the Context of the current request and listen for cancellation. For single-request, multi-response calls (a.k.a. server streaming), the gRPC generated code is simplified to pass in the request directly. This means that you don't have direct access to the underlying ServerCall.Listener, which is how you would normally listen for clients disconnecting and cancelling.
Instead, every gRPC call has a Context associated with it, which carries cancellation and other request-scoped signals. For your case, you just need to listen for cancellation by adding your own listener, which then safely removes the response observer from your linked hash set.
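A minimal sketch of that cancellation listener, assuming grpc-java's `Context` API and the question's generated classes (`DataServiceGrpc`, `Data`); the thread-safe set replaces the question's plain `LinkedHashSet`, since gRPC threads and the sender thread both touch it:

```java
import com.google.protobuf.Empty;
import io.grpc.Context;
import io.grpc.stub.StreamObserver;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class Sender extends DataServiceGrpc.DataServiceImplBase {
    // Thread-safe set: gRPC worker threads add/remove while the sender thread iterates
    private static final Set<StreamObserver<Data>> observers = ConcurrentHashMap.newKeySet();

    @Override
    public void data(Empty request, StreamObserver<Data> responseObserver) {
        observers.add(responseObserver);
        // The listener fires when the client disconnects or cancels the call
        Context.current().addListener(
                context -> observers.remove(responseObserver),
                Runnable::run); // run the listener directly on the notifying thread
    }
}
```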
As for reconnects: gRPC clients will automatically reconnect if the connection is broken, but usually will not retry the RPC unless it is safe to do so. For server-streaming RPCs it usually isn't, so you'll need to retry the RPC on your client directly.
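Protocol details aside, the client-side retry can be a plain loop with capped backoff. A hedged sketch in plain Java, where `startStream` is a hypothetical callback standing in for re-issuing the server-streaming RPC and blocking until it ends:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class StreamRetry {
    /**
     * Repeatedly invokes startStream (hypothetical: returns false when the
     * stream ended with an error, true on clean completion) with capped
     * exponential backoff between attempts.
     */
    public static void runWithRetry(BooleanSupplier startStream) throws InterruptedException {
        long backoffMillis = 100;
        while (!startStream.getAsBoolean()) {
            TimeUnit.MILLISECONDS.sleep(backoffMillis);
            backoffMillis = Math.min(backoffMillis * 2, 10_000); // cap at 10 s
        }
    }
}
```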

Related

Waiting for an HTTP response from the Vert.x WebClient using CountDownLatch

I am writing some tests for a REST API. I have a dispatcher that dispatches my REST requests to a Vert.x WebClient. In some cases I want to wait for the response from the REST API to return before continuing with my assertions. The code that dispatches the requests is wrapped inside other classes, so I am not making those requests directly from the tests. I have two implementations of my request dispatcher, one for production and one for tests; the test dispatcher looks like this:
public class TestRequestDispatcher extends AbstractRequestDispatcher {
    @Override
    protected void dispatchRequest(ServerRequest request, ServerRequestEventFactory requestEventFactory) {
        request.getSender()
                .send(request, new ServerRequestCallBack() {
                    @Override
                    public <T> void onSuccess(T response) {
                        requestEventFactory.makeSuccess(request, response).fire();
                    }

                    @Override
                    public void onFailure(FailedResponseBean failedResponse) {
                        requestEventFactory.makeFailed(request, failedResponse).fire();
                    }
                });
    }
}
This should then call some code that builds a WebClient and calls its send method to send the request to the server.
In order to wait for the response, I decided to use a CountDownLatch and modified my code to the following:
public class TestRequestDispatcher extends AbstractRequestDispatcher {
    @Override
    protected void dispatchRequest(ServerRequest request, ServerRequestEventFactory requestEventFactory) {
        CountDownLatch requestWait = new CountDownLatch(1);
        request.getSender()
                .send(request, new ServerRequestCallBack() {
                    @Override
                    public <T> void onSuccess(T response) {
                        requestWait.countDown();
                        requestEventFactory.makeSuccess(request, response).fire();
                    }

                    @Override
                    public void onFailure(FailedResponseBean failedResponse) {
                        requestWait.countDown();
                        requestEventFactory.makeFailed(request, failedResponse).fire();
                    }
                });
        try {
            requestWait.await(20, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
I am using a large timeout here to make sure the response should return before the timeout is up. What happens is that I can set a breakpoint and see the WebClient.send method being called; it then pauses at requestWait.await(...), but the callbacks are never invoked until the CountDownLatch times out. I was expecting the WebClient to send the request and, whenever the response returned, invoke the callbacks, which would count down and break the wait before the timeout is up.
Testing with a normal thread, things seem to work. I created a runnable class with a sleep period shorter than the CountDownLatch timeout, like the following:
public class SenderWorker implements Runnable {
    private final CountDownLatch countDownLatch;

    public SenderWorker(CountDownLatch countDownLatch) {
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void run() {
        try {
            Thread.sleep(5000L);
            countDownLatch.countDown();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
then in the dispatcher :
public class TestRequestDispatcher extends AbstractRequestDispatcher {
    @Override
    protected void dispatchRequest(ServerRequest request, ServerRequestEventFactory requestEventFactory) {
        CountDownLatch requestWait = new CountDownLatch(1);
        new Thread(new SenderWorker(requestWait))
                .start();
        try {
            requestWait.await(20, TimeUnit.SECONDS);
            System.out.println("i am here");
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
And this works: it calls the run method, which sleeps, and then requestWait.await(..) returns after 5 seconds.
I tried to execute the code that calls the WebClient inside executeBlocking, also tried runOnContext, and even tried to run it inside a thread just like I did with the SenderWorker, but still the same result: the WebClient is blocked until the timeout is up.
Any idea what I am doing wrong here and how I can make this work?
You may want to consider vertx-unit or vertx-junit5 for testing asynchronous code with Vert.x. (The likely reason your latch never counts down: await blocks the very thread, the Vert.x event loop, that the WebClient needs in order to deliver its response, so the callbacks can never run before the timeout.)
Other than that, asynchronous operations should be composed rather than spinning up threads and waiting on count-down latches. Vert.x offers several options to do that:
callbacks chaining
future composition
RxJava (1 and 2)
Kotlin coroutines
Quasar.
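The "future composition" idea can be shown with plain Java (not the Vert.x API): instead of blocking on a latch, express the "then" steps as a chain. A sketch with `CompletableFuture`, where `send` is a hypothetical stand-in for the async dispatch:

```java
import java.util.concurrent.CompletableFuture;

public class ComposedDispatch {
    // Stand-in for the async send: completes on a background thread
    static CompletableFuture<String> send(String request) {
        return CompletableFuture.supplyAsync(() -> "response-to-" + request);
    }

    public static void main(String[] args) {
        send("ping")
                .thenApply(String::toUpperCase)   // runs when the response arrives
                .thenAccept(System.out::println)  // no thread is blocked while waiting
                .join();                          // block only here, at the program edge
    }
}
```

The key difference from the latch version: the continuation is attached to the result, so nothing sits in `await` holding a thread (or an event loop) hostage.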

RxJava: PublishSubject acts synchronously

I need functionality that allows pushing messages asynchronously to my PublishSubject and processing them at a certain pace (actually one by one) via a ConnectableObservable. Unfortunately, it seems that the call to onNext of the PublishSubject is not released until the underlying Subscriber processes the message.
It takes a good few seconds to process each message, and in debug mode I see that it executes before the invocation of the method that pushes the message to the PublishSubject is removed from the stack: "After push..." always appears in the console after the internal logs inside the Subscriber.
So I have this RestEndpoint:
@PUT
@Path("{id}")
@TokenAuthenticated
public Response postResource(@PathParam(value = "id") final String extId) {
    executorService.execute(new Runnable() {
        @Override
        public void run() {
            try {
                Message metadata = processor.apply(extId);
                log.info("Before push...");
                dataImporter.pushData(metadata);
                log.info("After push...");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
    return Response.ok("Request received successfully").build();
}
Here's the constructor of the DataImporter:
public DataImporter(final String configFile) {
    dataToImportSubject = PublishSubject.create();
    dataToImportObservable = dataToImportSubject.publish();
    dataToImportObservable.connect();
    dataToImportObservable
            .onBackpressureBuffer(1, new Action0() {
                @Override
                public void call() {
                    logger.debug("Buffer full...");
                }
            })
            .subscribeOn(Schedulers.io())
            .subscribe(new Subscriber<Message>() {
                @Override
                public void onCompleted() {
                    // TODO Auto-generated method stub
                }

                @Override
                public void onError(Throwable e) {
                    logger.error("Error importing " + e.getMessage());
                }

                @Override
                public void onNext(Message value) {
                    request(1);
                    importResult(configFile, value);
                }

                @Override
                public void onStart() {
                    request(1);
                }
            });
}
Then pushData of DataImporter just pushes to the PublishSubject's onNext method:
public void pushData(Message metadata) {
    dataToImportSubject.onNext(metadata);
}
And here are the declarations of the PublishSubject and ConnectableObservable:
public class DataImporter implements ImporterProxy {
    private final PublishSubject<Message> dataToImportSubject;
    private final ConnectableObservable<Message> dataToImportObservable;
    // ...
}
PublishSubjects emit to their consumers on the thread of the original onXXX call. From the JavaDocs, under "Scheduler":
PublishSubject does not operate by default on a particular Scheduler and the Observers get notified on the thread the respective onXXX methods were invoked.
You have to move the processing to some other thread with observeOn, because observeOn moves the onXXX calls onto another thread.
subscribeOn has no practical effect on Subjects in general, because it only affects the subscription thread and won't modulate the subsequent onXXX calls to those subjects.
RxJava, by default, is synchronous. You need to introduce operators into your observer chain to perform actions on other threads. When you read the documentation on each operator in Observable, you will see statements like "... does not operate on a particular Scheduler" -- this indicates that data flows through that operator synchronously.
To get an observer chain to perform actions on other threads, you can use an operator like subscribeOn() with a scheduler to have operations performed on that scheduler. In your example, you likely will want to use Schedulers.io() to provide a background thread.
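What such a scheduler hop does can be mimicked with stdlib pieces (this is not the RxJava API, just an illustration of the mechanism): hand each message to a queue drained by another thread, so the producer's `onNext` call returns immediately instead of running the slow subscriber inline.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

public class AsyncSubject<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

    public AsyncSubject(Consumer<T> subscriber) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    subscriber.accept(queue.take()); // slow processing happens here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    /** Returns immediately; processing happens on the worker thread. */
    public void onNext(T value) {
        queue.add(value);
    }
}
```

This is essentially why "After push..." would print right away once an `observeOn` boundary is in the chain: the push only enqueues, it no longer waits for the subscriber.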

How to send more than one response to the client from Netty

I want to send more than one response to the client, based on a back-end process. But in the Netty examples I saw, the echo server sends back the response right away.
My requirement is: I need to validate the client and send an OK response, then send the DB updates when they become available.
How can I send more responses to the client? Please point me to an example or any guide.
At every point in your pipeline you can get the Channel object from the MessageEvent (or ChannelEvent) that is passed from handler to handler. You can use this to send multiple responses at different points in the pipeline.
If we take the echo server example as a base, we can add a handler which sends the echo again (that can also be done in the same handler, but the example shows that multiple handlers can respond).
public class EchoServerHandler extends ChannelHandlerAdapter {
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        Channel ch = e.getChannel();
        // first message
        ch.write(e.getMessage());
    }
    // ...
}

public class EchoServerHandler2 extends ChannelHandlerAdapter {
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        Channel ch = e.getChannel();
        // send second message
        ch.write(e.getMessage());
    }
    // ...
}
You can do that as long as you have the reference to the relevant Channel (or ChannelHandlerContext). For example, you can do this:
public class MyHandler extends ChannelHandlerAdapter {
    // ...
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        MyRequest req = (MyRequest) msg;
        ctx.write(new MyFirstResponse(..));
        executor.execute(new Runnable() {
            public void run() {
                // Perform database operation
                // ..
                ctx.write(new MySecondResponse(...));
            }
        });
    }
    // ...
}
You can do this as long as Netty doesn't close the Channel. It's better to call close() yourself when you're done.
Here's a sample: https://stackoverflow.com/a/48128514/2557517

Java NIO server client asynchronous

I made a server myself using Java NIO and a selector. I can receive the data and answer the client directly if needed.
But now I want a thread that will process data and, at any time, send data to each client.
So how can I do that? Also, how do I keep all channels in memory so I can write data to each client?
If you need it, I can post the part of my code with Java NIO.
Create a new thread with a runnable, and make sure it knows your server, because your server should know all clients. If a client sends a message, pass it to the data-processor thread and let it do its job. When it is done processing the task, let the server know so it can update all clients.
Tip: you should give the processing thread a waiting queue, with something like a LinkedBlockingQueue, so you can always put tasks on the queue without waiting for the task to finish. The thread then waits for things on the queue that need to be processed. This way the processing thread only uses CPU resources when there actually are tasks on the queue.
Here is a code example
public abstract class Queue implements Runnable {
    private final LinkedBlockingQueue<Message> queue;

    public Queue() {
        this.queue = new LinkedBlockingQueue<Message>();
    }

    /**
     * Adds a message to the queue.
     * @param message
     */
    public void add(final Message message) {
        try {
            queue.put(message);
        } catch (final InterruptedException e) {
            e.printStackTrace();
        }
    }

    /**
     * Waits for new messages
     */
    @Override
    public void run() {
        while (true) {
            try {
                final Message message = queue.take();
                processMessage(message);
            } catch (final InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    /**
     * Processes the new message
     */
    protected abstract void processMessage(Message message);
}
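A self-contained usage sketch of the same pattern (the `Message` type isn't shown in the answer, so a minimal stand-in is used here; `processed` is only for observing the result):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class EchoQueue implements Runnable {
    record Message(String text) {}                       // stand-in payload type

    private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();
    final List<String> processed = new CopyOnWriteArrayList<>(); // safely visible to other threads

    /** Never blocks the caller; just enqueues the task. */
    public void add(Message m) {
        queue.add(m);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                processed.add(queue.take().text());      // the "processMessage" step
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

As the answer notes, `take()` parks the worker thread while the queue is empty, so no CPU is burned polling.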

How can a polling mechanism be realized with RMI?

Following the design/architecture I created for a multiuser/network turn-based game with RMI server callbacks, I have tried to create a distributed animation in which my model (Ball) is a remote object that updates the clients via a callback mechanism from the server.
The current state of the code is:
The model remote object, which iterates over the client list and calls their update method:
public class BallImpl extends UnicastRemoteObject implements Ball, Runnable {
    private List<ICallback> clients = new ArrayList<ICallback>();
    protected static ServerServices chatServer;
    static ServerServices si;

    BallImpl() throws RemoteException {
        super();
    }

    // ...

    public synchronized void move() throws RemoteException {
        loc.translate((int) changeInX, (int) changeInY);
    }

    public void start() throws RemoteException {
        if (gameThread.isAlive() == false)
            if (run == false) {
                gameThread.start();
            }
    }

    /** Start the ball bouncing. */
    // Run the game logic in its own thread.
    public void run() {
        while (true) {
            run = true;
            // Execute one game step
            try {
                updateClients();
            } catch (RemoteException e) {
                e.printStackTrace();
            }
            try {
                Thread.sleep(50);
            } catch (InterruptedException ex) {
            }
        }
    }

    public void updateClients() throws RemoteException {
        si = new ServerServicesImpl();
        List<ICallback> j = si.getClientNames();
        System.out.println("in messimpl " + j.size());
        if (j != null) {
            System.out.println("in ballimpl" + j.size());
            for (ICallback aClient : j) {
                aClient.updateClients(this);
            }
        } else
            System.err.println("Clientlist is empty");
    }
}
The client, which implements the callback interface and has the update method implementation:
public final class thenewBallWhatIwant implements Runnable, ICallback {
    // ...

    @Override
    public void updateClients(final Ball ball) throws RemoteException {
        try {
            ball.move();
            try {
                Thread.sleep(50);
            } catch (Exception e) {
                System.exit(0);
            }
        } catch (Exception e) {
            System.out.println("Exception: " + e);
        }
    }

    // ...
}
My general perception is that I'm implementing a pushing mechanism with RMI, and in that scenario I need to implement polling.
If that is the case, how can I implement the polling mechanism with RMI?
Thanks for any feedback.
Polling is independent of the protocol you use to implement the client and server.
A client polls by looping endlessly. Inside the loop there's a request to the server for information. The server sends either the desired information or a "not ready" message back. The client does its thing and waits until the next request needs to be sent.
If you happen to choose RMI, it means an RMI client and server. But the polling mechanism is the same regardless.
Break the problem into pieces - it'll be easier to think about and solve that way.
Forget about polling to start. Can you write an RMI server, start it up, and create a separate client to make a single request? If you can do that, then you put it inside a loop with a sleep to implement the delay and you're done.
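The loop the answer describes, RMI specifics aside, as a hedged plain-Java sketch; `Server` here is a hypothetical interface standing in for the RMI remote stub, and "not ready" is modeled as an empty `Optional`:

```java
import java.util.Optional;
import java.util.concurrent.TimeUnit;

public class PollingClient {
    /** Hypothetical stand-in for the RMI remote interface. */
    interface Server {
        Optional<String> fetchUpdate(); // empty means "not ready"
    }

    /** Polls the server until it has something, sleeping between requests. */
    public static String pollUntilReady(Server server, long intervalMillis)
            throws InterruptedException {
        while (true) {
            Optional<String> update = server.fetchUpdate(); // one request per iteration
            if (update.isPresent()) {
                return update.get();
            }
            TimeUnit.MILLISECONDS.sleep(intervalMillis);    // the delay between polls
        }
    }
}
```

With real RMI, `fetchUpdate` would be a method on the remote interface declared to throw `RemoteException`; the loop structure stays the same.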
I don't believe you can implement a callback via Java RMI. You need to either set up polling as you have suggested, or make your "clients" RMI servers so you can send messages to them directly.
How could you do this differently? I would suggest using JMS messaging to send command objects to the clients; this would handle all the distribution for you.
