Currently we have two separate API endpoints.
public Mono<ServerResponse> get(ServerRequest request) {
    Sinks.StandaloneMonoSink<String> sink = Sinks.promise();
    sinkMap.putIfAbsent(randomID, sink);
    return sink.asMono()
            .timeout(Duration.ofSeconds(60))
            .flatMap(val -> ServerResponse.ok().body(BodyInserters.fromValue(val)));
}
public Mono<ServerResponse> push(ServerRequest request) {
    Sinks.StandaloneMonoSink<String> sink = sinkMap.remove(randomID);
    if (sink == null) {
        return ServerResponse.notFound().build();
    } else {
        return request.bodyToMono(String.class)
                .flatMap(data -> {
                    sink.success(data);
                    return ServerResponse.ok().build();
                });
    }
}
The intention is for the client to make a GET request and keep the connection open for a minute or so, waiting for data to arrive. Then, on a push request, the data will be published to the open GET connection, and that connection will close upon receipt of the first element.
The issue with the current approach is that the data may be emitted after the GET request times out and the subscription is canceled, so the data is lost. Is it possible, when there are no subscribers, to get an error when I try to emit an item, or to perform another action with the data (from the push request side)?
Thanks.
I had to read this question multiple times to understand what you are looking for!
I tried something like this and it seems to work.
private final DirectProcessor<String> processor = DirectProcessor.create();
private final FluxSink<String> sink = processor.sink();

// DirectProcessor has a method to check whether there are any active subscribers.
// If there are, push the data; otherwise we can throw an exception / store it somewhere.
@GetMapping("/push/{val}")
public boolean push(@PathVariable String val) {
    boolean status = processor.hasDownstreams();
    if (status) {
        sink.next(val);
    }
    return status;
}
#GetMapping("/get")
public Mono<String> get(){
return processor
.next()
.timeout(Duration.ofMinutes(1));
}
Question:
Will you be running only one instance of this application? What will happen when you run multiple instances of this application?
For example: User A might push the data to app-instance-1 and User B might subscribe to app-instance-2, so User B might not get the data. In that case you might need something like Redis to store this data and share it among all the instances for pub/sub behavior.
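Coming back to the Sinks API used in the question: a similar check is possible there as well. A minimal sketch, assuming Reactor 3.4's Sinks.One stored in the question's sinkMap (the HttpStatus.GONE fallback is just illustrative):
public Mono<ServerResponse> push(ServerRequest request) {
    // Sinks.One replaces the StandaloneMonoSink from the question in Reactor 3.4+.
    Sinks.One<String> sink = sinkMap.remove(randomID);
    if (sink == null || sink.currentSubscriberCount() == 0) {
        // Nobody is waiting any more (the GET timed out and was canceled):
        // store the data somewhere or report an error instead of emitting into the void.
        return ServerResponse.notFound().build();
    }
    return request.bodyToMono(String.class)
            .flatMap(data -> {
                // tryEmitValue reports failures instead of throwing.
                Sinks.EmitResult result = sink.tryEmitValue(data);
                return result.isSuccess()
                        ? ServerResponse.ok().build()
                        : ServerResponse.status(HttpStatus.GONE).build();
            });
}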
I'm coding a game. When a player ends their turn, I want to notify the opponent that it's their turn to play.
So I'm storing WebSocketSessions in "Player" objects, so that I just need a player instance to have access to its WebSocketSession.
The problem is that nothing happens when I use the send method of a WebSocketSession stored in a "Player" instance.
Here is my code to store a WebSocketSession in a player object. It properly receives messages from the front end and is able to send a message back, and that works:
#Component("ReactiveWebSocketHandler")
public class ReactiveWebSocketHandler implements WebSocketHandler {
#Autowired
private AuthenticationService authenticationService;
#Override
public Mono<Void> handle(WebSocketSession webSocketSession) {
Flux<WebSocketMessage> output = webSocketSession.receive()
.map(msg -> {
String payloadAsText = msg.getPayloadAsText();
Account account = authenticationService.getAccountByToken(payloadAsText);
Games.getInstance().getGames().get(account.getIdCurrentGame()).getPlayerById(account.getId()).setSession(webSocketSession);
return "WebSocketSession id: " + webSocketSession.getId();
})
.map(webSocketSession::textMessage);
return webSocketSession
.send(output);
}
}
And here is the code I use to notify the opponent that it's their turn to play. The opponentSession.send method seems to produce no result: there is no error message, and nothing arrives on the front end. The session has the same ID as in the handle method, so I think the session object is good; the WebSocket session was also open and ready when I ran my tests:
@RequestMapping(value = "/game/endTurn", method = RequestMethod.POST)
GameBean endTurn(@RequestHeader(value = "token", required = true) String token) {
    ObjectMapper mapper = new ObjectMapper();
    Account account = authenticationService.getAccountByToken(token);
    gameService.endTurn(account);
    Game game = gameService.getGameByAccount(account);
    //GameBean opponentGameBean = game.getOpponentGameState(account.getId());
    //WebSocketMessage webSocketMessage = opponentSession.textMessage(mapper.writeValueAsString(opponentGameBean));
    WebSocketSession opponentSession = game.getPlayerById(game.getOpponentId(account.getId())).getSession();
    WebSocketMessage webSocketMessage = opponentSession.textMessage("test message");
    opponentSession.send(Mono.just(webSocketMessage));
    return gameService.getGameStateByAccount(account);
}
You can see on the screenshot that the handle method is working correctly; I'm able to send and receive messages.
Websocket input and output
Does anyone know how I can make the opponentSession.send method work correctly so that I receive messages on the front end?
You are using the reactive stack for your WebSocket, and WebSocketSession#send returns a Mono<Void>, but you don't subscribe to this Mono (you just assembled it), so nothing will happen until something subscribes to it.
In your endpoint it doesn't look like you are using WebFlux, so you are in the synchronous world and have no choice but to block:
opponentSession.send(Mono.just(webSocketMessage)).block();
If you are using WebFlux, then you should change your method to return a Mono and do something like:
return opponentSession.send(Mono.just(webSocketMessage)).then(Mono.fromCallable(() -> gameService.getGameStateByAccount(account)));
If you are not familiar with this, you should look into Project Reactor and WebFlux.
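Putting that together, the whole endpoint in WebFlux style could look roughly like the sketch below, reusing the question's gameService and authenticationService (the @PostMapping annotation is an assumption about your controller setup):
@PostMapping("/game/endTurn")
Mono<GameBean> endTurn(@RequestHeader(value = "token", required = true) String token) {
    Account account = authenticationService.getAccountByToken(token);
    gameService.endTurn(account);
    Game game = gameService.getGameByAccount(account);
    WebSocketSession opponentSession = game.getPlayerById(game.getOpponentId(account.getId())).getSession();
    WebSocketMessage webSocketMessage = opponentSession.textMessage("test message");
    // send(...) only runs once the returned Mono is subscribed to, which WebFlux does for you.
    return opponentSession.send(Mono.just(webSocketMessage))
            .then(Mono.fromCallable(() -> gameService.getGameStateByAccount(account)));
}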
I have a DTO class like this:
public class User {

    @Field("id")
    private String id;
    private String userName;
    private String emailId;
}
I have to provide update and delete features through an API.
I have written the following code to delete the record:
public Mono<String> userData(User body) {
    repo.removeUserDetails(userObj).subscribe();
    return Mono.just("Remove Successful");
}
The removeUserDetails method is something like this:
public Mono<User> removeUserDetails(User userObj) {
    return findByUsername(userObj.getUsername())
            .flatMap(existingUser -> {
                // logic to delete the data from the database, which is working as expected
            })
            .switchIfEmpty(Mono.defer(() -> {
                return Mono.error(new Exception("User Name " + userObj.getUsername() + " doesn't exist."));
            }));
}
The problem with this code is that even if the user doesn't exist, it doesn't surface the Mono error I'm returning; in every case it returns "Remove Successful".
How can I change my service-layer method so that it returns whatever the repo method produces? I'm new to Reactor, so I'm unable to figure out how to write it.
Whenever you call subscribe, consider it an immediate red flag. Subscription is something that should be handled by the framework you're using (WebFlux in this case).
If you subscribe yourself, such as in this example:
public Mono<String> userData(User body) {
    repo.removeUserDetails(userObj).subscribe();
    return Mono.just("Remove Successful");
}
...then you've essentially created a "fire and forget" type subscription, where you have no way of knowing if that publisher completed successfully, if it caused an error, how long it took to complete, whether it completed at all, or whether it emitted an element. So in this case, you're saying "send a request to remove user details, forget you sent it, and then before waiting for any kind of result, always return 'Remove successful'." This is almost never what you want.
You could use something like:
public Mono<String> userData(User body) {
    return repo.removeUserDetails(userObj)
            .then(Mono.just("Remove Successful"));
}
...which is much better as it includes everything as part of the reactive chain. In this case, you'll either get an error signal, or you'll get "Remove Successful".
However, chances are you don't need that String to be returned at all - you just need to know if it's successful or not. The standard way of doing that (I just need to know that it's completed successfully or not, I don't need it to return a value) is to use Mono<Void> as the return type and then(), something like:
public Mono<Void> userData(User body) {
    return repo.removeUserDetails(userObj).then();
}
...which will give you a standard completion if the deletion was successful, and an error signal otherwise.
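As a usage sketch (the @DeleteMapping route and the userService name are assumptions, not from the question): once the service returns the chain instead of subscribing to it, WebFlux subscribes for you and any error from removeUserDetails reaches the client.
@DeleteMapping("/users")
public Mono<Void> deleteUser(@RequestBody User body) {
    // WebFlux subscribes to the returned Mono; the "User Name ... doesn't exist."
    // error now propagates into the HTTP response instead of being silently dropped.
    return userService.userData(body);
}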
A common pattern you run into with reactive Java code is handling nulls when collecting a list.
The following code is a simple example showing how to handle a null Location entry by wrapping the lookup in Mono.defer and then converting the resulting error into a fallback value with onErrorReturn.
The test code:
List<String> items = inventory.testList().block();
items.forEach(System.out::println);
The output:
USA
Not Found
SPAIN
private List<Integer> clusters;
private List<Mono<Location>> locations;
private List<String> countryCodes;

public Mono<List<String>> testList() {
    clusters = Arrays.asList(0, 1, 2);
    // the second entry is null instead of a Mono<Location>
    locations = Arrays.asList(Mono.just(new Location(0)), null, Mono.just(new Location(2)));
    countryCodes = Arrays.asList("USA", "FRANCE", "SPAIN");
    return Flux.fromIterable(clusters)
            .flatMap(cluster -> getLocation(cluster))
            .collectList();
}
public Mono<String> getLocation(int clusterID) {
    // Mono.defer evaluates the supplier at subscription time; a null entry becomes
    // an onError signal (NullPointerException) instead of an exception during assembly,
    // and onErrorReturn maps that error to "Not Found".
    return Mono.defer(() -> locations.get(clusterID))
            .flatMap(location -> Mono.just(location.id))
            .flatMap(id -> Mono.just(countryCodes.get(id)))
            .onErrorReturn(Exception.class, "Not Found");
}
I am running into a bit of a chicken-and-egg problem.
Case: A file is generated on a remote client. The client should transmit the file to the server over an async stub. The client must also transmit metadata via a blocking stub, to be stored in a database.
Problems:
If I do the asynchronous operation first, then the file data is sent prior to the metadata, and therefore the server has no context as to what to name the file or where to put it. I originally intended to return this information from the server (bidirectionally); however, stream observers do not lend themselves to setting variables outside their anonymous definition.
If I do the synchronous operation first, I can get file-naming information back from the server; however, I will need to package this into the "chunks" of data. This would also require constantly opening and closing the save file while gRPC iterates over its stream data, as iterators are not easily reset (so I can't just peel off the first request).
As a last option, I could package all of this into the asynchronous request and dispense with the synchronous call. I believe this would provide a working solution, but I am concerned about the amount of data being sent on already large requests, as well as the inefficiency mentioned before.
So my question is:
Is there a way to set a global variable to 'value.Message' from the response observer?
Alternatively, is there a way to pass information from the synchronous call to the asynchronous call on the server side?
Async response observer:
StreamObserver<GrpcServerComm.UploadStatus> responseObserver = new StreamObserver<GrpcServerComm.UploadStatus>() {
    @Override
    public void onNext(GrpcServerComm.UploadStatus value) {
        if (value.getCode() != 1) {
            Log.d("Error", "Upload Procedure Failure");
            finishLatch.countDown();
        }
    }

    @Override
    public void onError(Throwable t) {
        Log.d("Error", "Upload Response");
        finishLatch.countDown();
    }

    @Override
    public void onCompleted() {
        finishLatch.countDown();
    }
};
Relevant protobufs
message UploadStatus {
    string filename = 1;
    int32 code = 2;
}

message DataChunk {
    string filename = 1;
    bytes chunk = 2;
}

message VideoMetadata {
    string publisher = 1;
    string description = 2;
    string tags = 3;
    double videolat = 4;
    double videolong = 5;
}

service DataUpload {
    rpc UploadData (stream DataChunk) returns (UploadStatus);
}

service ContentMetaData {
    rpc UploadMetaData (VideoMetadata) returns (UploadStatus);
}
Python Server-side functions
class DataUploadServicer(proto_test_pb2_grpc.DataUploadServicer):
    def UploadData(self, request_it, context):
        response = proto_test_pb2.UploadStatus()
        filename = str(random.getrandbits(32))  # server decides filename
        response = filestream.writefile(filename, request_it)
        return response


def writefile(filename, chunks):
    response = proto_test_pb2.UploadStatus()
    filename = 'tmp/' + filename
    app_file = open(filename, "ab")
    for chunk in chunks:
        app_file.write(chunk.chunk)
    app_file.close()
    print('File Written')
    response.Code = 1
    response.Message = "Successful write"
    return response
For Java users, there is a detailed article on this here.
I think it is not a good idea to have these as 2 separate requests. Instead, the metadata and DataChunk should be combined into a single type, as shown here.
message FileUploadRequest {
    VideoMetadata metaData = 1;
    DataChunk dataChunk = 2;
}
Now you might ask why we would have to send the metadata with every request! This is where the gRPC oneof type helps.
message FileUploadRequest {
    oneof upload_data {
        VideoMetadata metaData = 1;
        DataChunk dataChunk = 2;
    }
}
Your service would look like this:
service FileuploadService {
    rpc UploadData (stream FileUploadRequest) returns (UploadStatus);
}
When you use oneof, the generated code gives oneof fields the same getters and setters as regular fields. You also get a special method for checking which value (if any) in the oneof is set. First you send the metadata and then you send the chunks; based on which oneof field is set, the server can act accordingly.
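On the client side, that could look roughly like the sketch below; the FileuploadServiceGrpc stub name follows from the generated code for the service above, while channel, responseObserver, videoFile and readFileInBlocks are assumed helpers from your own code:
FileuploadServiceGrpc.FileuploadServiceStub asyncStub = FileuploadServiceGrpc.newStub(channel);
StreamObserver<FileUploadRequest> requestObserver = asyncStub.uploadData(responseObserver);

// First message: only the metaData field of the oneof is set.
requestObserver.onNext(FileUploadRequest.newBuilder()
        .setMetaData(VideoMetadata.newBuilder()
                .setPublisher("somePublisher")
                .setDescription("example upload")
                .build())
        .build());

// Remaining messages: only the dataChunk field is set.
for (byte[] block : readFileInBlocks(videoFile)) {
    requestObserver.onNext(FileUploadRequest.newBuilder()
            .setDataChunk(DataChunk.newBuilder()
                    .setChunk(ByteString.copyFrom(block))
                    .build())
            .build());
}
requestObserver.onCompleted();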
OK, so I'm trying to implement RxJava2 with Retrofit2. The goal is to make a call only once and broadcast the results to different classes. For example: I have a list of geofences in my backend. I need that list in my MapFragment to display them on the map, but I also need that data to set up the PendingIntent service for the actual trigger.
I tried following this answer, but I get all sorts of errors:
Single Observable with Multiple Subscribers
The current situation is as follows:
GeofenceRetrofitEndpoint:
public interface GeofenceEndpoint {
    @GET("geofences")
    Observable<List<Point>> getGeofenceAreas();
}
GeofenceDAO:
public class GeofenceDao {

    @Inject
    Retrofit retrofit;

    private final GeofenceEndpoint geofenceEndpoint;

    public GeofenceDao() {
        InjectHelper.getRootComponent().inject(this);
        geofenceEndpoint = retrofit.create(GeofenceEndpoint.class);
    }

    public Observable<List<Point>> loadGeofences() {
        return geofenceEndpoint.getGeofenceAreas()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .share();
    }
}
MapFragment / any other class where I need the results
private void getGeofences() {
    new GeofenceDao().loadGeofences().subscribe(this::handleGeoResponse, this::handleGeoError);
}

private void handleGeoResponse(List<Point> points) {
    // handle response
}

private void handleGeoError(Throwable error) {
    // handle error
}
What am I doing wrong? When I call new GeofenceDao().loadGeofences().subscribe(this::handleGeoResponse, this::handleGeoError); it makes a separate call each time. Thx
Each call to new GeofenceDao().loadGeofences() returns a different instance of the Observable. share() only applies to that instance, not to the method. If you want to actually share the Observable, you have to subscribe to the same instance. You could share it via a (static) member, e.g. loadGeofences:
private void getGeofences() {
    if (loadGeofences == null) {
        loadGeofences = new GeofenceDao().loadGeofences();
    }
    loadGeofences.subscribe(this::handleGeoResponse, this::handleGeoError);
}
But be careful not to leak the Observable.
Maybe this isn't answering your question directly; however, I'd like to suggest a slightly different approach:
Create a BehaviorSubject in your GeofenceDao and subscribe your Retrofit request to this subject. The subject will act as a bridge between your clients and the API. By doing this you will achieve the following (see the sketch after the list):
Response cache - handy for screen rotations
Replaying response for every interested observer
Subscriptions between clients and the subject don't rely on the subscription between the subject and the API, so you can break one without breaking the other
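A minimal sketch of that idea, reusing the GeofenceDao/GeofenceEndpoint names from the question (the refreshGeofences/geofences method names are illustrative):
public class GeofenceDao {

    private final BehaviorSubject<List<Point>> geofenceSubject = BehaviorSubject.create();

    // Trigger the Retrofit call once; the subject caches the latest emission.
    public void refreshGeofences() {
        geofenceEndpoint.getGeofenceAreas()
                .subscribeOn(Schedulers.io())
                .subscribe(geofenceSubject::onNext,
                        throwable -> { /* log or surface the error without terminating the subject */ });
    }

    // Every subscriber gets the cached list (if any) plus all future updates.
    public Observable<List<Point>> geofences() {
        return geofenceSubject.observeOn(AndroidSchedulers.mainThread());
    }
}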
I'm playing around with the Play Framework (v2.2.2), and I'm trying to figure out how to suspend an HTTP request. I'm trying to create a handshake between users, meaning, I want user A to be able to fire off a request and wait until user B "connects". Once the user B has connected, user A's request should return with some information (the info is irrelevant; let's just say some JSON for now).
In another app I've worked on, I use continuations to essentially suspend and replay an HTTP request, so I have something like this...
@Override
public JsonResponse doGet(HttpServletRequest request, HttpServletResponse response) {
    Continuation reqContinuation = ContinuationSupport.getContinuation(request);
    if (reqContinuation.isInitial()) {
        ...
        reqContinuation.addContinuationListener(new ContinuationListener() {
            public void onTimeout(Continuation c) {...}
            public void onComplete(Continuation c) {...}
        });
        ...
        reqContinuation.suspend();
        return null;
    }
    else {
        // check results and return JsonResponse with data
    }
}
... and at some point, user B will connect and the continuation will be resumed/completed in a different servlet. Now, I'm trying to figure out how to do this in Play. I've set up my route...
GET /test controllers.TestApp.test()
... and I have my Action...
public static Promise<Result> test() {
    Promise<JsonResponse> promise = Promise.promise(new Function0<JsonResponse>() {
        public JsonResponse apply() {
            // what do I do now...?
            // I need to wait for user B to connect
        }
    });
    return promise.map(new Function<JsonResponse, Result>() {
        public Result apply(JsonResponse json) {
            return ok(json);
        }
    });
}
I'm having a hard time understanding how to construct my Promise. Essentially, I need to tell user A "hey, you're waiting on user B, so here's a promise that user B will eventually connect to you, or else I'll let you know when you don't have to wait anymore".
How do I suspend the request such that I can return a promise of user B connecting? How do I wait for user B to connect?
You need to create a Promise that can be redeemed later. Strangely, the Play/Java library (F.java) doesn't seem to expose this API, so you have to reach into the Scala Promise class.
Create a small Scala helper class for yourself, PromiseUtility.scala:
import scala.concurrent.Promise

object PromiseUtility {
  def newPromise[T]() = Promise[T]()
}
You can then do something like this in a controller (note, I don't fully understand your use case, so this is just a rough outline of how to use these Promises):
if (needToWaitForUserB()) {
    // Create an unredeemed Scala Promise
    scala.concurrent.Promise<Json> unredeemed = PromiseUtility.newPromise();

    // Store it somewhere so you can access it later, e.g. a ConcurrentMap keyed by userId
    storeUnredeemed(userId, unredeemed);

    // Wrap as an F.Promise and, when redeemed later on, convert to a Result
    return F.Promise.wrap(unredeemed.future()).map(new Function<Json, Result>() {
        @Override
        public Result apply(Json json) {
            return ok(json);
        }
    });
}
// [..]
// In some other part of the code where user B connects
scala.concurrent.Promise<Json> unredeemed = getUnredeemed(userId);
unredeemed.success(jsonDataForUserB);
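The storeUnredeemed/getUnredeemed helpers above are not part of Play; a minimal sketch of what they could look like, assuming a ConcurrentMap keyed by user id:
private static final java.util.concurrent.ConcurrentMap<String, scala.concurrent.Promise<Json>> unredeemedByUser =
        new java.util.concurrent.ConcurrentHashMap<String, scala.concurrent.Promise<Json>>();

static void storeUnredeemed(String userId, scala.concurrent.Promise<Json> promise) {
    unredeemedByUser.put(userId, promise);
}

static scala.concurrent.Promise<Json> getUnredeemed(String userId) {
    // remove so each promise is redeemed at most once
    return unredeemedByUser.remove(userId);
}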