My Situation
I'm building a small web chat to learn about Spring and Spring WebSocket. You can create different rooms, and each room has its own channel at /topic/room/{id}.
My goal is to detect when users join and leave a chat room and I thought I could use Spring WebSocket's SessionSubscribeEvent and SessionUnsubscribeEvent for this.
Getting the Destination from the SessionSubscribeEvent is trivial:
@EventListener
public void handleSubscribe(final SessionSubscribeEvent event) {
    final String destination =
            SimpMessageHeaderAccessor.wrap(event.getMessage()).getDestination();
    //...
}
However, the SessionUnsubscribeEvent does not seem to carry the destination channel; destination is null in the following snippet:
@EventListener
public void handleUnsubscribe(final SessionUnsubscribeEvent event) {
    final String destination =
            SimpMessageHeaderAccessor.wrap(event.getMessage()).getDestination();
    //...
}
My Question
Is there a better way to watch for subscribe/unsubscribe events and should I even be using those as a way for a user to "log in" to a chat room, or should I rather use a separate channel to send separate "log in"/"log out" messages and work with those?
I thought using subscribe/unsubscribe would've been very convenient, but apparently Spring makes it very hard, so I feel like there has to be a better way.
STOMP headers only appear in the frames where the spec defines them, as described here: https://stomp.github.io/stomp-specification-1.2.html#SUBSCRIBE and here: https://stomp.github.io/stomp-specification-1.2.html#UNSUBSCRIBE
Only the SUBSCRIBE frame has both destination and id; the UNSUBSCRIBE frame has only an id.
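For illustration, this is roughly what the two frames look like on the wire (the destination and subscription id below are made-up values; ^@ denotes the NULL byte that terminates a STOMP frame):

SUBSCRIBE
id:sub-0
destination:/topic/room/42

^@

UNSUBSCRIBE
id:sub-0

^@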
This means you have to remember the subscription id together with the destination for a later lookup. Take care: different WebSocket connections typically assign the same subscription ids, so to store destinations reliably you have to include the WebSocket session id in your storage key.
I wrote the following method to get it:
protected String getWebsocketSessionId(StompHeaderAccessor headerAccessor) {
    // SimpMessageHeaderAccessor.SESSION_ID_HEADER seems to be set in StompSubProtocolHandler.java:261
    // ("headerAccessor.setSessionId(session.getId());")
    return headerAccessor.getHeader(SimpMessageHeaderAccessor.SESSION_ID_HEADER).toString();
}
StompHeaderAccessor is created like this:
StompHeaderAccessor headerAccessor = StompHeaderAccessor.wrap(((SessionSubscribeEvent) event).getMessage());
StompHeaderAccessor headerAccessor = StompHeaderAccessor.wrap(((SessionUnsubscribeEvent) event).getMessage());
This can then be used to create a unique subscription id which can be used as a key for a map to save data about the subscription, including the destination:
protected String getUniqueSubscriptionId(StompHeaderAccessor headerAccessor) {
    return getWebsocketSessionId(headerAccessor) + "--" + headerAccessor.getSubscriptionId();
}
Like this:
Map<String, String> destinationLookupTable = ...;

// on subscribe:
destinationLookupTable.put(getUniqueSubscriptionId(headerAccessor), destination);

// on other occasions, including unsubscribe:
destination = destinationLookupTable.get(getUniqueSubscriptionId(headerAccessor));
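Putting this together with the event listeners from the question, a minimal sketch (the ConcurrentHashMap and the "announce" comments are assumptions; everything else uses the helpers above):

private final Map<String, String> destinationLookupTable = new ConcurrentHashMap<>();

@EventListener
public void handleSubscribe(final SessionSubscribeEvent event) {
    final StompHeaderAccessor headerAccessor = StompHeaderAccessor.wrap(event.getMessage());
    // SUBSCRIBE carries the destination, so remember it under the unique subscription id
    destinationLookupTable.put(getUniqueSubscriptionId(headerAccessor), headerAccessor.getDestination());
    // ...e.g. announce that a user joined the room
}

@EventListener
public void handleUnsubscribe(final SessionUnsubscribeEvent event) {
    final StompHeaderAccessor headerAccessor = StompHeaderAccessor.wrap(event.getMessage());
    // UNSUBSCRIBE only carries the id, so look the destination up (and clean up the entry)
    final String destination = destinationLookupTable.remove(getUniqueSubscriptionId(headerAccessor));
    if (destination != null) {
        // ...e.g. announce that a user left the room
    }
}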
I think using SessionSubscribeEvent and SessionUnsubscribeEvent is a good idea for this. You can get the destination if you keep track of it by session ID (note that keying by session ID alone assumes one subscription per WebSocket session):
private Map<String, String> destinationTracker = new HashMap<>();

@EventListener
public void handleSubscribe(final SessionSubscribeEvent event) {
    SimpMessageHeaderAccessor headers = SimpMessageHeaderAccessor.wrap(event.getMessage());
    destinationTracker.put(headers.getSessionId(), headers.getDestination());
    //...
}

@EventListener
public void handleUnsubscribe(final SessionUnsubscribeEvent event) {
    SimpMessageHeaderAccessor headers = SimpMessageHeaderAccessor.wrap(event.getMessage());
    final String destination = destinationTracker.get(headers.getSessionId());
    //...
}
Related
I'm coding a game; when a player ends his turn, I want to notify the opponent that it's his turn to play.
So I'm storing each WebSocketSession in a "Player" object, so I only need an instance of a player to get access to his WebSocketSession.
The problem is that nothing happens when I call the "send" method on a WebSocketSession stored in a "Player" instance.
Here is my code to store a WebSocketSession in a player object; it receives messages from the front end properly and is able to send a message back, and that part works:
#Component("ReactiveWebSocketHandler")
public class ReactiveWebSocketHandler implements WebSocketHandler {
#Autowired
private AuthenticationService authenticationService;
#Override
public Mono<Void> handle(WebSocketSession webSocketSession) {
Flux<WebSocketMessage> output = webSocketSession.receive()
.map(msg -> {
String payloadAsText = msg.getPayloadAsText();
Account account = authenticationService.getAccountByToken(payloadAsText);
Games.getInstance().getGames().get(account.getIdCurrentGame()).getPlayerById(account.getId()).setSession(webSocketSession);
return "WebSocketSession id: " + webSocketSession.getId();
})
.map(webSocketSession::textMessage);
return webSocketSession
.send(output);
}
}
And here is the code I use to notify the opponent that it is his turn to play. The "opponentSession.send" method seems to produce no result: there is no error message and nothing arrives on the front end. The session has the same ID as in the handle method, so I think the session object is correct, and the WebSocket session was open and ready during my tests:
@RequestMapping(value = "/game/endTurn", method = RequestMethod.POST)
GameBean endTurn(
        @RequestHeader(value = "token", required = true) String token) {
    ObjectMapper mapper = new ObjectMapper();
    Account account = authenticationService.getAccountByToken(token);
    gameService.endTurn(account);
    Game game = gameService.getGameByAccount(account);
    //GameBean opponentGameBean = game.getOpponentGameState(account.getId());
    //WebSocketMessage webSocketMessage = opponentSession.textMessage(mapper.writeValueAsString(opponentGameBean));
    WebSocketSession opponentSession = game.getPlayerById(game.getOpponentId(account.getId())).getSession();
    WebSocketMessage webSocketMessage = opponentSession.textMessage("test message");
    opponentSession.send(Mono.just(webSocketMessage));
    return gameService.getGameStateByAccount(account);
}
You can see in the screenshot that the handle method works correctly; I'm able to send and receive messages.
[Screenshot: Websocket input and output]
Does anyone know how I can make the opponentSession.send method work correctly so that I receive the messages on the front end?
You are using the reactive stack for your WebSocket, and WebSocketSession#send returns a Mono<Void>, but you never subscribe to this Mono (you only assembled it), so nothing will happen until something subscribes to it.
Your endpoint doesn't look like it's using WebFlux, so you are in the synchronous world and have no choice but to block:
opponentSession.send(Mono.just(webSocketMessage)).block();
If you are using WebFlux, then you should change your method to return a Mono and do something like:
return opponentSession.send(Mono.just(webSocketMessage)).then(gameService.getGameStateByAccount(account));
If you are not familiar with this, you should look into Project Reactor and WebFlux.
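For illustration, a rough sketch of what the reactive variant of the endpoint could look like, assuming gameService.getGameStateByAccount stays a blocking call (hence the Mono.fromCallable wrapper); all names are taken from the question:

@RequestMapping(value = "/game/endTurn", method = RequestMethod.POST)
Mono<GameBean> endTurn(@RequestHeader(value = "token", required = true) String token) {
    Account account = authenticationService.getAccountByToken(token);
    gameService.endTurn(account);
    Game game = gameService.getGameByAccount(account);
    WebSocketSession opponentSession =
            game.getPlayerById(game.getOpponentId(account.getId())).getSession();
    WebSocketMessage webSocketMessage = opponentSession.textMessage("test message");
    // returning the send as part of the controller's Mono lets WebFlux subscribe to it
    return opponentSession.send(Mono.just(webSocketMessage))
            .then(Mono.fromCallable(() -> gameService.getGameStateByAccount(account)));
}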
Currently we have two separate API endpoints.
public Mono<ServerResponse> get(ServerRequest request) {
    Sinks.StandaloneMonoSink<String> sink = Sinks.promise();
    sinkMap.putIfAbsent(randomID, sink);
    return sink.asMono()
            .timeout(Duration.ofSeconds(60))
            .flatMap(val -> ServerResponse.ok().body(BodyInserters.fromValue(val)));
}

public Mono<ServerResponse> push(ServerRequest request) {
    Sinks.StandaloneMonoSink<String> sink = sinkMap.remove(randomID);
    if (sink == null) {
        return ServerResponse.notFound().build();
    } else {
        return request.bodyToMono(String.class)
                .flatMap(data -> {
                    sink.success(data);
                    return ServerResponse.ok().build();
                });
    }
}
The intention is for the client to make a get request and keep the connection open for a minute or so, waiting for data to arrive. When a push request comes in, its data is published to the open get connection, which closes upon receipt of the first element.
The issue with the current approach is that the data may be emitted after the get request times out and the subscription is cancelled, so the data is lost. From the push side, is it possible to detect that there are no subscribers and, in that case, throw an error or perform another action with the data?
Thanks.
I had to read this question multiple times to understand what you are looking for!
I tried something like this and it seems to work.
private final DirectProcessor<String> processor = DirectProcessor.create();
private final FluxSink<String> sink = processor.sink();

// The processor has a method to check if there are any active subscribers.
// If it is true, let's push data; otherwise we can throw an exception / store it somewhere.
@GetMapping("/push/{val}")
public boolean push(@PathVariable String val) {
    boolean status = processor.hasDownstreams();
    if (status)
        sink.next(val);
    return status;
}

@GetMapping("/get")
public Mono<String> get() {
    return processor
            .next()
            .timeout(Duration.ofMinutes(1));
}
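Side note: in more recent Reactor versions DirectProcessor is deprecated in favor of the Sinks API; a rough equivalent of the code above could look like this (a sketch, assuming a current Reactor version):

private final Sinks.Many<String> sink = Sinks.many().multicast().directBestEffort();

@GetMapping("/push/{val}")
public boolean push(@PathVariable String val) {
    // same idea: only emit when someone is currently subscribed
    boolean hasSubscribers = sink.currentSubscriberCount() > 0;
    if (hasSubscribers)
        sink.tryEmitNext(val);
    return hasSubscribers;
}

@GetMapping("/get")
public Mono<String> get() {
    return sink.asFlux()
            .next()
            .timeout(Duration.ofMinutes(1));
}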
Question:
Will you be running only one instance of this application? What will happen when you run multiple instances of this application?
For example, User A might push the data to app-instance-1 while User B is subscribed to app-instance-2; User B would then not get the data. In that case you might need something like Redis to store the data and share it among all instances for pub/sub behavior.
I'm trying to implement the following logic with the help of Kafka Streams:
Listen to reference data from a topic, e.g. ref-data-topic, and create a global StateStore from it.
Listen to messages from another topic, data-topic, which must be validated against the reference data and then sent to either a success or an errors topic.
Here is example pseudocode:
class SomeProcessor implements Processor<String, String> {

    private KeyValueStore<String, String> refDataStore;

    @Override
    public void init(final ProcessorContext context) {
        refDataStore = (KeyValueStore) context.getStateStore("ref-data-store");
    }

    @Override
    public void process(String key, String value) {
        Object refData = refDataStore.get("some_key");
        // business logic here
        if (ok) {
            sendValueToTopic("success");
        } else {
            sendValueToTopic("errors");
        }
    }
}
Or, what would be the canonical way to achieve this behavior?
An alternative I have in mind is to enrich the data within the Processor with validation info and then send everything to a single topic, making the client deal with e.g. a validationStatus field in the received message.
However, I would really like a solution with two topics, because then I could, for example, use Kafka Connect to link the success topic directly to some datastore and deal with the errors topic differently. With only one topic, I don't see how to achieve this "store_only_successfully_validated_entities" use case.
Any ideas and suggestions?
If you use the Processor API, you can forward data to different downstream nodes by name:
class SomeProcessor implements Processor<String, String> {

    private KeyValueStore<String, String> refDataStore;
    private ProcessorContext processorContext;

    @Override
    public void init(final ProcessorContext context) {
        refDataStore = (KeyValueStore) context.getStateStore("ref-data-store");
        processorContext = context;
    }

    @Override
    public void process(String key, String value) {
        Object refData = refDataStore.get("some_key");
        // business logic here
        if (ok) {
            processorContext.forward(key, value, To.child("success"));
        } else {
            processorContext.forward(key, value, To.child("error"));
        }
    }
}
When you wire up your topology, you add two sink nodes, named "success" and "error", that write to the success and error topics respectively.
Or you forward data to a single sink node and add the sink with a TopicNameExtractor instead of a hard coded topic name. (Requires version 2.0.)
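A rough sketch of the named-sink wiring described above (the topic names, node names, and ref-data store registration are assumptions based on the question):

Topology topology = new Topology();

topology.addSource("data-source", "data-topic")
        .addProcessor("validator", SomeProcessor::new, "data-source")
        // the sink node names must match the To.child(...) names used in the processor
        .addSink("success", "success-topic", "validator")
        .addSink("error", "errors-topic", "validator");
// the global store backing "ref-data-store" would be registered separately via addGlobalStore(...)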
If you use the DSL, you can use KStream#branch() to split a stream and pipe different data to different topics via KStream#to(...) (or use dynamic routing via KStream#to(TopicNameExtractor) -- requires version 2.0).
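A minimal DSL sketch of that idea (isValid stands in for the actual check against the reference data and is an assumption):

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> input = builder.stream("data-topic");

// branch() evaluates the predicates in order; the catch-all predicate sends everything else to the error branch
KStream<String, String>[] branches = input.branch(
        (key, value) -> isValid(value),
        (key, value) -> true);

branches[0].to("success");
branches[1].to("errors");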
OK, so I'm trying to implement RxJava2 with Retrofit2. The goal is to make a call only once and broadcast the results to different classes. For example: I have a list of geofences in my backend. I need that list in my MapFragment to display them on the map, but I also need that data to set up the PendingIntent service for the actual trigger.
I tried following this answer, but I get all sorts of errors:
Single Observable with Multiple Subscribers
The current situation is as follow:
GeofenceRetrofitEndpoint:
public interface GeofenceEndpoint {

    @GET("geofences")
    Observable<List<Point>> getGeofenceAreas();
}
GeofenceDAO:
public class GeofenceDao {

    @Inject
    Retrofit retrofit;

    private final GeofenceEndpoint geofenceEndpoint;

    public GeofenceDao() {
        InjectHelper.getRootComponent().inject(this);
        geofenceEndpoint = retrofit.create(GeofenceEndpoint.class);
    }

    public Observable<List<Point>> loadGeofences() {
        return geofenceEndpoint.getGeofenceAreas()
                .subscribeOn(Schedulers.io())
                .observeOn(AndroidSchedulers.mainThread())
                .share();
    }
}
MapFragment / any other class where I need the results
private void getGeofences() {
    new GeofenceDao().loadGeofences().subscribe(this::handleGeoResponse, this::handleGeoError);
}

private void handleGeoResponse(List<Point> points) {
    // handle response
}

private void handleGeoError(Throwable error) {
    // handle error
}
What am I doing wrong? When I call new GeofenceDao().loadGeofences().subscribe(this::handleGeoResponse, this::handleGeoError), it makes a separate network call each time. Thx
Each call to new GeofenceDao().loadGeofences() returns a different instance of the Observable. share() only applies to the instance, not to the method. If you want to actually share the Observable, you have to subscribe to the same instance. You could share it via a (static) member, e.g. loadGeofences:
private void getGeofences() {
    if (loadGeofences == null) {
        loadGeofences = new GeofenceDao().loadGeofences();
    }
    loadGeofences.subscribe(this::handleGeoResponse, this::handleGeoError);
}
But be careful not to leak the Observable.
This may not answer your question directly, but I'd like to suggest a slightly different approach:
Create a BehaviorSubject in your GeofenceDao and subscribe your Retrofit request to this subject. The subject will act as a bridge between your clients and the API. By doing this you get:
Response caching - handy for screen rotations
Replaying the response for every interested observer
Subscriptions between clients and the subject don't depend on the subscription between the subject and the API, so you can break one without breaking the other
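A minimal sketch of that idea, based on the GeofenceDao from the question (error handling is omitted; note that for this to help, the GeofenceDao itself must be shared, e.g. provided as a singleton by your DI setup, rather than created with new each time):

public class GeofenceDao {

    @Inject
    Retrofit retrofit;

    private final GeofenceEndpoint geofenceEndpoint;
    private final BehaviorSubject<List<Point>> geofencesSubject = BehaviorSubject.create();

    public GeofenceDao() {
        InjectHelper.getRootComponent().inject(this);
        geofenceEndpoint = retrofit.create(GeofenceEndpoint.class);
        // one network call; the subject caches the latest list and replays it to every observer
        geofenceEndpoint.getGeofenceAreas()
                .subscribeOn(Schedulers.io())
                .subscribe(geofencesSubject::onNext, geofencesSubject::onError);
    }

    public Observable<List<Point>> loadGeofences() {
        return geofencesSubject.observeOn(AndroidSchedulers.mainThread());
    }
}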
I am trying to implement a content-based router in my Akka actor system and according to this document the ConsistentHashingRouter is the way to go. After reading through its official docs, I still find myself confused as to how to use this built-in hashing router. I think that’s because the router itself is hash/key-based, and the example the Akka doc author chose to use was a scenario involving key-value based caches…so I can’t tell which keys are used by the cache and which ones are used by the router!
Let’s take a simple example. Say we have the following messages:
interface Notification {
    // Doesn't matter what's here.
}

// Will eventually be emailed to someone.
class EmailNotification implements Notification {
    // Doesn't matter what's here.
}

// Will eventually be sent to some XMPP client and on to a chatroom somewhere.
class ChatOpsNotification implements Notification {
    // Doesn't matter what's here.
}
etc. In theory we might have 20 Notification impls. I'd like to be able to send a Notification to an actor/router at runtime and have that router route it to the correct NotificationPublisher:
interface NotificationPublisher<NOTIFICATION extends Notification> {
    void send(NOTIFICATION notification);
}
class EmailNotificationPublisher extends UntypedActor implements NotificationPublisher<EmailNotification> {

    @Override
    public void onReceive(Object message) {
        if (message instanceof EmailNotification) {
            send((EmailNotification) message);
        }
    }

    @Override
    public void send(EmailNotification notification) {
        // Use Java Mail, etc.
    }
}
class ChatOpsNotificationPublisher extends UntypedActor implements NotificationPublisher<ChatOpsNotification> {

    @Override
    public void onReceive(Object message) {
        if (message instanceof ChatOpsNotification) {
            send((ChatOpsNotification) message);
        }
    }

    @Override
    public void send(ChatOpsNotification notification) {
        // Use XMPP/Jabber client, etc.
    }
}
Now I could do this routing myself, manually:
class ReinventingTheWheelRouter extends UntypedActor {

    // Inject these via constructor
    ActorRef emailNotificationPublisher;
    ActorRef chatOpsNotificationPublisher;
    // ...20 more publishers, etc.

    @Override
    public void onReceive(Object message) {
        ActorRef publisher;
        if (message instanceof EmailNotification) {
            publisher = emailNotificationPublisher;
        } else if (message instanceof ChatOpsNotification) {
            publisher = chatOpsNotificationPublisher;
        } else if (...) { ... } // 20 more publishers, etc.
        publisher.tell(message, getSelf());
    }
}
Or I could use the Akka-Camel module to define a Camel-based router and send Notifications off to the Camel router, but it seems that Akka already has this built-in solution, so why not use it? I just can't figure out how to translate the Cache example from those Akka docs to my Notification example here. What's the purpose of the "key" in the ConsistentHashingRouter? What would the code look like to make this work?
Of course I would appreciate any answer that helps me solve this, but would greatly prefer Java-based code snippets if at all possible. Scala looks like hieroglyphics to me.
I agree that a Custom Router is more appropriate than ConsistentHashingRouter. After reading the docs on custom routers, it seems I would:
Create a GroupBase impl and send messages to it directly (notificationGroup.tell(notification, self)); then
The GroupBase impl, say, NotificationGroup would provide a Router instance that was injected with my custom RoutingLogic impl
When NotificationGroup receives a message, it executes my custom RoutingLogic#select method which determines which Routee (I presume some kind of an actor?) to send the message to
If this is correct (and please correct me if I’m wrong), then the routing selection magic happens here:
class MessageBasedRoutingLogic implements RoutingLogic {

    @Override
    public Routee select(Object message, IndexedSeq<Routee> candidates) {
        // How can I query the Routee interface and determine whether the message at hand is in fact
        // appropriate to be routed to the candidate?
        //
        // For instance I'd like to say "If message is an instance of
        // an EmailNotification, send it to EmailNotificationPublisher."
        //
        // How do I do this here?!?
        if (message instanceof EmailNotification) {
            // Need to find the candidate/Routee that is
            // the EmailNotificationPublisher, but how?!?
        }
    }
}
But as you can see I have a few mental implementation hurdles to cross. The Routee interface doesn’t really give me anything I can intelligently use to decide whether a particular Routee (candidate) is correct for the message at hand.
So I ask: (1) How can I map messages to Routees (effectively performing the route selection/logic)? (2) How do I add my publishers as routees in the first place? And (3) Do my NotificationPublisher impls still need to extend UntypedActor or should they now implement Routee?
Here is a simple little A/B router in Scala. I hope this helps even though you wanted a Java-based answer. First, the routing logic:
class ABRoutingLogic(a: ActorRef, b: ActorRef) extends RoutingLogic {
  val aRoutee = ActorRefRoutee(a)
  val bRoutee = ActorRefRoutee(b)

  def select(msg: Any, routees: immutable.IndexedSeq[Routee]): Routee = {
    msg match {
      case "A" => aRoutee
      case _   => bRoutee
    }
  }
}
The key here is that I am passing in my a and b actor refs in the constructor and then those are the ones I am routing to in the select method. Then, a Group for this logic:
case class ABRoutingGroup(a: ActorRef, b: ActorRef) extends Group {
  val paths = List(a.path.toString, b.path.toString)

  override def createRouter(system: ActorSystem): Router =
    new Router(new ABRoutingLogic(a, b))

  val routerDispatcher: String = Dispatchers.DefaultDispatcherId
}
Same thing here, I am making the actors I want to route to available via the constructor. Now a simple actor class to act as a and b:
class PrintingActor(letter: String) extends Actor {
  def receive = {
    case msg => println(s"I am $letter and I received letter $msg")
  }
}
I will create two instances of this, each with a different letter assignment so we can verify that the right ones are getting the right messages per the routing logic. Lastly, some test code:
object RoutingTest extends App {
  val system = ActorSystem()
  val a = system.actorOf(Props(classOf[PrintingActor], "A"))
  val b = system.actorOf(Props(classOf[PrintingActor], "B"))
  val router = system.actorOf(Props.empty.withRouter(ABRoutingGroup(a, b)))

  router ! "A"
  router ! "B"
}
If you ran this, you would see:
I am A and I received letter A
I am B and I received letter B
It's a very simple example, but one that shows one way to do what you want to do. I hope you can bridge this code into Java and use it to solve your problem.
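To bridge this toward the question's Java classes, the routing logic could look roughly like this in Java (a sketch only; the publisher ActorRefs are assumed to point at the EmailNotificationPublisher and ChatOpsNotificationPublisher actors from the question, and the fallback to the ChatOps publisher for non-email messages is an assumption):

// Hypothetical Java counterpart of ABRoutingLogic, using akka.routing.RoutingLogic and ActorRefRoutee
class NotificationRoutingLogic implements RoutingLogic {

    private final Routee emailRoutee;
    private final Routee chatOpsRoutee;

    NotificationRoutingLogic(ActorRef emailPublisher, ActorRef chatOpsPublisher) {
        this.emailRoutee = new ActorRefRoutee(emailPublisher);
        this.chatOpsRoutee = new ActorRefRoutee(chatOpsPublisher);
    }

    @Override
    public Routee select(Object message, scala.collection.immutable.IndexedSeq<Routee> routees) {
        if (message instanceof EmailNotification) {
            return emailRoutee;
        }
        // everything else goes to the ChatOps publisher in this sketch
        return chatOpsRoutee;
    }
}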