This is a question about WebSockets and the architecture of messaging between users. What I've done:
Client side:
Send a message to the server with one parameter, the conversation UUID. I also subscribe to the topic where new messages should appear.
Server side:
When I receive a message with a conversation UUID, I launch a scheduler that sends new messages for that conversation to the topic.
But there could be a lot of conversations, so in my controller class I have a field called "conversationSchedulers": a HashMap whose key is a username and whose value is the scheduler currently sending new messages for that user's conversation. When a user wants to receive new messages for a different conversation, he clicks on that conversation in the web application and the following happens:
Client side:
Send a message with the new conversation UUID.
Server side:
Get the previously running scheduler; if there is one, cancel it, and start a new scheduler with the new conversation UUID (roughly as sketched below).
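For illustration, here is a rough sketch of that cancel-and-replace logic (simplified; I'm assuming Spring's SimpMessagingTemplate and a plain ScheduledExecutorService here, and the names are illustrative, not my exact code):

import java.security.Principal;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Controller;

@Controller
public class ConversationController {

    // One active scheduler per username, as described above.
    private final Map<String, ScheduledFuture<?>> conversationSchedulers = new ConcurrentHashMap<>();
    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(4);

    @Autowired
    private SimpMessagingTemplate messagingTemplate;

    @MessageMapping("/conversation")
    public void openConversation(Principal principal, String conversationUuid) {
        String username = principal.getName();

        // Cancel the scheduler of the previously opened conversation, if any.
        ScheduledFuture<?> previous = conversationSchedulers.get(username);
        if (previous != null) {
            previous.cancel(true);
        }

        // Start a new scheduler that pushes new messages for this conversation.
        ScheduledFuture<?> next = executor.scheduleAtFixedRate(
                () -> messagingTemplate.convertAndSendToUser(
                        username, "/queue/messages", findNewMessages(conversationUuid)),
                0, 1, TimeUnit.SECONDS);
        conversationSchedulers.put(username, next);
    }

    private List<String> findNewMessages(String conversationUuid) {
        return Collections.emptyList(); // database lookup omitted
    }
}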
And everything works... as long as there is ONE tab with messages. When the user opens two or more tabs, the whole architecture falls apart, because I allow only one scheduler per user, so only one conversation can be open.
At that point I had an idea: allow many schedulers, so messages can flow for more than one conversation at the same time. But I did not implement it, because the messaging page has a button to write a NEW message, and while the user is writing a new message, all new messages from other users must stop being sent to the client. I cannot stop them, though, because the user has two tabs; stopping all of the user's schedulers would stop messaging in all tabs. That is the problem. Did I choose the wrong architecture? Or are WebSockets a bad idea for such a task?
I am subscribed to an Azure Service Bus topic made by an external department. The way I want the code to work is as follows:
Trigger an HTTP endpoint that starts a processorClient and listens to the topic.
Fetch one message.
Perform the required actions on that message.
Close the processorClient connection.
Repeat
I am using the ServiceBusProcessorClient class as shown in the documentation: Receive messages from a subscription.
Is there a way to use this code to fetch only one message from the topic at a time before calling processorClient.close()?
I have tried setting maxConcurrentCalls and setting prefetchCount(0).
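For reference, this is roughly what I tried (a sketch; the connection string and entity names are placeholders):

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusProcessorClient;

// Roughly what I tried (placeholders for the connection string and names).
ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
        .connectionString("<<connection-string>>")
        .processor()
        .topicName("<<topic-name>>")
        .subscriptionName("<<subscription-name>>")
        .maxConcurrentCalls(1)   // one handler invocation at a time
        .prefetchCount(0)        // don't prefetch extra messages
        .processMessage(context -> {
            // do the required actions on the single message
            System.out.println(context.getMessage().getBody());
            context.complete();
        })
        .processError(context -> context.getException().printStackTrace())
        .buildProcessorClient();

processorClient.start();
// ...after the first message is handled:
processorClient.close();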
I'm currently working on the implementation of Twilio Video in my Android app, where the normal behavior (and the one I need) should be:
If client A calls a client B, and client B rejects the call, client A receives (onConversation) an error (object TwilioConversationsException) containing code:107, message:Participant rejects the call..
Or if client A calls client B, and client B isn't connected to Twilio, Client A receives an error immediately with code:106, message:Participant is unavailable at the moment.. At this point I retry several times until the user connects and responds (accepting or rejecting), or 30 seconds pass after the call was initiated.
I'm working based on this, but I've encountered an issue after client B loses internet connection or the app closes unexpectedly. After reconnecting to Twilio, when client B rejects a call, client A receives an error code:106, message:Participant is unavailable at the moment. instead of code:107, message:Participant rejects the call., deceiving client A into thinking that cliente B is disconnected from Twilio (when he actually is connected), which triggers a new call try. For what I've been observing, this problem is associated to the client B identity, where somehow it remained registered as unavailable and is not letting it work properly. If I change client B identity, the behavior goes back to regular, but it's not the idea. My intention is for the identity to be my users id: unique and fixed.
In iOS is happening the same, according to this thread:
Twilio iOS Video Call: Getting "User is unavailable" error message when user rejects the call
I would appreciate some help! Best regards!
A co-worker asked Twilio support and was told this:
Hey Deneb,
These workflows have some challenges with the current Conversations API in Programmable Video, and we're working on solving them in an upcoming addition to the product: a new Rooms API. Rooms will allow your users to connect to a named Room (a multi-party conference call) by a name that you define, or by its unique ID (RoomSid). Using this API, you won't have to worry about if/when your endpoints are online--you can just have your users connect to the proper Room and they'll be able to share voice and video with one another.
The Rooms API will be rolling out in just a few weeks, and I think it'll be a much better fit for your use case. If you're in need of a solution more urgently, I'd recommend using a third-party notifications product, like Firebase or PubNub, to make sure that both participants are "awake" and connected, then initiating the invite flow.
Let me know if you have any questions on this. Thanks for trying Programmable Video,
Regards, Rob Brazier
Currently, for my game server, I'm not using Netty. I've created a multithreaded socket system that sends packet objects which are serialized into and deserialized from (In and Out)StreamBuffer.
I've discovered Netty and I think it's better to use it for my network system.
I've created a Client and a Server handler (both extending ChannelInboundHandlerAdapter) to serialize and deserialize my packets from the ByteBuf, and it works FINE :)!
Now I want to migrate my authentication system.
Currently I have two handlers: the LoginHandler, which can receive and process only the LoginPacket (identified by an id when I send a packet buffer), and the ServerHandler, which can receive and process all other packets.
Current algorithm
Client side:
The user launches the client; a new window asks him to enter his username and password. When he clicks "Login", the client connects to the server and then sends a LoginPacket.
If an AuthServerPacket is sent by the server with the auth flag set to true, the client continues and opens all other features.
If an AuthServerPacket is sent by the server with the auth flag set to false, it displays a popup with the reason and re-opens the login window.
Server side:
When a user connects to the server, the LoginHandler is attached to the client.
In this LoginHandler only the LoginPacket is processed, so when it receives a LoginPacket it checks the information against a database; if it is correct, the client is added to the ServerHandler and removed from the LoginHandler, and from then on he can receive and send all other packets.
The ServerHandler sends an AuthServerPacket with the auth flag set to true. (The two packets are sketched below.)
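For reference, the two packets look roughly like this (a simplified sketch; the exact fields are illustrative):

// Simplified sketch of the two packets discussed (illustrative fields).
public class LoginPacket {
    public final String username;
    public final String password;

    public LoginPacket(String username, String password) {
        this.username = username;
        this.password = password;
    }
}

public class AuthServerPacket {
    public final boolean auth;   // true = authenticated
    public final String reason;  // populated when auth is false

    public AuthServerPacket(boolean auth, String reason) {
        this.auth = auth;
        this.reason = reason;
    }
}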
My question is: what is the best way to re-create this system with Netty?
I don't know whether I can add the login handler to the pipeline in such a way that it is skipped once a channel is authenticated, and I don't know how, or whether, processing stops if one of the handlers rejects the channel.
Can someone help me understand the best way to do what I want with Netty?
Thank you in advance for your answers.
Programmatically, beaucoralk.
we talked on IRC #netty today :)
My suggestion is:
In your Pipeline Initializer, always add the LoginHandler
Once login is successful, the LoginHandler should:
ctx.pipeline().addAfter(ctx.name(), "gameHandler", new GameLogicHandler());
ctx.pipeline().remove(this);
So basically your LoginHandler removes itself after a successful authentication. Important: add the new handler before removing the old one. :)
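A minimal sketch of such a LoginHandler (assuming your LoginPacket, AuthServerPacket and GameLogicHandler classes; the credential check is stubbed):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class LoginHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (!(msg instanceof LoginPacket)) {
            // Not authenticated yet: only a LoginPacket is acceptable here.
            ctx.close();
            return;
        }
        LoginPacket login = (LoginPacket) msg;
        if (checkCredentials(login)) {
            ctx.writeAndFlush(new AuthServerPacket(true, null));
            // Add the new handler first, then remove this one.
            ctx.pipeline().addAfter(ctx.name(), "gameHandler", new GameLogicHandler());
            ctx.pipeline().remove(this);
        } else {
            ctx.writeAndFlush(new AuthServerPacket(false, "Invalid username or password"));
        }
    }

    private boolean checkCredentials(LoginPacket login) {
        return true; // database check omitted
    }
}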
best regards
I've solved my problem with this:
In my pipeline initializer, always add the LoginHandler (don't add my ServerHandler).
Once login is successful, the LoginHandler does:
ctx.pipeline().addLast(new GameLogicServerHandler());
ctx.pipeline().remove(this);
In fact I did not succeed in using addAfter as Franz Bettag suggested; no method signature seemed appropriate.
But thank you to Bettag, who helped me understand many things on the #netty IRC.
I'm using Jetty 9.3.5 and I would like to know the proper way to handle unreliable connections when sending WebSocket messages. Specifically: I noticed cases where a websocket connection does not close normally, so even though the client side is down, it takes a long time until onClose() is triggered on the server (for example, a user closes the laptop lid and puts it in standby; it can take 1-2 hours until the close event is received on the server side).
Thus, because the client is still registered, the server keeps sending messages that begin to build up. This becomes an issue when sending a large number of messages.
I've tested sending byte messages with:
Session.getRemote().sendBytes(ByteBuffer, WriteCallback)
Session.getRemote().sendBytesByFuture(ByteBuffer);
To simulate the connection going down on one side (i.e. the user puts the laptop in standby), on Linux I assigned an IP address to the eth0 interface, started sending the messages, and then brought the interface down:
ifconfig eth0 192.168.1.1
ifconfig eth0 up
--- start sending messages (simple incremented numbers) and connect using Chrome browser and print them ---
ifconfig eth0 down
This way, the messages were still being sent by Jetty, the Chrome client did not receive them, and neither onClose nor onError was triggered on the server side.
My questions regarding Jetty are:
Is there a way to clear queued messages that were not delivered?
I've tried, but with no luck:
Session.getRemote().flush();
Can a max number of queued messages be set?
I've tried:
WebSocketServletFactory.getPolicy().setMaxBinaryMessageBufferSize(1)
Can I detect if the client does not receive the message (or, say, if the connection is in an abnormal state)?
I've tried:
session.getRemote().sendBytes(bb, new WriteCallback() {
    @Override
    public void writeSuccess() {
        // print success
    }

    @Override
    public void writeFailed(Throwable arg0) {
        // print fail
    }
});
But this prints success even though the messages are not received.
I also tried to use the following, but couldn't find a solution:
factory.getPolicy().setIdleTimeout(...);
factory.getPolicy().setAsyncWriteTimeout(3000);
sendPing()
Thanks in advance!
Unfortunately, the WebSocket protocol, being a message-passing protocol, isn't really designed for this level of nuance between messages.
The first message MUST complete before you can even think of sending the next message. So if you have a message in process, then there is no way to safely cancel that message.
At best, an API could exist to truncate that message with a CONTINUATION / empty payload / fin=true.
But even then the remote endpoint wouldn't know that you canceled the message, it would just see a partial message.
Detecting connectivity issues is best handled with either OS-level events (like Android's Connectivity intents), or via a periodic websocket PING (which inserts itself in front of the line for outgoing websocket frames).
However, even with PING, if your outgoing websocket frame is in-progress, even the PING cannot be sent until that websocket frame is done sending.
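For example, a periodic PING with a liveness timeout could look roughly like this (a sketch; the 5 second interval, 10 second timeout, and the wiring of the pong callback are assumptions):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.websocket.api.Session;

// Ping the client every 5 seconds; if no pong arrives within 10 seconds,
// treat the connection as dead and close it.
public class LivenessWatcher {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile long lastPong = System.currentTimeMillis();

    public void watch(Session session) {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                if (System.currentTimeMillis() - lastPong > 10_000) {
                    session.close();      // no pong: assume the peer is gone
                    scheduler.shutdown();
                } else {
                    session.getRemote().sendPing(ByteBuffer.wrap(new byte[]{1}));
                }
            } catch (IOException e) {
                session.close();
                scheduler.shutdown();
            }
        }, 5, 5, TimeUnit.SECONDS);
    }

    // Call this when a PONG frame arrives, e.g. from a method annotated
    // with @OnWebSocketFrame that filters for pong frames.
    public void pongReceived() {
        lastPong = System.currentTimeMillis();
    }
}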
RemoteEndpoint.flush() will attempt to flush any pending messages (and frames), not clear out pending messages (or frames).
As for detecting if client got the message, you'll need to implement some sort of message ACK into your own layer to verify that, the protocol has no such concept. (Some libs/apis built on top of websocket have implemented message ACK in that layer. The cometd message ack extension comes to mind as a real world example)
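A bare-bones sketch of such an ACK layer (all names invented; the echo of the id from the client is up to your own protocol):

import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Every outgoing message carries an id; the client echoes the id back;
// anything still unacknowledged after a timeout was likely not delivered.
public class AckTracker {

    private final Map<Long, ByteBuffer> pending = new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong();

    // Register a message before sending; ship the returned id with it.
    public long track(ByteBuffer message) {
        long id = nextId.getAndIncrement();
        pending.put(id, message);
        return id;
    }

    // Call when the client echoes an id back.
    public void acknowledge(long id) {
        pending.remove(id);
    }

    // Candidates for retransmission (or for dropping the connection).
    public Map<Long, ByteBuffer> unacknowledged() {
        return pending;
    }
}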
What sort of situation are you attempting to solve for?
Perhaps using the RemoteEndpoint.sendPartialString(String, boolean) or RemoteEndpoint.sendPartialBytes(ByteBuffer, boolean) to send smaller frames of the whole message could be useful to you. However, the other side might not have an API that can read those partial frames (eg: Javascript in a browser).
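A sketch of that partial-send approach (the frame size is arbitrary):

import java.io.IOException;
import java.nio.ByteBuffer;

import org.eclipse.jetty.websocket.api.RemoteEndpoint;

public final class PartialSender {

    // Send one logical message as several smaller frames; only the last
    // frame is flagged as final.
    public static void sendInFrames(RemoteEndpoint remote, byte[] payload, int frameSize)
            throws IOException {
        for (int off = 0; off < payload.length; off += frameSize) {
            int len = Math.min(frameSize, payload.length - off);
            boolean isLast = (off + len) >= payload.length;
            remote.sendPartialBytes(ByteBuffer.wrap(payload, off, len), isLast);
        }
    }
}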
I am using QuickFIX/J, where I have an acceptor from which I am sending a fix message built with Logout() ("8=FIX.4.2|9=82|35=5|34=387|49=TEST1|52=20130409-08:01:47.862|56=TEST2-11365|58=User Is Blocked|10=231") to the initiator, but I can still see heartbeats sent from the acceptor itself. How do we overcome this? I am using the code below:
Logout oLogout = new Logout();
quickfix.field.Text aText = new quickfix.field.Text("User Is Blocked");
oLogout.set(aText);
Session.sendToTarget(oLogout, "TEST2-11365", "TEST1");
You should not manually send a Logout like this. Logout is an admin message; you should trust the engine to send/receive all admin message types.
What is happening is that you are sending this message outside of the engine's control logic. The engine is treating it as any other outgoing application-level message, and not initiating the engine's internal shutdown logic.
If you call Acceptor.stop(), the engine will initiate its shutdown logic and send the Logout for you.
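A minimal sketch (assuming 'acceptor' is the SocketAcceptor you created at startup; the per-session variant using Session.logout(String) is my addition, for when you only want to log out one counterparty rather than stop the whole acceptor):

import quickfix.Acceptor;
import quickfix.Session;
import quickfix.SessionID;

public class EngineShutdown {

    // Stop the whole acceptor: the engine runs its shutdown logic,
    // sends the Logout for each session, and stops heartbeating.
    public static void stopAll(Acceptor acceptor) {
        acceptor.stop();
    }

    // Alternative: log out a single session; the reason is carried
    // in the Logout's Text (58) field.
    public static void blockUser(SessionID sessionID) {
        Session session = Session.lookupSession(sessionID);
        if (session != null) {
            session.logout("User Is Blocked");
        }
    }
}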