I've created a server (10.32.240.50) with SslHandler. A client (10.32.240.5) connects to the server and everything works fine. After some time the client disconnects for no apparent reason. I took a tcp dump and saw an encrypted alert right before the disconnect:
I have no idea what the client sent me in this alert - it's encrypted. What could be the cause of this alert, and why does it lead to a disconnect? Is there any way to trace these events with netty?
At this stage it is difficult to tell whether your question is really related to programming, and hence on-topic here, or not.
A TLS 1.2 alert can be many things, see https://www.rfc-editor.org/rfc/rfc5246#section-7.2 which gives you the whole list:
enum { warning(1), fatal(2), (255) } AlertLevel;

enum {
    close_notify(0),
    unexpected_message(10),
    bad_record_mac(20),
    decryption_failed_RESERVED(21),
    record_overflow(22),
    decompression_failure(30),
    handshake_failure(40),
    no_certificate_RESERVED(41),
    bad_certificate(42),
    unsupported_certificate(43),
    certificate_revoked(44),
    certificate_expired(45),
    certificate_unknown(46),
    illegal_parameter(47),
    unknown_ca(48),
    access_denied(49),
    decode_error(50),
    decrypt_error(51),
    export_restriction_RESERVED(60),
    protocol_version(70),
    insufficient_security(71),
    internal_error(80),
    user_canceled(90),
    no_renegotiation(100),
    unsupported_extension(110),
    (255)
} AlertDescription;

struct {
    AlertLevel level;
    AlertDescription description;
} Alert;
Of course it is encrypted, so if you really want to see it, you need to:
change the client so that it outputs the master secret and client random when making the connection that triggers this error,
record the relevant connection with Wireshark,
and then, inside Wireshark, using the items from the first point, you will be able to decrypt it (you can find numerous tutorials on how to do that).
From experience, if the alert happens after some application data, the most probable case is "close_notify". It is a "normal" case: it just means that the sender has decided to shut down the TLS session (but not necessarily the TCP connection) and hence warns (alerts) the other party about it.
If that is the case, then the other party is expected to send the same alert, and the connection is then shut down at the TCP level with a FIN. So the chain of observations you have is expected. The only remaining question is about the initial alert.
After clarification: since the first alert comes from .5, which is the client and not the server, it means the client that you do not control has decided to shut down the TLS stream, for reasons known only to it
(if we still guess correctly that the alert is "close_notify", which remains only a guess that can be tested either by decrypting the exchange per the instructions above, or maybe by increasing server verbosity, as suggested by @dave_thompson_085 in a comment: "If you set sysprop javax.net.debug=ssl it will trace all JSSE (SSL/TLS) operations, which includes the received alert.")
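For reference, a minimal way to apply that javax.net.debug suggestion, assuming the server is a plain JVM process you launch yourself (the jar name below is only a placeholder):

# the JSSE trace, including any received alert, is printed while the connection runs
java -Djavax.net.debug=ssl -jar your-netty-server.jar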
Other than that, aside from asking the client's operator/developer, I see no way to understand why the client decided not to talk to you anymore. It also depends on the underlying application data exchanged: maybe it was indeed the end of the transmission and the client simply does not need the TLS stream anymore?
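To address the "trace with netty" part more directly, here is a rough sketch (recent Netty 4.1.x API; the handler name is made up). SslHandler completes its sslCloseFuture() when the peer's close_notify arrives and, on newer versions, also fires an SslCloseCompletionEvent on the pipeline, so a small handler placed after the SslHandler can log it together with the TCP close:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.ssl.SslCloseCompletionEvent;
import io.netty.handler.ssl.SslHandshakeCompletionEvent;

// Add after the SslHandler: pipeline.addLast(new SslEventLogger());
public class SslEventLogger extends ChannelInboundHandlerAdapter {

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof SslHandshakeCompletionEvent) {
            System.out.println("TLS handshake completed: " + evt);
        } else if (evt instanceof SslCloseCompletionEvent) {
            // Fired when the peer's close_notify alert has been received
            System.out.println("close_notify received from " + ctx.channel().remoteAddress());
        }
        super.userEventTriggered(ctx, evt);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("TCP connection closed: " + ctx.channel().remoteAddress());
        super.channelInactive(ctx);
    }
}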
Related
I'd like to listen on a websocket using akka streams. That is, I'd like to treat it as nothing but a Source.
However, all official examples treat the websocket connection as a Flow.
My current approach is to use websocketClientFlow in combination with a Source.maybe. This eventually results in the upstream failing with a TcpIdleTimeoutException when no new Messages are being sent down the stream.
Therefore, my question is twofold:
Is there a way – which I obviously missed – to treat a websocket as just a Source?
If using the Flow is the only option, how does one handle the TcpIdleTimeoutException properly? The exception cannot be handled by providing a stream supervision strategy. Restarting the source by using a RestartSource doesn't help either, because the source is not the problem.
Update
So I tried two different approaches, setting the idle timeout to 1 second for convenience
application.conf
akka.http.client.idle-timeout = 1s
Using keepAlive (as suggested by Stefano)
Source.<Message>maybe()
.keepAlive(Duration.apply(1, "second"), () -> (Message) TextMessage.create("keepalive"))
.viaMat(Http.get(system).webSocketClientFlow(WebSocketRequest.create(websocketUri)), Keep.right())
{ ... }
When doing this, the Upstream still fails with a TcpIdleTimeoutException.
Using RestartFlow
However, I found out about this approach, using a RestartFlow:
final Flow<Message, Message, NotUsed> restartWebsocketFlow = RestartFlow.withBackoff(
        Duration.apply(3, TimeUnit.SECONDS),
        Duration.apply(30, TimeUnit.SECONDS),
        0.2,
        () -> createWebsocketFlow(system, websocketUri)
);

Source.<Message>maybe()
        .viaMat(restartWebsocketFlow, Keep.right()) // One can treat this part of the resulting graph as a `Source<Message, NotUsed>`
        { ... }

(...)

private Flow<Message, Message, CompletionStage<WebSocketUpgradeResponse>> createWebsocketFlow(final ActorSystem system, final String websocketUri) {
    return Http.get(system).webSocketClientFlow(WebSocketRequest.create(websocketUri));
}
This works in that I can treat the websocket as a Source (although artificially, as explained by Stefano) and keep the TCP connection alive by restarting the websocketClientFlow whenever an exception occurs.
This doesn't feel like the optimal solution though.
No. WebSocket is a bidirectional channel, and Akka-HTTP therefore models it as a Flow. If in your specific case you care only about one side of the channel, it's up to you to form a Flow with a "muted" side, by using either Flow.fromSinkAndSource(Sink.ignore, mySource) or Flow.fromSinkAndSource(mySink, Source.maybe), depending on the case.
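For completeness, a sketch of that "muted side" idea in the Java DSL, since the question uses Java (Akka HTTP 10.0.x-era API; the URI is a placeholder and singleWebSocketRequest is just one convenient way to run the flow):

import java.util.concurrent.CompletionStage;

import akka.Done;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.http.javadsl.Http;
import akka.http.javadsl.model.ws.Message;
import akka.http.javadsl.model.ws.WebSocketRequest;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;

public class ListenOnlyWebSocket {
    public static void main(String[] args) {
        final ActorSystem system = ActorSystem.create();
        final Materializer materializer = ActorMaterializer.create(system);
        final String websocketUri = "ws://example.org/events"; // placeholder

        // All incoming messages end up in this Sink - effectively the "Source side"
        // of the connection that the question is after.
        final Sink<Message, CompletionStage<Done>> incoming =
                Sink.foreach(message -> System.out.println("received: " + message));

        // The outgoing side is muted: Source.maybe() never emits and never completes,
        // so nothing is sent but the connection is kept open (the keepAlive/idle-timeout
        // considerations below still apply).
        final Flow<Message, Message, NotUsed> listenOnly =
                Flow.fromSinkAndSource(incoming, Source.<Message>maybe());

        Http.get(system).singleWebSocketRequest(
                WebSocketRequest.create(websocketUri), listenOnly, materializer);
    }
}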
as per the documentation:
Inactive WebSocket connections will be dropped according to the idle-timeout settings. In case you need to keep inactive connections alive, you can either tweak your idle-timeout or inject ‘keep-alive’ messages regularly.
There is an ad-hoc combinator to inject keep-alive messages, see the example below and this Akka cookbook recipe. NB: this should happen on the client side.
src.keepAlive(1.second, () => TextMessage.Strict("ping"))
I hope I understand your question correctly. Are you looking for asSourceOf?
path("measurements") {
entity(asSourceOf[Measurement]) { measurements =>
// measurement has type Source[Measurement, NotUsed]
...
}
}
I'm currently working on the implementation of Twilio Video in my Android app, where the normal behavior (and the one I need) should be:
If client A calls client B, and client B rejects the call, client A receives (onConversation) an error (a TwilioConversationsException object) containing code:107, message:Participant rejects the call..
Or, if client A calls client B and client B isn't connected to Twilio, client A receives an error immediately with code:106, message:Participant is unavailable at the moment.. At this point I retry several times until the user connects and responds (accepting or rejecting), or until 30 seconds have passed since the call was initiated.
I'm working based on this, but I've encountered an issue after client B loses internet connection or the app closes unexpectedly. After reconnecting to Twilio, when client B rejects a call, client A receives the error code:106, message:Participant is unavailable at the moment. instead of code:107, message:Participant rejects the call., deceiving client A into thinking that client B is disconnected from Twilio (when it actually is connected), which triggers a new call attempt. From what I've observed, this problem is associated with client B's identity: somehow it stays registered as unavailable and stops working properly. If I change client B's identity, the behavior goes back to normal, but that's not the idea. My intention is for the identity to be my user's ID: unique and fixed.
The same thing is happening on iOS, according to this thread:
Twilio iOS Video Call: Getting "User is unavailable" error message when user rejects the call
I would appreciate some help! Best regards!
A co-worker asked Twilio support and was told this:
Hey Deneb,

These workflows have some challenges with the current Conversations API in Programmable Video, and we're working on solving them in an upcoming addition to the product: A new Rooms API. Rooms will allow your users to connect to named Room (a multi-party conference call) by a name that you define, or by its unique ID (RoomSid). Using this API, you won't have to worry about if/when your endpoints are online--you can just have your users connect to the proper Room and they'll be able to share voice and video with one another.

The Rooms API will be rolling out in just a few weeks, and I think it'll be a much better fit for your use case. If you're in need of a solution more urgently, I'd recommend using a third-party notifications product, like Firebase or PubNub, to make sure that both participants are "awake" and connected, then initiating the invite flow.

Let me know if you have any questions on this. Thanks for trying Programmable Video,

Regards, Rob Brazier
I'm using Jetty 9.3.5 and I would like to know the proper way to handle unreliable connections when sending websocket messages. Specifically: I noticed cases where a websocket connection does not close normally, so even though the client side is down, it takes a long time until onClose() is triggered on the server (for example, a user closes the laptop lid and puts it in standby - it can take 1-2 hours until the close event is received on the server side).
Thus, because the client is still registered, the server keeps sending messages that begin to build up. This becomes an issue when sending a large number of messages.
I've tested sending byte messages with:
Session.getRemote().sendBytes(ByteBuffer, WriteCallback)
Session.getRemote().sendBytesByFuture(ByteBuffer);
To simulate the connection going down on one side (i.e. the user puts the laptop in standby), on Linux, I assigned an IP address to the eth0 interface, started sending the messages, and then brought the interface down:
ifconfig eth0 192.168.1.1
ifconfig eth0 up
--- start sending messages (simple incremented numbers) and connect using Chrome browser and print them ---
ifconfig eth0 down
This way, the messages were still being sent by Jetty, the Chrome client did not receive them, and onClose or onError was not triggered on the server side.
My questions regarding Jetty are:
Is there a way to clear queued messages that were not delivered?
I've tried, but with no luck:
Session.getRemote().flush();
Can a max number of queued messages be set?
I've tried:
WebSocketServletFactory.getPolicy().setMaxBinaryMessageBufferSize(1)
Can I detect if the client does not receive the message? (or if the connection is in abnormal state let's say)
I've tried:
session.getRemote().sendBytes(bb, new WriteCallback() {
    @Override
    public void writeSuccess() {
        // print success
    }

    @Override
    public void writeFailed(Throwable arg0) {
        // print fail
    }
});
But this prints success even though the messages are not received.
I also tried to use, but couldn't find a solution:
factory.getPolicy().setIdleTimeout(...);
factory.getPolicy().setAsyncWriteTimeout(3000);
sendPing()
Thanks in advance!
Unfortunately, the WebSocket protocol, being a message-passing protocol, isn't really designed for this level of nuance between messages.
The first message MUST complete before you can even think of sending the next message. So if you have a message in progress, there is no way to safely cancel that message.
At best, an API could exist to truncate that message with a CONTINUATION / empty payload / fin=true.
But even then the remote endpoint wouldn't know that you canceled the message, it would just see a partial message.
Detecting connectivity issues is best handled with either OS-level events (like Android's Connectivity intents), or via periodic websocket PING (which inserts itself at the front of the line for outgoing websocket frames).
However, even with PING, if your outgoing websocket frame is in-progress, even the PING cannot be sent until that websocket frame is done sending.
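For what it's worth, a rough sketch of the periodic PING idea with the Jetty 9.3 API (the interval is arbitrary, and it only pays off together with an idle timeout on the policy so that a missing pong eventually surfaces as an error):

import java.nio.ByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.websocket.api.Session;

public class PingKeepAlive {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Call once per connected session, e.g. from your onConnect handler
    public void start(final Session session) {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                if (session.isOpen()) {
                    // If the peer is gone, the missing pong plus the policy idle timeout
                    // (factory.getPolicy().setIdleTimeout(...)) eventually closes the session.
                    session.getRemote().sendPing(ByteBuffer.wrap(new byte[]{1}));
                }
            } catch (Exception e) {
                // A failed ping is itself a strong hint that the connection is dead
                session.close();
            }
        }, 5, 5, TimeUnit.SECONDS);
    }
}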
RemoteEndpoint.flush() will attempt to flush any pending messages (and frames), not clear out pending messages (or frames).
As for detecting whether the client got the message, you'll need to implement some sort of message ACK in your own layer to verify that; the protocol has no such concept. (Some libs/APIs built on top of websocket have implemented message ACK in that layer. The cometd message ack extension comes to mind as a real-world example.)
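A very rough sketch of what such an application-level ACK could look like on top of the Jetty annotated API (the id scheme and the "ACK:" prefix are invented for illustration, not part of any protocol):

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;

@WebSocket
public class AckingEndpoint {
    private final AtomicLong nextId = new AtomicLong();
    // Messages sent but not yet acknowledged by the peer
    private final Map<Long, String> unacked = new ConcurrentHashMap<>();

    public void sendWithAck(Session session, String payload) throws IOException {
        long id = nextId.incrementAndGet();
        unacked.put(id, payload);
        // The "id|payload" frame format is purely an application-level convention
        session.getRemote().sendString(id + "|" + payload);
    }

    @OnWebSocketMessage
    public void onMessage(Session session, String text) {
        if (text.startsWith("ACK:")) {
            // The peer confirms it has processed a message
            unacked.remove(Long.parseLong(text.substring(4)));
        }
        // The size (or age) of `unacked` tells you how far behind the peer is;
        // you can stop sending, or close the session, past some threshold.
    }
}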
What sort of situation are you attempting to solve for?
Perhaps using the RemoteEndpoint.sendPartialString(String, boolean) or RemoteEndpoint.sendPartialBytes(ByteBuffer, boolean) to send smaller frames of the whole message could be useful to you. However, the other side might not have an API that can read those partial frames (eg: Javascript in a browser).
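If you do go the partial-frame route, a minimal sketch could look like this (the chunk size is arbitrary; as said, the receiving side must be able to cope with fragmented messages):

import java.io.IOException;
import java.nio.ByteBuffer;

import org.eclipse.jetty.websocket.api.RemoteEndpoint;

public class PartialSender {
    // Sends one logical websocket message as several smaller frames,
    // so a PING (or a decision to stop) can be interleaved sooner.
    public static void sendInChunks(RemoteEndpoint remote, byte[] data, int chunkSize) throws IOException {
        for (int offset = 0; offset < data.length; offset += chunkSize) {
            int length = Math.min(chunkSize, data.length - offset);
            boolean last = offset + length >= data.length;
            remote.sendPartialBytes(ByteBuffer.wrap(data, offset, length), last);
        }
    }
}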
Context:
I am working on a piece of Java code where I am reading mails from an array (which works fine). I was wondering if someone can help me with the callback in order to show a fancy message like Your email was sent.
Questions:
How do I implement this?
Is there any way to get a Boolean-type return value from javax.mail to check whether the message was sent or not?
Maybe I should create a pool? If yes, how do I do that? Is there any signal to kill the pool?
Code:
// addressTo is the array.
Transport t = sesion.getTransport(this.beanMail.getProtocolo());
t.connect(this.beanMail.getUsuario(), this.beanMail.getPassword());
t.sendMessage(mensaje, addressTo);
t.close();
Quoting from the JavaMail API FAQ (in the context of tracking bounced messages):
While there is an Internet standard for reporting such errors (the multipart/report MIME type, see RFC1892), it is not widely implemented yet. RFC1211 discusses this problem in depth, including numerous examples.

In Internet email, the existence of a particular mailbox or user name can only be determined by the ultimate server that would deliver the message. The message may pass through several relay servers (that are not able to detect the error) before reaching the end server. Typically, when the end server detects such an error, it will return a message indicating the reason for the failure to the sender of the original message. There are many Internet standards covering such Delivery Status Notifications but a large number of servers don't support these new standards, instead using ad hoc techniques for returning such failure messages. This makes it very difficult to correlate a "bounced" message with the original message that caused the problem. (Note that this problem is completely independent of JavaMail.)
Source
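That covers end-to-end delivery, which you indeed cannot confirm. For the narrower question of a callback when the message is handed off, javax.mail does provide TransportListener/TransportAdapter events. A sketch reusing the variable names from the question (note that messageDelivered only means the SMTP server accepted the message for all recipients, not that anyone actually received it):

import javax.mail.Transport;
import javax.mail.event.TransportAdapter;
import javax.mail.event.TransportEvent;

// ...

Transport t = sesion.getTransport(this.beanMail.getProtocolo());
t.addTransportListener(new TransportAdapter() {
    @Override
    public void messageDelivered(TransportEvent e) {
        // All recipients were accepted by the server -> show "Your email was sent."
        System.out.println("Your email was sent.");
    }

    @Override
    public void messageNotDelivered(TransportEvent e) {
        System.out.println("The server rejected the message.");
    }

    @Override
    public void messagePartiallyDelivered(TransportEvent e) {
        System.out.println("Some recipients were rejected: see e.getInvalidAddresses().");
    }
});
t.connect(this.beanMail.getUsuario(), this.beanMail.getPassword());
t.sendMessage(mensaje, addressTo);
t.close();

Alternatively, since sendMessage throws a SendFailedException when the server refuses the message, simply returning from your existing code without an exception already gives you the Boolean-style answer you asked about.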
I've been looking for a good book or article about this topic but didn't find much. I didn't find a good example - a piece of code - for a specific scenario, like a client/server conversation.
In my application's protocol they have to send/receive messages. Like:
The server wants to send a file to a client
The client can accept or not,
if it accepts, the server will send the bytes over the same connection/socket.
The rest of my application all uses blocking methods; the server has a method.
Here's what I did:
Server method:
public synchronized void sendFile(File file)
{
    //send a message asking if I can send a file
    //block on read, waiting for the client's response
    //if the client answers yes, start sending the bytes
    //else return
}
Client methods:
public void receiveCommand()
{
    //read/listen for a command from the socket
    //if it is a send-file command, handleSendFileCommand();
    //after handleSendFileCommand() returns, listen for another command
}
public void handleSendFileCommand()
{
    //get the file the server wants to send
    //check if the client already has the file
    //if it already has it, send a command to the socket saying so, and return
    //else send a command saying the server can send the file
    //create a FileInputStream, receive the bytes and then return
}
I am 100% sure this is wrong because there is no way the server and clients could talk bidirectionally like this. I mean, when the server wants to send a command to a client, they have to follow a sequence of commands until that conversation is finished; only then can they send/receive another sequence of commands. That's why I made all methods that send requests synchronized.
It didn't take me long to realize I need to study design patterns for this kind of application...
I read about the Chain of Responsibility design pattern but I don't get how I can use it, or another good design pattern, in this situation.
I hope someone can help me with some example code.
Thanks in advance
The synchronized keyword in Java means something completely different - it marks a method or a code block as a critical section that only a single thread can execute at a time. You don't need it here.
Then, a TCP connection is bi-directional at the byte-stream level. The synchronization between the server and a client is driven by the messages exchanged. Think of a client (the same pretty much applies to the server) as a state machine: some types of messages are acceptable in the current state, some are not, and some switch the node into a different state.
Since you are looking into design patterns, the State pattern is very applicable here.
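A compressed sketch of how the State pattern could map onto the file-offer conversation from the question (all message names and the ClientConnection interface here are invented for illustration; a real protocol would also carry payloads and error handling):

// Minimal abstraction the states talk to (invented for this sketch)
interface ClientConnection {
    void send(String command);
    boolean alreadyHasFile(String fileName);
    void storeFileChunk(String chunk);
}

// Each state decides which commands are legal right now and what the next state is.
interface ClientState {
    ClientState onMessage(String command, ClientConnection conn);
}

class IdleState implements ClientState {
    @Override
    public ClientState onMessage(String command, ClientConnection conn) {
        if (command.startsWith("SEND_FILE_REQUEST ")) {
            String fileName = command.substring("SEND_FILE_REQUEST ".length());
            if (conn.alreadyHasFile(fileName)) {
                conn.send("FILE_REJECTED");      // stay idle
                return this;
            }
            conn.send("FILE_ACCEPTED");
            return new ReceivingFileState();     // only now is file data acceptable
        }
        return this; // other commands are handled here, or treated as protocol errors
    }
}

class ReceivingFileState implements ClientState {
    @Override
    public ClientState onMessage(String command, ClientConnection conn) {
        if (command.equals("FILE_TRANSFER_DONE")) {
            return new IdleState();
        }
        conn.storeFileChunk(command);
        return this;
    }
}

// The client's read loop then reduces to:
//   ClientState state = new IdleState();
//   while (connected) { state = state.onMessage(readNextCommand(), connection); }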