I have to implement a server handling the following protocol over an Ethernet (TCP/IP) connection:
Establishing a connection
The client connects to the configured server via TCP/IP.
After the connection has been established, the client initially sends a heartbeat message to the server:
{
  "MessageID": "Heartbeat"
}
Response:
{
  "ResponseCode": "Ok"
}
Communication process
To maintain the connection, the client sends a heartbeat message every 10 seconds while otherwise inactive.
Server and client must close the connection if they have not received a message for longer than 20 seconds.
A response to a request must be given within 5 seconds.
If no response is received, the connection must also be closed.
The protocol does not contain message numbering or any other form of identification.
The communication partner must make sure that responses are sent in the same sequence as the requests.
Message structure:
The messages are embedded in an STX-ETX frame.
STX (0x02) message ETX (0x03)
Escaping of STX and ETX within the message is not necessary, since the message is in JSON format and the JSON encoding already escapes those control characters as Unicode sequences:
JSON.stringify({"a": "\x02\x03\x10"}) → "{\"a\":\"\u0002\u0003\u0010\"}"
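For illustration, a minimal Java sketch of building such a frame; the class and method names here are ours, not part of the protocol:

import java.nio.charset.StandardCharsets;

public final class StxEtxFrames {

    private static final byte STX = 0x02;
    private static final byte ETX = 0x03;

    // Wraps a JSON payload in the STX-ETX frame described above.
    public static byte[] frame(String json) {
        byte[] payload = json.getBytes(StandardCharsets.UTF_8);
        byte[] framed = new byte[payload.length + 2];
        framed[0] = STX;
        System.arraycopy(payload, 0, framed, 1, payload.length);
        framed[framed.length - 1] = ETX;
        return framed;
    }
}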
Not only heartbeat messages are exchanged. A typical message looks like this:
{
  "MessageID": "CheckAccess",
  "Parameters": {
    "MediaType": "type",
    "MediaData": "data"
  }
}
And the appropriate response:
{
  "ResponseCode": "some-code",
  "DisplayMessage": "some-message",
  "SessionID": "some-id"
}
It should be a multi-client server, and the protocol doesn't carry any client identification.
However, we have to identify the client at least by the IP address the message was sent from.
I could not find a solution for how to add such a server to a Spring Boot application, enable it on startup, and handle its input and output logic.
Any suggestions are highly appreciated.
Solution
I configured the following for the TCP server:
@Slf4j
@Component
@RequiredArgsConstructor
public class TCPServer {

    private final InetSocketAddress hostAddress;
    private final ServerBootstrap serverBootstrap;

    private Channel serverChannel;

    @PostConstruct
    public void start() {
        try {
            ChannelFuture serverChannelFuture = serverBootstrap.bind(hostAddress).sync();
            log.info("Server is STARTED : port {}", hostAddress.getPort());
            // Keep a reference to the server channel, but do not sync() on its
            // closeFuture() here: that would block @PostConstruct and stall startup.
            serverChannel = serverChannelFuture.channel();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    @PreDestroy
    public void stop() {
        if (serverChannel != null) {
            serverChannel.close();
            // A server channel has no parent, so guard against NPE.
            if (serverChannel.parent() != null) {
                serverChannel.parent().close();
            }
        }
    }
}
@PostConstruct launches the server during startup of the application.
Configuration for it as well:
@Configuration
@RequiredArgsConstructor
@EnableConfigurationProperties(NettyProperties.class)
public class NettyConfiguration {

    private final LoggingHandler loggingHandler = new LoggingHandler(LogLevel.DEBUG);
    private final NettyProperties nettyProperties;

    @Bean(name = "serverBootstrap")
    public ServerBootstrap bootstrap(SimpleChannelInitializer initializer) {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup(), workerGroup())
                .channel(NioServerSocketChannel.class)
                .handler(loggingHandler)
                .childHandler(initializer);
        bootstrap.option(ChannelOption.SO_BACKLOG, nettyProperties.getBacklog());
        bootstrap.childOption(ChannelOption.SO_KEEPALIVE, nettyProperties.isKeepAlive());
        return bootstrap;
    }

    @Bean(destroyMethod = "shutdownGracefully")
    public NioEventLoopGroup bossGroup() {
        return new NioEventLoopGroup(nettyProperties.getBossCount());
    }

    @Bean(destroyMethod = "shutdownGracefully")
    public NioEventLoopGroup workerGroup() {
        return new NioEventLoopGroup(nettyProperties.getWorkerCount());
    }

    @Bean
    public InetSocketAddress tcpSocketAddress() {
        // @SneakyThrows was unnecessary here: this constructor throws no checked exception.
        return new InetSocketAddress(nettyProperties.getTcpPort());
    }
}
Initialization logic:
@Component
@RequiredArgsConstructor
public class SimpleChannelInitializer extends ChannelInitializer<SocketChannel> {

    private final StringEncoder stringEncoder = new StringEncoder();
    private final StringDecoder stringDecoder = new StringDecoder();
    private final QrReaderProcessingHandler readerServerHandler;
    private final NettyProperties nettyProperties;

    @Override
    protected void initChannel(SocketChannel socketChannel) {
        ChannelPipeline pipeline = socketChannel.pipeline();
        pipeline.addLast(new DelimiterBasedFrameDecoder(1024 * 1024, Delimiters.lineDelimiter()));
        pipeline.addLast(new ReadTimeoutHandler(nettyProperties.getClientTimeout()));
        pipeline.addLast(stringDecoder);
        pipeline.addLast(stringEncoder);
        pipeline.addLast(readerServerHandler);
    }
}
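Note: Delimiters.lineDelimiter() frames messages on newlines, which is convenient for a telnet test but does not match the STX-ETX framing described in the protocol. A hedged variant of initChannel that frames on ETX instead (the leading STX byte would still need to be stripped, for example in the handler):

@Override
protected void initChannel(SocketChannel socketChannel) {
    ChannelPipeline pipeline = socketChannel.pipeline();
    // Split frames on ETX (0x03) and strip the delimiter itself.
    pipeline.addLast(new DelimiterBasedFrameDecoder(
            1024 * 1024, true, Unpooled.wrappedBuffer(new byte[]{0x03})));
    pipeline.addLast(new ReadTimeoutHandler(nettyProperties.getClientTimeout()));
    pipeline.addLast(stringDecoder);
    pipeline.addLast(stringEncoder);
    pipeline.addLast(readerServerHandler);
}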
Properties configuration:
#Getter
#Setter
#ConfigurationProperties(prefix = "netty")
public class NettyProperties {
#NotNull
#Size(min = 1000, max = 65535)
private int tcpPort;
#Min(1)
#NotNull
private int bossCount;
#Min(2)
#NotNull
private int workerCount;
#NotNull
private boolean keepAlive;
#NotNull
private int backlog;
#NotNull
private int clientTimeout;
}
and a snippet from application.yml:
netty:
  tcp-port: 9090
  boss-count: 1
  worker-count: 14
  keep-alive: true
  backlog: 128
  client-timeout: 20
And the handler is quite trivial.
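For reference, a minimal sketch of what such a handler might look like; the heartbeat-only logic and the IP logging are assumptions, since the original handler is not shown:

@Slf4j
@Component
@ChannelHandler.Sharable
public class QrReaderProcessingHandler extends SimpleChannelInboundHandler<String> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        // The protocol carries no identification, so identify the client
        // at least by the remote IP address of the connection.
        InetSocketAddress remote = (InetSocketAddress) ctx.channel().remoteAddress();
        log.info("Received from {}: {}", remote.getAddress().getHostAddress(), msg);
        if (msg.contains("\"Heartbeat\"")) {
            ctx.writeAndFlush("{\"ResponseCode\":\"Ok\"}");
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        log.error("Closing connection after error", cause);
        ctx.close();
    }
}

Marking the handler @Sharable matters here, because the same Spring bean instance is added to every client's pipeline.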
I checked it locally by running from the console:
telnet localhost 9090
It works fine there; I hope it will also be fine for access from real clients.
Since the protocol is NOT based on top of HTTP (unlike WebSocket, which piggybacks on HTTP in the first place), your only option is to run a TCP server yourself and wire it up within the Spring context to gain the full advantage of Spring along with it.
Netty is best known for low-level TCP/IP communication, and it's easy to wrap a Netty server inside a Spring app.
In fact, Spring Boot provides a Netty HTTP server out of the box, but that is not what you need.
The "TCP communication server with Netty And SpringBoot" project is a simple and effective example of what you need.
Take a look at TCPServer from that project, which uses Netty's ServerBootstrap to start a custom TCP server.
Once you have the server, you can wire up Netty codecs, Jackson, or any other message converter as you see fit for marshalling/unmarshalling your application's domain data.
[Update - July 17, 2020]
Given the updated understanding of the question (both HTTP and TCP requests terminate on the same endpoint), the following is the updated solution proposal:
           ----> HTTP Server (be_http)
          |
HAProxy --
          |
           ----> TCP Server (be_tcp)
The following changes/additions are required for this solution to work:
Add a Netty-based listener in your existing Spring Boot app, OR create a separate Spring Boot app for the TCP server. Say this endpoint is listening for TCP traffic on port 9090.
Add HAProxy as the terminating endpoint for ingress traffic.
Configure HAProxy so that it sends all HTTP traffic to your existing Spring Boot HTTP endpoint (referred to as be_http) on port 8080.
Configure HAProxy so that all non-HTTP traffic is sent to the new TCP Spring Boot endpoint (referred to as be_tcp) on port 9090.
The following HAProxy configuration should suffice. These are only the excerpts relevant to this problem; please add the other HAProxy directives applicable to a normal HAProxy setup:
listen 443
    mode tcp
    bind :443 name tcpsvr
    # add other regular directives here
    tcp-request inspect-delay 1s
    tcp-request content accept if HTTP
    tcp-request content accept if !HTTP
    use-server be_http if HTTP
    use-server be_tcp if !HTTP
    # backend server definitions
    server be_http 127.0.0.1:8080
    server be_tcp 127.0.0.1:9090 send-proxy
The following HAProxy documentation links are particularly useful:
Fetching samples from buffer contents - Layer 6
Pre-defined ACLs
tcp-request inspect-delay
tcp-request content
Personally, I would play around with and validate tcp-request inspect-delay, and adjust it to actual needs, since it can add latency in the worst case, where a connection has been made but no content is available yet to evaluate whether the request is HTTP or not.
Addressing the need to "identify the client at least by the IP address from which it was sent": you have the option to use the Proxy Protocol when forwarding to the backend. I have updated the sample config above to include the proxy protocol for be_tcp (added send-proxy). I have also removed send-proxy from be_http, since it's not needed for Spring Boot; you'll likely rely on the regular X-Forwarded-For header for the be_http backend instead.
Within the be_tcp backend, you can use Netty's HAProxyMessage to get the actual source IP address via its sourceAddress() API. All in all, this is a workable solution. I have myself used HAProxy with the proxy protocol (at both ends, frontend and backend) and it's much more stable for the job.
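A hedged sketch of consuming the proxy protocol header in the Netty pipeline; the exact handler placement is an assumption, but the decoder must come before anything that reads payload bytes:

// Must be first in the pipeline when HAProxy is configured with send-proxy.
pipeline.addLast(new HAProxyMessageDecoder());
pipeline.addLast(new SimpleChannelInboundHandler<HAProxyMessage>() {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HAProxyMessage msg) {
        // The original client IP and port as reported by HAProxy.
        log.info("Client connected from {}:{}", msg.sourceAddress(), msg.sourcePort());
        ctx.pipeline().remove(this); // the header arrives only once per connection
    }
});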
Related
I am using Mina v2.1.3 for a network application; in particular, I have a UDP session problem: I want to publish messages to specific IPs and ports of some receivers via UDP. But I don't know (and I don't care) whether such receivers are really listening on their ports and really received the packets.
In case no receiver is listening on its port, Mina will receive an ICMP packet back, which means that the port is unreachable. Mina catches this packet and throws an UnreachablePortException. Then Mina closes the session object and stops sending. My aim is to ignore the "destination unreachable" packets and keep sending UDP packets ("fire-and-forget" principle).
Here is my approach (some kind of pseudo-code):
NioDatagramConnector connector = new NioDatagramConnector();
((DatagramSessionConfig) connector.getSessionConfig()).setCloseOnPortUnreachable(false);
connector.getStatistics().setThroughputCalculationInterval(1);
connector.getFilterChain().addLast("logger", new LoggingFilter());
DefaultIoFilterChainBuilder filterChainBuilder = connector.getFilterChain();
filterChainBuilder.addFilter(...);
connector.setHandler(this);

for (UDPClient client : udpClients) {
    connector.connect(new InetSocketAddress(client.getIP(), client.getPort()));
}

// Sending data
while (true) {
    connector.broadcast("Message");
    Thread.sleep(10);
}

public void sessionClosed(IoSession session) throws Exception {
    System.out.println("Called after an ICMP packet is received");
}

// Further methods based on IoHandler
Based on debugging, I can see that Mina removes this session and finally calls the sessionClosed() method (from IoHandler).
As discussed in the bug report https://issues.apache.org/jira/browse/DIRMINA-1137:
A UDP server socket needs to be open on the remote device at the remote port, otherwise you will get an UnreachablePortException. This behaviour does not come from MINA or Java but rather from POSIX, and it is universal.
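If pure fire-and-forget is the goal, then in addition to setCloseOnPortUnreachable(false), the handler can swallow this feedback instead of letting the session be closed. A hedged sketch, assuming the error surfaces as java.net.PortUnreachableException:

import java.net.PortUnreachableException;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;

public class FireAndForgetHandler extends IoHandlerAdapter {

    @Override
    public void exceptionCaught(IoSession session, Throwable cause) {
        // Ignore ICMP "port unreachable" feedback and keep the session alive.
        if (cause instanceof PortUnreachableException) {
            return;
        }
        session.closeNow();
    }
}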
Quick disclaimer: I am very new to gRPC and RPC in general, so please have patience.
I have two gRPC services running in the same Java application, Service A and Service B.
Service A creates multiple clients of Service B, which then synchronously make calls to the various instances of Service B.
The server
Service A has an rpc call defined in the .proto file as:
rpc notifyPeers(NotifyPeersRequest) returns (NotifyPeersResponse);
the server side implementation,
@Override
public void notifyPeers(NotifyPeersRequest request, StreamObserver<NotifyPeersResponse> responseObserver) {
    logger.debug("gRPC 'notifyPeers' request received");
    String host = request.getHost();
    boolean result = true;
    for (PeerClient c : clients.values()) {
        result &= c.addPeer(host); // <---- this call
    }
    // 'result' was undefined in the original snippet; here it aggregates the addPeer outcomes.
    NotifyPeersResponse response = NotifyPeersResponse.newBuilder()
            .setResult(result)
            .build();
    responseObserver.onNext(response);
    responseObserver.onCompleted();
}
The list of peers, clients, is built up during previous rpc calls:
ManagedChannel channel = ManagedChannelBuilder.forTarget(peer).usePlaintext().build();
ClientB client = new ClientB(channel);
clients.put(peer, client);
The client
rpc addPeer(AddPeerRequest) returns (AddPeerResponse);
the server side implementation,
@Override
public void addPeer(AddPeerRequest addPeerRequest, StreamObserver<AddPeerResponse> responseObserver) {
    logger.info("gRPC 'addPeer' request received");
    // 'host' was undefined in the original snippet; it comes from the request.
    String host = addPeerRequest.getHost();
    boolean result = peer.addPeer(host);
    AddPeerResponse response = AddPeerResponse.newBuilder()
            .setResponse(result)
            .build();
    responseObserver.onNext(response);
    responseObserver.onCompleted();
}
the client side implementation,
public boolean addPeer(String host) {
    AddPeerRequest request = AddPeerRequest.newBuilder().setHost(host).build();
    logger.info("Sending 'addPeer' request");
    AddPeerResponse response = blockingStub.addPeer(request);
    return response.getResponse();
}
When I run this application and an RPC is made to Service A, so that the client connection is created and addPeer is called, an ambiguous exception is thrown: io.grpc.StatusRuntimeException: UNKNOWN, which then causes the JVM to shut down. I have no idea how to fix this, or whether it is even possible to create a gRPC client connection inside a gRPC server.
For all of my gRPC server implementations I'm using blocking stubs.
<grpc.version>1.16.1</grpc.version>
<java.version>1.8</java.version>
I've pretty much hit a brick wall, so any information will be appreciated.
The UNKNOWN status indicates an exception on the server side that was not passed on to the client.
You probably need to increase the log level on the server to find the root cause.
In this post here, creating the channel as below enabled a more meaningful error message to be seen:
ManagedChannel channel = NettyChannelBuilder.forAddress(host, port)
        .protocolNegotiator(ProtocolNegotiators.serverPlaintext())
        .build();
If A and B are in the same application, have you considered making direct function calls, or at least using the InProcessChannelBuilder and InProcessServerBuilder?
As mentioned elsewhere, in the current setup you can try increasing the log level on the server side (in B) to see the source of the exception.
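For completeness, a minimal sketch of the in-process variant; the server name and the ServiceBImpl class are placeholders:

// Both server and client live in the same JVM; no sockets are involved.
String name = "service-b"; // any name unique within the JVM
Server server = InProcessServerBuilder.forName(name)
        .directExecutor()
        .addService(new ServiceBImpl()) // placeholder for your Service B implementation
        .build()
        .start(); // throws IOException
ManagedChannel channel = InProcessChannelBuilder.forName(name)
        .directExecutor()
        .build();
// Build the blocking stub on this channel exactly as before.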
I am new to CometD. I have written a basic CometD server in Java and a simple CometD client. I am getting successful responses from Postman for the /meta/handshake, /meta/connect and /meta/subscribe channels. But when I start my CometD Java client (which I reused from a github.com example), the handshake fails with the message below.
Failing {supportedConnectionTypes=[long-polling], channel=/meta/handshake, id=22, version=1.0}
I am using cometdVersion '4.0.0', jettyVersion '9.4.0.v20161208' and springbootVersion '1.5.14.RELEASE' in my code.
I have done a dynamic servlet registration for AnnotationCometDServlet and added /notifications as its mapping.
I have created a channel as below in the BayeuxServer configuration class:
bayeuxServerImpl.createChannelIfAbsent("/updates",
        (ServerChannel.Initializer) channel -> channel.addAuthorizer(GrantAuthorizer.GRANT_ALL));
In the client code, I have used /notifications as the default URL and /updates as the channel.
@Service("cometListener")
@Slf4j
public class BayeuxListener implements BayeuxServer.SessionListener {

    @Inject
    private BayeuxServer bayeuxServer;

    @Session
    private ServerSession serverSession;

    @Configure({"/updates**,/notifications**"})
    protected void configureChannel(ConfigurableServerChannel channel) {
        channel.addAuthorizer(GrantAuthorizer.GRANT_ALL);
        channel.addAuthorizer(GrantAuthorizer.GRANT_PUBLISH);
        channel.setPersistent(true);
    }

    @Listener("/meta/*")
    public void monitorMeta(ServerSession session, ServerMessage message) {
        log.info("monitoring meta " + message.toString() + " channel " + message.getChannel() + " session id " + session.getId());
    }

    @Listener("/meta/subscribe")
    public void monitorSubscribe(ServerSession session, ServerMessage message) {
        log.info("Monitored Subscribe from " + session + " for " + message.get(Message.SUBSCRIPTION_FIELD));
    }

    @Listener("/meta/unsubscribe")
    public void monitorUnsubscribe(ServerSession session, ServerMessage message) {
        log.info("Monitored Unsubscribe from " + session + " for " + message.get(Message.SUBSCRIPTION_FIELD));
    }

    @Listener("/updates")
    public void handlesrgUpdates(ServerSession client, ServerMessage message) {
        ServerSession cilentSession = bayeuxServer.getSession(client.getId());
        client.deliver(cilentSession, "/updates", "Received message back from client");
    }
}
You have a strange combination of CometD, Jetty and Spring Boot versions. I recommend that you stick with the default versions declared in the CometD POM, i.e. CometD 4.0.2, Jetty 9.4.14 and Spring Boot 2.0.6.
The handshake failure you mention is either incomplete or not a failed handshake reply at all. Handshake replies carry the successful field, and what you quote, {supportedConnectionTypes=[long-polling], channel=/meta/handshake, id=22, version=1.0}, looks like the handshake request. As such it's difficult to say what the problem is, because the failure reason is typically reported in the handshake reply.
If you have dynamically registered the CometD Servlet under the /notifications Servlet mapping, then the client should have a URL that ends with /notifications.
Note that the Servlet mapping /notifications and the CometD channel /notifications are two different things and are not related - they just happen to have the same name.
Your code is mostly fine, but contains a few errors.
@Configure({"/updates**,/notifications**"})
protected void configureChannel(ConfigurableServerChannel channel) {
    channel.addAuthorizer(GrantAuthorizer.GRANT_ALL);
    channel.addAuthorizer(GrantAuthorizer.GRANT_PUBLISH);
    channel.setPersistent(true);
}
The code above must instead be:
@Configure({"/updates/**,/notifications/**"})
public void configureChannel(ConfigurableServerChannel channel) {
    channel.addAuthorizer(GrantAuthorizer.GRANT_ALL);
    channel.setPersistent(true);
}
Note that the channel globbing must be after a /.
There is no need to add GRANT_PUBLISH after GRANT_ALL, which already includes GRANT_PUBLISH.
The configuration method should be public, not protected.
@Listener("/updates")
public void handlesrgUpdates(ServerSession client, ServerMessage message) {
    ServerSession cilentSession = bayeuxServer.getSession(client.getId());
    client.deliver(cilentSession, "/updates", "Received message back from client");
}
There is no need to retrieve clientSession from bayeuxServer, because the session has already been passed to the method as the client parameter.
The method is better implemented as:
@Listener("/updates")
public void handlesrgUpdates(ServerSession client, ServerMessage message) {
    client.deliver(serverSession, "/updates", "Received message back from client");
}
Note how the "sender" of the message is the serverSession reference that has been injected as a field of the class.
The code above is still possibly wrong.
Because /updates is a broadcast channel, if the client is subscribed to it, a message the client publishes to /updates will be received back from the server (broadcast channels echo to all subscribers, including the publisher). The code above will additionally send another message to the client on /updates via deliver(), so the client will receive two different messages on the /updates channel.
This may be what you want, but most of the time it is not.
Please have a read about the difference between broadcast channels and service channels.
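For comparison, a hedged sketch of the service-channel variant; the /service/updates channel name is an assumption. Messages published to /service/* are never broadcast, so the deliver() call is the only message the client receives:

@Listener("/service/updates")
public void handleUpdates(ServerSession client, ServerMessage message) {
    // Service channels are not broadcast, so only this deliver()
    // reaches the client and no duplicate message is produced.
    client.deliver(serverSession, "/updates", "Received message back from client");
}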
Update the question with the details of your handshake failure, and use consistent versioning for CometD, Jetty and Spring Boot.
I have a few problems with using websockets:
java.io.IOException: Broken Pipe
Client doesn't receive messages
TL;DR
Main things I want to know:
Please list all possible scenarios in which the client side closes the connection (apart from refreshing or closing the tab).
Can a broken pipe exception occur other than by the server sending a message over a connection the client has already closed? If yes, how?
What are the possible scenarios in which the server doesn't send a message even though it does send heartbeats? (When this happens, I need to restart the application for it to work again. That is a terrible solution, because it is already in production.)
I have a Spring MVC project that uses WebSockets: SockJS on the client side and org.springframework.web.socket.handler.TextWebSocketHandler on the server side.
A JSON is generated server side and sent to the client. Sometimes I get a java.io.IOException: Broken pipe. I googled/StackOverflowed a lot and found too many things I don't understand, but the reason is probably that the connection was closed client side while the server still sends a message (for example, a heartbeat). Does this sound right? What are other causes of this exception? And what are the reasons for the client side to close the connection (apart from refreshing or closing the tab)?
Also, sometimes the client side doesn't get any messages from the server, although the server should send them. I log before and after sending the message, and both log statements are printed. Does anyone have an idea why this can occur? There are no errors in the Chrome console. Refreshing the page doesn't help; I need to restart the Spring project...
If you need more info, please leave a comment.
Client side
function connect() {
    var socket = new SockJS('/ws/foo');
    socket.onopen = function () {
        socket.send(fooId); // ask server for Foo with id fooId
    };
    socket.onmessage = function (e) {
        var foo = JSON.parse(e.data);
        // Do something with foo.
    };
}
Server side
Service
@Service
public class FooService implements InitializingBean {

    public void updateFoo(...) {
        // Update some fields of Foo.
        ...

        // Send foo to clients.
        FooUpdatesHandler.sendFooToSubscribers(foo);
    }
}
WebSocketHandler
public class FooUpdatesHandler extends ConcurrentTextWebSocketHandler {

    // ConcurrentTextWebSocketHandler taken from https://github.com/RWTH-i5-IDSG/BikeMan (Apache License version 2.0)
    private static final Logger logger = LoggerFactory.getLogger(FooUpdatesHandler.class);

    private static final ConcurrentHashMap<String, ConcurrentHashMap<String, WebSocketSession>> fooSubscriptions =
            new ConcurrentHashMap<>();

    public static void sendFooToSubscribers(Foo foo) {
        Map<String, WebSocketSession> sessionMap = fooSubscriptions.get(foo.getId());
        if (sessionMap != null) {
            String fooJson = null;
            try {
                fooJson = new ObjectMapper().writeValueAsString(foo);
            } catch (JsonProcessingException ignored) {
                return;
            }
            for (WebSocketSession subscription : sessionMap.values()) {
                try {
                    logger.info("[fooId={} sessionId={}] Sending foo...", foo.getId(), subscription.getId());
                    subscription.sendMessage(new TextMessage(fooJson));
                    logger.info("[fooId={} sessionId={}] Foo sent.", foo.getId(), subscription.getId());
                } catch (IOException e) {
                    logger.error("Socket sendFooToSubscribers [fooId={}], exception: ", foo.getId(), e);
                }
            }
        }
    }
}
Just an educated guess: check your networking gear. Maybe a misconfigured firewall is terminating these connections; or, even worse, broken networking gear is causing the connections to terminate. If your server has multiple NICs (which is likely the case), it's also possible that there is some misconfiguration in how these NICs are used, or in how clients connect to the server via different NICs.
If this problem occurs intermittently, it is possible that you have a problem with a cache; please check whether Spring or SockJS has its own caches for socket interaction.
Does this happen on your devices (or on devices that you control)?
Additionally, I suggest using a network packet analyzer like Wireshark. With such a tool you'll see the current network activity live.
Some external causes that can destroy a connection without correctly closing it (and that you cannot manage without connection checks):
device suspend/poweroff
network failure
browser closing on some error
I think that is only a small part of the full list of possible reasons for a connection to be destroyed.
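Whatever the external cause, connections destroyed without a clean close can leave stale sessions in the question's fooSubscriptions map, and every later send to them fails with a broken pipe. A hedged sketch of evicting them on close, assuming the map layout from the question:

@Override
public void afterConnectionClosed(WebSocketSession session, CloseStatus status) {
    // Drop the closed session from every subscription so that
    // sendFooToSubscribers no longer writes to a dead connection.
    for (ConcurrentHashMap<String, WebSocketSession> sessions : fooSubscriptions.values()) {
        sessions.values().removeIf(s -> s.getId().equals(session.getId()));
    }
}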
I am using durable subscriptions with RabbitMQ's STOMP plugin (documentation here). As per the documentation, when a client reconnects (subscribes) with the same id, it should get all the queued-up messages. However, I am not getting anything back, even though the messages are queued up on the server side. Below is the code that I am using:
RabbitMQ Version : 3.6.0
Client code:
var sock;
var stomp;
var messageCount = 0;

var stompConnect = function () {
    sock = new SockJS(options.url);
    stomp = Stomp.over(sock);
    stomp.connect({}, function (frame) {
        debug('Connected: ', frame);
        console.log(frame);
        var id = stomp.subscribe('<url>' + options.source + "." + options.type + "." + options.id, function (d) {
            console.log(messageCount);
            messageCount = messageCount + 1;
        }, {'auto-delete': false, 'persistent': true, 'id': 'unique_id', 'ack': 'client'});
    }, function (err) {
        console.log(err);
        debug('error', err, err.stack);
        setTimeout(stompConnect, 10);
    });
};
Server Code:
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(final MessageBrokerRegistry config) {
        config.enableStompBrokerRelay("<endpoint>", "<endpoint>").setRelayHost(host)
                .setSystemLogin(username).setSystemPasscode(password).setClientLogin(username)
                .setClientPasscode(password);
    }

    @Override
    public void registerStompEndpoints(final StompEndpointRegistry registry) {
        registry.addEndpoint("<endpoint>").setAllowedOrigins("*").withSockJS();
    }
}
The steps I am executing:
Run the script on the client side; it sends a subscribe request.
A queue gets created on the server side (with a name like stomp-subscription-*), all the messages are pushed into the queue and the client is able to stream them.
Kill the script; this results in a disconnection. The server logs show that the client is disconnected, and messages start getting queued up.
Run the script again with the same id. It somehow manages to connect to the server; however, no message is returned from the server. The message count on that queue remains the same (and the RabbitMQ admin console doesn't show any consumer for that queue).
After 10 seconds, the connection gets dropped and the following gets printed in the client logs:
Whoops! Lost connection to <url>
The server also shows the same messages (i.e. client disconnected). As shown in the client code, it tries to re-establish the connection after 10 seconds, and the same cycle repeats.
I have tried the following things:
Removing the 'ack': 'client' header. This results in all the messages getting drained out of the queue; however, none reaches the client. I had added this header after going through this SO answer.
Adding d.ack(); in the subscribe callback, before incrementing messageCount. This results in an error on the server side, as it tries to ack the message after the session has been closed (due to the disconnection).
Also, in some cases, when I reconnect while the number of queued-up messages is below 100, I am able to get all the messages. However, once it crosses 100, nothing happens (not sure whether this has anything to do with the problem).
I don't know whether the problem lies at the server or the client end. Any inputs?
Finally, I was able to find (and fix) the issue. We are using nginx as a proxy, and it had proxy_buffering set to on (the default value); have a look at the documentation here.
This is what it says:
When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives.
Because of this, the messages were getting buffered (delayed), causing the disconnection. We tried bypassing nginx and it worked fine; we then disabled proxy buffering, and it seems to be working fine now, even with the nginx proxy.
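For reference, the relevant directives in the nginx location that proxies the SockJS/STOMP endpoint; the location path and upstream name here are placeholders:

location /stomp/ {
    proxy_pass http://backend;
    proxy_buffering off;                       # do not delay streamed frames
    proxy_http_version 1.1;                    # needed for the WebSocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}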