Protection against DoS on a websocket with IP address - Java

I was wondering if storing the IP address of a user during the handshake of a websocket would be a good way to protect my Java EE server against DDoS:
when the server receives an abnormal number of connections, it switches to a 'secure' mode where, if a connection request comes from an IP address that is not known to the server (stored in the database on a first-time connection), I can simply refuse that connection.
Could that help? (My main concern is to protect my websocket server as much as possible. I've looked into checking the Origin header, but with no success so far.)
Thanks for the help!

Protection against DDoS must happen at the network level (routing, balancing, switching, etc.). A server cannot do anything if a massive number of requests arrives at it: even if the server quickly dispatches them with errors, the channel is saturated and legitimate requests cannot reach the server, or reach it only with very bad throughput. Set aside the fact that a DDoS can be carried out even with ICMP packets, which sit not at the TCP/UDP layer but at the IP layer, so a WebSocket server cannot do much about it.
Protection against plain DoS is related to the application logic more than to the infrastructure. In essence, it is about attack vectors that allow a single attacker to hang your server. A practical example would be: if sending a malformed WebSocket request makes the thread that dispatches sockets in your server die or get stuck, the app stops accepting new connections. To protect your server against DoS, check for these kinds of things.
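That said, if you still want to gate connections during the handshake as the question proposes, the JSR-356 API does give you a hook: a ServerEndpointConfig.Configurator can override checkOrigin, and returning false makes the container refuse the handshake. Below is a minimal sketch under those assumptions; the MAX_CONNECTIONS threshold and the knownOrigins whitelist are illustrative placeholders, not part of any framework:

    import java.util.Collections;
    import java.util.Set;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.websocket.server.ServerEndpointConfig;

    // Sketch: refuse websocket handshakes when the server is overloaded or
    // when the Origin header is not whitelisted. MAX_CONNECTIONS and
    // knownOrigins are assumptions for illustration.
    public class GuardedConfigurator extends ServerEndpointConfig.Configurator {

        private static final int MAX_CONNECTIONS = 10000; // assumed threshold
        private static final Set<String> knownOrigins =
                Collections.singleton("https://example.com"); // assumed whitelist

        // The endpoint's @OnOpen/@OnClose methods would be responsible for
        // incrementing/decrementing this counter.
        static final AtomicInteger openConnections = new AtomicInteger();

        @Override
        public boolean checkOrigin(String originHeaderValue) {
            // Returning false makes the container reject the handshake.
            if (openConnections.get() >= MAX_CONNECTIONS) {
                return false; // 'secure' mode: shed new connections
            }
            return originHeaderValue != null
                    && knownOrigins.contains(originHeaderValue);
        }
    }

The configurator is attached to the endpoint with @ServerEndpoint(value = "/ws", configurator = GuardedConfigurator.class). Note that this only filters well-formed handshakes; as explained above, it does nothing against a flood that saturates the network.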

Related

Does the Java socket support options like "SO_SNDTIMEO" in C?

Currently I have a situation in which the client disconnects completely without sending an EOF (such as when the client is a phone that suddenly switches from WiFi to 4G), but my server will still send messages to this client. It takes at least 10 minutes until the server finds out the peer is unreachable.
So is there an option in Java to reduce the send timeout, just like SO_SNDTIMEO in C?
The Android docs are pretty straightforward about what they have: https://developer.android.com/reference/java/net/SocketOptions.html
SO_TIMEOUT is on the list, but it applies to read operations only. Completion of a send operation usually doesn't indicate that a packet has been received by the remote host; it only indicates that the packet has been accepted by the kernel's network queue and will be sent "soon".
I won't blame the Android team for not having (or at least not advertising) a socket option for a send timeout, because you don't get much information from the completion of a send. It's actually up to the application level to detect disconnects. Enhance your protocol: introduce application-level keepalives, try non-blocking socket mode to avoid long operations, and keep track of what was actually received by the remote host, since a successful send is not enough. This will result in a much more robust application.
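As an illustration of such an application-level keepalive, here is a minimal sketch using a plain java.net.Socket: the client sends a one-byte ping at a fixed interval and treats a read timeout as a dead peer. The host, port, protocol byte, and interval/timeout values are all assumptions for the example:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    // Sketch of an application-level keepalive: the peer must answer each
    // ping within READ_TIMEOUT_MS, otherwise we treat the connection as dead.
    public class KeepaliveClient {

        private static final int READ_TIMEOUT_MS = 5000;   // assumed deadline
        private static final int PING_INTERVAL_MS = 10000; // assumed interval
        private static final byte PING = 0x1;              // assumed protocol byte

        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("example.com", 9000)) { // assumed peer
                socket.setSoTimeout(READ_TIMEOUT_MS); // applies to reads only
                OutputStream out = socket.getOutputStream();
                InputStream in = socket.getInputStream();
                while (true) {
                    out.write(PING);
                    out.flush();
                    try {
                        if (in.read() == -1) { // orderly close by the peer
                            System.out.println("Peer closed the connection");
                            return;
                        }
                    } catch (SocketTimeoutException e) {
                        System.out.println("No pong within " + READ_TIMEOUT_MS
                                + " ms; treating peer as gone");
                        return;
                    }
                    try {
                        Thread.sleep(PING_INTERVAL_MS);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }
    }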

How could a server check the availability of a client?

I have a classic HTTP client/server application where the server serves data to the clients on demand, but also performs some kind of callbacks to the list of client addresses it holds. My two questions are:
1- How would the server know if a client is down (the client did not disconnect, but the connection got suddenly interrupted)?
2- Is there a way to know from the server side if the process at the client side listening on the callback port is still up (i.e. the client callback socket is still open)?
1- How would the server know if a client is down (the client did not disconnect, but the connection got suddenly interrupted)?
Option #1: direct communication
The client tells the server "I'm alive" at a periodic interval. You could make your client ping your server at a configurable interval, and if the server does not receive the signal for a certain time, it marks the client as down. The client could even tell the server more info (e.g. its status) in each heartbeat if necessary; this is also the approach used in many distributed systems (e.g. Hadoop/HBase).
Option #2: distributed coordination service
You could treat all clients connected to a server as a group and use a third-party distributed coordination service like ZooKeeper to facilitate the membership management. A client registers itself with ZooKeeper as a new member of the group right after booting up, and leaves the group if it goes down. ZooKeeper notifies the server whenever the membership changes, as the sketch below illustrates.
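As a rough sketch of how that registration could look with the ZooKeeper client API (the ensemble address, session timeout, and the /clients path are assumptions for illustration):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Sketch: each client registers an EPHEMERAL znode under /clients
    // (the parent path is assumed to exist). The znode disappears
    // automatically when the client's session dies, so the server learns
    // about crashes by watching the children of /clients.
    public class ClientRegistration {

        public static void register(String clientId) throws Exception {
            ZooKeeper zk = new ZooKeeper("zk-host:2181", 10000, event -> { }); // assumed ensemble
            zk.create("/clients/" + clientId,
                      new byte[0],
                      ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.EPHEMERAL);
            // The server side would call zk.getChildren("/clients", watcher)
            // and be notified whenever a client joins or disappears.
        }
    }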
2- Is there a way to know from the server side if the process at the client side listening on the callback port is still up (i.e. the client callback socket is still open)?
I think this can only be done in the way Option #1 above describes. Either the clients tell the server "My callback port is OK" at a fixed interval, or the server asks the clients "Is your callback port OK?" and waits for their response at a fixed interval.
You would have to establish some sort of protocol; simply put: the server keeps track of the "messages" that it tried to send to clients.
If such a "send" is acknowledged, fine; if not, the server might do a limited number of retries, then regard that client as "gone" and drop any further messages for it.
1- How would the server know if a client is down (the client did not disconnect, but the connection got suddenly interrupted)?
A write to the client will fail.
2- Is there a way to know from the server side if the process at the client side listening on the callback port is still up (i.e. the client callback socket is still open)?
A write to the client will fail.
The write won't necessarily fail immediately, due to TCP buffering, but the write will eventually provoke retries and retry timeouts that will cause a subsequent read or write to fail.
In Java the failure will manifest itself as an IOException: connection reset.
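To make this concrete, here is a minimal sketch of a server-side send routine that treats an IOException on write as "client is gone"; the method name and framing are assumptions:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;

    // Sketch: detecting a dead client by a failing write. Because of TCP
    // buffering, the first write after the client vanished may still succeed;
    // a later write (or read) fails with an IOException such as
    // "connection reset".
    public class CallbackSender {

        public static boolean sendOrDrop(Socket client, byte[] message) {
            try {
                OutputStream out = client.getOutputStream();
                out.write(message);
                out.flush();
                return true;
            } catch (IOException e) {
                // TCP retries and timeouts have given up: regard the client as gone.
                System.out.println("Dropping client " + client.getRemoteSocketAddress()
                        + ": " + e.getMessage());
                return false;
            }
        }
    }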

Elasticsearch unclosed client. Live threads after Tomcat shutdown. Memory usage impact?

I am using Elasticsearch 1.5.1 and Tomcat 7. The web application creates a TCP client instance as a singleton during server startup through the Spring Framework.
Just noticed that I failed to close the client during server shutdown.
Through analysis with various tools like VisualVM, JConsole, and MAT in Eclipse, it is evident that the threads created by the Elasticsearch client are still live even after server (Tomcat) shutdown.
Note: after introducing client.close() via a context listener's destroy method, the threads are shut down gracefully.
But my questions here are:
How do I check the memory occupied by these live threads?
What is the memory-leak impact of these threads?
We have had a few "OutOfMemoryError: PermGen space" errors in production. This might be one reason, but I would still like to measure it and provide stats for it.
Any suggestions/help please.
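For reference, the fix mentioned in the question (calling client.close() when the web app is destroyed) might look like the sketch below; the EsClientHolder used to obtain the singleton is hypothetical, and in a Spring setup you would typically configure a destroy method on the client bean instead:

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.annotation.WebListener;
    import org.elasticsearch.client.Client;

    // Sketch: close the Elasticsearch client when Tomcat undeploys the app,
    // so the client's internal threads do not outlive the webapp.
    @WebListener
    public class EsClientShutdownListener implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            // The client is created elsewhere (e.g. by Spring) at startup.
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            Client client = EsClientHolder.get(); // hypothetical holder for the singleton
            if (client != null) {
                client.close(); // stops the transport threads gracefully
            }
        }
    }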
Typically clients run in a different process than the services they communicate with. For example, I can open a web page in a web browser, then shut down the web server, and the client will remain open.
This has to do with the underlying design choices of TCP/IP. Glossing over the details, in most cases a client only detects that its server is gone during the next request to the server. Generally speaking, it does not continually poll the server to see if it is alive, nor does the server generally send a "please disconnect" message on shutting down.
The reason that clients don't generally poll servers is because it allows the server to handle more clients. With a polling approach, the server is limited by the number of clients running, but without a polling approach, it is limited by the number of clients actively communicating. This allows it to support more clients because many of the running clients aren't actively communicating.
The reason that servers typically don't send an "I'm shutting down" message is that the server often goes down uncontrollably (power outage, operating system crash, fire, short circuit, etc.). This means that a protocol which requires such a message would leave the clients in a corrupt state whenever the server goes down in an uncontrolled manner.
So losing a connection is really a function of a failed request to the server. The client will still typically be running until it makes the next attempt to do something.
Likewise, opening a connection to a server often tells you little by itself. To validate that you really have a working connection to a server, you must ask it for some data and get a reply. Most protocols do this automatically to simplify the logic; but if you ever write your own service and you don't ask for data from the server, then even if the API says you have a good "connection", you might not. The API can report a good "connection" when everything is configured successfully on your machine; to really know that it works end-to-end on the other machine, you need to ask for data (and get it).
Finally servers sometimes lose their clients, but because they don't waste bandwidth chattering with clients just to see if they are there, often the servers will put a "timeout" on the client connection. Basically if the server doesn't hear from the client in 10 minutes (or the configured value) then it closes the cached connection information for the client (recreating the connection information as necessary if the client comes back).
From your description it is not clear which of the scenarios you might be seeing, but hopefully this general knowledge will help you understand why after closing one side of a connection, the other side of a connection might still think it is open for a while.
There are ways to configure the network connection to report closures more immediately, but I would avoid using them, unless you are willing to lose a lot of your network bandwidth to keep-alive messages and don't want your servers to respond as quickly as they could.

Can websocket messages get lost or not?

I'm currently developing a Java WebSocket Client Application and I have to make sure that every message from the server is received by the client. Is it possible that I lose some messages (once they are sent from the server) due to a connection interruption? WebSocket is based on TCP so this shouldn't happen right?
It can happen. TCP guarantees the order of packets, but it does not mean that all packets sent from a server reach a client when unrecoverable trouble happens in the underlying network. Imagine someone pulls out your LAN cable or switches off your WiFi access point at the worst timing while your application is communicating with your server. TCP does not overcome such trouble.
To ensure that every WebSocket message sent from your server reaches your client, you have to implement some kind of SYN/ACK in the application layer.
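A minimal sketch of such an application-layer acknowledgement, using the javax.websocket client API; the "id:payload" / "ack:id" framing is an invented convention for illustration:

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.websocket.Session;

    // Sketch: the sender tags every message with an id and keeps it in a
    // "pending" map until the peer echoes "ack:<id>". Unacknowledged messages
    // can be re-sent after a reconnect, so nothing is silently lost.
    public class ReliableSender {

        private final AtomicLong nextId = new AtomicLong();
        private final Map<Long, String> pending = new ConcurrentHashMap<>();

        public void send(Session session, String payload) throws IOException {
            long id = nextId.incrementAndGet();
            pending.put(id, payload);
            session.getBasicRemote().sendText(id + ":" + payload); // invented framing
        }

        // Call this from the @OnMessage handler when "ack:<id>" arrives.
        public void onAck(long id) {
            pending.remove(id);
        }

        // After reconnecting, re-send everything that was never acknowledged.
        public void resend(Session session) throws IOException {
            for (Map.Entry<Long, String> e : pending.entrySet()) {
                session.getBasicRemote().sendText(e.getKey() + ":" + e.getValue());
            }
        }
    }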
TCP is a guaranteed protocol: packets will be received, in the correct order, by the higher application levels at the far end (as opposed to UDP, which is a send-and-hope protocol).
Generally speaking, TCP should be used for connections where all the data must arrive correctly at the far end. UDP is used where a missing packet can be dropped without significant issue (e.g. streaming services, NTP updates).
In my game, to counter missed WebSocket messages, I added an int/long ID to each message. When the client detects that something is wrong in the sequence of IDs it receives, it requests the missing data from the server so it can recover properly.
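A sketch of that client-side check; the resync request format is an assumption:

    // Sketch: the client tracks the last sequence id it saw; a gap means
    // messages were lost, so it asks the server to resync from that point.
    public class SequenceTracker {

        private long lastSeen = 0;

        /** Returns null if the sequence is intact, or a resync request to send. */
        public synchronized String onMessage(long id) {
            if (id == lastSeen + 1) {
                lastSeen = id;
                return null;                  // in order, nothing to do
            }
            return "resync-from:" + lastSeen; // invented request format
        }
    }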
TCP has built-in acknowledgement and retransmission mechanisms (alongside its flow control), which means it provides reliable, ordered, and error-checked delivery.
In other words, TCP is a protocol that constantly checks whether the data arrived.
The protocol has several mechanisms to ensure that.
You can see the difference between TCP and UDP (which has no such mechanisms) in the link below.
Difference between tcp and udp

Redirect a TCP connection

I have something like a proxy server (written in Java) running between my clients and the actual video server (written in C++). Everything the clients send goes through this proxy and is then redirected to the server.
It is working fine, but I have some issues and think it would be better if I could make this proxy server only listen for the clients' requests and then somehow tell the server that a request has been made from the client side, so that the server creates a connection with the client directly.
Basically, at the TCP level, what I want to happen is something like this:
1- Whenever a client sends a SYN to my proxy, the proxy just sends a message to the real server telling it the IP and port of the client.
2- The server would then send the corresponding SYN-ACK to the specified client, creating a direct connection between client and server.
The proxy would then only be relaying the initial requests (but not the later data transfer) to the actual server. I just don't know if that is possible.
Thank you very much
Nelson R. Perez
That's very much the way some games (and Fog Creek CoPilot) do it, but it requires support on both the server and the client. Basically the proxy has to say to the client and the server "try communicating with each other directly on this IP and this port", and if they can't get through (because one or both are behind a NAT or firewall), they fall back to going through the proxy.
I found this good description of "peer to peer tcp hole punching" at http://www.brynosaurus.com/pub/net/p2pnat/
Do the proxy and the server live on the same machine? If so, you can pass the connection to the server using socket transfer or file-descriptor passing. You can find examples in C here:
http://www.wsinnovations.com/softeng/articles/uds.html
If they are on different machines, there is no way to pass the connection to the server. However, it is possible to proxy the IP packets to the server using a VIP (Virtual IP). This happens below the socket layer, so you have to use a link-layer interface, like DLPI.
You don't have that kind of control over the TCP handshake in userland. This is what firewalls/routers do, but it all happens in the kernel. Take a look at the firewall software for your platform; you might not even have to code anything.
