Java ServerSocket: ban an IP address

How can specific IP address be prevented from connecting to a Java server through ServerSocket?
The official approach suggested on multiple forums is:
ServerSocket serverSocket = new ServerSocket(PORT);
while (true) {
    Socket socket = serverSocket.accept();
    InetAddress address = socket.getInetAddress();
    if (isBanned(address)) {
        socket.close();
        continue;
    }
    // execute business logic here...
}
But with this approach a Socket object is created for every new client, even for banned ones.
A Socket object is fairly heavyweight: at least 100 bytes per new object.
There is nothing stopping banned clients from connecting thousands of times per second and causing a denial-of-service attack.
How can banned clients be handled safely with ServerSocket without creating new objects in memory?
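For reference, the isBanned check used above is not defined in the question; a minimal sketch of it, assuming a simple in-memory ban list (the BanList class and its contents are hypothetical):

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class BanList {
    // Hypothetical in-memory ban list; a real server might load this from configuration.
    private final Set<InetAddress> banned = ConcurrentHashMap.newKeySet();

    public void ban(String host) throws UnknownHostException {
        banned.add(InetAddress.getByName(host));
    }

    public boolean isBanned(InetAddress address) {
        return banned.contains(address);
    }
}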

Related

Client seems to be connected without the server accepting it

I was trying to illustrate a problem with a socket server and client, where the server was supposed to handle only one client at a time, and so multiple clients would have to wait for each other.
This is my server class. I can accept a connection from one client, read some string, upper case it and send it back. Then the next client would be served. So if I start multiple clients, the first would connect, but the rest would have to wait for the server to call accept() again. That was the assumption.
System.out.println("Starting server...");
try {
ServerSocket welcomeSocket = new ServerSocket(2910);
while(true) {
Socket socket = welcomeSocket.accept();
System.out.println("client connected");
ObjectInputStream inFromClient = new ObjectInputStream(socket.getInputStream());
String o = (String)inFromClient.readObject();
System.out.println("Received: " + o);
String answer = o.toUpperCase();
ObjectOutputStream outToClient = new ObjectOutputStream(socket.getOutputStream());
System.out.println("Sending back: " + answer);
outToClient.writeObject(answer);
}
} catch (IOException | ClassNotFoundException e) {
// socket stuff went wrong
}
Here's my client code:
try {
    Socket socket = new Socket("localhost", 2910);
    System.out.println("Connected to server");
    ObjectOutputStream outToServer = new ObjectOutputStream(socket.getOutputStream());
    Scanner in = new Scanner(System.in);
    System.out.println("What to send?");
    String toSend = in.nextLine();
    System.out.println("Sending " + toSend);
    outToServer.writeObject(toSend);
    ObjectInputStream inFromServer = new ObjectInputStream(socket.getInputStream());
    String o = (String) inFromServer.readObject();
    System.out.println("Received: " + o);
} catch (IOException | ClassNotFoundException e) {}
I create the connection to the server, and then read from the console. The first client should connect successfully, and then print out "connected to server". The other clients should get stuck on creating the Socket, until the server calls accept(). Or so I thought.
But all my clients print out "Connected to server", and I can type into the console for all clients, and this is sent to the server. The server will then still respond to one client at a time.
But why does my client code move on from the initial connection before my server accepts the connection? This seemed to be the case in Java 8, but now I'm using Java 11.
Set the backlog to 1 if you want to prevent the operating system from accepting multiple connections before you call accept().
Otherwise, the OS will typically accept up to about 5 connections before you call accept(). The clients believe they are connected because they are, indeed, connected at the TCP/IP level.
See the documentation:
public ServerSocket(int port, int backlog) throws IOException

Creates a server socket and binds it to the specified local port number, with the specified backlog. A port number of 0 means that the port number is automatically allocated, typically from an ephemeral port range. This port number can then be retrieved by calling getLocalPort.
The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused.
If the application has specified a server socket factory, that factory's createSocketImpl method is called to create the actual socket implementation. Otherwise a "plain" socket is created.
If there is a security manager, its checkListen method is called with the port argument as its argument to ensure the operation is allowed. This could result in a SecurityException.
The backlog argument is the requested maximum number of pending connections on the socket. Its exact semantics are implementation specific. In particular, an implementation may impose a maximum length or may choose to ignore the parameter altogether. The value provided should be greater than 0. If it is less than or equal to 0, then an implementation specific default will be used.
Parameters:
port - the port number, or 0 to use a port number that is automatically allocated.
backlog - requested maximum length of the queue of incoming connections.
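A minimal sketch of the suggested fix, reusing port 2910 from the question (keep in mind the docs above say the exact backlog semantics are implementation specific):

// A backlog of 1 asks the OS to queue at most one pending connection;
// further connection attempts are refused until accept() is called.
ServerSocket welcomeSocket = new ServerSocket(2910, 1);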

ObjectOutputStream writeObject hangs when two clients send objects to server

I am writing a client/server application in which multiple clients connect to servers and continuously send serialized objects to them at a high rate over a TCP connection.
I am using ObjectOutputStream.writeObject on the client and ObjectInputStream.readObject on the server.
The server application accepts client connections on a single port using serverSocket.accept() and passes each Socket to a new thread for reading objects.
When a single client connects and sends about 25K objects/s, all works fine. Once I start a second client, after a short period of time one or both clients hang in ObjectOutputStream.writeObject for one of the servers, and the corresponding server hangs in ObjectInputStream.readObject.
No exceptions are thrown on either side.
If the rate is very low, let's say 10-20/s in total, it will not hang, but at 100-1000/s it will.
Using netstat -an on the client machine I can see that the Send-Q of the corresponding connection is about 30K. On the server side the Recv-Q is also ~30K.
When running the client and server locally on Windows I observe something similar: the client hangs, but the server continues to process incoming objects, and once it catches up, the client unblocks and continues to send objects.
Locally on Windows the server is slower than the client, but on Linux the number of server instances running on different machines is more than enough for the rate that the clients produce.
Any clue what is going on?
client code snippet:
Socket socket = new Socket(address, port);
ObjectOutputStream outputStream = new ObjectOutputStream(socket.getOutputStream());
while (true)
{
    IMessage msg = createMsg();
    outputStream.writeObject(msg);
    outputStream.flush();
    outputStream.reset();
}
server code accepting connections:
while (active)
{
    Socket socket = serverSocket.accept();
    SocketThread socketThread = new SocketThread(socket);
    socketThread.setDaemon(true);
    socketThread.start();
}
server code reading objects:
public class SocketThread extends Thread
{
    Socket socket;

    public SocketThread(Socket socket)
    {
        this.socket = socket;
    }

    @Override
    public void run() {
        try {
            ObjectInputStream inStream = new ObjectInputStream(socket.getInputStream());
            while (true)
            {
                IMessage msg = (IMessage) inStream.readObject();
                if (msg == null) {
                    continue;
                }
                List<IMessageHandler> handlers = handlersMap.get(msg.getClass());
                for (IMessageHandler handler : handlers) {
                    handler.onMessage(msg);
                }
            }
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}
You have just described the operation of TCP when the sender outruns the receiver. The receiver tells the sender to stop sending, so the sender stops sending. As you are using blocking I/O, the client blocks in send() internally.
There is no problem here to solve.
The problem was that the handlers on the server side were using some non-thread-safe resources (like a Jedis connection), so everything got stuck on the server side.
Making them thread-safe solved the issue.
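For illustration, one common way to avoid sharing a non-thread-safe resource between the SocketThreads is to give each thread its own instance via ThreadLocal. This is only a sketch: the Resource type and its process method are hypothetical stand-ins for something like a Jedis connection, and IMessage/IMessageHandler are the interfaces from the question.

public class PerThreadResourceHandler implements IMessageHandler {
    // Each server thread lazily gets its own non-thread-safe resource instance,
    // so concurrent handlers never share one connection.
    private final ThreadLocal<Resource> resource = ThreadLocal.withInitial(Resource::new);

    @Override
    public void onMessage(IMessage msg) {
        resource.get().process(msg); // hypothetical processing call
    }
}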

Using same address and port for accepting and connecting in Java

(This might have been asked a thousand times, but I still don't get it.)
Suppose I have the following snippet:
InetAddress localAddress = InetAddress.getByName("192.168.1.10");
int localPort = 65000;
InetAddress targetAddress = InetAddress.getByName("192.168.1.20");
int targetPort = 65000;

// Create a new server socket
ServerSocket ss = new ServerSocket(localPort, 50, localAddress);
// Wait for an incoming connection...
Socket acceptedSocket = ss.accept();
// Do something with the accepted socket. Possibly in a new thread.

// Set up a new connection...
Socket socket = new Socket(targetAddress, targetPort, localAddress, localPort);
// Write something to the socket.
Now, can I use the same address and port for both accepting an incoming connection and connecting to an address? If so, how? If not, why not? According to this post, ports can be shared, so it shouldn't be a problem.
How does it work?
You can only establish a connection by having the connecting socket use the same address and port. (Ignoring the use of multi-homed servers)
A single connection is a unique combination of both the source address+port and destination address+port, so you can have the same destination if you have a different source.
In other words, can you write a server program that contains a client connecting to itself? The answer is yes, certainly. Many integration tests do exactly this, running an in-process server and connecting to it.
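A minimal sketch of that idea, not from the original answer: one process that opens a ServerSocket on an ephemeral port (port 0) and then connects to it from the same process.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SelfConnect {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Port 0 asks the OS for any free port.
        ServerSocket server = new ServerSocket(0);
        int port = server.getLocalPort();

        // Accept in a separate thread so the main thread is free to connect.
        Thread acceptor = new Thread(() -> {
            try (Socket accepted = server.accept()) {
                System.out.println("Server accepted " + accepted.getRemoteSocketAddress());
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        acceptor.start();

        // Client side of the very same process connecting to its own server.
        try (Socket client = new Socket("localhost", port)) {
            System.out.println("Client connected from " + client.getLocalSocketAddress());
        }
        acceptor.join();
        server.close();
    }
}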

Multithreaded connections handling

I want to write a server which listens on the given port for connections and puts sockets into a LinkedBlockingQueue, from which a consumer thread will read messages. I accept incoming connections in this way:
ServerSocket serverSocket = new ServerSocket(port);
while (true)
{
    Socket socket = serverSocket.accept();
    queue.put(socket);
}
When I try to connect in parallel from two separate hosts, responses intended for the first one are sent to the second one after the second connection is established. When I change my code to the version listed below, the second connection is simply refused:
while (true)
{
    ServerSocket serverSocket = new ServerSocket(port);
    Socket socket = serverSocket.accept();
    queue.put(socket);
}
My questions are:
What's the difference between these two situations? Why, in the first situation, are messages sent to the second host?
How should I refactor my code in order to create separate connections between my server and both hosts and handle them in parallel?
What's the difference between these two situations?
In the first case, you are using the same port for multiple connections. In the second case, you are discarding the server socket each time, so any waiting connections will be refused.
Why, in the first situation, are messages sent to the second host?
Due to a bug in code not shown here.
How should I refactor my code in order to create separate connections between my server and both hosts and handle them in parallel?
Create a thread for each connection.
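A minimal sketch of that refactoring, assuming the per-connection handling lives in a hypothetical handleClient method instead of the shared queue:

ServerSocket serverSocket = new ServerSocket(port);
while (true)
{
    Socket socket = serverSocket.accept();
    // One thread per accepted connection, so both hosts are served in parallel.
    // handleClient(Socket) is a placeholder for the per-connection logic.
    new Thread(() -> handleClient(socket)).start();
}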

ServerSocket constructor in layman's terms

What would this statement do:
ServerSocket ss=new ServerSocket(4646);
Please explain in layman's terms.
The statement effectively tells the JVM to listen on the specified port (4646) for incoming connections. By itself it doesn't do much, since you still have to accept incoming connections on that port and use them to build normal Socket objects, which are then used for incoming/outgoing data.
You could say that the ServerSocket is the object through which the real TCP sockets between clients and the server are created. When you create it, the JVM hooks into the operating system, telling it to dispatch connections that arrive on that port to your program.
What you typically do is something like:
public class AcceptThread extends Thread {
    public void run() {
        try {
            ServerSocket ss = new ServerSocket(4646);
            while (true) {
                Socket newConnection = ss.accept();
                ClientThread thread = new ClientThread(newConnection);
                thread.start();
            }
        } catch (IOException e) {
            // handle or log the error
        }
    }
}
This way you accept incoming connections and spawn a new thread to handle each of them.
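The ClientThread class referenced above is not shown in the answer; a minimal sketch of what such a per-connection thread might look like (the echo behaviour here is purely illustrative, not part of the original answer):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ClientThread extends Thread {
    private final Socket socket;

    public ClientThread(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        // Illustrative only: echo every line the client sends straight back.
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}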
Straight from the ServerSocket Java docs:
Creates a server socket, bound to the specified port.
What's a server socket?
This class implements server sockets. A server socket waits for requests to come in over the network. It performs some operation based on that request, and then possibly returns a result to the requester.
public ServerSocket(int port) throws IOException

From the documentation:
Creates a server socket, bound to the specified port. A port of 0 creates a socket on any free port.
That would bind your ServerSocket to port 4646 on the local machine.
You could then accept connections on this server socket with
// pick up server side of the socket
Socket s = ss.accept();
Now, your client can connect to your server, establishing a socket connection, like this
// pick up client side of the socket, this is in a different program (probably)
Socket connectionToServer = new Socket("myserver",4646);
