I was trying to illustrate a problem with a socket server and client, where the server was supposed to handle only one client at a time, and so multiple clients would have to wait for each other.
This is my server class. It accepts a connection from one client, reads a string, uppercases it, and sends it back; then the next client is served. So if I start multiple clients, the first should connect, but the rest should have to wait for the server to call accept() again. That was the assumption.
System.out.println("Starting server...");
try {
ServerSocket welcomeSocket = new ServerSocket(2910);
while(true) {
Socket socket = welcomeSocket.accept();
System.out.println("client connected");
ObjectInputStream inFromClient = new ObjectInputStream(socket.getInputStream());
String o = (String)inFromClient.readObject();
System.out.println("Received: " + o);
String answer = o.toUpperCase();
ObjectOutputStream outToClient = new ObjectOutputStream(socket.getOutputStream());
System.out.println("Sending back: " + answer);
outToClient.writeObject(answer);
}
} catch (IOException | ClassNotFoundException e) {
// socket stuff went wrong
}
Here's my client code:
try {
    Socket socket = new Socket("localhost", 2910);
    System.out.println("Connected to server");
    ObjectOutputStream outToServer = new ObjectOutputStream(socket.getOutputStream());
    Scanner in = new Scanner(System.in);
    System.out.println("What to send?");
    String toSend = in.nextLine();
    System.out.println("Sending " + toSend);
    outToServer.writeObject(toSend);
    ObjectInputStream inFromServer = new ObjectInputStream(socket.getInputStream());
    String o = (String) inFromServer.readObject();
    System.out.println("Received: " + o);
} catch (IOException | ClassNotFoundException e) {
}
I create the connection to the server, and then read from the console. The first client should connect successfully, and then print out "connected to server". The other clients should get stuck on creating the Socket, until the server calls accept(). Or so I thought.
But all my clients print out "Connected to server", and I can type into the console for all clients, and this is sent to the server. The server will then still respond to one client at a time.
But why does my client code move on from the initial connection before my server accepts it? It seemed to behave the way I expected under Java 8, but now I'm using Java 11.
Set the backlog to 1 if you want to prevent the operating system from accepting multiple connections before you call accept().
Otherwise, the OS will accept connections on your behalf up to the backlog limit (50 by default for java.net.ServerSocket). The clients believe they are connected because they are, indeed, connected at the TCP/IP level.
See the documentation:
public ServerSocket(int port, int backlog) throws IOException
Creates a server socket and binds it to the specified local port number, with the specified backlog. A port number of 0 means that the port number is automatically allocated, typically from an ephemeral port range. This port number can then be retrieved by calling getLocalPort.
The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused.
If the application has specified a server socket factory, that factory's createSocketImpl method is called to create the actual socket implementation. Otherwise a "plain" socket is created.
If there is a security manager, its checkListen method is called with the port argument as its argument to ensure the operation is allowed. This could result in a SecurityException. The backlog argument is the requested maximum number of pending connections on the socket. Its exact semantics are implementation specific. In particular, an implementation may impose a maximum length or may choose to ignore the parameter altogether. The value provided should be greater than 0. If it is less than or equal to 0, then an implementation specific default will be used.
Parameters:
port - the port number, or 0 to use a port number that is automatically allocated.
backlog - requested maximum length of the queue of incoming connections.
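For example, a minimal sketch of the constructor call with a reduced backlog, reusing the port from the question (keeping in mind that the queue semantics are implementation-specific, per the javadoc above):

    // Request a backlog of 1 so the OS queues at most one pending
    // connection between calls to accept(). The exact behavior is
    // implementation-specific; some platforms round the value up.
    ServerSocket welcomeSocket = new ServerSocket(2910, 1);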
Related
How can specific IP address be prevented from connecting to a Java server through ServerSocket?
The official approach suggested on multiple forums is:
ServerSocket serverSocket = new ServerSocket(PORT);
while (true) {
    Socket socket = serverSocket.accept();
    InetAddress address = socket.getInetAddress();
    if (isBanned(address)) {
        socket.close();
        continue;
    }
    // execute business logic here...
}
But in this approach a Socket object is created for every new client (even for banned clients).
A Socket object is extremely heavyweight: at least 100 bytes per new object.
There is nothing stopping clients from connecting thousands of times per second and causing a DDoS attack.
How can banned clients be handled safely in ServerSocket without creating new objects in memory?
I have a multithreaded chat server that I have converted to work with Java SSL sockets. You can see the version without SSL sockets compared to the one I converted here, on GitHub (the master branch has SSL, the other branch has regular sockets).
The original model (without SSL) uses "ServerThreads" controlled by the Client to communicate with other Clients by sending messages to their "ClientThreads" on the server side, which then echo those messages out to all the other ServerThreads.
Here is the run method of ServerThread_w_SSL (client side):
@Override
public void run() {
    System.out.println("Welcome :" + userName);
    System.out.println("Local Port :" + socket.getLocalPort());
    System.out.println("Server = " + socket.getRemoteSocketAddress() + ":" + socket.getPort());
    // setup handshake
    socket.setEnabledCipherSuites(socket.getSupportedCipherSuites());
    try {
        PrintWriter serverOut = new PrintWriter(socket.getOutputStream(), false);
        InputStream serverInStream = socket.getInputStream();
        Scanner serverIn = new Scanner(serverInStream);
        // BufferedReader userBr = new BufferedReader(new InputStreamReader(userInStream));
        // Scanner userIn = new Scanner(userInStream);
        socket.startHandshake();
        while (!socket.isClosed()) {
            if (serverInStream.available() > 0) {
                if (serverIn.hasNextLine()) {
                    System.out.println(serverIn.nextLine());
                }
            }
            if (hasMessages) {
                String nextSend = "";
                synchronized (messagesToSend) {
                    nextSend = messagesToSend.pop();
                    hasMessages = !messagesToSend.isEmpty();
                }
                serverOut.println(userName + " > " + nextSend);
                serverOut.flush();
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Here is the run method of ClientThread_w_SSL (server side):
@Override
public void run() {
    try {
        // setup
        this.clientOut = new PrintWriter(socket.getOutputStream(), false);
        Scanner in = new Scanner(socket.getInputStream());
        // setup handshake
        socket.setEnabledCipherSuites(socket.getSupportedCipherSuites());
        socket.startHandshake();
        // start communicating
        while (!socket.isClosed()) {
            if (in.hasNextLine()) {
                String input = in.nextLine();
                // NOTE: to check that the server can read input, watch the next line's output in the server console.
                System.out.println(input);
                for (ClientThread_w_SSL thatClient : server.getClients()) {
                    PrintWriter thatClientOut = thatClient.getWriter();
                    if (thatClientOut != null) {
                        thatClientOut.write(input + "\r\n");
                        thatClientOut.flush();
                    }
                }
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The original program works with regular sockets, but after converting to SSL sockets, I encountered a problem: input is not being echoed back from the ClientThreads (server side) to the ServerThreads (client side).
In my first attempt at converting to SSL I used certificates, keystores, and truststores. I encountered the same problem then as I do now without them, using only the default socket factory, which relies on the cacerts file that comes with the JDK.
Note that before this bug was encountered, the first problem to address was a handshake failure between the client and server. Because of the way SSL and the Java PrintWriter class work, the handshake gets initiated the first time PrintWriter.flush() is called, which happens as soon as the client sends a chat message to the server. This is only resolved by manually enabling the supported cipher suites in both the ClientThread (server) and the ServerThread (client), then calling SSLSocket.startHandshake() in at least the ClientThread, if not both.
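For reference, the calls that resolve the handshake failure, exactly as they appear in the run methods above, are:

    // On both sides, before any data is written:
    socket.setEnabledCipherSuites(socket.getSupportedCipherSuites());
    // Then trigger the handshake explicitly, at least on the
    // server-side ClientThread (here done on both sides):
    socket.startHandshake();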
Now the server is receiving messages from the client, but it is not echoing them out to the clients.
When I run it in a debugger and step through the code, I find that the ClientThread receives the client's message and sends it back by calling write() on the PrintWriter of each ClientThread, then flush(). The ServerThread is supposed to receive it by calling InputStream.available() to check for input without blocking, but available() always returns 0 bytes, and so it never reaches Scanner.nextLine().
So either PrintWriter.write() and .flush() aren't sending data, or InputStream.available() is not seeing it.
EDIT: After more debugging and testing, I can only narrow the problem down to output from the server side. I determined this by having the server immediately send its own message before waiting to receive messages, and having the client just grab the nextLine() instead of checking first with available(). Since this test also failed, it shows that data must somehow be getting blocked coming from the server side only.
EDIT 2: I changed the code to use ObjectInputStreams and ObjectOutputStreams instead of the Scanner and PrintWriters. Now I'm sending "Message" objects from a Serializable class I made that just holds Strings. This has fixed the output issue for messages coming from the server. If I make the client simply wait for input by calling readObject(), it will receive messages from the server. However, if I use the available() method of InputStream first, it still only returns 0 even when it shouldn't. Since the InputStream serverInStream is initialized by socket.getInputStream(), it gets an ssl.AppInputStream with an ssl.InputRecord, and I'm guessing one of the two does not implement available() correctly.
I figured it out: the problem was available(); it is useless with SSL in Java. I got the solution from this answer.
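For anyone hitting the same issue, a minimal sketch of the shape of the fix on the client side, blocking on readObject() instead of polling available() (Message is the Serializable string-holder class mentioned in EDIT 2):

    // Block on readObject() instead of polling InputStream.available(),
    // which always reports 0 on an SSL-wrapped stream.
    ObjectInputStream serverIn = new ObjectInputStream(socket.getInputStream());
    while (!socket.isClosed()) {
        Message msg = (Message) serverIn.readObject(); // blocks until a whole object arrives
        System.out.println(msg);
    }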
I am writing a client/server application in which multiple clients connect to servers and continuously send serialized objects to them at a high rate over TCP connections.
I am using ObjectOutputStream.writeObject on client and ObjectInputStream.readObject at server.
The server application accepts client connections on a single port using serverSocket.accept() and passes each Socket to a new thread for reading objects.
When a single client connects and sends about 25K objects/s, all works fine. Once I start a second client, after a short period of time one or both clients hang on ObjectOutputStream.writeObject for one of the servers, and the corresponding server hangs on ObjectInputStream.readObject.
No exceptions are thrown on either side.
If the rate is very low, let's say 10-20/s in total, it does not hang, but at 100-1000/s it does.
Using netstat -an on the client machine I can see that the send-Q of the corresponding link is about 30K. On the server side the receive-Q is also ~30K.
When running client and server locally on Windows I observe something similar: the client hangs, but the server continues to process incoming objects, and once it catches up the client unblocks and continues to send objects.
Locally on Windows the server is slower than the client, but on Linux the number of server instances running on different machines is more than enough for the rate the clients produce.
Any clue what is going on?
client code snippet:
Socket socket = new Socket(address, port);
ObjectOutputStream outputStream = new ObjectOutputStream(socket.getOutputStream());
while (true)
{
    IMessage msg = createMsg();
    outputStream.writeObject(msg);
    outputStream.flush();
    outputStream.reset();
}
server code accepting connections:
while (active)
{
    Socket socket = serverSocket.accept();
    SocketThread socketThread = new SocketThread(socket);
    socketThread.setDaemon(true);
    socketThread.start();
}
server code reading objects:
public class SocketThread extends Thread
{
    Socket socket;

    public SocketThread(Socket socket)
    {
        this.socket = socket;
    }

    @Override
    public void run() {
        try {
            ObjectInputStream inStream = new ObjectInputStream(socket.getInputStream());
            while (true)
            {
                IMessage msg = (IMessage) inStream.readObject();
                if (msg == null) {
                    continue;
                }
                List<IMessageHandler> handlers = handlersMap.get(msg.getClass());
                for (IMessageHandler handler : handlers) {
                    handler.onMessage(msg);
                }
            }
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}
You have just described the operation of TCP when the sender outruns the receiver. The receiver tells the sender to stop sending, so the sender stops sending. As you are using blocking I/O, the client blocks in send() internally.
There is no problem here to solve.
The problem was that the handlers on the server side were using some resources that are not thread-safe (like a Jedis connection), so everything got stuck on the server side.
Making them thread-safe solved the issue.
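For illustration, a minimal sketch of one way to do that: funnel all access to the shared resource through a single lock. The LegacyClient type and its process() method are hypothetical stand-ins for whatever non-thread-safe resource (such as a single Jedis connection) the handlers share:

    public class SafeHandler implements IMessageHandler {
        // Hypothetical non-thread-safe resource shared by all SocketThreads.
        private final LegacyClient client;
        private final Object lock = new Object();

        public SafeHandler(LegacyClient client) {
            this.client = client;
        }

        @Override
        public void onMessage(IMessage msg) {
            // Only one SocketThread at a time may touch the shared resource.
            synchronized (lock) {
                client.process(msg);
            }
        }
    }

At 25K objects/s a pool of resources, one per thread, would avoid the lock contention, but the single lock is the simplest correct fix.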
I'm trying to develop my own communication library based on non-blocking NIO messages. I've been reading a thousand tutorials and book chapters about it, and I think that by now I have something that works with a few simultaneous connections. But I'm having some issues when many connections coexist on the server side.
I have the typical selector implementation with the four private methods: accept, finishConnect, read and write. My problem lies in the first two: accept and finishConnect.
When the client opens a new socket, and an acceptable key wakes the selector up, the following code is executed.
private void accept(SelectionKey key) {
    try {
        ServerSocketChannel ssc = (ServerSocketChannel) key.channel();
        SocketChannel sc = ssc.accept();
        sc.configureBlocking(false);
        LOGGER.debug("Socket " + sc.hashCode() + "-" + sc.socket().toString() + " connexion completed");
        changeInterest(sc, SelectionKey.OP_READ);
        eventManager.addEvent(new ConnectionEstablished(sc));
    } catch (Throwable e) {
        NIOException ne = new NIOException(NIOException.ErrorType.ACCEPTING_CONNECTION, e);
        eventManager.addEvent(new ErrorEvent(null, ne));
    }
}
On the client side, I have this implementation for the connect method that will be invoked once the server processes its acceptable key for the socket.
private void finishConnect(SelectionKey key) {
    SocketChannel sc = (SocketChannel) key.channel();
    try {
        if (sc.finishConnect()) {
            eventManager.addEvent(new ConnectionEstablished(sc));
            LOGGER.debug("Socket " + sc.hashCode() + "-" + sc.socket().toString() + " connection finished");
        } else {
            LOGGER.debug("REFUSED " + sc + " - " + sc.socket().toString());
            refusedConnection(sc, null);
            key.cancel();
        }
    } catch (Exception e) {
        refusedConnection(sc, e);
        key.cancel();
    }
}
The thing is that when I create several connections, some are accepted: the client executes the finishConnect method (and I can see the socket connection established with the ports used). But I can't find the acceptance of those connections on the server side; there is no "connexion completed" log message using those ports!
I suspected that an exception could arise between the ssc.accept() and the log invocation, so I added some extra log messages to check which instruction was blowing everything up. The sequence completed for all the keys that got into the accept method.
How is that possible if I can't even see any error message in the log?
EDIT: I made some tests on the number of sockets that are open at a time. When the client starts running, there is just one open socket on the server: the server socket. After that it has up to 200 simultaneous open sockets, and at the end of the client execution the server goes back to 1 open socket. I guess the missing connections are never counted.
For now, I've made a workaround that monitors the number of coexisting connections on the node and delays accepting new connections until that number falls below a given threshold. However, I would like to understand what's going wrong.
Thanks for your help.
Because of the backlog queue, it's perfectly in order for a large number of client connections to complete before accept() is executed. So there is no actual problem here to solve.
But are you ever executing the accept() method? This is the bug you need to investigate.
As EJP suggested, the problem was the backlog queue. I was binding the ServerSocketChannel using the bind(SocketAddress local) method.
When a connection request arrives at the JVM, it is enqueued in the backlog queue and waits there until the listener triggers the processing of the corresponding key to be accepted. The actual problem lies in the size of this queue: using the one-argument bind method, it stores up to 50 connections.
When a peak of connection requests happens, the queue overflows and some of them are lost. To avoid this, the bind(SocketAddress local, int backlog) method lets you increase the capacity of the queue.
On the other side, when working in non-blocking mode, the selector on the client node does not need the connection to be accepted in order to process an OP_CONNECT key. The reception of the SYN-ACK TCP message will trigger the corresponding key in the selector.
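For example, a minimal sketch of the server-side fix (the backlog value of 1000 is arbitrary here; size it to the expected burst of connection requests):

    ServerSocketChannel ssc = ServerSocketChannel.open();
    ssc.configureBlocking(false);
    // The two-argument bind raises the pending-connection queue
    // above the default, so bursts of requests are not silently lost.
    ssc.bind(new InetSocketAddress(port), 1000);
    ssc.register(selector, SelectionKey.OP_ACCEPT);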
I have some java code that looks similar to this:
private void startServer() throws IOException {
    URLClassLoader classloader = null;
    System.out.println("Opening server socket for listening on " + PORT_NUMBER);
    try {
        server = new ServerSocket(PORT_NUMBER);
        server.setSoTimeout(10000);
        connected = true;
        System.out.println("Server is now listening on port " + PORT_NUMBER);
    } catch (IOException e) {
        System.err.println("Could not start server on port " + PORT_NUMBER);
        e.printStackTrace();
        connected = false;
    }
    while (connected) {
        // Incoming request handler socket.
        Socket socket = null;
        try {
            System.out.println("Waiting for client connection...");
            // Block waiting for an incoming connection.
            socket = server.accept();
            if (socket == null) continue;
...and so on and so forth. When I call server.close() later on (I don't get any different behavior if I call socket.close() first), I don't get any errors, but netstat shows that the port is still being listened on. Should calling ServerSocket.close() be sufficient to free up the port on this system?
I am programming for a Java 1.4.2 Micro Edition runtime. It is also worth noting that this method runs in another thread, and I am trying to close the socket from its parent thread.
EDIT: Here is the line from netstat, though I can assure you the port is still being listened on, since if I start the Xlet again I get an exception for that port number.
tcp 0 0 *.2349 *.* LISTEN
There are several things to consider. One of them is described by the following quotation from the JavaDoc of ServerSocket:

public void setReuseAddress(boolean on) throws SocketException

Enable/disable the SO_REUSEADDR socket option. When a TCP connection is closed the connection may remain in a timeout state for a period of time after the connection is closed (typically known as the TIME_WAIT state or 2MSL wait state). For applications using a well known socket address or port it may not be possible to bind a socket to the required SocketAddress if there is a connection in the timeout state involving the socket address or port.
So it is kind of OK that the OS can still show that there is something going on after you close() the server socket. But if you're going to open and close a server socket on the same port frequently, you might hit a problem.
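In that case, a minimal sketch of the usual pattern, reusing the PORT_NUMBER constant from the question (SO_REUSEADDR must be set before the socket is bound, hence the no-argument constructor):

    ServerSocket server = new ServerSocket();          // created unbound
    server.setReuseAddress(true);                      // must be set before bind()
    server.bind(new InetSocketAddress(PORT_NUMBER));   // now bind to the port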