ServerSocket serverSocket = new ServerSocket(portNumber);
Socket socket = serverSocket.accept();
try (
    BufferedReader in = new BufferedReader(
        new InputStreamReader(
            socket.getInputStream()))
) {
    while (in.readLine() != null) {
        // do something
    }
    System.out.println("reach me if you can");
    socket.close();
}
While writing my server/client software, I tried to implement functionality to show the number of current connections. But I realized that my server never gets the message when a client abruptly terminates; it just keeps waiting at in.readLine(). How should I ensure that a thread created to handle a specific connection is not left running while the connection is dead?
It is a general TCP problem that the machine on one end of a connection can go away without any notification to the machine on the other end. Machines are not supposed to do that, but it isn't always under their control. The usual way for one end to avoid waiting forever for data in such a case, and/or to avoid being loaded down with dead connections, is to employ a timeout.
The general problem is bigger than you described, but you should be able to solve the particular part you asked about by invoking setSoTimeout() on the socket some time before you try to read from it. The socket will then throw an exception if your read attempt blocks for longer than the time you specify. This setting persists for the lifetime of the socket (or until you set a different value), so you can apply it immediately after accepting the connection if you wish.
Be aware, however, that a sufficiently long period of simple client inactivity can also cause a timeout.
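For instance, a minimal sketch along the lines of the question's code (the 30-second value is an arbitrary choice for illustration):
Socket socket = serverSocket.accept();
socket.setSoTimeout(30_000); // any read blocking longer than 30 s throws SocketTimeoutException
try (Socket s = socket;
     BufferedReader in = new BufferedReader(
             new InputStreamReader(s.getInputStream()))) {
    while (in.readLine() != null) {
        // do something
    }
} catch (SocketTimeoutException e) {
    // no data within 30 s: assume the peer is gone; try-with-resources closes the socket
}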
Is it really possible for a simple server program using a ServerSocket to handle multiple clients simultaneously?
I am creating a server program that needs to handle multiple clients on the same port number. But the problem is that the program will only serve one client at a time, and in order for it to serve another client, the first connection has to be terminated.
Here is the code:
try {
    ServerSocket server = new ServerSocket(PORT);
    System.out.println("Server Running...");
    while (true) {
        Socket socket = server.accept();
        System.out.println("Connection from:" + socket.getInetAddress());
        Scanner in = new Scanner(socket.getInputStream());
        PrintWriter output = new PrintWriter(socket.getOutputStream());
    }
} catch (Exception e) {
    System.out.println(e);
}
Is there any Java code that can be added here to make the program serve multiple clients simultaneously?
The code you've posted doesn't actually do anything with the client connection, which makes it hard to help you. But yes, it's entirely possible for a server to handle multiple concurrent connections.
My guess is that the problem is that you're doing everything on a single thread, with synchronous IO. That means that while you're waiting for data from the existing client, you're not accepting new connections. Typically a server takes one of two approaches:
Starting a thread (or reusing one from a thread pool) when it accepts a connection, and letting that thread deal with that connection exclusively.
Using asynchronous IO to do everything on a single thread (or a small number of threads).
The latter approach can be more efficient, but it is also significantly more complicated. I'd suggest you use a "thread per connection" approach to start with. While you're experimenting, you can simply start a brand new Thread each time a client connects - but for production use, you'd use an ExecutorService or something similar, in order to reuse threads.
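As a rough sketch of the thread-per-connection approach (PORT, the pool size, and the echo logic are placeholder assumptions, not part of the question's code):
ExecutorService pool = Executors.newFixedThreadPool(50); // bound the number of worker threads
try (ServerSocket server = new ServerSocket(PORT)) {
    while (true) {
        Socket socket = server.accept();   // returns as soon as a client connects
        pool.submit(() -> handle(socket)); // a pooled thread serves this client
    }
}

static void handle(Socket socket) {
    try (Socket s = socket;
         Scanner in = new Scanner(s.getInputStream());
         PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
        while (in.hasNextLine()) {
            out.println(in.nextLine()); // echo each line back to this client
        }
    } catch (IOException e) {
        // an error here ends only this client's session
    }
}
This way accept() is never blocked by an existing client's IO, because each connection's reads happen on its own thread.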
Note that depending on what kind of server you're building, all of this may well be done for you in third party libraries.
I'm trying to create a simple multiplayer game for Android devices. I have been thinking about the netcode and have now read a lot of pages about sockets. The Android application will only be a client and will connect to just one server.
Almost everywhere (here too) you get the recommendation to use NIO or a framework which uses NIO, because of the "blocking".
I'm trying to understand what the problem of a simple socket implementation is, so I created a simple test to try it out:
My main application:
[...]
Socket clientSocket = new Socket("127.0.0.1", 2593);
new Thread(new PacketReader(clientSocket)).start();
PrintStream os = new PrintStream(clientSocket.getOutputStream());
os.println("kakapipipopo");
[...]
The PacketReader Thread:
class PacketReader implements Runnable
{
    Socket m_Socket;
    BufferedReader m_Reader;

    PacketReader(Socket socket) throws IOException
    {
        m_Socket = socket;
        m_Reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    public void run()
    {
        char[] buffer = new char[200];
        int count = 0;
        try
        {
            while (true)
            {
                count = m_Reader.read(buffer, 0, 200);
                if (count == -1)
                    break; // the server closed the connection
                String message = new String(buffer, 0, count);
                Gdx.app.log("keks", message);
            }
        }
        catch (IOException e)
        {
            // reading failed; the connection is gone
        }
    }
}
I couldn't reproduce the blocking problems I was supposed to get. I thought the read() call would block my application so that I couldn't do anything - but everything worked just fine.
I have been thinking: what if I just create an input and an output buffer in my application and create two threads which write to and read from the socket using those two buffers? Would this work?
If yes - why does everyone recommend NIO? Somewhere in the normal IO way a block must happen, but I can't find it.
Are there maybe any other benefits of using NIO for Android multiplayer gaming? I thought that NIO seems more complex, and therefore maybe less suited to a mobile device, but maybe the simple socket way is worse for a mobile device.
I would be very happy if someone could tell me where the problem happens. I'm not scared of NIO, but at least I would like to find out why I'm using it :D
Greetings
-Thomas
The blocking means that read() will block the current thread until it can read data from the socket's input stream. Thus, you need a thread dedicated to that single TCP connection.
What if you have more than 10k client devices connected to your server? Then you need at least 10k threads to handle all the client devices (assuming each device maintains a single TCP connection), whether they are active or not. You pay a lot of overhead in context switches and other multi-threading costs even if only 100 of them are active.
NIO uses a selector model to handle those clients, which means you don't need a dedicated thread per TCP connection to receive data. You just select the active connections (those which already have data received) and process only those. You can control how many threads are maintained on the server side.
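A minimal selector sketch of that model (the port is taken from the question's example; the buffer size is an arbitrary choice):
Selector selector = Selector.open();
ServerSocketChannel server = ServerSocketChannel.open();
server.bind(new InetSocketAddress(2593));
server.configureBlocking(false);
server.register(selector, SelectionKey.OP_ACCEPT);

ByteBuffer buf = ByteBuffer.allocate(1024);
while (true) {
    selector.select(); // one thread blocks here for ALL connections at once
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isAcceptable()) { // new client: register it for read events
            SocketChannel client = server.accept();
            client.configureBlocking(false);
            client.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) { // this client has data waiting
            SocketChannel client = (SocketChannel) key.channel();
            buf.clear();
            if (client.read(buf) == -1) {
                client.close(); // peer disconnected
            }
            // otherwise: buf.flip() and process the received bytes
        }
    }
}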
EDIT
This answer doesn't exactly address what the OP asked. For the client side it's fine, because the client is going to connect to just one server. Still, my answer gives some general idea about blocking and non-blocking IO.
I know this answer is coming 3 years later, but it might help someone in the future.
In a blocking socket model, if data is not available for reading, or if the server is not ready for writing, the network thread waits on the request to read from or write to the socket until it either gets or sends the data or times out. In other words, the program may halt at that point for quite some time if it can't proceed. To cancel this out, we can create a thread per connection, which can handle requests from each client concurrently. If this approach is chosen, the scalability of the application can suffer: in a scenario where thousands of clients are connected, having thousands of threads can eat up all the memory of the system, as threads are expensive to create, and this can affect the performance of the application.
In a non-blocking socket model, the request to read or write on a socket returns immediately, whether or not it was successful; in other words, asynchronously. This keeps the network thread busy. It is then our task to decide whether to try again or to consider the read/write operation complete. This enables an event-driven approach to communication, where threads are created only when needed, leading to a more scalable system.
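To make the "returns immediately" semantics concrete, a small sketch (the host and port are placeholders):
SocketChannel ch = SocketChannel.open(new InetSocketAddress("example.com", 1234));
ch.configureBlocking(false);
ByteBuffer buf = ByteBuffer.allocate(512);
int n = ch.read(buf); // returns at once: 0 if nothing is available, -1 at end of stream
if (n > 0) {
    buf.flip(); // process whatever bytes happened to be ready
} else if (n == 0) {
    // nothing yet; retry later, typically when a Selector reports the channel readable
}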
What is an efficient way of blocking on a socket, waiting for data, after opening it? The method I used is to call read() on the input stream (this is a blocking call that waits until some data is written to the socket).
//Socket creation
SocketForCommunication = new Socket();
InetSocketAddress sAddr = new InetSocketAddress("hostName", 8800);
SocketForCommunication.connect(sAddr, 10000);

is = new DataInputStream(SocketForCommunication.getInputStream());
os = new DataOutputStream(SocketForCommunication.getOutputStream());

//Waiting on socket using read method for data
while (true)
{
    int data = is.read();
    if (data == -1)
        break; // peer closed the stream; stop reading
    if (data == HEADER_START)
    {
        processPackage(is);
    }
}
The problem here is that read() can time out. Is there a way to register a callback that gets called when data is available to read on the socket?
The socket will time out by default, but you can change this if you really want to. See the Socket.setSoTimeout() call (a timeout of zero means "indefinite").
N.B. Even if you specify a zero timeout, your OS may or may not actually let you keep a socket open indefinitely. For example, idle sockets may get closed after a certain amount of time. In some environments, e.g. shared web hosting, it's not uncommon for a housekeeping routine to run (say) once a day and shut down idle sockets.
And of course, stuff happens on networks. Either way, you shouldn't rely on the socket staying open indefinitely...
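As a sketch, reusing the SocketForCommunication name from the question (the 30-second value is arbitrary):
SocketForCommunication.setSoTimeout(0);      // a read() waits as long as it takes
// or:
SocketForCommunication.setSoTimeout(30_000); // a read() gives up with SocketTimeoutException after 30 s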
You want to use the java.nio (non-blocking IO) system. While it's not callback driven, it allows you much more flexibility in handling IO.
http://rox-xmlrpc.sourceforge.net/niotut/index.html
(See this question on Server Fault)
I have a Java client that uses Socket to open concurrent connections to the same machine. I am witnessing a phenomenon where one request completes extremely fast, but the others see a delay of 100-3000 milliseconds. Packet inspection with Wireshark shows that all SYN packets beyond the first wait a long time before leaving the client. I am seeing this with both Windows 2008 and Linux clients.
Code attached:
import java.util.*;
import java.util.concurrent.*;
import java.net.*;

public class Tester {
    public static void main(String[] args) throws Exception {
        if (args.length < 3) {
            usage();
            return;
        }

        final int n = Integer.parseInt(args[0]);
        final String ip = args[1];
        final int port = Integer.parseInt(args[2]);

        ExecutorService executor = Executors.newFixedThreadPool(n);
        ArrayList<Callable<Long>> tasks = new ArrayList<Callable<Long>>();
        for (int i = 0; i < n; ++i)
            tasks.add(new Callable<Long>() {
                public Long call() {
                    Date before = new Date();
                    try {
                        Socket socket = new Socket();
                        socket.connect(new InetSocketAddress(ip, port));
                        socket.close(); // release the connection once it is established
                    }
                    catch (Throwable e) {
                        e.printStackTrace();
                    }
                    Date after = new Date();
                    return after.getTime() - before.getTime();
                }
            });

        System.out.println("Invoking");
        List<Future<Long>> results = executor.invokeAll(tasks);
        System.out.println("Invoked");
        for (Future<Long> future : results) {
            System.out.println(future.get());
        }
        executor.shutdown();
    }

    private static void usage() {
        System.out.println("Usage: prog <threads> <ip> <port>");
        System.out.println("Example:");
        System.out.println(" prog 10 127.0.0.1 2000");
    }
}
Update: the problem reproduces consistently if I clear the relevant ARP entry before running the test program. I've tried tuning the TCP retransmission timeout, but that didn't help. Also, we ported this program to .Net, and the problem still happened.
Update 2: 3 seconds is the specified delay for new connections, from RFC 1122. I still don't fully understand why there is a retransmission here; it should be handled by the MAC layer. Also, we reproduced the problem using netcat, so it has nothing to do with Java.
It looks like you are using a single underlying HTTP connection. In that case, another request can't be made before you call close() on the InputStream of the HttpURLConnection, i.e. before you process the response.
Alternatively, you could use a pool of HTTP connections.
You are doing the right thing in reducing the size of the problem space. On the surface this is an impossible problem - something that moves between IP stacks, languages and machines, and yet is not arbitrarily reproducible (e.g. I cannot reproduce it using your code on either Windows or Linux).
Some suggestions, going from the top of the stack to the bottom:
Code -- you say this happens with .Net and Java. Are there any language/compiler combinations for which it does not happen? I used your client talking to the SocketTest program from SourceForge and also to "nc", with identical results - no delays. Similarly, JDK 1.5 vs 1.6 made no difference for me.
-- Suppose you pace the speed at which the client sends requests, say one every 500 ms. Does the problem still reproduce?
IP stack -- maybe something is getting stuck in the stack on the way out. I see you've ruled out Nagle, but don't forget silly stuff like firewalls/iptables. I'd find it hard to believe that the TCP stacks on both Windows and Linux were that hosed, but you never know.
-- loopback interface handling can be freaky. Does it reproduce when you use the machine's real IP? What about across the network (or better, back-to-back with a crossover cable to another machine)?
NIC -- if the packets are making it to the cards, consider features of the cards (TCP offload or other 'special' handling) or quirks in the NICs themselves. Do you get the same results with other brands of NIC?
I haven't found a real answer from this discussion. The best theory I've come up with is:
TCP layer sends a SYN to the MAC layer. This happens from several threads.
First thread sees that IP has no match in the ARP table, sends an ARP request.
Subsequent threads see that there is a pending ARP request, so they drop the packet altogether. This behavior is probably implemented in the kernels of several operating systems!
ARP reply returns, the original SYN request from the first thread leaves the machine and a TCP connection is established.
TCP layer waits 3 seconds as stated in RFC 1122, then retries and succeeds.
I've tried tweaking the timeout in Windows 7 but wasn't successful. If anyone can reproduce the problem and provide a workaround, I'd be most grateful. Also, if anyone has more details on why exactly this phenomenon happens only with multiple threads, it would be interesting to hear.
I'll try to accept this answer as I don't think any of the answers provided a true explanation (see this discussion on meta).
If either of the machines is a Windows box, I'd take a look at the Max Concurrent Connections setting on both. See: http://www.speedguide.net/read_articles.php?id=1497
I think this is an app-level limit in some cases, so you'll have to follow the guide to raise it.
In addition, if this is what happens, you should see something in the System Event Log on the offending machine.
Java client that uses HttpURLConnection to open concurrent connections to the same machine.
The same machine? Which application accepts the clients? If you wrote that program yourself, maybe you should time how fast your server can accept clients. Maybe it is just a badly written (or slow) server application. The server code looks something like this, I think:
ServerSocket ss = ...;
while (acceptingMoreClients)
{
    Socket s = ss.accept();
    // At this moment the client is connected to the server, so start timing.
    long start = System.currentTimeMillis();

    ClientHandler handler = new ClientHandler(s);
    handler.start();
    // After "handler.start();" the handler thread is running,
    // so the next two statements execute very quickly.
    // That means the server is ready to accept a new client.
    // Stop timing.
    long stop = System.currentTimeMillis();
    System.out.println("Client accepted in " + (stop - start) + " millis");
}
If these results are bad, then you know where the problem is situated.
I hope this brings you closer to the solution.
Question:
To do the test, do you use the IP you received from the DHCP server, or 127.0.0.1?
If it's the one from the DHCP server, everything goes through the router/switch/... of your company. That can slow down the whole process.
Otherwise:
On Windows, all local TCP traffic (localhost to localhost) is redirected in the software layer of the system (not the hardware layer), which is why you cannot see that TCP traffic with Wireshark. Wireshark only sees the traffic that passes the hardware layer.
On Linux, Wireshark can likewise only see the traffic at the hardware layer, but Linux doesn't redirect at the software layer. That is also the reason why InetAddress.getLocalHost().getAddress() returns 127.0.0.1.
So when you use Windows, it is quite normal that you cannot see the SYN packet with Wireshark.
Martijn.
The fact that you see this on multiple clients, with different OS's, and with different application environments on (I assume) the same OS is a strong indication that it's a problem with either the network or the server, not the client. This is reinforced by your comment that clearing the ARP table reproduces the problem.
Do you perhaps have two machines on the switch with the same MAC address? (one of which will probably be a router that's spoofing the MAC address).
Or more likely, if I recall ARP correctly, two machines that have the same hardcoded IP address. When the client sends out "who is IP 123.456.123.456", both will answer, but only one will actually be listening.
Another possibility (I've seen this happen in a corporate environment) is a rogue DHCP server, again giving out the same IP addresses to two machines.
Since the problem isn't reproducible unless you clear the associated ARP cache, what does the entire packet trace look like from a timing perspective, from the time the ARP request is issued until after the 3 second delay?
What happens if you open connections to two different IPs? Will the first connections to both succeed? If so, that should rule out any JVM or library issues.
The first SYN can't be sent until the ARP response arrives. Maybe the OS or TCP stack uses a timeout instead of an event for threads beyond the first one that try to open a connection when the associated MAC address isn't known.
Imagine the following scenario:
Thread #1 tries to connect, but the SYN can't be sent because the ARP cache is empty, so it queues the ARP request.
Next, Thread #2 (through #N) tries to connect. It also can't send the SYN packet because the ARP cache is empty. This time, though, instead of sending another ARP request, the thread goes to sleep for 3 seconds, as it says in the RFC.
Next, the ARP response arrives. Thread #1 wakes up immediately and sends the SYN.
Thread #2 isn't waiting on the ARP request; it has a hard-coded 3-second sleep. So after 3 seconds, it wakes up, finds the ARP entry it needs, and sends the SYN.
I have seen similar behavior when I was getting DNS timeouts. To test this, you can either use the IP address directly or enter the IP address in your hosts file.
Does setting socket.setTcpNoDelay(true) help?
Have you tried watching which system calls are made by running your client under strace?
It's been very helpful to me in the past, while debugging some mysterious networking issues.
What is the listen backlog on the server? How quickly is it accepting connections? If the backlog fills up, the OS ignores connection attempts. 3 seconds later, the client tries again and gets in now that the backlog has cleared.
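For reference, the backlog is the second argument to the ServerSocket constructor; a sketch (200 is an arbitrary illustrative value):
ServerSocket ss = new ServerSocket(port, 200); // allow up to ~200 pending connections to queue
while (true) {
    Socket s = ss.accept(); // accept promptly so the backlog doesn't fill
    // hand s off to a worker thread here
}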