Java ServerSocket not detecting lost connection

I have a socket client (on android phone) and server (on PC) both on a wifi network and the server successfully reads data from the client.
However, when I turn off the wifi on the phone the server read just hangs, whereas I was hoping some error would be thrown.
I do have setSoTimeout set on the server, but the read is not timing out.
On the PC netstat still shows an established connection
netstat -na | grep 6668
TCP 192.168.43.202:6668 192.168.43.26:43076 ESTABLISHED
Is there a way to tell if the client host has disappeared, or to get the read to time out?
Here is the server read
if (ss.isConnected()) {
    try {
        readData();
    } catch (java.net.SocketTimeoutException ex) {
        logger.warning(ex.toString());
    } catch (InterruptedIOException ex) {
        logger.warning(ex.toString());
    } catch (IOException ex) {
        logger.log(Level.WARNING, "Data communication lost will close streams - IOEx - socket status {0}", ss.socketStatus());
        closeStreams();
    } catch (Exception ex) {
        logger.log(Level.WARNING, "Data communication lost will close streams - Ex - socket status {0}", ss.socketStatus());
        closeStreams();
    }
}
Where readData is,
public void readData() throws IOException {
    for (int i = 0; i < data.length; i++) {
        data[i] = ss.readDouble();
    }
}
ss.readDouble() is,
public double readDouble() throws IOException {
    return in.readDouble();
}
And the server connection,
public void connect() throws IOException {
    if (serverSocket == null || serverSocket.isClosed()) {
        init();
    }
    logger.log(Level.INFO, "Wait on " + serverSocket.getLocalPort());
    server = serverSocket.accept();
    serverSocket.close();
    logger.log(Level.INFO, "Connected to {0}", server.getRemoteSocketAddress());
    out = new DataOutputStream(server.getOutputStream());
    in = new DataInputStream(server.getInputStream());
}

Use a timeout: if no data has been sent for, say, 10 minutes, close the connection.
Setting a timeout for socket operations
The answer to that question may help you.

This is the nature of TCP connections, not Java sockets per se. If the remote peer vanishes without properly closing the connection, how should your server know whether the peer is gone or simply has no data to send?
Writing to a closed socket will cause an exception; a read will simply block if the client doesn't end the TCP connection properly, for the reason above.
If you go through the Socket API, you will find an option to set a timeout (before proceeding with a blocking operation).
You could also consider TCP keep-alive, which is likewise exposed by the Socket API.
// Edit: additional information as per the OP comment
When your client connects to the server, a client socket is created to communicate with that peer. Your server socket is the one on which you listen for new client connections. It is the client socket on which you specify keep-alive or a read timeout, because that is the socket you read from and write to.
// your "server" variable is actually a reference to the accepted client socket
server = serverSocket.accept();
// enable TCP keep-alive on the accepted (client) socket
server.setKeepAlive(true);
serverSocket.close();
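To make the blocked read actually time out, SO_TIMEOUT also has to be set on this accepted socket, not on the ServerSocket. A minimal, self-contained sketch (the port and the 30-second timeout are arbitrary, and these are not the OP's exact classes):

import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutReadServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(6668)) {
            Socket client = serverSocket.accept();   // the per-connection socket
            client.setSoTimeout(30_000);             // reads now fail after 30 s of silence
            client.setKeepAlive(true);               // OS-level probes for a vanished peer (interval is OS-tuned)
            DataInputStream in = new DataInputStream(client.getInputStream());
            while (true) {
                try {
                    double value = in.readDouble();  // blocks for at most 30 s
                    System.out.println("read " + value);
                } catch (SocketTimeoutException ex) {
                    System.out.println("no data for 30 s - treating the peer as gone");
                    client.close();
                    break;
                }
            }
        }
    }
}

With this in place, pulling the phone off the wifi makes the next readDouble() fail with a SocketTimeoutException instead of hanging forever. Keep-alive alone will eventually reset the connection too, but only after the OS-configured probe interval, which is typically hours by default.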

Related

Java Server Socket connection over different routers

I am currently developing a client and a server for a small game.
The client which connects to the server establishes the connection with this method:
// This method is called, passing on an IPv6 address and port number 6666
public void startConnection(String ip, int port) throws IOException {
    try {
        clientSocket = new Socket(ip, port);
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
        // some other code handling responses
    } catch (IOException e) {
        LOG.debug("Error when initializing connection", e);
        throw new IOException();
    }
}
The Server I built accepts connections using this method:
public void start(int port) {
    try {
        serverSocket = new ServerSocket(port); // port = 6666
        // This part is used to handle multiple connections at once
        while (b) {
            try {
                map.add(new EchoClientHandler(serverSocket.accept())); // EchoClientHandler is a class used to send and receive data instructions
                x = map.size() - 1;
                System.out.println("Establishing connection from port " + port);
                map.get(x).start();
                System.out.println("Connection established");
            } catch (SocketTimeoutException e) {
            }
        }
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
Both methods work fine and establish a stable connection between the client and the server, but when I try to establish a connection across different routers or over general Internet connections (like via cellular data), it doesn't work.
Is there a way to establish connections without both the client and the server having to connect through the same router?
Edit:
Here is the error I get on the client; the server doesn't show anything:
18:03:24.288 [AWT-EventQueue-0] DEBUG dorusblanken.werwolfclient.Client - Error when initializing connection
java.net.SocketException: Network is unreachable: connect
"Network is unreachable" means there is no way to get to the destination network from the current network.
You mentioned that you are trying to establish a connection via the Internet. For that to work, the destination host (your server) must be connected to the Internet, it must have a public IP address, and the clients need to use the public IP address when connecting.
That is the simplest configuration. Most companies don't actually put their servers directly on the Internet. Instead, the public IP frequently belongs to a CDN or DDoS mitigation layer, which forwards connections to a load balancer, and the load balancer forwards connections to servers.
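For a home or hobby setup, the usual equivalent of that configuration is port forwarding: the server's router forwards a public port to the PC running the ServerSocket, and the client dials the router's public address. A minimal sketch with made-up addresses (203.0.113.17 and the forwarding rule are hypothetical):

import java.io.IOException;
import java.net.Socket;

public class PublicConnectExample {
    public static void main(String[] args) throws IOException {
        // Assumed setup: the server's router forwards public TCP port 6666
        // to the LAN machine running the ServerSocket (e.g. 192.168.1.50:6666).
        // The client must use the router's *public* address, not the LAN address.
        try (Socket socket = new Socket("203.0.113.17", 6666)) {
            System.out.println("connected to " + socket.getRemoteSocketAddress());
        }
    }
}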

How to close TCP connection from server in Java

Here is the use case I need to implement in Java:
The server listens for push messages from some clients.
If a client has data to push to the server, it opens a TCP connection and sends all of its messages.
When the client sends the last message (a special message saying that this is the last one), the server should close the connection by starting the TCP closing handshake.
I have a problem with the last step because I don't know how to close the connection from the server side. My current code is below. How do I initiate the TCP closing handshake from the server side? Thank you for any help.
public class Server {
    public static void main(String[] args) throws Exception {
        while (true) {
            int port = AppConfig.getInstance().getPort();
            try (ServerSocket socket = new ServerSocket(port)) {
                Socket server = socket.accept();
                InetAddress ipAddress = server.getInetAddress();
                MessageHandler handler = new MessageHandler(ipAddress);
                InputStream in = server.getInputStream();
                // reads all bytes from input stream and process them by given handler
                processStream(in, handler);
                in.close();
                server.close();
            } catch (Exception e) {
                LoggingUtils.logException(e);
            }
        }
    }

    private static void processStream(InputStream in, MessageHandler handler) throws Exception {
        // implementation is omitted
    }
}
You've done it. in.close() closes the input stream, the socket output stream, and the socket.
What you should really close is whatever output stream was attached to the socket, to ensure it gets flushed, and you should probably do that in the processStream() method, with a safety-net server.close() in a finally block in the calling method.
NB Your socket names are really the wrong way round. It is customary to use ServerSocket serverSocket, and Socket socket = serverSocket.accept().
I'm not totally sure about this one, but I believe socket.close() will send the appropriate closing segments (FIN / FIN-ACK).
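A sketch of that advice (structure only, not the OP's full program): do any writing and flushing inside the per-connection block, and let try-with-resources close the socket so the server always initiates the TCP close.

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseFromServerExample {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(6666)) {
            // try-with-resources guarantees socket.close() (which sends the FIN) even on errors
            try (Socket socket = serverSocket.accept();
                 OutputStream out = new BufferedOutputStream(socket.getOutputStream())) {
                // ... read messages until the client's "last message" marker arrives ...
                out.flush();  // flush any buffered reply before the socket is closed
            }                 // streams and socket are closed here; the server starts the closing handshake
        }
    }
}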

BluetoothServerSocket doesn't return from accept()

I'm trying to build a little Bluetooth Android app for a school project.
I'm quite new to Android (I've had my phone for two days) and have been experimenting with Android programming on my laptop for two weeks. I installed a VirtualBox VM with Android-x86 (eeepc) so I can use the laptop's Bluetooth adapter; the emulator doesn't support Bluetooth and is quite slow. So much for the project...
The problem/question:
A Bluetooth connection involves two devices: a connecting one and a listening one. The listening device has a BluetoothServerSocket and loops on the accept() method until accept() returns a BluetoothSocket.
In my case the accept() method doesn't return, so I get stuck and the app freezes with a black screen asking me to stop the app or just wait. When I pass a timeout to accept() --> accept(10000), I get an IOException after the timeout.
listening device:
private class AcceptThread extends Thread {
    private BluetoothSocket tSocket;
    private BluetoothServerSocket bss = null;

    public void run() {
        try {
            Log.d(TAG, "creating ServerSocket");
            bss = BluetoothAdapter.getDefaultAdapter().listenUsingInsecureRfcommWithServiceRecord("BluetoothChatInsecure", MainActivity.BT_UUID);
            Log.d(TAG, "ServerSocket OK");
        } catch (IOException e) {
            e.printStackTrace();
            Log.e(TAG, "error creating ServerSocket");
        }
        while (true) {
            Log.d(TAG, "trying to accept");
            try {
                Log.d(TAG, "accept start");
                tSocket = bss.accept(10000);
                // this line is never reached
                Log.d(TAG, "accept end");
                if (tSocket != null) {
                    // this is where we want to get to!
                    Log.d(TAG, "connection accepted");
                    ConnectedThread conThread = new ConnectedThread(tSocket);
                    conThread.run();
                    bss.close();
                    break;
                } else {
                    Log.e(TAG, "error, no connection");
                }
            } catch (IOException e) {
                Log.e(TAG, "IOException during accept loop");
                // this exception is triggered every 10 sec, when the accept(10000) times out
                e.printStackTrace();
            }
        }
        Log.i(TAG, "AcceptThread finished");
    }
}
connecting device:
try {
    socket = device.createInsecureRfcommSocketToServiceRecord(MainActivity.BT_UUID);
    outstr = socket.getOutputStream();
    instr = socket.getInputStream();
    ois = new ObjectInputStream(instr);
    oos = new ObjectOutputStream(outstr);
} catch (IOException e) {
    e.printStackTrace();
}
I've read a lot of threads on Stack Overflow and some other forums about this topic, but I didn't find a solution to the problem.
Sorry about my English, but I am not a native speaker.
Thanks for any help!
EDIT:
I forgot to mention that I test the app with two devices: my laptop runs the accept loop while I try to connect from my phone.
This is just the normal behavior: accept() will "wait" (block) until a connection has been made from another device. Then it returns the socket representing that connection for further data transfer.
As you have seen, the timeout is signalled via an IOException. The contract of accept() is that it never returns null but always a valid socket, or fails with an exception thrown.
Therefore, thejh is right in saying that you should have a dedicated thread which waits for connections in accept().
When accept() returns a new socket, you may want to spawn another thread to handle further communication over that socket, while the accept() thread loops to wait for the next connection.
N.b.: You cannot shut down a thread blocked in IO (as in accept()) via Thread.interrupt(), but you have to close the ServerSocket from another thread to cause an IOException to 'wake up' the blocked thread.
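A minimal sketch of that pattern (reusing the question's MainActivity.BT_UUID; imports and error handling trimmed): one thread blocks in accept() with no timeout, every accepted socket is handed to a new worker thread, and closing the server socket from another thread ends the loop.

private class AcceptLoop extends Thread {
    private BluetoothServerSocket serverSocket;

    @Override
    public void run() {
        try {
            serverSocket = BluetoothAdapter.getDefaultAdapter()
                    .listenUsingInsecureRfcommWithServiceRecord("BluetoothChatInsecure", MainActivity.BT_UUID);
            while (true) {
                final BluetoothSocket socket = serverSocket.accept(); // blocks until a device connects
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        // read from / write to 'socket' here
                    }
                }).start();
            }
        } catch (IOException e) {
            // accept() also lands here when cancel() below closes the server socket
        }
    }

    public void cancel() throws IOException {
        serverSocket.close(); // called from another thread to 'wake up' the blocked accept()
    }
}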
I've been facing this problem for a couple of days. Finally, I realized why:
I was creating the thread that accepts incoming connections on the server twice. Thus, the ServerSocket was being created two times, although the accept() method was only called on the second one.
This leads to the server not accepting any connections!
It seems that you didn't call socket.connect() from the client side in the code shown.
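For reference, the client side of the connection could look roughly like this (a sketch using the question's variable names; cancelDiscovery() is there because an ongoing discovery scan slows down or prevents the connect):

BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
adapter.cancelDiscovery();  // discovery interferes with connection attempts
BluetoothSocket socket = device.createInsecureRfcommSocketToServiceRecord(MainActivity.BT_UUID);
socket.connect();           // blocks until connected to the listening device, or throws IOException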
Today I continued working on the project. I got an IOException after connect() failed on the connecting device.
Now I've managed to get a socket on both devices, after pairing them before running the app.
EDIT: accept() returns a socket now, but isConnected() says it isn't connected.
The socket on the connecting device is connected.

Socket does not receive messages

I have written a simple client and a simple UDP server that needs to read string messages from a particular port. Here is the UDP server:
public class UDPServer {

    // boolean variable that defines whether the infinite loop
    // in startServer() runs or not
    private boolean isSwitched = false;

    private DatagramSocket socket = null;

    public UDPServer(int port) throws SocketException {
        socket = new DatagramSocket(port);
        Logger.getLogger(Main.class.getName()).log(Level.INFO, "Server started! Bound to port: " + port);
    }

    // this method starts the server and switches on the infinite loop to
    // listen for incoming UDP packets
    public void startServer() throws IOException {
        this.isSwitched = true;
        Logger.getLogger(Main.class.getName()).log(Level.INFO, "Server starts listening!");
        while (isSwitched) {
            byte[] size = new byte[30];
            DatagramPacket dp = new DatagramPacket(size, size.length);
            try {
                System.out.println("Debug: receive loop started!");
                socket.receive(dp);
                System.out.println("Debug: Packet received after socket.receive!");
                Thread requestDispatch = new Thread(new Request(dp.getData()));
                requestDispatch.start();
            } catch (SocketException ex) {
                Logger.getLogger(Main.class.getName()).log(Level.INFO, "Stops listening on specified port!");
            }
        }
    }

    // this method stops the server from running
    public void stopServer() {
        this.isSwitched = false;
        socket.close();
        Logger.getLogger(Main.class.getName()).log(Level.INFO, "Server is shut down after last threads complete!");
    }
}
I deploy it on the remote server and start the program. The server prints out that it started listening, so it reaches the socket.receive() stage. Then I send a UDP message from a remote client, but nothing happens. The UDP server does not move any further; it just hangs and seems to receive no messages.
I tried to debug the ports with tcpdump and it shows that messages arrive at the required port, but the Java program does not seem to receive them.
When I issue this command on the remote server:
tcpdump udp port 50000
and send a few packets thats what it writes:
12:53:40.823418 IP x.mobile.metro.com.42292 > y.mobile.metro.com.50000: UDP, length 28
12:53:43.362515 IP x.mobile.metro.com.48162 > y.mobile.metro.com.50000: UDP, length 28
I tested your server code locally with netcat and it works just fine, so the problem has to be somewhere else. Are you sure you're actually sending UDP packets? Did you run tcpdump on the remote server? If not, maybe your packets are being filtered.
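For a quick end-to-end check, a minimal Java sender like the one below can stand in for netcat (the host name is taken from the tcpdump output above, and the server is assumed to listen on UDP port 50000):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpTestClient {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("y.mobile.metro.com"), 50000);
            socket.send(packet);  // fire-and-forget; UDP gives no delivery confirmation
        }
    }
}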
OK, question resolved. The problem was:
the firewall on Red Hat Linux, which I successfully switched off for the required port.

Reliable UDP Protocol Implementation in Java - Why does this happen?

I'm currently using a Java implementation of the Reliable UDP protocol, found here. The project has absolutely no tutorials so I have found it really hard to identify problems.
I have set up a client and server. The server runs on localhost:1234 and the client runs on localhost:1235. The server is started first and loops listening for connections -
try {
    ReliableSocket clientSocket = server.socket.accept();
    InetSocketAddress clientAddress = (InetSocketAddress) clientSocket.getRemoteSocketAddress();
    Logger.getLogger("ServerConnectionListener").info("New Connection from " +
            clientAddress.getHostName() + ":" + clientAddress.getPort() + " Processing...");
    LessurConnectedClient client = new LessurConnectedClient(clientSocket);
    ClientCommunicationSocketListener listener = new ClientCommunicationSocketListener(this, client);
    clientSocket.addListener(listener);
} catch (Exception e) {
    e.printStackTrace();
}
When a connection is established, it creates a listener for events on that socket -
class ClientCommunicationSocketListener implements ReliableSocketListener {
    ServerConnectionListener connectionListener;
    LessurConnectedClient client;

    public ClientCommunicationSocketListener(ServerConnectionListener connectionListener, LessurConnectedClient client) {
        this.connectionListener = connectionListener;
        this.client = client;
    }

    @Override
    public void packetReceivedInOrder() {
        connectionListener.server.handlePacket(client);
    }

    @Override
    public void packetReceivedOutOfOrder() {
        connectionListener.server.handlePacket(client);
    }
}
When a packet is received, it passes it to server.handlePacket, which performs a debug routine of printing "Packet Received!".
My client connects to the server like so -
LessurClient client = new LessurClient();
InetSocketAddress a = (InetSocketAddress) server.getSocket().getLocalSocketAddress();
Logger.getLogger("client-connector").info("Trying to connect to server " +
        a.getAddress().toString() + ":" +
        a.getPort());
client.connect(a.getAddress(), a.getPort());

// LessurClient.connect
public void connect(InetAddress address, int port) {
    try {
        socket = new ReliableSocket(address, port, InetAddress.getLocalHost(), 1235);
        isConnected = true;
        Logger.getLogger("LessurClient").info("Connected to server " + address.getHostAddress() + ":" + port);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I have wired up my code so that when I press the 'Z' key, it sends a packet to the server like so -
public void sendPacket(GamePacket packet) {
    if (!isConnected) {
        Logger.getLogger("LessurClient").severe("Can't send packet. Client is not connected to any server.");
        return;
    }
    try {
        OutputStream o = socket.getOutputStream();
        o.write(packet.getData());
        o.flush();
        Logger.getLogger("LessurClient").info("Sending Packet with data \"" + packet.getData() + "\" to server " +
                socket.getInetAddress().toString() + ":" + socket.getPort());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
My problem is that after sending 32 packets, the server no longer receives packets, and after sending 64 packets, it crashes. I have investigated the code, and it appears to be something associated with packets not being removed from the receive queue: when I changed the _recvQueueSize variable in ReliableSocket.java:1815 from 32 to 40, I could send 40 packets before something went wrong.
Could someone help me identify this issue? I've been looking at the code all day.
I managed to fix the problem.
You see, since this is an implementation of RUDP, it extends most of the Socket classes. Specifically, ReliableSocket.getInputStream() is custom coded to return a managed input stream. My problem was that I was receiving the packets but never reading from that stream.
When you receive a packet you're supposed to read it from the stream; otherwise the packet is never dropped from the receive queue.
So all I had to do was, every time I received a packet, read that packet's bytes from the input stream and continue.
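A sketch of that fix applied to the listener above (the buffer size and the getSocket() accessor are assumptions; the essential part is that every notification is followed by a read that drains the socket's input stream):

@Override
public void packetReceivedInOrder() {
    try {
        // Reading the packet's bytes is what frees its slot in the receive queue.
        InputStream in = client.getSocket().getInputStream(); // hypothetical accessor on LessurConnectedClient
        byte[] buffer = new byte[1024];                       // assumed maximum payload size
        int read = in.read(buffer);
        if (read > 0) {
            connectionListener.server.handlePacket(client);   // then hand off as before
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}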
