Socket won't send data after blocking queue .take() - java

Edit: I've found the problem (see answer below).
I'm trying to write an Android application that sends data to a running Termux instance via TCP.
In Termux, I have netcat listening for incoming TCP connections and printing data to stdout, using the command nc -l localhost 8080.
In my Android app, I have a thread that reads strings from a blocking queue and writes them out to a socket connected to the address netcat is listening on. The relevant code is the following:
runnable = () -> {
    Socket socket = null;
    OutputStream socketOutStream = null;
    while (running) {
        try {
            if (null == socket) {
                socket = new Socket();
                socket.setTcpNoDelay(true);
                socket.setKeepAlive(true);
                socket.connect(new InetSocketAddress("127.0.0.1", 8080), 2000);
                socketOutStream = socket.getOutputStream();
                socketOutStream.write("Hello, Server!".getBytes()); // [1] Works!!
                socketOutStream.flush();
            }
            String message = queue.take();
            socketOutStream.write(message.getBytes()); // [2] Doesn't work!!?
            socketOutStream.flush();
            socketOutStream.close();
            Log.i(TAG, "We wrote 'Button Clicked!' to the socket I think.");
            // running = false;
        } catch (IOException e) {
            Log.e(TAG, e.toString());
            socket = null;
        } catch (InterruptedException e) {
            Log.e(TAG, e.toString());
            Log.i(TAG, "Exiting socket sending loop.");
            running = false;
        }
    }
};
Thread thread = new Thread(runnable);
thread.start();
I'm trying to figure out why the initial data sent to netcat at the line marked [1] is actually received and displayed inside Termux, but any subsequent data sent at [2] is not. The incoming strings fetched from the queue are certainly not empty.
Additionally, if I move the socket instantiation and connection logic so that it occurs only after queue.take() returns, I see a SocketTimeoutException via adb log output.
I would like to understand why it should make any difference whether either of these operations occurs before or after queue.take() returns.

I was eventually able to resolve the issue of the TCP socket hanging by attaching the socket to a foreground service, as opposed to the background service it was originally running in. It seems Android background services are liable to have their TCP communications delayed or buffered to economize battery usage.
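For anyone hitting the same behaviour, the shape of the fix is sketched below: a minimal foreground service (the class name, channel id, and notification text are mine, not from the original app) that calls startForeground() before starting the socket thread. On API 28+ the manifest also needs the FOREGROUND_SERVICE permission.
import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class SocketService extends Service {
    private static final String CHANNEL_ID = "socket_sender"; // hypothetical channel id

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // API 26+ requires a notification channel before startForeground()
        NotificationChannel channel = new NotificationChannel(
                CHANNEL_ID, "Socket sender", NotificationManager.IMPORTANCE_LOW);
        getSystemService(NotificationManager.class).createNotificationChannel(channel);

        Notification notification = new Notification.Builder(this, CHANNEL_ID)
                .setContentTitle("Sending to Termux")
                .setSmallIcon(android.R.drawable.stat_notify_sync)
                .build();

        // Promoting the service to the foreground is what exempts its
        // network I/O from background throttling
        startForeground(1, notification);

        // ... start the socket-writing thread from the question here ...
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}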

Related

Do I need to wait for my UDP socket to be connected before sending anything?

I have written a class which sends some messages to a UDP socket. I noticed that on my first try using the socket, it would time out. When I did a Wireshark capture, I kept seeing that the first packet was not being sent from my machine, which causes a timeout since the server needs to receive two messages before sending a status back. Here is my code.
DatagramSocket socketN = null;
try {
    socketN = new DatagramSocket();
    DatagramPacket connect = new DatagramPacket(connectMsg, connectMsg.length, ipAddr, port);
    socketN.send(connect);
    socketN.setSoTimeout(5000);
    DatagramPacket start = new DatagramPacket(startMsg, startMsg.length, ipAddr, port);
    byte[] statusBuf = new byte[1024]; // buffer for the server's status reply
    DatagramPacket status = new DatagramPacket(statusBuf, statusBuf.length);
    socketN.send(start);
    socketN.receive(status);
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (socketN != null) {
        socketN.close();
    }
}
Based on the code and my Wireshark capture, I can see the start message being sent from my PC but not the connect message. This block of code runs a number of times, and on subsequent runs the timeout does not occur.

TCP client connects even if server doesn't accept it

I have a TCP server-client application. It works, but sometimes something happens: the client connects to the server, but the server says it hasn't accepted it.
Server side code:
while (!stopped) {
    try {
        AcceptClient();
    } catch (SocketTimeoutException ex) {
        continue;
    } catch (IOException ex) {
        System.err.println("AppServer: Client cannot be accepted.\n" + ex.getMessage() + "\n");
        break;
    }
...

private void AcceptClient() throws IOException {
    clientSocket = serverSocket.accept();
    clientSocket.setSoTimeout(200);
    out = new ObjectOutputStream(clientSocket.getOutputStream());
    in = new ObjectInputStream(clientSocket.getInputStream());
    System.out.println("Accepted connection from " + clientSocket.getInetAddress());
}
Client side code:
try {
    socket = new Socket(IPAddress, serverPort);
    socket.setSoTimeout(5000);
    out = new ObjectOutputStream(socket.getOutputStream());
    in = new ObjectInputStream(socket.getInputStream());
} catch (IOException e1) {
    sendSystemMessage("DISCONNECTED");
    sendSystemMessage(e1.getMessage());
    return;
}
sendSystemMessage("CONNECTED");
If the client connects, the message "Accepted connection from ..." appears, but sometimes it doesn't appear even though the client sends the message "CONNECTED".
The server keeps running the loop, trying to accept a client and catching SocketTimeoutException, while the client is connected, has sent its message, and is waiting for a response.
I suspect a missing flush() inside your client's sendSystemMessage().
Unfortunately, the constructor of ObjectInputStream attempts to read a header from the underlying stream (which is not very intuitive, IMHO). So if the client fails to flush the data, the server may remain stuck on the line "in = new ObjectInputStream(socket.getInputStream())"...
As a side note, it's usually better for a server to launch a thread per incoming client, but that's just a side remark (plus it obviously depends on requirements); a rough sketch of that pattern follows.
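A minimal thread-per-client accept loop might look like this (a sketch assuming the same serverSocket and stopped flag as above; not the poster's code):
while (!stopped) {
    final Socket client;
    try {
        client = serverSocket.accept();
    } catch (IOException e) {
        break; // server socket closed or failed
    }
    // one worker thread per accepted client
    new Thread(() -> {
        try {
            ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream());
            out.flush(); // push the stream header before the client builds its ObjectInputStream
            ObjectInputStream in = new ObjectInputStream(client.getInputStream());
            // ... per-client protocol handling ...
        } catch (IOException e) {
            System.err.println("Client error: " + e);
        } finally {
            try { client.close(); } catch (IOException ignored) {}
        }
    }).start();
}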
I found the problem. The communication on my network is too slow, so it times out while getting the input stream. The solution has two parts: flush the output stream before getting the input stream, and set the socket timeout only after the streams are initialized.
Server side:
clientSocket = serverSocket.accept();
out = new ObjectOutputStream(clientSocket.getOutputStream());
out.flush();
in = new ObjectInputStream(clientSocket.getInputStream());
clientSocket.setSoTimeout(200);
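The client side would presumably need the mirror-image ordering (a sketch based on the client snippet above, not the poster's final code):
socket = new Socket(IPAddress, serverPort);
out = new ObjectOutputStream(socket.getOutputStream());
out.flush(); // push the ObjectOutputStream header so the server's ObjectInputStream constructor can read it
in = new ObjectInputStream(socket.getInputStream());
socket.setSoTimeout(5000); // arm the read timeout only once both streams are up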

Java socket read timeout exceptions

I'm trying to implement user-disconnection detection on the server side using a read timeout.
This is part of my code:
try {
    socket.setSoTimeout(3000);
    in = new DataInputStream(socket.getInputStream());
    out = new DataOutputStream(socket.getOutputStream());
    usr = new User(in.readUTF());
    usr.connectUser();
    int i = 0;
    while (true) {
        try {
            i = in.readInt();
        } catch (SocketTimeoutException e) {
            System.out.println("Timeout");
            // user connected, no data received
        } catch (EOFException e) {
            System.out.println("Disconnected");
            // user disconnected
        }
    }
} catch (Exception e) {
    // other exceptions
}
The code works fine except for the "user disconnected" issue.
I want to catch the timeout exception and just continue waiting for data, but only if the client is still connected.
Why do I never get any exception other than SocketTimeoutException?
Shouldn't I get an IOException when in.readInt() can't use the socket because the client disconnected?
Is there any other simple way to detect user disconnection? I mean an unwanted disconnection, like the user's wifi suddenly shutting down, etc.
Thanks,
Lioz.
If the client didn't write anything within the timeout period, you get a SocketTimeoutException. If he disconnected instead of writing anything, you get an EOFException. Catch them separately. If you didn't get an EOFException, he didn't disconnect.
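Applied to the loop in the question, that separation might look like the following sketch (the usr bookkeeping is omitted, and the IOException case is an assumption about how abortive resets could be treated):
while (true) {
    try {
        i = in.readInt();
        // ... handle the received data ...
    } catch (SocketTimeoutException e) {
        // no data within 3000 ms, but the client may still be connected: keep waiting
    } catch (EOFException e) {
        // orderly disconnect: the peer closed the connection
        System.out.println("Disconnected");
        break;
    } catch (IOException e) {
        // abortive disconnect, e.g. connection reset after a sudden wifi drop
        System.out.println("Connection error: " + e);
        break;
    }
}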

Handling socket.close() gracefully

In my server, which runs on an Android device, if the number of clients exceeds a specific limit, the server closes the socket. But in my client (another Android device) I get a force close. How can I handle it gracefully?
Here is the connect part of my client:
serverIpAddress = serverIp.getText().toString();
if (!serverIpAddress.equals("")) {
    try {
        InetAddress serverAddr = InetAddress.getByName(serverIpAddress);
        SocketAddress sockaddr = new InetSocketAddress(serverAddr, 5000);
        nsocket = new Socket();
        nsocket.connect(sockaddr);
    } catch (Exception e) {
        Log.i("Connect", "Connection Error");
    }
    if (nsocket.isConnected()) {
        score.setText("Your score is " + sc);
        serverIp.setVisibility(View.GONE);
        connectPhones.setVisibility(View.GONE);
        enterIP.setVisibility(View.GONE);
        Log.i("Connect", "Socket created, streams assigned");
        Log.i("Connect", "Waiting for initial data... " + nsocket.isConnected());
        receiveMsg();
    }
}
Keep checking whether the socket connection is still open using isClosed() within a loop; when the server closes its connection, isClosed() becomes true, and you can then display a message or toast giving the user your desired reason.
Sounds like whatever you are using to read the socket is a blocking read, which throws an exception when the socket closes while it is stuck at that read. Make sure that read is in a try block, and use the catch/finally to gracefully exit whatever you are doing at that moment.
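For instance, the read loop inside receiveMsg() could be structured along these lines (a sketch assuming a line-oriented protocol; receiveMsg() itself isn't shown in the question):
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(nsocket.getInputStream()));
    String line;
    // readLine() returns null when the server closes the connection cleanly
    while ((line = reader.readLine()) != null) {
        // ... handle the message ...
    }
    Log.i("Connect", "Server closed the connection");
} catch (IOException e) {
    // socket closed or reset mid-read: recover instead of crashing
    Log.i("Connect", "Connection lost: " + e);
} finally {
    try { nsocket.close(); } catch (IOException ignored) {}
    // ... then notify the user, e.g. via a toast posted to the main thread ...
}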

Listening for TCP and UDP requests on the same port

I am writing a client/server set of programs.
Depending on the operation requested by the client, I make a TCP or UDP request.
Implementing the client side is straightforward, since I can easily open a connection with either protocol and send the request to the server side.
On the server side, on the other hand, I would like to listen for both UDP and TCP connections on the same port. Moreover, I would like the server to open a new thread for each connection request.
I have adopted the approach explained in: link text
I have extended this code sample by creating new threads for each TCP/UDP request.
This works correctly if I use TCP only, but it fails when I attempt to make UDP bindings.
Please give me any suggestion as to how I can correct this.
tnx
Here is the server code:
public class Server {
    public static void main(String args[]) {
        try {
            int port = 4444;
            if (args.length > 0)
                port = Integer.parseInt(args[0]);
            SocketAddress localport = new InetSocketAddress(port);
            // Create and bind a TCP channel to listen for connections on.
            ServerSocketChannel tcpserver = ServerSocketChannel.open();
            tcpserver.socket().bind(localport);
            // Also create and bind a DatagramChannel to listen on.
            DatagramChannel udpserver = DatagramChannel.open();
            udpserver.socket().bind(localport);
            // Specify non-blocking mode for both channels, since our
            // Selector object will be doing the blocking for us.
            tcpserver.configureBlocking(false);
            udpserver.configureBlocking(false);
            // The Selector object is what allows us to block while waiting
            // for activity on either of the two channels.
            Selector selector = Selector.open();
            tcpserver.register(selector, SelectionKey.OP_ACCEPT);
            udpserver.register(selector, SelectionKey.OP_READ);
            System.out.println("Server started on port: " + port + "!");
            // Load map
            Utils.LoadMap("mapa");
            System.out.println("Server map ... LOADED!");
            // Now loop forever, processing client connections
            while (true) {
                try {
                    selector.select();
                    Set<SelectionKey> keys = selector.selectedKeys();
                    // Iterate through the Set of keys.
                    for (Iterator<SelectionKey> i = keys.iterator(); i.hasNext();) {
                        SelectionKey key = i.next();
                        i.remove();
                        Channel c = key.channel();
                        if (key.isAcceptable() && c == tcpserver) {
                            new TCPThread(tcpserver.accept().socket()).start();
                        } else if (key.isReadable() && c == udpserver) {
                            new UDPThread(udpserver.socket()).start();
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.err.println(e);
            System.exit(1);
        }
    }
}
The UDPThread code:
public class UDPThread extends Thread {
    private DatagramSocket socket = null;

    public UDPThread(DatagramSocket socket) {
        super("UDPThread");
        this.socket = socket;
    }

    @Override
    public void run() {
        byte[] buffer = new byte[2048];
        try {
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);
            String inputLine = new String(buffer);
            String outputLine = Utils.processCommand(inputLine.trim());
            DatagramPacket reply = new DatagramPacket(outputLine.getBytes(), outputLine.getBytes().length,
                    packet.getAddress(), packet.getPort());
            socket.send(reply);
        } catch (IOException e) {
            e.printStackTrace();
        }
        socket.close();
    }
}
I receive:
Exception in thread "UDPThread" java.nio.channels.IllegalBlockingModeException
at sun.nio.ch.DatagramSocketAdaptor.receive(Unknown Source)
at server.UDPThread.run(UDPThread.java:25)
10x
It should work. One of the problems with this code, it seems, is that the ByteBuffer size is set to 0, meaning that the datagram is discarded (as is mentioned in the comments). If you need to receive any information over UDP and you are on a reliable network, you can set the size quite big and receive big datagrams made up of multiple packets. Otherwise, on an unreliable network, set it to the MTU size. Make sure you flip() the ByteBuffer after receiving anything into it.
Also, creating a new thread for each request is a bad idea: create a 'session' thread for each distinct IP you receive, held in a HashMap or something, and then do a guarded block on the session object. Wake the thread sleeping on that object when you receive a message, after passing in the new information. The selector code you have is designed to avoid creating threads in this way.
Edit: based on the above code, you're creating a datagram channel and then using its socket to receive datagrams directly? That doesn't make sense. Use the channel methods only, after binding the channel. Also, don't do this in a separate thread. Your code isn't thread-safe and will bust itself up. Hand the received information off to the separate 'session' thread as mentioned earlier. The selector is designed to tell you which channels can be read from without blocking (although blocking is disabled anyway, so it will tell you which channels have data to be read).
AFAIK, you should be able to listen for both TCP connections and UDP messages on the same port. It would help if you posted your UDP code, and the exception + stacktrace that you are seeing.
You can't use DatagramSocket.receive() in non-blocking mode. You have to use the read() or receive() methods of your DatagramChannel directly.
In fact, as you are using non-blocking mode and a Selector, it is hard to see why you're using a UDPThread at all. Just call udpserver.receive() instead of starting the thread.
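In the server's select loop, that would mean replacing the UDPThread branch with something along these lines (a sketch; error handling omitted):
else if (key.isReadable() && c == udpserver) {
    ByteBuffer buf = ByteBuffer.allocate(2048);
    // DatagramChannel.receive() is legal in non-blocking mode; it returns
    // the sender's address, or null if no datagram was pending
    SocketAddress client = udpserver.receive(buf);
    if (client != null) {
        buf.flip(); // switch from writing into the buffer to reading from it
        String inputLine = new String(buf.array(), 0, buf.limit()).trim();
        String outputLine = Utils.processCommand(inputLine);
        udpserver.send(ByteBuffer.wrap(outputLine.getBytes()), client);
    }
}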
