Java Sockets: No buffer space available (maximum connections reached?)

I have a big problem. I have developed a client-server application. A client thread sends a serialized object to the server, and the server sends a serialized object back. Currently I'm using one server and 10 client threads, and after about 30 seconds each client thread gets the following error (an IOException):
No buffer space available (maximum connections reached?): connect
If I look at netstat, I see that a lot of connections are being created, the number keeps growing, and all of them are in the TIME_WAIT state.
I don't know why. I close the sockets on the server and on the clients every time in a finally block. Here is some code:
On the server, in socketHandlerThread, I have:
ServerSocket serverSocket = new ServerSocket(port);
serverSocket.setSoTimeout(5000);
while (true) {
    Socket socket = serverSocket.accept();
}
The new socket is then put on a LinkedBlockingQueue, and a worker thread takes the socket and does the following:
try {
    outputStream = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
    outputStream.flush();
    inStream = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
    ClientRequest clientRequest = (ClientRequest) inStream.readObject();
    ...
    outputStream.writeObject(serverResponse);
    outputStream.flush();
} catch ... {
    ...
} finally {
    if (inStream != null) {
        inStream.close();
    }
    if (outputStream != null) {
        outputStream.close();
    }
    if (socket != null) {
        socket.close();
    }
}
On the client side I have the following code:
try {
    socket = new Socket(host, port);
    outputStream = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
    outputStream.flush();
    inputStream = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
    outputStream.writeObject(request);
    outputStream.flush();
    Object serverResponse = inputStream.readObject();
} catch ... {
    ...
} finally {
    if (inputStream != null) {
        inputStream.close();
    }
    if (outputStream != null) {
        outputStream.close();
    }
    if (socket != null) {
        socket.close();
    }
}
Can somebody help? I really don't know what mistake I made. It seems that the sockets don't get closed, but I don't know why.
Could the problem be that I put the sockets on a queue on the server side, so that the socket is somehow copied?
Edit: If I put the client and the server on two different Amazon EC2 classic instances running the Linux AMI, then it works. Could it be a problem with Windows, or is the problem simply that I was running the clients and the server on the same machine (my local PC)?
Does somebody see a bug in my code?
Edit 2: As said above, it works on EC2 instances, but netstat still shows a lot of lines in TIME_WAIT.
Here are screenshots:
https://drive.google.com/file/d/0BzERdJrwWrNCWjhReGhpR2FBMUU/view?usp=sharing
https://drive.google.com/file/d/0BzERdJrwWrNCOG1TWGo5YmxlaTg/view?usp=sharing
The first screenshot is from Windows. "WARTEND" means "WAITING" (it is German).
The second screenshot is from Amazon EC2 (on the left the client machine, on the right the server machine).

TIME-WAIT is entered after the connection is closed at both ends. It lasts for a couple of minutes, for data integrity reasons.
If the buffer problem is due to TIME-WAIT states at the server, the solution is to make the server be the peer that first receives the close. That will shift the TIME-WAIT state to the client, where it is benign.
You can do that by putting your server-side request handling into a loop, so that it can handle multiple requests per connection, and so that the server only closes the socket when it reaches end of stream on it.
for (;;)
{
    try
    {
        ClientRequest clientRequest = (ClientRequest) inStream.readObject();
        ...
        outputStream.writeObject(serverResponse);
        outputStream.flush();
    }
    catch (EOFException exc)
    {
        break;
    }
}
If you then implement connection-pooling at the client, you will massively reduce the number of connections, which will further reduce the incidence of the buffer problem.
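As a rough illustration of the client side of that scheme, here is a minimal sketch (not the poster's actual code; Request, Response and the requests list are placeholder names) in which one connection carries many requests and is closed only once:

static void sendAll(String host, int port, List<Request> requests) throws Exception {
    try (Socket socket = new Socket(host, port)) {
        ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
        out.flush();   // push the ObjectOutputStream header so the server can construct its ObjectInputStream
        ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
        for (Request request : requests) {            // many requests over one connection
            out.writeObject(request);
            out.flush();
            Response response = (Response) in.readObject();
            // ... handle response ...
        }
    }   // one close per batch: the server reads EOF, closes its end, and TIME_WAIT stays on the client, where it is benign
}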

Related

TCP: client connects even if the server doesn't accept it

I have a TCP server-client application. It works, but sometimes something happens: the client connects to the server, but the server says it hasn't accepted it.
Server side code:
while (!stopped) {
    try {
        AcceptClient();
    } catch (SocketTimeoutException ex) {
        continue;
    } catch (IOException ex) {
        System.err.println("AppServer: Client cannot be accepted.\n" + ex.getMessage() + "\n");
        break;
    }
    ...

private void AcceptClient() throws IOException {
    clientSocket = serverSocket.accept();
    clientSocket.setSoTimeout(200);
    out = new ObjectOutputStream(clientSocket.getOutputStream());
    in = new ObjectInputStream(clientSocket.getInputStream());
    System.out.println("Accepted connection from " + clientSocket.getInetAddress());
}
Client side code:
try {
    socket = new Socket(IPAddress, serverPort);
    socket.setSoTimeout(5000);
    out = new ObjectOutputStream(socket.getOutputStream());
    in = new ObjectInputStream(socket.getInputStream());
} catch (IOException e1) {
    sendSystemMessage("DISCONNECTED");
    sendSystemMessage(e1.getMessage());
    return;
}
sendSystemMessage("CONNECTED");
If a client connects, the message "Accepted connection from ..." appears. But sometimes it doesn't appear, even though the client reports "CONNECTED".
The server keeps running the loop, trying to accept a client and catching SocketTimeoutException. The client is connected, sends a message and waits for a response.
I suspect a missing flush inside your client's sendSystemMessage().
Unfortunately the constructor of ObjectInputStream attempts to read a header from the underlying stream (which is not very intuitive, IMHO). So if the client fails to flush the data, the server may remain stuck on the line in = new ObjectInputStream(socket.getInputStream())...
As a side note it's usually better for a server to launch a thread per incoming client, but that's just a side remark (plus it obviously depends on requirements).
I found the problem. The communication on my network is too slow, so it times out while getting the input stream. The solution has two parts: flush the output stream before getting the input stream, and set the socket timeout only after the streams are initialized.
Server side:
clientSocket = serverSocket.accept();
out = new ObjectOutputStream(clientSocket.getOutputStream());
out.flush();
in = new ObjectInputStream(clientSocket.getInputStream());
clientSocket.setSoTimeout(200);
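The client side would mirror the same ordering (a sketch based on the client's field names above; the 5000 ms timeout is the one from the original client code):

socket = new Socket(IPAddress, serverPort);
out = new ObjectOutputStream(socket.getOutputStream());
out.flush();                                          // send the ObjectOutputStream header right away
in = new ObjectInputStream(socket.getInputStream());  // the server's header can now be read
socket.setSoTimeout(5000);                            // set the read timeout only after both streams exist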

My Java chat client only sends strings when the data stream is closed

I created a Java chat application (client and server).
Everything works fine on my LAN (using the LAN IP address of the server in my client).
But when I use the Internet address of my server in my client, the strings are sent only when I close the output data stream of my client (and then all the strings are sent at once).
Here's a quick snap of my code (I have port forwarding from 6791 to 6790 in the example below).
My server (thread):
// this line is actually in my global server class, used below with theServer
ServerSocket svrSocket = new ServerSocket(6790);
// wait for an incoming connection
connectionSocket = svrSocket.accept();
connectionSocket.setSoTimeout(10000);
// free the accepting port
svrSocket.close();
// create a new thread to accept future connections (creates a new svrSocket)
theServer.openNewConnection();
// create input stream
BufferedReader inFromClient = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
boolean threadRunning = true;
while (threadRunning) {
    //System.out.println("thread: in the while");
    try {
        Thread.sleep(100);
        clientSentence = inFromClient.readLine();
        System.out.println(clientSentence);
    }
    catch ...
}
My client:
InetAddress dnsName;
Socket clientSocket;
PrintWriter out;

dnsName = InetAddress.getByName("myAddress.me");
clientSocket = new Socket(dnsName.getHostAddress(), 6791);
Thread.sleep(10);
out = new PrintWriter(clientSocket.getOutputStream(), true);

int i = 140;
while (i > 130) {
    try {
        out.println(Integer.toString(i));
        out.flush();
        Thread.sleep(200);
    }
    catch (Exception e) {
        e.printStackTrace();
    }
    i--;
}
out.flush();
out.close();
clientSocket.close();
I've tried with DataOutputStreams; it makes no difference.
My server only receives the strings when out.close() is called on the client side.
Is there a reason why, over the Internet, the data stream has to be closed for the data to be sent? Is there a way around this? Am I doing something wrong?

putting socket connection to dormant state while waiting for data from server

I have a client socket connected to the server socket; the server sends data to the client from time to time while it is connected. Currently my client uses a while loop to keep receiving data from the server even when the server is not sending anything.
My question is: is there a more efficient way to listen for input?
I am thinking of creating a thread for the socket connection, putting it to sleep when there is no incoming data, and sending it an interrupt when data comes in. Would that work? If I put the thread to sleep, would it break the socket connection?
I cannot modify the server socket, and it doesn't initiate a connection.
import java.io.*;
import java.net.Socket;

public class core_socket {
    public static void main(String[] args) {
        String host = "192.168.100.206";
        int port = 4025;
        try {
            Socket socket = new Socket(host, port);
            System.out.println("created socket\n");
            OutputStream os = socket.getOutputStream();
            boolean autoflush = true;
            PrintWriter out = new PrintWriter(socket.getOutputStream(), autoflush);
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            // read the response
            boolean loop = true;
            StringBuilder sb = new StringBuilder(8096);
            while (loop) {
                if (in.ready()) {
                    int i = 0;
                    while (i != -1) {
                        i = in.read();
                        sb.append((char) i);
                    }
                    loop = false;
                }
            }
            // display the response on the console
            System.out.println(sb.toString());
            socket.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
See the multi-user chat application example at http://cs.lmu.edu/~ray/notes/javanetexamples/ - basically, you should consider spawning off a new worker thread for each incoming connection and then going back to listening for new incoming requests.
A long time ago I wrote one of the first application servers (say, in 1997, when most people didn't know what an app server was) - it was deployed at one of the largest higher-ed institutions and processed a couple of million requests per day during peak times - that's not the institution in the link, by the way. The reason I mention this is that multi-threading gives you tremendous scalability with very little effort - even if scalability is not what you are looking for, the worker-thread model is still good practice.
Maybe what you want is to use asynchronous sockets. Basically, another thread is spun off whose job is to listen for any data on the socket. Once data does come in, a "callback" method is called, which can then begin to process your data.
I've never done sockets in Java before though, just C#, so I'm not sure how it compares, but the concept should remain the same.
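A minimal sketch of that idea in Java (handleLine() is a hypothetical callback, not something from the question): the dedicated thread blocks inside readLine() and consumes essentially no CPU until data arrives.

Thread reader = new Thread(() -> {
    try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
        String line;
        while ((line = in.readLine()) != null) {   // blocks here; no busy-waiting
            handleLine(line);                      // placeholder for your processing
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
});
reader.start();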

android: TCP Connection performance

I have a PC server and an Android client; my Android client starts a socket connection to the server.
While connected, the Android client also receives data from the server.
Here is my code:
Socket socket = null;
DataOutputStream out = null;
DataInputStream in = null;
InputStream inputStream = null;
OutputStream outputStream = null;
...

public void connectToTCP()
{
    try
    {
        socket = new Socket(HOST_ADDRESS, PORT);
        socket.setSoTimeout(30000);
        outputStream = socket.getOutputStream();
        out = new DataOutputStream(outputStream);
        inputStream = socket.getInputStream();
        in = new DataInputStream(inputStream);
        Log.e("TCP-", "Connected");
        while (socket.isConnected()) { readBytes(); }
    }
    catch (UnknownHostException e)
    {
        Log.e("Error in tcp connection", "Unknown Host");
    }
    catch (IOException e)
    {
        Log.e("Error in tcp connection", "Couldn't get I/O for the connection");
    }
}

public void readBytes() throws IOException
{
    if (in.available() > 0)
    {
        byte[] buffer = new byte[in.available()];
        if (buffer.length > 0)
        {
            if (mListener != null)
            {
                int numberOfBytes = in.read(buffer);
                mListener.tcpConnectionDataReceived(buffer, numberOfBytes);
            }
        }
    }
}
But my problem is performance. I tested the code on the device and noticed (from the task manager) that the app consumes a lot of resources (CPU usage is more than 50%), but when I stop reading from the socket by deleting the while loop while (socket.isConnected()){readBytes();}, the CPU usage drops below 1%.
Any ideas how to solve this?
Your readBytes() method will return immediately if no data is available. Since it's in a tight loop, you're essentially checking continuously whether something is available, wasting a lot of CPU.
With the code you show, you would be better off doing a plain blocking read (i.e. remove the available() check altogether and use a reasonable, fixed-size buffer).
You should sleep between calls to readBytes() - you have basically created an endless loop whenever no data is available and in.available() > 0 is therefore false.
Or, if this runs in its own background thread, just do blocking reads when you know that more data is expected.
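A minimal sketch of that blocking-read approach, reusing the names from the question (the 4096-byte buffer size is an arbitrary choice):

byte[] buffer = new byte[4096];
int numberOfBytes;
while ((numberOfBytes = in.read(buffer)) != -1) {   // read() blocks until data arrives; -1 means end of stream
    if (mListener != null) {
        mListener.tcpConnectionDataReceived(buffer, numberOfBytes);
    }
}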

buffered reader not receiving data from socket

I am writing a client application that will receive a continuous flow of data through TCP/IP. The problem I'm having is that the BufferedReader object isn't receiving any data and hangs at the readLine() method.
The way the server works is that you connect to it and then send authentication information in order to receive data. The gist of my code is below:
socket = new Socket(strHost, port);
authenticate();
inStream = new BufferedReader(new InputStreamReader(socket.getInputStream()));
process(inStream);

authenticate()
{
    PrintWriter pwriter = new PrintWriter(socket.getOutputStream(), true);
    pwriter.println(authString);
}

process(BufferedReader bufferedReader)
{
    while ((line = bufferedReader.readLine()) != null)
        dostuff
}
I created a sample server application that sends data the way (I think) the real server sends data, and my client connects to it and receives and processes the data fine. I can also connect to the real server fine in my application, and I can telnet to it, write the authentication string, and receive a flood of data. However, my application just hangs at readLine() with the real server, and I'm out of ideas why.
The data coming in (through telnet at least) looks like a continuous stream of the following:
data;data;data;data;data
data;data;data;data;data
Why is my app hanging at readLine()? Am I not outputting the authentication line correctly? I'm not receiving any errors...
EDIT
My sample server code (which is working correctly) is below... again, this is only mimicking the way I think the real server runs, but I can connect to both in my application; I just don't receive data from the real server.
public static void main(String[] args) throws IOException
{
    ServerSocket serverSocket = null;
    try
    {
        serverSocket = new ServerSocket(1987);
    }
    catch (IOException e)
    {
        System.out.println("Couldn't listen on port: 1987");
        System.exit(-1);
    }
    Socket clientSocket = null;
    try
    {
        clientSocket = serverSocket.accept();
    }
    catch (IOException e) {
        System.out.println("Accept failed: 1987");
        System.exit(-1);
    }
    PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
    BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
    String something;
    while ((something = in.readLine()) != null)
    {
        while (true)
        {
            out.println(message);
        }
    }
    out.close();
    in.close();
    clientSocket.close();
    serverSocket.close();
}
Firstly, you should call BufferedReader.ready() before calling readLine(), as ready() will tell you whether it's OK to read.
PrintWriter doesn't throw IOException, so the write may have failed without your knowledge, which would explain why there is nothing to read. Use PrintWriter.checkError() to see whether anything has gone wrong during the write.
You ought to set up both the input and output streams on the Socket before you write anything down the pipe. If your reader is not ready when the other end tries to write, you will get a broken pipe on the server and it won't send any more data. Telnet sets up read and write before anything is written or read.
You can use Wireshark to tell whether the server is actually sending data.
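Putting those points together, the client setup might look like this (a sketch reusing the question's names; the exact error handling is an assumption):

socket = new Socket(strHost, port);
// set up both streams before writing anything down the pipe
inStream = new BufferedReader(new InputStreamReader(socket.getInputStream()));
PrintWriter pwriter = new PrintWriter(socket.getOutputStream(), true);   // autoflush on println
pwriter.println(authString);
if (pwriter.checkError()) {   // PrintWriter swallows IOExceptions, so check explicitly
    throw new IOException("failed to send the authentication string");
}
process(inStream);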
BufferedReader.readLine() reads lines, i.e. sequences of characters terminated by \n, \r, or \r\n. I guess that your server writes its output as one single line with no terminator. Your telnet output supports this assumption. Just use PrintWriter.println() on the server side.
This works for me, with a socket and without flush:
void start_listen()
{
    String result1 = "";
    char[] incoming = new char[1024];
    // s is the Socket and input is a Reader on its input stream (fields assumed from context)
    while (!s.isClosed())
    {
        try {
            int length = input.read(incoming);   // blocking read
            result1 = String.copyValueOf(incoming, 0, length);
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
        Log.d("ddddddddddd", result1);
    }
}
