I run the following client-server code on a Linux box.
Server Code
import java.net.ServerSocket;

public class MyServer {
    public static void main(String[] args) {
        try {
            // Backlog of 5: at most 5 completed connections may queue up,
            // since this server never calls accept().
            ServerSocket server = new ServerSocket(6868, 5);
            while (true) {
                Thread.sleep(5000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Client Code
import java.io.IOException;
import java.net.Socket;
import java.net.UnknownHostException;

public class MyClient {
    public static void main(String[] args) throws UnknownHostException, IOException {
        Socket[] clients = new Socket[10]; // was 8, which overflows at i = 8
        for (int i = 0; i < 6; i++) {
            clients[i] = new Socket("175.27.3.4", 6868);
            System.out.printf("Client %2d: %s%n", i, clients[i]);
            //clients[i].close();
        }
        for (int i = 6; i < 10; i++) {
            clients[i] = new Socket("175.27.3.4", 6868);
            System.out.printf("Client %2d: %s%n", i, clients[i]);
            //clients[i].close();
        }
    }
}
Note:
I have deliberately not "accepted" the client requests, in order to test the ServerSocket backlog argument.
The two loops in the client code are deliberate, to help debug the code.
The server box runs CentOS.
The client box runs CentOS (a different physical server).
The client sockets are deliberately not closed.
After executing the server code and later the client code, I observe the following:
On the server, using netstat, I see 6 connections in the ESTABLISHED state.
On the server, using netstat, I see 4 connections in the SYN_RECV state (the backlog queue is full).
On the client machine, I see all 10 connections in the ESTABLISHED state. Is this normal?
In Wireshark on the server side (tcpdump), for the first 6 connections I see a proper 3-way handshake for connection establishment (SYN, SYN-ACK, ACK).
For the next 4 connections, in Wireshark on the server side, I see SYN (sometimes multiple), SYN-ACK, ACK, then many duplicate SYN-ACKs.
Questions:
Are all the above observations normal?
Are the observations system (OS) dependent?
Bigger Picture
Please refer here or to ServerFault for a detailed explanation.
I am trying to simulate the production issue on a smaller scale with the above code.
In the production code (which I did not write, but which I maintain) the server socket has a backlog argument of 100.
I suspect the backlog queue is full and hence newer connections are dropped.
I see plenty of SYN requests but not many SYN-ACKs in the production Wireshark capture, which is an hour long. Does that ring a bell, and can we draw a parallel with the toy code above?
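For contrast, here is a minimal sketch (mine, not part of the original test) of the same server with accept() added. Each accept() removes one completed connection from the backlog queue, so with it in place I would expect all 10 clients to reach the ESTABLISHED state on the server as well:
import java.net.ServerSocket;
import java.net.Socket;

public class MyAcceptingServer {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(6868, 5);
        int count = 0;
        while (true) {
            Socket s = server.accept(); // drains one entry from the backlog queue
            System.out.println("Accepted " + (++count) + ": " + s);
        }
    }
}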
Related
I've learned how to stream data over a network connection in Java using ServerSocket and Socket, such as:
Client.java:
Socket socket = new Socket(address, port);
int i;
while ((i = System.in.read()) != -1)
    socket.getOutputStream().write(i);
Server.java:
ServerSocket server = new ServerSocket(port);
Socket socket = server.accept();
int i;
while ((i = socket.getInputStream().read()) != -1)
    System.out.println(i);
This would simply have Client blocking on System.in.read() at one end, and Server blocking on socket.getInputStream().read() at the other, and the bytes get passed when ENTER is pressed in the Client program.
How would I accomplish something similar within a single program, without using Sockets? For example, if I had Thread A waiting on keyboard input which is then streamed to Thread B which is able to "consume" the bytes at an arbitrary time in the future, just as Server (above) is able to consume bytes from socket.getInputStream() at some arbitrary time?
Is PipedInput/OutputStream the right solution for this, or ByteArrayInput/OutputStream, or something else? Or am I overthinking it?
Yes, you can use PipedInputStream/PipedOutputStream for "streaming" data "locally" in your JVM. You create one PipedInputStream and one PipedOutputStream instance, connect them with the connect() method and start sending/receiving bytes. Check the following example:
PipedInputStream pipedIn = new PipedInputStream();
PipedOutputStream pipedOut = new PipedOutputStream();
pipedIn.connect(pipedOut);

Thread keyboardReadingThread = new Thread() {
    @Override
    public void run() {
        System.out.println("Enter some data:");
        Scanner s = new Scanner(System.in);
        String line = s.nextLine();
        System.out.println("Entered line: " + line);
        byte[] bytes = line.getBytes(StandardCharsets.UTF_8);
        try {
            pipedOut.write(bytes);
            pipedOut.flush();
            pipedOut.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("Keyboard reading thread terminated");
    }
};
keyboardReadingThread.start();

Thread streamReadingThread = new Thread() {
    @Override
    public void run() {
        try {
            int bytesRead = 0;
            byte[] targetBytes = new byte[100];
            System.out.println("Read data from the PipedInputStream instance");
            while ((bytesRead = pipedIn.read(targetBytes)) != -1) {
                System.out.println("read " + bytesRead + " bytes");
                String s = new String(targetBytes, 0, bytesRead, StandardCharsets.UTF_8);
                System.out.println("Received string: " + s);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("Streaming reading thread terminated");
    }
};
streamReadingThread.start();

keyboardReadingThread.join();
streamReadingThread.join();
First the two piped stream instances are connected. After that, two threads are started: one reads from the keyboard, the other reads from the PipedInputStream instance. When you run the application you will get output similar to this (with "Some example input for testing" as the keyboard input):
Enter some data:
Read data from the PipedInputStream instance
Some example input for testing
Entered line: Some example input for testing
Keyboard reading thread terminated
read 30 bytes
Received string: Some example input for testing
Streaming reading thread terminated
Also notice that the threads are not synchronized in any way, so the System.out.println() statements might get executed in a different order.
This is mostly an extension of the answer @VGR gave in the comments.
If the entirety of your "Network" exists within a single JVM, then you don't need anything like sockets at all - you can just use objects and methods.
The entire point of Sockets was to allow the JVM to perform actions outside of itself (typically with another JVM somewhere in the outside world).
So unless you are trying to interact with objects outside of your current JVM, it is as simple as this.
public class ClientServerExample
{
    public static void main(String[] args)
    {
        Server server = new Server();
        Client client = new Client();
        client.sendMessage("Hello Server", server);
    }

    static class Server
    {
        String respond(String input)
        {
            String output = "";
            System.out.println("Server received the following message -- {" + input + "}");
            //do something
            return output;
        }
    }

    static class Client
    {
        void sendMessage(String message, Server server)
        {
            System.out.println("Client is about to send the following message to the server -- {" + message + "}");
            String response = server.respond(message);
            System.out.println("Client received the following response from the server -- {" + response + "}");
            //maybe do stuff with the response
        }
    }
}
Here is the result from running it.
Client is about to send the following message to the server -- {Hello Server}
Server received the following message -- {Hello Server}
Client received the following response from the server -- {}
Note that the server doesn't return anything because I didn't do anything in the server. Replace that comment with some code of your own and you will see the results.
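For example, a trivial respond() body (a hypothetical filler, not from the original) could echo a greeting back:
String respond(String input)
{
    System.out.println("Server received the following message -- {" + input + "}");
    return "Hello Client, I got your message: " + input;
}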
EDIT - to better explain a real-world example, where a server will respond to requests in FIFO order, here is a modified version of the above example.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ClientServerExample
{
    public static void main(String[] args)
    {
        System.out.println("===========STARTING SYNCHRONOUS COMMUNICATION============");
        synchronousCommunication();
        System.out.println("===========FINISHED SYNCHRONOUS COMMUNICATION============");
        System.out.println("===========STARTING ASYNCHRONOUS COMMUNICATION============");
        asynchronousCommunication();
        System.out.println("===========FINISHED ASYNCHRONOUS COMMUNICATION============");
    }

    public static void synchronousCommunication()
    {
        Server server = new Server();
        Client client = new Client();

        String response = "";
        response = client.sendMessage("Good morning Server!", server).join();
        System.out.println("Client received the following response from the server -- {" + response + "}");
        response = client.sendMessage("Good evening Server!", server).join();
        System.out.println("Client received the following response from the server -- {" + response + "}");
    }

    public static void asynchronousCommunication()
    {
        Server server = new Server();
        Client client = new Client();

        List<CompletableFuture<String>> responses = new ArrayList<>();
        responses.add(client.sendMessage("Good morning Server!", server));
        responses.add(client.sendMessage("Good evening Server!", server));
        for (CompletableFuture<String> eachResponse : responses)
        {
            System.out.println("Client received the following response from the server -- {" + eachResponse.join() + "}");
        }
    }

    static class Server
    {
        CompletableFuture<String> respond(final String input)
        {
            System.out.println("Server received the following message -- {" + input + "}");
            return
                CompletableFuture.supplyAsync(
                    () ->
                    {
                        try
                        {
                            //sleep for 2 seconds, to represent arbitrary delay in receiver processing
                            Thread.sleep(2000);
                            return input.contains("morning") ? "Good morning to you too!" : "Good evening to you too!";
                        }
                        catch (Exception e)
                        {
                            throw new IllegalStateException("What happened?", e);
                        }
                    });
        }
    }

    static class Client
    {
        CompletableFuture<String> sendMessage(String message, Server server)
        {
            System.out.println("Client is about to send the following message to the server -- {" + message + "}");
            return server.respond(message);
        }
    }
}
Both of these examples are performing a FIFO approach to data processing. They receive the request, calculate a response, and then send back a CompletableFuture, which is basically an object that contains the response that will arrive once the server gets around to it, sort of like a Promise in JavaScript.
For the synchronous example, we see that a client message is sent and then processed before the next one is sent. As a result, we have a minor delay between the two (about 2 seconds per message).
For the asynchronous example, we see that both client messages are sent, and their CompletableFutures are put into a batch list, which is converted to normal strings once all requests have been sent.
The synchronous example takes around 4 seconds.
The asynchronous example takes around 2 seconds.
Both of these are different ways of performing FIFO in the way that you described. They are both examples where multiple clients send a request to the server, and the server finishes them when it gets around to it. The 2-second sleep is meant to represent the idea of "getting around to it". In reality, getting around to it usually means that the server has so much on its plate that it will take a long time before it has a chance to give a full response.
Let me know if you need another example to better help you understand.
I have a socket server set up with a remote client, and it is functional. Upon opening the client and logging in, I noticed that sometimes there is an error that seems to be due to the client reading an int when it shouldn't be.
Upon logging on, the server sends a series of messages/packets to the client; these are anything from string messages to information used to load variables on the client's side.
Occasionally, while logging in, an error gets thrown showing that the client has read a packet of size 0 or a very large size. Upon converting the large-sized number to ASCII, I once found that it was a bit of a string, "sk." (I located this string in my code, so it's not entirely random).
Looking at my code, I'm not sure why this is happening. Is it possible that the client is reading an int at the wrong time? If so, how can I fix this?
InetAddress address = InetAddress.getByName(host);
connection = new Socket(address, port);
in = new DataInputStream(connection.getInputStream());
out = new DataOutputStream(connection.getOutputStream());
String process;
System.out.println("Connecting to server on " + host + " port " + port + " at " + timestamp);
process = "Connection: " + host + "," + port + "," + timestamp + ". Version: " + version;
write(0, process);
out.flush();
while (true) {
    int len = in.readInt();
    if (len < 2 || len > 2000) {
        throw new Exception("Invalid Packet, length: " + len + ".");
    }
    byte[] data = new byte[len];
    in.readFully(data);
    for (byte b : data) {
        System.out.printf("0x%02X ", b);
    }
    try {
        reader.handlePackets(data);
    } catch (Exception e) {
        e.printStackTrace();
        //connection.close();
        //System.exit(0);
        //System.out.println("Exiting");
    }
}
Here is the code for my write function (server side):
public static void write(Client c, Packet pkt) {
    for (Client client : clients) {
        if (c.equals(client)) {
            try {
                out.writeInt(pkt.size());
                out.write(pkt.getBytes());
                out.flush();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}
So looking at the write function, I don't really see how it could confuse the client and make it read the size of the packet twice for one packet (at least, that's what I think is happening).
If you need more information please ask me.
The client side code looks fine, and the server side code looks fine too.
The most likely issue is that this is some kind of issue with multi-threading and (improper) synchronization. For example, maybe two server-side threads are trying to write a packet to the same client at the same time.
It is also possible that your Packet class has inconsistent implementations of size() and getBytes() ... or that one thread is modifying a Packet object while a second one is sending it.
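For illustration, here is a minimal sketch of one way to serialize the writes (names are hypothetical; out stands for a single client's DataOutputStream). Locking the stream keeps the length prefix and the payload together as one atomic unit, even when several server threads write to the same client:
import java.io.DataOutputStream;
import java.io.IOException;

final class PacketWriter {
    static void write(DataOutputStream out, byte[] payload) throws IOException {
        synchronized (out) {              // one writer at a time per client
            out.writeInt(payload.length); // length prefix
            out.write(payload);           // body
            out.flush();
        }
    }
}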
When I tested with both the client and server on localhost it worked. But then I split the client and server onto different machines with different IP addresses, and now the packets are not being received on the client side. Can anyone spot the problem in my code?
Client:
class Csimudp {
    public static DatagramSocket ds;
    public static byte buffer[] = new byte[1024];

    public static void Myclient() throws Exception {
        while (true) {
            DatagramPacket p = new DatagramPacket(buffer, buffer.length);
            ds.receive(p);
            System.out.println(new String(p.getData(), 0, p.getLength()));
        }
    }

    public static void main(String args[]) throws Exception {
        System.out.println("for quitting client press ctrl+c");
        ds = new DatagramSocket(777);
        Myclient();
    }
}
Server:
class Ssimudp {
    public static DatagramSocket ds;
    public static byte buffer[] = new byte[1024];

    public static void MyServer() throws Exception {
        int pos = 0;
        while (true) {
            int c = System.in.read();
            switch (c) {
                case '~':
                    System.out.println("\n Quits");
                    return;
                case '\r':
                    break;
                case '\n':
                    ds.send(new DatagramPacket(buffer, pos,
                            InetAddress.getByName("117.201.5.150"), 777));
                    pos = 0;
                    break;
                default:
                    buffer[pos++] = (byte) c;
            }
        }
    }

    public static void main(String args[]) throws Exception {
        System.out.println("server ready....\n please type here");
        ds = new DatagramSocket(888);
        MyServer();
    }
}
I would hazard a guess that your packets are being blocked by a firewall somewhere along the way. I'd start by opening the appropriate outgoing and incoming UDP ports in the firewalls of the client and the server, respectively.
Or your server might be sitting behind a NAT gateway and you need to set up port forwarding rules for it to receive any packets. For example, most ADSL routers are actually set up as a NAT gateway.
Another potential issue is your port selection:
You are binding your client to a specific local port. There is no need for that - let the OS select a free port on its own. This would also remove the possibility of trying to use a port that is already in use.
You are using ports in the [0-1023] range. This port range is generally reserved for well-known services - as a matter of fact, on most Unix-like systems (e.g. Linux) you cannot bind a listening port in that range without root privileges. As a result, quite a few ISPs will filter that port range in their firewall, supposedly to protect their users.
Without more information on the networks that connect the client to the server, it's quite hard to provide a more concrete answer.
PS: There is no need to recreate the InetAddress object in every iteration of your loop - do it once beforehand...
PS.2: In general the computer that sends the first packet in a UDP session is considered the client, since it's also the one that can exist without a fixed address. Your assignment of the client/server roles is reversed in that respect. So when reading my points above, you will have to reverse the client/server specifications for them to apply to your code...
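Putting those points together, a minimal sketch of the sending side (my own illustration; the target address is the one from the question, and the destination port 7777 is an arbitrary unprivileged example):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

class UdpSenderSketch {
    public static void main(String[] args) throws Exception {
        InetAddress target = InetAddress.getByName("117.201.5.150"); // resolve once, outside any loop
        DatagramSocket ds = new DatagramSocket(); // no port given: the OS picks a free ephemeral port
        byte[] data = "hello".getBytes();
        ds.send(new DatagramPacket(data, data.length, target, 7777)); // unprivileged destination port
        ds.close();
    }
}
The receiving side would then bind its DatagramSocket to port 7777.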
I did a little research about the new Java socket NIO. I am using MINA to build a simulated server that accepts connections from many clients (about 1000) and processes the data received from them. I also set up a client simulator that creates around 300 client connections and sends data to the server using threads. The result is that some of the connections are aborted by the server. The code is below:
try {
    listener = new NioSocketAcceptor(ioThread);
    listener.getFilterChain().addLast("codec", new ProtocolCodecFilter(new MessageCodecFactory()));
    listener.getFilterChain().addLast("thread", new ExecutorFilter(100, 150));
    listener.setHandler(new IncomingMessageHandler(serverMessageHandler));
    listener.bind(new InetSocketAddress(PORT));
} catch (IOException ioe) {
    ioe.printStackTrace(); // don't silently swallow bind failures
}
And here is the handler; Session is my class for each connection from a client:
@Override
public void sessionCreated(IoSession session) throws Exception {
    new Session(session.getRemoteAddress(), handler, session);
    super.sessionCreated(session);
}

@Override
public void messageReceived(IoSession session, Object message) throws Exception {
    Message m = Message.wrap((MessagePOJO) message);
    if (m != null) {
        Session s = SessionManager.instance.get(session.getRemoteAddress());
        if (s != null) {
            s.submit(m);
            ArmyServer.instance.tpe.submit(s);
        }
    }
    super.messageReceived(session, message);
}

@Override
public void sessionClosed(IoSession session) throws Exception {
    Session s = SessionManager.instance.get(session.getRemoteAddress());
    if (s != null)
        s.disconnect();
    super.sessionClosed(session);
}
And the client simulator, with SIZE ~300-400:
for (int i = 0; i < SIZE; i++) {
    clients[i] = new Client(i);
    pool[i] = new Thread(clients[i]);
    pool[i].start();
}
So the question is: how many connections can MINA accept at a time? Or is there anything wrong in my code?
You may just be overloading the server. It's only going to be able to accept so many requests at a time due to OS and CPU limits. Once there are more pending requests than the listen queue length on the ServerSocket, connections will be rejected.
Try increasing the listen queue length (the backlog parameter in ServerSocket.bind()) and/or adding a small sleep() in the client's for loop.
I do not know the details of MINA, but you may also want to make sure you have more than one thread accepting, in addition to the threads you have handling messages.
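As a sketch of the backlog suggestion with a plain ServerSocket (illustrative port and backlog values; MINA's acceptor may expose an equivalent setting):
import java.net.InetSocketAddress;
import java.net.ServerSocket;

class BacklogSketch {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(); // create unbound
        // The second argument is the listen backlog: connections arriving
        // while this many are already queued un-accepted get refused.
        server.bind(new InetSocketAddress(8080), 500);
        // ... accept loop ...
    }
}
Note that the OS may silently cap the effective backlog (on Linux, via net.core.somaxconn).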
From what I can see, there is no documented limit on how many channels a selector can select from. Typically there will be an implementation limit of Integer.MAX_VALUE or something similar. For this particular case, I assume the limit lies in how the SelectorProvider is implemented, and I bet it's native on most JVMs...
Related question:
select() max sockets
Related article:
select system call limitation in Linux
I have a server at the moment that makes a new thread for every user connected, but after about 6 people have been on the server for more than 15 minutes it tends to flop and give me a Java heap out-of-memory error. I have one thread that checks with a MySQL database every 30 seconds to see whether any of the users currently logged on have new messages. What would be the easiest way to implement a server queue?
This is the main method of my server:
public class Server {

    public static int MaxUsers = 1000;
    //public static PrintStream[] sessions = new PrintStream[MaxUsers];
    public static ObjectOutputStream[] sessions = new ObjectOutputStream[MaxUsers];
    public static ObjectInputStream[] ois = new ObjectInputStream[MaxUsers];
    private static int port = 6283;
    public static Connection conn;
    static Toolkit toolkit;
    static Timer timer;

    public static void main(String[] args) {
        try {
            conn = (Connection) Mysql.getConnection();
        } catch (Exception ex) {
            Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
        }
        System.out.println("****************************************************");
        System.out.println("*                                                  *");
        System.out.println("*                   Cloud Server                   *");
        System.out.println("*                      ©2010                       *");
        System.out.println("*                                                  *");
        System.out.println("*                  Luke Houlahan                   *");
        System.out.println("*                                                  *");
        System.out.println("*                  Server Online                   *");
        System.out.println("*              Listening On Port " + port + "              *");
        System.out.println("*                                                  *");
        System.out.println("****************************************************");
        System.out.println("");
        mailChecker();
        try {
            int i;
            ServerSocket s = new ServerSocket(port);
            for (i = 0; i < MaxUsers; ++i) {
                sessions[i] = null;
            }
            while (true) {
                try {
                    Socket incoming = s.accept();
                    boolean found = false;
                    int numusers = 0;
                    int usernum = -1;
                    synchronized (sessions) {
                        for (i = 0; i < MaxUsers; ++i) {
                            if (sessions[i] == null) {
                                if (!found) {
                                    sessions[i] = new ObjectOutputStream(incoming.getOutputStream());
                                    ois[i] = new ObjectInputStream(incoming.getInputStream());
                                    new SocketHandler(incoming, i).start();
                                    found = true;
                                    usernum = i;
                                }
                            } else {
                                numusers++;
                            }
                        }
                        if (!found) {
                            // Server full: notify the client and close the connection.
                            ObjectOutputStream temp = new ObjectOutputStream(incoming.getOutputStream());
                            Person tempperson = new Person();
                            tempperson.setFlagField(100);
                            temp.writeObject(tempperson);
                            temp.flush();
                            temp = null;
                            tempperson = null;
                            incoming.close();
                        } else {
                        }
                    }
                } catch (IOException ex) {
                    System.out.println(1);
                    Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        } catch (IOException ex) {
            System.out.println(2);
            Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    public static void mailChecker() {
        toolkit = Toolkit.getDefaultToolkit();
        timer = new Timer();
        timer.schedule(new mailCheck(), 0, 10 * 1000);
    }
}
It seems like you have a memory leak; 6 threads is not much. I suspect it is because ObjectInputStream and ObjectOutputStream cache all the objects transmitted. This makes them quite unsuitable for long-lived transfers. You think you are sending an object that is then GC'ed, but it is really being held in memory by the object streams.
To flush the stream's cache of previously written objects, call
objectOutputStream.reset()
after writing your objects with writeObject().
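A minimal sketch of the pattern (assuming out is one client's ObjectOutputStream and msg is any Serializable payload):
import java.io.IOException;
import java.io.ObjectOutputStream;

final class ResetSketch {
    static void send(ObjectOutputStream out, Object msg) throws IOException {
        out.writeObject(msg);
        out.flush();
        out.reset(); // clear the stream's handle table so 'msg' can be garbage-collected
    }
}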
EDIT:
To get thread pooling, the SocketHandler can be passed to an Executor instead of starting its own thread. You create an executor like:
Executor executor = Executors.newFixedThreadPool(MaxUsers);
The executor is created as a field, or at the same level as the server socket. Then, after accepting a connection, you add the SocketHandler to the executor:
executor.execute(new SocketHandler(...));
However, if your clients are long-lived, this will make little difference, since the thread startup time is small compared to the amount of work done on each thread. Pools are most effective for executing many small tasks, rather than a few large ones.
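For completeness, a sketch of the accept loop with a pool (handle() here stands in for the logic that currently lives in the question's SocketHandler; port and pool size are taken from the question):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PooledServer {
    static void handle(Socket incoming) {
        // ... per-client logic that currently lives in SocketHandler.run() ...
    }

    public static void main(String[] args) throws IOException {
        ExecutorService executor = Executors.newFixedThreadPool(1000); // MaxUsers
        ServerSocket serverSocket = new ServerSocket(6283);
        while (true) {
            Socket incoming = serverSocket.accept();
            executor.execute(() -> handle(incoming)); // queued until a pool thread is free
        }
    }
}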
As to making the server more robust, some quick hints:
Ensure it is started with sufficient memory, or at least that the maximum heap size is set to anticipate the needs of 1000 users.
Use a load-test framework, such as Apache JMeter, to verify that it will scale to the maximum number of users.
Use a connection pool for your database, and don't hand-code JDBC calls - use an established framework, e.g. Spring JDBC.
Each thread starts with a 2MB stack by default. So, if you have 1000 users, that will use ~2GB of virtual process space just for the stacks. On many 32-bit systems, this is all the user space you can have, so there will be no room for data. If you need more users, either scale out to more processes, with a load balancer passing requests to each process, or look at server solutions that do not require a thread per connection (see the example launch flags after this list).
Pay attention to detail, particularly exception handling.
Add logging, for diagnosing failures.
Use JMX or other manageability tooling to monitor server health, with notifications to you when values go out of bounds (e.g. memory/CPU use too high for a long period, or request times slow).
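For instance, a plausible launch command (illustrative values only, not tuned for any particular workload) that raises the maximum heap and shrinks the per-thread stack:
java -Xmx2g -Xss256k Server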
See Architecture of a Highly Scalable Server
You should check out Java NIO for building scalable servers
I would focus attention on why you are running out of heap space, or rather why your program gets an OOM error when 6 connections have been open for some time. Your server should be able to scale to many more concurrent connections, but it's hard to quantify that number without more details about your environment, hardware, etc.
You've only posted the main method of your server, so it's hard to tell whether there are memory leaks, resource leaks, etc. that might be causing you to run out of heap space. Are you running your server with the default heap settings? If so, you might want to try increasing the heap size, as the defaults are quite conservative.
Romain is correct: you should be closing your stream resources in a try { ... } finally { ... } block to make sure you are not leaking resources.
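A minimal sketch of that pattern (handle() is a hypothetical per-connection method, not from your code):
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.net.Socket;

final class CleanupSketch {
    static void handle(Socket socket) throws IOException {
        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
        try {
            // ... exchange objects with the client ...
        } finally {
            out.close();    // also flushes the stream
            socket.close(); // release the socket even if an exception was thrown
        }
    }
}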
Lastly, you might want to consider passing the backlog parameter to the ServerSocket constructor. This specifies the maximum queue size for incoming connections to that ServerSocket, after which any new connections are refused. But first you still need to figure out why your server cannot handle more than 6 connections.