Easiest and best way to make a server queue in Java

I have a server at the moment which makes a new thread for every user connected, but after about 6 people are on the server for more than 15 minutes it tends to flop and give me a Java heap out-of-memory error. I have one thread that checks with a MySQL database every 30 seconds to see if any of the users currently logged on have new messages. What would be the easiest way to implement a server queue?
This is the main method for my server:
public class Server {

    public static int MaxUsers = 1000;
    //public static PrintStream[] sessions = new PrintStream[MaxUsers];
    public static ObjectOutputStream[] sessions = new ObjectOutputStream[MaxUsers];
    public static ObjectInputStream[] ois = new ObjectInputStream[MaxUsers];
    private static int port = 6283;
    public static Connection conn;
    static Toolkit toolkit;
    static Timer timer;

    public static void main(String[] args) {
        try {
            conn = (Connection) Mysql.getConnection();
        } catch (Exception ex) {
            Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
        }
        System.out.println("****************************************************");
        System.out.println("*                                                  *");
        System.out.println("*                   Cloud Server                   *");
        System.out.println("*                      ©2010                       *");
        System.out.println("*                                                  *");
        System.out.println("*                  Luke Houlahan                   *");
        System.out.println("*                                                  *");
        System.out.println("*                  Server Online                   *");
        System.out.println("*              Listening On Port " + port + "              *");
        System.out.println("*                                                  *");
        System.out.println("****************************************************");
        System.out.println("");
        mailChecker();
        try {
            int i;
            ServerSocket s = new ServerSocket(port);
            for (i = 0; i < MaxUsers; ++i) {
                sessions[i] = null;
            }
            while (true) {
                try {
                    Socket incoming = s.accept();
                    boolean found = false;
                    int numusers = 0;
                    int usernum = -1;
                    synchronized (sessions) {
                        for (i = 0; i < MaxUsers; ++i) {
                            if (sessions[i] == null) {
                                if (!found) {
                                    sessions[i] = new ObjectOutputStream(incoming.getOutputStream());
                                    ois[i] = new ObjectInputStream(incoming.getInputStream());
                                    new SocketHandler(incoming, i).start();
                                    found = true;
                                    usernum = i;
                                }
                            } else {
                                numusers++;
                            }
                        }
                        if (!found) {
                            ObjectOutputStream temp = new ObjectOutputStream(incoming.getOutputStream());
                            Person tempperson = new Person();
                            tempperson.setFlagField(100);
                            temp.writeObject(tempperson);
                            temp.flush();
                            temp = null;
                            tempperson = null;
                            incoming.close();
                        } else {
                        }
                    }
                } catch (IOException ex) {
                    System.out.println(1);
                    Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        } catch (IOException ex) {
            System.out.println(2);
            Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    public static void mailChecker() {
        toolkit = Toolkit.getDefaultToolkit();
        timer = new Timer();
        timer.schedule(new mailCheck(), 0, 10 * 1000);
    }
}

It seems like you have a memory leak. 6 threads is not much. I suspect it is because ObjectInputStream and ObjectOutputStream cache all the objects transmitted. This makes them quite unsuitable for long transfers. You think you are sending an object that is then gc'ed, but it's really being held in memory by the object streams.
To flush the stream's cache, call
objectOutputStream.reset()
after writing your objects with writeObject().
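For example, a minimal sketch (send() is a hypothetical helper, not part of the original code):

// Hypothetical helper: write one object, then clear the stream's
// back-reference cache so the object can be garbage collected.
void send(ObjectOutputStream out, Person p) throws IOException {
    out.writeObject(p);
    out.flush();
    out.reset();
}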
EDIT:
To get thread pooling, the SocketHandler can be passed to an Executor instead of starting its own thread. You create an executor like:
Executor executor = Executors.newFixedThreadPool(MaxUsers);
The executor is created as a field, or at the same level as the server socket. Then, after accepting a connection, you add the SocketHandler to the executor:
executor.execute(new SocketHandler(...));
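Putting it together, the accept loop might look like this (a sketch; nextFreeSlot() is a hypothetical stand-in for the existing session-slot search, and SocketHandler is used as a Runnable):

ExecutorService executor = Executors.newFixedThreadPool(MaxUsers);
ServerSocket s = new ServerSocket(port);
while (true) {
    Socket incoming = s.accept();
    // the pool reuses a fixed set of threads instead of one thread per user
    executor.execute(new SocketHandler(incoming, nextFreeSlot()));
}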
However, if your clients are long lived, then this will make little improvement, since the thread startup time is small compared to the amount of work done on each thread. Pools are most effective for executing many small tasks, rather than a few large ones.
As to making the server more robust, some quick hints:
ensure it is started with sufficient memory, or at least that the maximum memory is set to anticipate the needs of 1000 users (see the example JVM flags after this list).
use a load test framework, such as Apache JMeter to verify it will scale to the maximum number of users.
use a connection pool for your database, and don't hand-code JDBC calls - use an established framework, e.g. Spring JDBC.
Each thread starts with a 2MB stack by default, so 1000 users will use ~2GB of virtual process space just for the stacks. On many 32-bit systems this is all the user space you can have, leaving no room for data. If you need more users, either scale out to more processes, with a load balancer passing requests to each process, or look at server solutions that do not require a thread per connection.
attention to detail, particularly exception handling.
logging, for diagnosing failures.
JMX or other manageability to monitor server health, with notification to you when values go out of bounds (e.g. memory/CPU use too high for a long period, or request times slow).
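For example, heap and per-thread stack sizes can be set when launching the JVM (illustrative values; the jar name is hypothetical):

java -Xms512m -Xmx2g -Xss256k -jar cloud-server.jar

Here -Xss shrinks the default per-thread stack, so 1000 connection threads consume far less virtual memory.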
See Architecture of a Highly Scalable Server

You should check out Java NIO for building scalable servers

I would focus attention on why you are running out of heap space, or rather why your program gets an OOM error when 6 connections have been open for some time. Your server should be able to scale to many more simultaneous connections, but it's hard to quantify that number without more details about your environment, hardware, etc.
You've only posted the main method for your server so it's hard to tell if there are memory leaks, resource leaks, etc. that might be causing you to run out of heap space. Are you running your server with the default heap settings? If so, you might want to try increasing your heap size, as the defaults are quite conservative.
Romain is correct: you should be closing your stream resources in a try { ... } finally { ... } block to make sure you are not leaking resources.
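A minimal sketch of that pattern:

ObjectOutputStream out = null;
try {
    out = new ObjectOutputStream(incoming.getOutputStream());
    // ... use the stream ...
} finally {
    if (out != null) {
        try {
            out.close(); // always release the stream, even after an exception
        } catch (IOException ignored) {
        }
    }
}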
Lastly, you might want to consider passing the backlog parameter to the ServerSocket constructor. This specifies the maximum queue size for incoming connections to that ServerSocket, after which any new connections are refused. But first you still need to figure out why your server cannot handle more than 6 connections.
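The backlog is the second constructor argument; for example:

// At most 50 connections may wait in the accept queue at once.
ServerSocket s = new ServerSocket(port, 50);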

Related

Java Server having many clients connect without bottlenecking

So what I'm trying to do is have a socket that receives input from the client, puts the client into a queue, and then returns a message to each client in the queue when my algorithm returns true.
This queue should support a few hundred clients at once, but at the same time not bottleneck the server so it can actually do what it's supposed to do.
This is what I have so far:
private static final int PORT = 25566;
private static final int THREADS = 4;

private ExecutorService service;

public void init() throws IOException, IllegalStateException {
    ServerSocket serverSocket = new ServerSocket(PORT);
    service = Executors.newCachedThreadPool();
    while (true) {
        // declared inside the loop so it is effectively final for the lambda
        Socket socket = serverSocket.accept();
        System.out.println("Connection established with " + socket.getInetAddress().toString());
        service.execute(() -> {
            Scanner scanner = null;
            PrintWriter output = null;
            String line = null;
            try {
                scanner = new Scanner(new InputStreamReader(socket.getInputStream()));
                output = new PrintWriter(socket.getOutputStream());
            } catch (IOException e) {
                e.printStackTrace();
            }
            try {
                if (scanner == null || output == null)
                    throw new IllegalStateException("Scanner/PrintWriter is null!");
                line = scanner.nextLine();
                while (line.compareTo("QUIT") != 0) {
                    /* This is where input comes in, queue for the algorithm,
                       algorithm happens then returns appropriate values */
                    output.flush();
                    line = scanner.nextLine();
                }
            } finally {
                try {
                    System.out.println("Closing connection with " + socket.getInetAddress().toString());
                    if (scanner != null) {
                        scanner.close();
                    }
                    if (output != null) {
                        output.close();
                    }
                    socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
Now what I think will happen with this is that, if the queue reaches high enough levels, my thread pool will completely bottleneck the server: all of the threads will be busy handling the clients in the queue, and there won't be enough processing left for the algorithm.
EDIT: After a bunch of testing, I think it will work out if the algorithm returns the value and then disconnects, not waiting for a user response but having the user's client reconnect after certain conditions are met.
Your bottleneck is unlikely to be processing power unless you are machine limited. What's more likely to happen is that all the threads in your thread pool are consumed and end up waiting on input from the clients. Your design can only handle as many clients at once as there are threads in the pool.
For a few hundred clients, you could consider simply creating a thread for each client. The limiting resource for the number of threads that can be supported is typically memory for the stack that each thread requires, not processing power; for a modern machine with ample memory, a thousand threads is not a problem, based on personal experience. There may be an operating system parameter limiting the number of threads which you may have to adjust.
If you need to handle a very large number of clients, you can set up your code to poll sockets for available input and do the processing only for those sockets that have input to be processed.
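A bare-bones sketch of that polling approach using java.nio (error handling and partial-read handling omitted; PORT is the constant from the code above):

Selector selector = Selector.open();
ServerSocketChannel server = ServerSocketChannel.open();
server.bind(new InetSocketAddress(PORT));
server.configureBlocking(false);
server.register(selector, SelectionKey.OP_ACCEPT);

ByteBuffer buffer = ByteBuffer.allocate(1024);
while (true) {
    selector.select(); // blocks until at least one channel is ready
    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
    while (keys.hasNext()) {
        SelectionKey key = keys.next();
        keys.remove(); // must remove, or the key is reprocessed next pass
        if (key.isAcceptable()) {
            SocketChannel client = server.accept();
            client.configureBlocking(false);
            client.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
            SocketChannel client = (SocketChannel) key.channel();
            buffer.clear();
            if (client.read(buffer) == -1) { // client closed the connection
                key.cancel();
                client.close();
            } else {
                // hand buffer contents to the algorithm here
            }
        }
    }
}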

Client Server communication and underlying TCP states

I run the following client-server code on a Linux box.
Server Code
import java.net.ServerSocket;

public class MyServer {
    public static void main(String[] args) {
        try {
            ServerSocket server = new ServerSocket(6868, 5);
            while (true) {
                Thread.sleep(5000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Client Code
import java.io.IOException;
import java.net.Socket;
import java.net.UnknownHostException;

public class MyClient {
    public static void main(String[] args) throws UnknownHostException, IOException {
        Socket[] clients = new Socket[10]; // the two loops below fill indices 0-9
        for (int i = 0; i < 6; i++) {
            clients[i] = new Socket("175.27.3.4", 6868);
            System.out.printf("Client %2d: " + clients[i] + "%n", i);
            //clients[i].close();
        }
        for (int i = 6; i < 10; i++) {
            clients[i] = new Socket("175.27.3.4", 6868);
            System.out.printf("Client %2d: " + clients[i] + "%n", i);
            //clients[i].close();
        }
    }
}
Note:
I have not ACCEPTED the client requests, to test the ServerSocket backlog argument.
The two loops in the client code are deliberate, to help debug the code.
The server box is CentOS.
The client box is CentOS (a different physical server).
The client sockets are deliberately NOT closed.
After executing the server code and later the client code, I observe the following:
On the server, using netstat, I see 6 connections in the ESTABLISHED state.
On the server, using netstat, I see 4 connections in the SYN_RECV state (backlog queue full).
On the client machine, I see all 10 connections in the ESTABLISHED state. Is this normal?
In Wireshark on the server side (tcpdump), for the first 6 connections I see a proper 3-way handshake for connection establishment (SYN, SYN-ACK, ACK).
For the next 4 connections, in Wireshark on the server side, I see SYN [sometimes multiple], SYN-ACK, ACK, then many duplicate SYN-ACKs.
Questions:
Are all the above observations normal?
Are the observations system (OS) dependent?
Bigger picture:
Please refer here or to ServerFault for a detailed explanation.
I am trying to simulate the production issue on a smaller scale with the above code.
In the production code (which I did not write but maintain), the server socket has a backlog argument of 100.
I suspect the backlog queue is full and hence the newer connections are dropped.
I see plenty of SYN requests but not many SYN-ACKs in the production Wireshark capture, which is an hour long. Does that ring a bell, and can we draw a parallel with the toy code above?

Java TCP server: too many connections

I wrote a simple TCP server to transfer some user data to it and save it in a simple MySQL table. If I now run more than 2000 clients one after another, it stops working. While running I get some IO error java.io.EOFException (you may also spot the mistake I made that causes that). But the most important thing is that I get this:
IO error java.net.SocketException: Connection reset
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Unknown Source)
at Server.main(Server.java:49)
There should be enough memory, but the threads keep running and I don't see where I made the mistake that keeps them from terminating. So I end up with as many as 3900 threads running.
So here is the part of the Server:
try {
    // create new socket
    ServerSocket sock = new ServerSocket(port);
    textArea.setText(textArea.getText() + "Server started\n");
    while (true) {
        // accept the connection
        Socket newsock = sock.accept();
        // handle the action
        Thread t = new ThreadHandler(newsock);
        newsock.setSoTimeout(2000); // adding client timeout
        t.start();
    }
} catch (Exception e) {
I guess it's really simple. Here is how I handle the socket:
class ThreadHandler extends Thread {
    private Socket socket;
    private MySQLConnection sqlConnection;

    ThreadHandler(Socket s) {
        socket = s;
        sqlConnection = new MySQLConnection();
    }

    public void run() {
        try {
            DataOutputStream out = new DataOutputStream(
                    socket.getOutputStream());
            DataInputStream in = new DataInputStream(new BufferedInputStream(
                    socket.getInputStream()));
            Server.textArea.append((new Date()) + "\nClient connected IP: "
                    + socket.getInetAddress().toString() + "\n");
            int firstLine = in.readInt(); // get first line for switch
            switch (firstLine) {
                case 0:
                    // getting the whole object for the database in its own lines!
                    String name2 = in.readUTF();
                    int level2 = in.readInt();
                    int kp2 = in.readInt();
                    String skill = in.readUTF();
                    LeadboardElement element2 = new LeadboardElement();
                    element2.setName(name2);
                    element2.setLevel(level2);
                    element2.setKillPoints(kp2);
                    element2.setSkill(skill);
                    sqlConnection.saveChaToLeadboard(element2);
                    break;
                // case 1 returns the top10
                // ... rest of the cases shortened here ...
            }
            out.close();
            in.close();
            // close this socket
            socket.close();
            Server.textArea.append("Client disconnected IP: "
                    + socket.getInetAddress().toString() + "\n" + (new Date())
                    + "\n----------------------------------------------------\n");
            // autoscroll down
            Server.textArea.setCaretPosition(Server.textArea.getDocument()
                    .getLength());
        } catch (Exception e) {
            System.out.println("IO error " + e);
            try {
                socket.close();
            } catch (IOException e1) {
                e1.printStackTrace();
            }
        } finally {
            try {
                socket.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
The saveChaToLeadboard method simply takes the name, level, kill points, and skill and uses a PreparedStatement to save them to my MySQL table.
I hope you can help me; I just don't see the mistake. I think I need to join the threads somewhere, but if I put a join at the end (after socket.close()) it still does the same thing.
Here is the save-to-database method:
public void saveChaToLeadboard(LeadboardElement element) {
    try {
        // load driver
        Class.forName("com.mysql.jdbc.Driver");
        connection = DriverManager.getConnection(this.databaseURL
                + DATABASE_NAME, this.user, this.password);
        // insert values into the prep statement
        preparedStatement = connection
                .prepareStatement(PREP_INSERT_STATEMENT);
        preparedStatement.setString(1, element.getName());
        preparedStatement.setInt(2, element.getLevel());
        preparedStatement.setInt(3, element.getKillPoints());
        if (!element.getSkill().equalsIgnoreCase("")) {
            preparedStatement.setString(4, element.getSkill());
        } else {
            preparedStatement.setString(4, null);
        }
        // execute
        preparedStatement.executeUpdate();
        connection.close();
    } catch (Exception e) {
        Server.textArea.append(e.getMessage() + "\n");
        Server.textArea.setCaretPosition(Server.textArea.getDocument()
                .getLength());
        try {
            connection.close();
        } catch (SQLException e1) {
            e1.printStackTrace();
        }
    }
}
Thanks a lot!
Regards
Your run() method is mangled, but I suspect that part of the problem is that you are not always closing network sockets and streams. In particular, I suspect that you are not closing them if there is an exception while reading, or processing the data you read. You should always close sockets and streams in a finally block (or the Java 7 equivalent).
Another potential problem is that some of the connections may be stalling due to the other end not sending data. To deal with that, you would need to set a read timeout on the socket ... so that connections to slow / stuck clients can be closed.
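A sketch combining both suggestions for the handler above (try-with-resources is the Java 7 equivalent mentioned earlier; 2000 ms mirrors the existing setSoTimeout call):

try (Socket s = socket;
     DataOutputStream out = new DataOutputStream(s.getOutputStream());
     DataInputStream in = new DataInputStream(
             new BufferedInputStream(s.getInputStream()))) {
    s.setSoTimeout(2000); // reads now fail with SocketTimeoutException instead of hanging
    // ... read the request and write the response ...
} catch (IOException e) {
    System.out.println("IO error " + e);
} // all three resources are closed automatically, even on exceptions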
Finally, it is probably unrealistic to even try to process 2000+ connections in parallel with a thread per connection. That's a LOT of resources [1]. I recommend you use a thread pool with a fixed upper limit in the low hundreds, and stop accepting new connections if all threads are in use.
[1] Each thread stack occupies at least 64K of memory on a HotSpot JVM, and possibly as much as 1MB. Then there are the heap resources that the thread directly or indirectly refers to, and the OS resources needed to maintain the state of the threads and the sockets. For 2000 threads, that's probably multiple GB of memory.
IMHO 2000 threads is on the high side for a single process, and 2000 database connections definitely is.
Regardless of whether or not you're hitting limits with 2000 incoming connections, your approach simply will not scale.
To achieve scalability you need to look at using resource pools - this means:
a pool of reader threads reading from the sockets, queuing the data for processing;
a pool of worker threads processing the data queued by the reader threads;
a pool of database connections used by the worker threads - this connection pool could be adjusted so that each worker thread has its own connection, but the important thing is that you don't continually open and close database connections.
Look at the concurrency API for the thread pools and the NIO API for the IO.
This arrangement will allow you to tune your server to achieve the desired throughput.
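A rough sketch of the shape of that arrangement (WorkItem, readRequest(), handle(), and the socket variable are placeholders for the application's own types and logic, not a real API):

BlockingQueue<WorkItem> queue = new LinkedBlockingQueue<>(10_000);
ExecutorService readers = Executors.newFixedThreadPool(50); // socket readers
ExecutorService workers = Executors.newFixedThreadPool(20); // DB workers

// For each accepted socket: parse the request off the wire and queue it.
readers.execute(() -> queue.offer(readRequest(socket)));

// Workers loop forever, each reusing a connection from the DB pool.
workers.execute(() -> {
    try {
        while (true) {
            handle(queue.take()); // take() blocks until work is queued
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});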

Java InputStream Locking

I am using an InputStream to stream a file over the network.
However, if my network goes down during the process of reading the file, the read method blocks and never recovers when the network reappears.
I was wondering how I should handle this case; shouldn't some exception be thrown if the InputStream goes away?
The code is like this:
URL someUrl = new URL("http://somefile.com");
InputStream inputStream = someUrl.openStream();
int size = 1024;
byte[] byteArray = new byte[size];
inputStream.read(byteArray, 0, size);
So somewhere after calling read, the network goes down and the read method blocks.
How can I deal with this situation, as the read doesn't seem to throw an exception?
From looking at the documentation here:
http://docs.oracle.com/javase/6/docs/api/java/io/InputStream.html
It looks like read does throw an exception.
There are a few options to solve your specific problem.
One option is to track the progress of the download, and keep that status elsewhere in your program. Then, if the download fails, you can restart it and resume at the point of failure.
However, I would instead restart the download if it fails. You will need to restart it anyway so you might as well redo the whole thing from the beginning if there is a failure.
The short answer is to use Selectors from the nio package. They allow non-blocking network operations.
If you intend to use old sockets, you may try some code samples from here
Have a separate Thread running that has a reference to your InputStream, and have something reset its timer after the last data has been received, or something similar. If the timer has not been reset after N seconds, have the Thread close the InputStream. The read(...) will throw an IOException and you can recover from it then.
What you need is similar to a watchdog. Something like this:
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

public class WatchDogThread extends Thread
{
    private final Runnable timeoutAction;
    private final AtomicLong lastPoke = new AtomicLong( System.currentTimeMillis() );
    private final long maxWaitTime;

    public WatchDogThread( Runnable timeoutAction, long maxWaitTime )
    {
        this.timeoutAction = timeoutAction;
        this.maxWaitTime = maxWaitTime;
    }

    public void poke()
    {
        lastPoke.set( System.currentTimeMillis() );
    }

    public void run()
    {
        // loop until the watchdog is interrupted or the timeout fires
        while( !Thread.interrupted() ) {
            if( lastPoke.get() + maxWaitTime < System.currentTimeMillis() ) {
                timeoutAction.run();
                break;
            }
            try {
                Thread.sleep( 1000 );
            } catch( InterruptedException e ) {
                break;
            }
        }
    }
}

public class Example
{
    public void method() throws IOException
    {
        final InputStream is = null; // placeholder for the stream being watched
        WatchDogThread watchDog =
            new WatchDogThread(
                new Runnable()
                {
                    @Override
                    public void run()
                    {
                        try {
                            is.close();
                        } catch( IOException e ) {
                            System.err.println( "Failed to close: " + e.getMessage() );
                        }
                    }
                },
                10000
            );
        watchDog.start();
        try {
            is.read();
            watchDog.poke();
        } finally {
            watchDog.interrupt();
        }
    }
}
EDIT:
As noted, sockets already have a timeout mechanism; that would be preferred over a watchdog thread.
The function inputStream.read() is a blocking call, so it should be called in a separate thread.
There is an alternate way of avoiding this situation: the InputStream also has a method available(), which returns the number of bytes that can be read from the stream without blocking.
Call the read method only if there are some bytes available in the stream.
byte[] recv = new byte[1024]; // receive buffer
int length = 0;
int ret = in.available();
if (ret != 0) {
    length = in.read(recv);
}
InputStream does throw the IOException. Hope this information is useful to you.
This isn't a big deal. All you need to do is set a timeout on your connection.
URL url = ...;
URLConnection conn = url.openConnection();
conn.setConnectTimeout(30000);
conn.setReadTimeout(15000);
InputStream is = conn.getInputStream();
Eventually, one of the following things will happen: your network will come back and your transfers will resume; the TCP stack will eventually time out, in which case an exception IS thrown; or the socket will get a socket closed/reset exception and you'll get an IOException. In all cases the thread will let go of the read() call, and your thread will return to the pool ready to service other requests without you having to do anything extra.
For example, if your network goes out you won't be getting any new connections coming in, so the fact that this thread is tied up isn't going to make any difference because you don't have connections coming in. So your network going out isn't the problem.
A more likely scenario is that the server you are talking to gets jammed up and stops sending you data, which would slow down your clients as well. This is where tuning your timeouts is important, over writing more code, using NIO, separate threads, etc. Separate threads will just increase your machine's load, and in the end force you to abandon the thread after a timeout, which is exactly what TCP already gives you. You could also tear your server up because you are creating a new thread for every request, and if you start abandoning threads you could easily wind up with hundreds of threads all sitting around waiting for a timeout on their socket.
If you have a high volume of traffic on your server going through this method, any hold-up in response time from a dependency, like an external server, is going to affect your response time. So you will have to figure out how long you are willing to wait before you just error out and tell the client to try again, because the server you're reading this file from isn't giving it up fast enough.
Other ideas are caching the file locally, trying to limit your network trips, etc., to limit your exposure to an unresponsive peer. The exact same thing can happen with databases on external servers: if your DB doesn't send you responses fast enough, it can jam up your thread pool just like a file that doesn't come down quickly enough. So why worry any differently about file servers? More error handling isn't going to fix your problem, and it will just make your code obtuse.

How many connections can a selector in java.nio select at a time?

I did a little research on the new Java socket NIO. I am using MINA to build a simulated server which accepts connections from many clients (about 1000) and processes the data received from them. I also set up a client simulator which creates around 300 client connections and sends data to the server using threads. The result is that some of the connections are aborted by the server. Code is below:
try {
    listener = new NioSocketAcceptor(ioThread);
    listener.getFilterChain().addLast("codec", new ProtocolCodecFilter(new MessageCodecFactory()));
    listener.getFilterChain().addLast("thread", new ExecutorFilter(100, 150));
    listener.setHandler(new IncomingMessageHandler(serverMessageHandler));
    listener.bind(new InetSocketAddress(PORT));
}
catch (IOException ioe) {
}
And here is the handler; Session is my class for each connection from a client:
@Override
public void sessionCreated(IoSession session) throws Exception {
    new Session(session.getRemoteAddress(), handler, session);
    super.sessionCreated(session);
}

@Override
public void messageReceived(IoSession session, Object message)
        throws Exception {
    Message m = Message.wrap((MessagePOJO) message);
    if (m != null) {
        Session s = SessionManager.instance.get(session.getRemoteAddress());
        if (s != null) {
            s.submit(m);
            ArmyServer.instance.tpe.submit(s);
        }
    }
    super.messageReceived(session, message);
}

@Override
public void sessionClosed(IoSession session) throws Exception {
    Session s = SessionManager.instance.get(session.getRemoteAddress());
    if (s != null)
        s.disconnect();
    super.sessionClosed(session);
}
And the client simulator, with SIZE ~300-400:
for (int i = 0; i < SIZE; i++) {
    clients[i] = new Client(i);
    pool[i] = new Thread(clients[i]);
    pool[i].start();
}
So the question is: how many connections can MINA accept at one time? Or is there anything wrong with my code?
You may just be overloading the server. It's only going to be able to accept so many requests at a time due to OS and CPU limits. Once there are more pending requests than the listen queue length on the ServerSocket, connections will be rejected.
Try increasing the listen queue length (the backlog parameter in ServerSocket.bind()) and/or adding a small amount of sleep() in the client for loop.
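For example (illustrative values, shown with plain java.net; MINA exposes the same knob through its own configuration):

// Server side: allow up to 500 pending connections in the accept queue.
ServerSocket serverSocket = new ServerSocket();
serverSocket.bind(new InetSocketAddress(PORT), 500);

// Client side: pace the connection attempts so the queue can drain.
for (int i = 0; i < SIZE; i++) {
    clients[i] = new Client(i);
    pool[i] = new Thread(clients[i]);
    pool[i].start();
    Thread.sleep(10); // small gap between connects (throws InterruptedException)
}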
I do not know the details of MINA, but you may also want to make sure you have more than one thread accepting, in addition to however many threads you are using to handle messages.
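A sketch of the multi-acceptor idea (plain java.net for clarity; handle() is a placeholder for the real dispatch, and the handler must be thread-safe):

ServerSocket serverSocket = new ServerSocket(PORT, 500);
int acceptors = 4;
for (int i = 0; i < acceptors; i++) {
    new Thread(() -> {
        while (true) {
            try {
                Socket s = serverSocket.accept(); // accept() is thread-safe
                handle(s);
            } catch (IOException e) {
                break; // socket closed, stop this acceptor
            }
        }
    }).start();
}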
From what I can see, there is no documented limit on how many channels a selector can select from. Typically there will be an implementation limit of Integer.MAX_VALUE or something similar. For this particular case, I assume the limit lies in how the SelectorProvider is implemented, and I bet it's native on most JVMs...
Related question:
select() max sockets
Related article:
select system call limitation in Linux
