I have an application that uses LDAP and communicates in a client-server fashion using Sun's JNDI library. The problem is that when many connections are being established at once, I see a lot of failed connections because the bind response is not sent within the desired time interval.
Is there a way to enhance this?
It is not unusual to have more than 200 connections at once. Everything works OK up to about 60 connections; after that it becomes too slow.
P.S. Increasing the waiting time is not an option.
Every connection is running in a separate thread like this:
...
serverSocket = new ServerSocket(port);
while (true) {
    Socket newSocket = serverSocket.accept();
    newSocket.setTcpNoDelay(true);
    Thread t = new Thread(/* runnable that does something with newSocket */);
    t.start();
}
Thanks!
Just wanted to share with you that I set a higher value for the backlog and also cleaned up the run method a lot, making the transfer part the first thing that executes and doing the analysis afterwards. Thanks for your help.
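For reference, a hedged sketch of the backlog change: the two-argument ServerSocket constructor takes an explicit backlog; the value 500 below is purely illustrative, not the one actually used.

serverSocket = new ServerSocket(port, 500); // 500 = length of the pending-connection queue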
You probably have networking code in the constructor of the Runnable. Move it to the run() method so that it runs in its own thread instead of the thread that is calling accept().
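A minimal sketch of that fix, using a hypothetical ConnectionHandler class: the constructor only stores the socket, and all IO happens in run() on the worker thread.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

class ConnectionHandler implements Runnable {
    private final Socket socket;

    ConnectionHandler(Socket socket) {
        this.socket = socket; // store only; no network IO in the constructor
    }

    @Override
    public void run() {
        try (InputStream in = socket.getInputStream();
             OutputStream out = socket.getOutputStream()) {
            // read the request and write the bind response here,
            // on this worker thread rather than on the accept() thread
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

The accept loop then only does new Thread(new ConnectionHandler(newSocket)).start(); and gets back to accept() immediately.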
I would like to use Java Netty to create a TCP server for a large number of persistent connections from clients. In other words, imagine that there are 1000 client devices out there, and all of them create and maintain a persistent connection to the TCP server. There will be a reasonable amount of traffic (mostly lines of text) going back and forth across each of these persistent connections. How can I determine the best number of threads to use in the boss and worker groups for NioEventLoopGroup?
My understanding is that when the connection is created, Netty creates a SimpleChannelInboundHandler<String> object to handle the connection. When the connection is created then the handler channelActive method is called, and every time it gets a new message from the client, the method messageReceived gets called (or channelRead0 method in Netty 4.0.24).
Is my understanding correct?
What happens if I have long running code to run in messageReceived - do I need to launch this code in yet another thread (java.util.Thread)?
What happens if my messageReceived method blocks on something or takes a long time to complete? Does that bring Netty to a grinding halt?
Basically I need to write a TCP socket server that can serve a large number of persistent connections as quickly as possible.
Is there any guidance available on number of threads for NioEventLoopGroup and on how to use any threads inside the handler?
Any help would be greatly appreciated.
How can I determine the best number of threads to use in the boss and worker groups for NioEventLoopGroup?
About the boss threads: if you are saying that you need persistent connections, there is no sense in using a lot of boss threads, because boss threads are only responsible for accepting new connections. So I would use only one boss thread.
The number of worker threads should depend on your processor cores.
Don't forget to add -XmsYYYYM and -XmxYYYYM as JVM arguments, because without them you can run into the case where your JVM is not using all cores.
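A minimal sketch of that sizing, assuming Netty 4.x; the port and the ack-style handler are placeholders, not part of the answer.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class PersistentConnectionServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1); // one thread just accepts connections
        EventLoopGroup workerGroup = new NioEventLoopGroup(
                Runtime.getRuntime().availableProcessors()); // roughly one worker per core
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new StringDecoder(), new StringEncoder(),
                             new SimpleChannelInboundHandler<String>() {
                                 @Override
                                 protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                     ctx.writeAndFlush("ack: " + msg); // placeholder per-message logic
                                 }
                             });
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync(); // 8080 is an assumed port
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}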
What happens if I have long running code to run in messageReceived - do I need to launch this code in yet another thread (java.util.Thread)?
Do you really need to do that? You should probably think about structuring your logic another way; if not, then you should probably consider OIO with a new thread for each connection.
What happens if my messageReceived method blocks on something or takes a long time to complete?
You should avoid using thread blocking actions in your handlers.
Does that bring Netty to a grinding halt?
Yep, it does.
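Not part of the answer above, but a commonly used Netty option when some blocking work is unavoidable is to register the slow handler with its own EventExecutorGroup, so it runs off the IO event loop. A hedged sketch, in which someSlowLookup() is a hypothetical blocking call:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class BlockingAwareInitializer extends ChannelInitializer<SocketChannel> {
    // separate executor group: slow handlers run here, not on the event loop
    private static final EventExecutorGroup BLOCKING_GROUP = new DefaultEventExecutorGroup(16);

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new StringDecoder(), new StringEncoder());
        ch.pipeline().addLast(BLOCKING_GROUP, new SimpleChannelInboundHandler<String>() {
            @Override
            protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                String reply = someSlowLookup(msg); // may block; safe on BLOCKING_GROUP
                ctx.writeAndFlush(reply);
            }
        });
    }

    private static String someSlowLookup(String msg) {
        // placeholder for a blocking call (database, file IO, ...)
        return "ok: " + msg;
    }
}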
I wonder whether it is really possible for a simple server program using a server socket to handle multiple clients at the same time.
I am creating a server program that needs to handle multiple clients on the same port number. But the problem is the program will only serve one client at a time, and in order for it to serve another client, the first connection has to be terminated.
Here is the code:
try {
    ServerSocket server = new ServerSocket(PORT);
    System.out.println("Server Running...");
    while (true) {
        Socket socket = server.accept();
        System.out.println("Connection from:" + socket.getInetAddress());
        Scanner in = new Scanner(socket.getInputStream());
        PrintWriter output = new PrintWriter(socket.getOutputStream());
    }
} catch (Exception e) {
    System.out.println(e);
}
Is there any Java code that could be added here so that the program can serve multiple clients simultaneously?
The code you've posted doesn't actually do anything with the client connection, which makes it hard to help you. But yes, it's entirely possible for a server to handle multiple concurrent connections.
My guess is that the problem is that you're doing everything on a single thread, with synchronous IO. That means that while you're waiting for data from the existing client, you're not accepting new connections. Typically a server takes one of two approaches:
Starting a thread (or reusing one from a thread pool) when it accepts a connection, and letting that thread deal with that connection exclusively.
Using asynchronous IO to do everything on a single thread (or a small number of threads).
The latter approach can be more efficient, but it is also significantly more complicated. I'd suggest you use a "thread per connection" approach to start with. While you're experimenting, you can simply start a brand new Thread each time a client connects - but for production use, you'd use an ExecutorService or something similar, in order to reuse threads.
Note that depending on what kind of server you're building, all of this may well be done for you in third party libraries.
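A minimal sketch of the "thread per connection" approach with an ExecutorService; the port, pool size, and echo logic are placeholder assumptions.

import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Scanner;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MultiClientServer {
    public static void main(String[] args) throws IOException {
        final int PORT = 9000; // assumed port
        ExecutorService pool = Executors.newFixedThreadPool(50); // reuses threads
        try (ServerSocket server = new ServerSocket(PORT)) {
            System.out.println("Server Running...");
            while (true) {
                Socket socket = server.accept();    // accept loop stays responsive
                pool.execute(() -> handle(socket)); // per-client work runs in the pool
            }
        }
    }

    private static void handle(Socket socket) {
        try (Scanner in = new Scanner(socket.getInputStream());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            while (in.hasNextLine()) {
                out.println("echo: " + in.nextLine()); // placeholder per-client logic
            }
        } catch (IOException e) {
            System.out.println(e);
        }
    }
}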
I have a Swing client which has a connect and cancel button so it can attempt connection to the server or end a current connection. I'm trying to make it so the client can connect to the server, end the connection and then connect to the server again multiple times.
My understanding is that typically when a client and server end a connection regardless of who ends it the client closes its streams and socket. Obviously then, they cannot be reused for another connection attempt. Right now I have the Socket and stream vars as private instance variables and a method for connecting to server which creates a new socket and then methods for opening and closing streams.
Just wondering how something like this could be typically handled. I've thought about having one humongous method which creates new socket, streams and handles all communication and closing of streams and socket, but it seems messy. Or maybe having a new thread create everything and then when the communication is over terminate the thread.
Ideas appreciated.
- Create a separate thread at the server end when the client connects to the server.
- Do the reading and writing on the client socket for that particular client inside that thread.
- Then terminate the client thread when it's done.
- If the client tries to connect again, a new thread will be spawned.
- You can always keep a HashMap to keep track of the client-socket-to-thread relation.
You should put all the "logic" in a new class, separate from the GUI.
Then, because you have two buttons, your GUI class should be able to call at least two methods on the logic class: connect() and disconnect(). In these methods you can handle all the work required to connect to the server, open/close streams, etc.
That will make your code clearer, more maintainable, and easier to extend if you plan to add features.
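A minimal sketch of such a logic class, assuming it is called ConnectionManager and uses line-oriented text streams; each connect() builds a brand-new socket and streams, and disconnect() simply closes the socket.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ConnectionManager {
    private Socket socket;
    private BufferedReader in;
    private PrintWriter out;

    public synchronized void connect(String host, int port) throws IOException {
        disconnect(); // drop any previous connection before starting a new one
        socket = new Socket(host, port);
        in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        out = new PrintWriter(socket.getOutputStream(), true);
    }

    public synchronized void disconnect() throws IOException {
        if (socket != null) {
            socket.close(); // also closes the streams tied to this socket
            socket = null;
            in = null;
            out = null;
        }
    }
}

The connect and cancel buttons then just call connect(...) and disconnect() on this object (off the Event Dispatch Thread), and reconnecting any number of times works because every attempt gets fresh objects.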
I would prefer to create a thread for each socket and handle the request and response within that thread.
I am connecting 10 devices to a LAN, all of them have a udp server that goes like:
while (true) {
    serverSocket.receive(receivePacket);
    dostuff(receivePacket);
}
serverSocket.close();
Now let's assume 9 of the devices try to initiate a connection to the 10th device simultaneously. How can I accept all 9 instead of just the first one, which will then block the socket until the server completes its computation? Should I start a thread which will take care of dostuff()? Will this let me handle all of the simultaneous requests I get?
A basic design would have one thread responsible for handling incoming requests (with your desired limit) and then handing them off to worker/request-handler threads. When each of these worker threads finishes, it updates a shared/global counter to let the main thread know that it can establish a new connection. This requires a degree of synchronization, but it can be pretty fun.
Here's the idea:
serverThread:
    while true:
        serverLock.acquire()
        if numberOfRequests < MAX_REQUESTS:
            packet = socket.receive()
            numberOfRequests++
            requestThread(packet).start()
        else:
            serverMonitor.wait(serverLock)
        serverLock.release()

requestThread:
    // handle packet
    serverLock.acquire()
    if numberOfRequests == MAX_REQUESTS:
        serverMonitor.pulse()
    numberOfRequests--
    serverLock.release()
You'll want to make sure the synchronization is all correct; this is just to give you an idea of what you can start out with. But when you get the hang of it, you'll be able to make optimizations and enhancements. One particular enhancement, which also lends itself to a limited number of requests, is something called a ThreadPool.
Regardless, the basic structure is very much the same for most servers: a main thread responsible for handing off requests to worker threads. It's a neat and simple abstraction.
You can use threads to solve that problem. Since Java already has an API that handles threads, you can just create Runnable instances and hand them to an executor; take a look at the Executor interface. Here is another useful link that could potentially help: blocking queue.
Use a relatively large thread pool, since UDP doesn't require a response.
The main method will run as a listener and the thread pool will do the rest of the heavy lifting.
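A minimal sketch of that listener-plus-pool layout, assuming an arbitrary port, packet size, and pool size; dostuff() is a placeholder for the real processing.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UdpServer {
    public static void main(String[] args) throws Exception {
        DatagramSocket serverSocket = new DatagramSocket(9876); // assumed port
        ExecutorService pool = Executors.newFixedThreadPool(16); // tune to your load
        while (true) {
            byte[] buf = new byte[1024]; // fresh buffer per packet
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            serverSocket.receive(packet);        // listener thread only receives
            pool.execute(() -> dostuff(packet)); // heavy lifting happens in the pool
        }
    }

    private static void dostuff(DatagramPacket packet) {
        // placeholder for the real per-packet processing
        System.out.println("Got " + packet.getLength() + " bytes from " + packet.getAddress());
    }
}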
I'm trying to create a simple multiplayer game for Android devices. I have been thinking about the netcode and read now a lot of pages about Sockets. The Android application will only be a client and connect only to one server.
Almost everywhere (here too) you get the recommendation to use NIO or a framework which uses NIO, because of the "blocking".
I'm trying to understand what the problem of a simple socket implementation is, so I created a simple test to try it out:
My main application:
[...]
Socket clientSocket = new Socket( "127.0.0.1", 2593 );
new Thread(new PacketReader(clientSocket)).start();
PrintStream os = new PrintStream( clientSocket.getOutputStream(), true ); // autoflush so println actually sends the line
os.println( "kakapipipopo" );
[...]
The PacketReader Thread:
class PacketReader implements Runnable
{
    Socket m_Socket;
    BufferedReader m_Reader;

    PacketReader(Socket socket) throws IOException
    {
        m_Socket = socket;
        m_Reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    public void run()
    {
        char[] buffer = new char[200];
        int count = 0;
        try
        {
            while(true)
            {
                count = m_Reader.read(buffer, 0, 200);
                if (count == -1) break; // the server closed the connection
                String message = new String(buffer, 0, count);
                Gdx.app.log("keks", message);
            }
        }
        catch (IOException e)
        {
            Gdx.app.log("keks", "read failed", e);
        }
    }
}
I couldn't reproduce the blocking problems I expected. I thought the read() call would block my application and I couldn't do anything - but everything worked just fine.
I have been thinking: what if I just create an input and an output buffer in my application and create two threads which write to and read from the socket using those two buffers? Would this work?
If yes - why does everyone recommend NIO? Somewhere in the normal IO approach a block must happen, but I can't find it.
Are there maybe any other benefits of using NIO for Android multiplayer gaming? I thought that NIO seems to be more complex and therefore maybe less suited for a mobile device, but maybe the simple socket way is worse for a mobile device.
I would be very happy if someone could tell me where the problem happens. I'm not scared of NIO, but at least I would like to find out why I'm using it :D
Greetings
-Thomas
The blocking is that read() will block the current thread until it can read data from the socket's input stream. Thus, you need a thread dedicated to that single TCP connection.
What if you have more than 10k client devices connected to your server? You need at least 10k threads to handle all the client devices (assuming each device maintains a single TCP connection), no matter whether they are active or not. You pay too much context-switching and other multi-threading overhead even if only 100 of them are active.
NIO uses a selector model to handle those clients, meaning you don't need a dedicated thread for each TCP connection to receive data. You just select all the active connections (those which already have data received) and process those active connections. You can control how many threads should be maintained on the server side.
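For illustration, a minimal sketch of that selector model on the server side, assuming simple echo handling (the port number is borrowed from the question's example, nothing else is tied to the game above).

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(2593)); // port taken from the question's snippet
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // one thread waits for events on all connections at once
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    int n = client.read(buf);
                    if (n == -1) {
                        client.close();    // peer closed the connection
                    } else {
                        buf.flip();
                        client.write(buf); // placeholder: echo the data back
                    }
                }
            }
        }
    }
}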
EDIT
This answer doesn't exactly address what the OP asked. For the client side it is fine, because the client is going to connect to just one server. Still, my answer gives a generic idea about blocking and non-blocking IO.
I know this answer is coming 3 years later, but it is something which might help someone in the future.
In a blocking socket model, if data is not available for reading or if the server is not ready for writing, then the network thread will wait on a request to read from or write to a socket until it either gets or sends the data or times out. In other words, the program may halt at that point for quite some time if it can't proceed. To cancel this out we can create a thread per connection, which can handle requests from each client concurrently. If this approach is chosen, the scalability of the application can suffer, because in a scenario where thousands of clients are connected, having thousands of threads can eat up all the memory of the system; threads are expensive to create and can affect the performance of the application.
In a non-blocking socket model, the request to read or write on a socket returns immediately whether or not it was successful - in other words, asynchronously. This keeps the network thread busy. It is then our task to decide whether to try again or consider the read/write operation complete. This creates an event-driven approach towards communication where we create threads only when needed, which leads to a more scalable system.
The diagram below explains the difference between the blocking and non-blocking socket models.