I am initializing the Netty thread pools like this:
bossGroup = new EpollEventLoopGroup(1);
workerGroup = new EpollEventLoopGroup(1);
This creates more than 80 threads for a single application running on a VM allocated 40 vCPUs. Is there a way to reduce the number of threads? The number of servers and clients is not expected to increase.
Using Netty 4.1.32.Final.
The code you showed creates only 2 EpollEventLoop instances, so only two threads will end up in epollWait(...). You must be executing this code multiple times, or have another place where you configure things differently, to end up with more threads. Another possibility is that you execute it multiple times but forget to call shutdownGracefully() on the EpollEventLoopGroup once you are done using it.
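To illustrate, here is a minimal sketch (the port, channel class, and handler wiring are placeholders, not your actual code) that creates the two groups exactly once and tears them down when done:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.socket.SocketChannel;

public final class SingleLoopServer {
    public static void main(String[] args) throws InterruptedException {
        // One boss loop to accept connections, one worker loop for all I/O:
        // two epoll threads in total.
        EventLoopGroup bossGroup = new EpollEventLoopGroup(1);
        EventLoopGroup workerGroup = new EpollEventLoopGroup(1);
        try {
            ServerBootstrap b = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(EpollServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // add your handlers here
                        }
                    });
            b.bind(8080).sync().channel().closeFuture().sync(); // port is a placeholder
        } finally {
            // Forgetting these calls leaves the event loop threads alive; if the
            // groups are recreated repeatedly, the thread count keeps growing.
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}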
According to this link:
server.tomcat.max-threads – Maximum amount of worker threads in server under top load. In other words, maximum number of simultaneous requests that can be handled.
Let's say that every request the Tomcat server receives spawns 3 additional worker threads for reading from the database.
That means that for 5 requests we have 20 threads (each request has 1 request thread plus 3 additional worker threads).
In this case, do we consider the number of threads to be 20 or 5 with respect to the server.tomcat.max-threads property?
The way to limit the number of threads is to not spawn them directly.
Instead use a thread pool with a fixed upper bound on the number of threads.
The modern way to do this is to use the ExecutorService API (javadoc), instantiating the service either with Executors.newFixedThreadPool(...) (javadoc) or directly with one of the many ThreadPoolExecutor (javadoc) constructor overloads.
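For example (the pool and queue sizes here are illustrative, not recommendations):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Simple fixed pool: never more than 4 threads, unbounded work queue.
ExecutorService fixed = Executors.newFixedThreadPool(4);

// ThreadPoolExecutor with an explicit, bounded queue and a caller-runs
// fallback, so submitters slow down instead of queueing work forever.
ExecutorService bounded = new ThreadPoolExecutor(
        4, 4,                       // core and maximum pool size
        0L, TimeUnit.MILLISECONDS,  // keep-alive for surplus threads
        new ArrayBlockingQueue<>(100),
        new ThreadPoolExecutor.CallerRunsPolicy());

bounded.submit(() -> { /* database read goes here */ });
bounded.shutdown();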
In this case, do we consider the number of threads to be 20 or 5 with respect to the server.tomcat.max-threads property?
Threads that are created by an application or an application thread pool while processing a request do not count as "worker threads" for the purposes of that Tomcat configuration property.
It is up to the application or its thread pool to manage any threads it creates and ensure that:
- the number of these threads does not get too large,
- they don't consume too many resources (CPU, memory, etc.), and
- they don't get "orphaned" and end up wasting resources on a task that is no longer needed, e.g. because the original client request timed out.
Beware that this kind of thing can easily turn into a "denial of service" problem.
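One possible way to enforce those points is to combine a bounded pool with a timeout-and-cancel pattern; the sketch below is illustrative (the pool size and timeout are assumptions, not values from the question):

import java.util.concurrent.*;

public class BoundedWorkers {
    // Application-level pool; its threads are not Tomcat worker threads.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

    static String fetchWithTimeout(Callable<String> dbCall) throws Exception {
        Future<String> result = POOL.submit(dbCall);
        try {
            return result.get(5, TimeUnit.SECONDS); // bound how long we wait
        } catch (TimeoutException e) {
            result.cancel(true); // don't leave orphaned work running
            throw e;
        }
    }
}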
I am currently using 5 thread pools and I want to find the optimal sizing for them. This is a kind of prior analysis. The pools are divided by usage: one for handling commands (cmdPool), one for inventory transactions (invPool), one for database transactions (dbPool), one for common things that simply need to run async, like I/O (fastPool), and one for scheduled tasks (timerPool). I do not yet have any statistical data that could be used to solve the problem.
For database queries I am using HikariCP with default values. I will later try changing the maximum connection count and minimum idle connections to find the optimal performance. For now, the Hikari pool is always called from one of the thread pools so as not to affect the main thread. A usual database query runs on dbPool, but only when the code block is not already part of a Runnable submitted to one of the thread pools.
The current setup seems to work fine in the application. So my questions are:
1.) How will performance and resource usage be impacted if I stop using a cached thread pool and use a pool with some minimum number of idle threads, like timerPool, or should I stick with cached?
2.) Is it the right solution to set a maximum pool size to prevent spikes, e.g. when 100 clients join in a short period of time, and let them wait briefly while other tasks complete?
3.) Is there a better solution for managing many kinds of tasks?
cmdPool = Executors.newFixedThreadPool(3);      // command handling
invPool = Executors.newFixedThreadPool(2);      // inventory transactions
dbPool = Executors.newCachedThreadPool();       // database transactions
fastPool = Executors.newCachedThreadPool();     // misc async work such as I/O
timerPool = new ScheduledThreadPoolExecutor(5); // scheduled tasks
timerPool.allowCoreThreadTimeOut(true);         // let idle core threads die
timerPool.setKeepAliveTime(3, TimeUnit.MINUTES);
First of all, every action depends on how many clients are connected; let's assume something like 5-25 clients. The pools should be designed to sustain even extremes like 100 clients without creating too many threads in a short time period.
Expected usage varies, is not the same every second, and it may even happen that no task comes in at all. Expected usage of cmdPool is about 3-8 uses per second (lightweight tasks). Usage of invPool is nearly the same, 2-6 uses per second (also lightweight tasks). dbPool is more unpredictable than all the others, but the expected usage is 5-20 uses per second (lightweight and medium-weight tasks), also depending on how busy the network is. The timer and fast pools are designed to take any kind of task and just run it; expected usage is 20-50 uses per second.
I appreciate any suggestions, thank you.
The best solution is to adapt your application to the expected traffic.
You can do that in several ways:
- Design it with a microservice architecture, leaving the orchestrator to handle peaks of traffic.
- Design the application so that it reads the thread pool sizes on the fly (from a database, from a file, from a configuration server...), so you can change the values when needed.
- If you only need to tune the application and don't need to change the values on the fly, put your configuration in a file (or database) and test different configurations to find the one best adapted to your needs.
What is important is to move away from code similar to this:
cmdPool = Executors.newFixedThreadPool(3);
and replace it with code similar to this:
#Value("${cmdPoolSize}")
private int cmdPoolSize;
...
cmdPool = Executors.newFixedThreadPool(cmdPoolSize);
where the size of the pool is not taken from the code, but from an external configuration.
A better way is also to define the kind of pool with parameters:
#Value("${cmdPoolType}")
private String cmtPoolType;
#Value("${cmdPoolSize}")
private int cmdPoolSize;
...
if (cmdPoolType.equals("cached")) {
cmdPool = Executors.newCachedThreadPool();
} else if (cmdPoolType.equals("fixed")) {
cmdPool = Executors.newFixedThreadPool(cmdPoolSize);
}
where you choose among the available pool types that are reasonable for your case.
In this last case you can also use a Spring configuration file and change it before starting the application.
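For example, the external configuration could be a properties file along these lines (keys match the @Value placeholders above; the values are illustrative):

# application.properties (illustrative values)
cmdPoolType=fixed
cmdPoolSize=3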
I am developing a stateless agent in Java that takes information from one server and transfers it to another client, i.e. the agent sits between a client and a server. So I am thinking of running two threads simultaneously on the agent: one thread (thread1) runs a ServerSocket and gets requests from the client, while another thread (thread2) runs and communicates with the server. The problem consists in synchronizing the two threads. I am considering having thread1 constantly ask thread2 for new information; if thread2 has nothing new, it will not answer. What is the best way to synchronize them? Should I use a global variable (a flag)? Can I save information at all when I have a stateless agent?
I think you should move your app to an async model.
Your app needs:
- an entry point to accept incoming connections -> a good example is an async servlet (or one dedicated thread).
- a ThreadPoolExecutor that provides a fixed number of workers and a blocking queue (use this constructor).
The workflow:
1. Accept the incoming request.
2. Wrap the incoming request into a (Runnable) task.
3. Put the task into the blocking queue.
4. If the ThreadPoolExecutor has a free worker, it starts processing the task.
An advantage of this model is that each request is handled by a single thread, so there is no need to manually synchronize anything; a sketch follows below.
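Here is a minimal sketch of the model (the worker count, queue capacity, and the callServer/sendToClient stubs are placeholders for your actual server and client I/O):

import java.util.concurrent.*;

public class Agent {
    // Fixed number of workers fed by a bounded blocking queue.
    private final ExecutorService executor = new ThreadPoolExecutor(
            8, 8, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>(1000));

    public void onRequest(String request) {
        // Wrap the request into a Runnable task and queue it; a free worker
        // picks it up, talks to the server, and replies to the client.
        executor.execute(() -> {
            String response = callServer(request);
            sendToClient(response);
        });
    }

    private String callServer(String request) { return "reply to " + request; } // stub
    private void sendToClient(String response) { System.out.println(response); } // stub
}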
I would like to use Java Netty to create a TCP server for a large number of persistent connections from clients. In other words, imagine that there are 1000 client devices out there, and all of them create and maintain a persistent connection to the TCP server. There will be a reasonable amount of traffic (mostly lines of text) going back and forth across each of these persistent connections. How can I determine the best number of threads to use in the boss and worker groups for NioEventLoopGroup?
My understanding is that when a connection is created, Netty creates a SimpleChannelInboundHandler<String> object to handle it. When the connection is created, the handler's channelActive method is called, and every time a new message arrives from the client, the messageReceived method is called (or the channelRead0 method in Netty 4.0.24).
Is my understanding correct?
What happens if I have long-running code to run in messageReceived? Do I need to launch this code in yet another thread (java.util.Thread)?
What happens if my messageReceived method blocks on something or takes a long time to complete? Does that bring Netty to a grinding halt?
Basically I need to write a TCP socket server that can serve a large number of persistent connections as quickly as possible.
Is there any guidance available on the number of threads for NioEventLoopGroup and on how to use threads inside the handler?
Any help would be greatly appreciated.
How can I determine the best number of threads to use in the boss and worker groups for NioEventLoopGroup?
About the boss threads: since you say you need persistent connections, there is no point in using many boss threads, because boss threads are only responsible for accepting new connections. So I would use only one boss thread.
The number of worker threads should depend on your processor cores.
Don't forget to add -XmsYYYYM and -XmxYYYYM as VM arguments, because without them you can hit the case where your JVM is not using all cores.
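Putting that together, the group setup could look like this (as far as I know, passing 0 makes Netty pick its default of roughly twice the available cores, configurable via the io.netty.eventLoopThreads system property):

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepting only
EventLoopGroup workerGroup = new NioEventLoopGroup(0); // 0 = Netty's default sizing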
What happens if I have long running code to run in messageReceived - do I need to launch this code in yet another thread (java.util.Thread)?
Do you really need to? You should probably think about structuring your logic another way; if you can't, then consider OIO with a new thread for each connection.
What happens if my messageReceived method blocks on something or takes a long time to complete?
You should avoid using thread blocking actions in your handlers.
Does that bring Netty to a grinding halt?
Yep, it does.
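If you cannot avoid blocking work entirely, one common Netty pattern (not covered in the answer above, so treat it as a suggestion) is to run the blocking handler on a separate EventExecutorGroup so the event loop threads stay free; BlockingBusinessHandler and the group size below are hypothetical:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

// Separate executor group so blocking work never runs on the event loops.
EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(16); // size is an assumption

ChannelInitializer<SocketChannel> initializer = new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        // BlockingBusinessHandler is a hypothetical handler doing the slow work.
        ch.pipeline().addLast(blockingGroup, "business", new BlockingBusinessHandler());
    }
};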
I have an application that uses LDAP and communicates in a client-server way using Sun's JNDI library. The problem is that when many connections try to be established at once, I see a lot of failed connections because the bind response is not sent within the desired time interval.
Is there a way to improve this?
It is not unusual to have >200 connections at once. Everything works OK until ~60 connections, after which it becomes too slow.
P.S. There is no possibility to increase the waiting time.
Every connection is running in a separate thread like this:
...
serverSocket = new ServerSocket(port);
while (true) {
    Socket newSocket = serverSocket.accept();
    newSocket.setTcpNoDelay(true);
    Thread t = new Thread(/* runnable that does something */);
    t.start();
}
Thanks!
Just wanted to share that I set a higher value for the backlog and also cleaned up the run method a lot, making the transfer part the first thing that executes and doing the analysis afterwards. Thanks for your help.
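For anyone else reading, the backlog is the second argument of the ServerSocket constructor; a sketch (200 is just an illustrative value, not the one actually used):

// ServerSocket with an explicit backlog (the second constructor argument).
serverSocket = new ServerSocket(port, 200);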
You probably have networking code in the constructor of the Runnable. Move it to the run() method so that it runs in its own thread instead of in the thread that is calling accept().
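A sketch of that shape (the class name and body are made up for illustration):

import java.io.IOException;
import java.net.Socket;

// The constructor only stores the socket; all I/O happens in run(),
// i.e. on the newly started thread rather than the accept() thread.
class ConnectionHandler implements Runnable {
    private final Socket socket;

    ConnectionHandler(Socket socket) {
        this.socket = socket; // no networking here
    }

    @Override
    public void run() {
        try (Socket s = socket) {
            // read the request, do the LDAP work, write the response...
        } catch (IOException e) {
            // log and drop the connection
        }
    }
}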