executing millions of threads concurrently in java [closed]

I have a requirement to handle millions of threads, and I know this depends heavily on the hardware configuration and the JVM.
I have used executors for the task.
Call flow of my project:
user(mobile)----->Server(Telecom) ------>Application----->Server(Telecom)----->User
Code call flow:
A------>B---------->C
// Code snippet of A
public static final int maxPoolSize = 100;
ExecutorService executorCU = Executors.newFixedThreadPool(maxPoolSize);
Runnable handleCalltask = new B(valans, sessionID, msisdn);
executorCU.execute(handleCalltask);

// Code snippet of B
public static final int maxPoolSize = 10;
ExecutorService executor = Executors.newFixedThreadPool(maxPoolSize);
Runnable handleCalltask = new C(valans, sessionID, msisdn);
executor.execute(handleCalltask);
There is also a shared map, which I implemented as a ConcurrentHashMap; it gets populated when the application loads.
Is my approach correct, and if not, can anybody suggest how I can achieve maximum threading in my web application?
I have tested with JMeter and its results are not at all encouraging.
Thanks.

Is my approach correct
IMO, no, it's definitely not the correct approach.
and if not, can anybody suggest how I can achieve maximum threading in my web application?
Separate receiving messages from the client with processing the messages. That way, you can horizontally scale the two parts independently to meet your requirements without having millions of threads in a single JVM.
A few suggestions:
1) I'd make the web application as light as possible and submit any long-running tasks to some sort of backend processor.
Within the same JVM, you could use a ThreadPoolExecutor with a bounded ArrayBlockingQueue (see the sketch after these suggestions).
If you wanted to submit the jobs to another JVM, you could use JMS with competing consumers or something like Apache Kafka.
Again, the benefit here is that you can add more nodes to either the back end or the front end of the app as required.
2) If required, make your application server's thread pool larger.
For instance, with Tomcat you'd tweak the parameters described here: http://tomcat.apache.org/tomcat-7.0-doc/config/executor.html. Explaining how to correctly tune these parameters is more than I can describe here. Among other things, the values you select will depend on the average number of concurrent requests, the maximum number of concurrent requests, the time required to serve a single request, and the number of application servers in your pool.
3) You'll get the most scalability by reducing statefulness.
If a request can be dispatched to any front end consumer and then processed by any backend consumer, you can add more instances of either to scale. If one request depends on another, you'll need to synchronize the processing of requests across nodes, which reduces scalability. Design things to be stateless from the start if at all possible.
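To make suggestion 1 concrete, here is a minimal sketch of a bounded ThreadPoolExecutor backed by an ArrayBlockingQueue; the pool sizes, queue capacity, and class name are placeholders, not values taken from the question.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BackendProcessor {

    // Bounded work queue plus CallerRunsPolicy: when the queue fills up, the
    // submitting thread runs the task itself, which gives back-pressure
    // instead of an ever-growing backlog (or millions of threads).
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            10,                               // core pool size (placeholder)
            100,                              // maximum pool size (placeholder)
            60, TimeUnit.SECONDS,             // keep-alive for idle non-core threads
            new ArrayBlockingQueue<>(1_000),  // bounded queue (placeholder capacity)
            new ThreadPoolExecutor.CallerRunsPolicy());

    public void submit(Runnable handleCallTask) {
        executor.execute(handleCallTask);
    }

    public void shutdown() throws InterruptedException {
        executor.shutdown();
        executor.awaitTermination(30, TimeUnit.SECONDS);
    }
}

Note that with a bounded queue, threads beyond the core size are only created once the queue is full, so the queue capacity and pool sizes have to be tuned together.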
I have tested with JMeter and its results are not at all encouraging.
You need to profile your application to determine where the hot spots are. If you follow my recommendations above, you can easily add more horsepower where required.

Related

Java: what is the best approach for high performance of multi-threading in a time-critical application? [closed]

I'm developing a network proxy application using Java 8. For ingress, the main logic is the data-processing loop: get a packet from the inbound queue, process its content (e.g. protocol adaptation), and put it on the send queue.
Multiple virtual TCP channels are allowed in the design, so each data-processing thread, out of a list of such threads, handles a bunch of channels for a given time period as its share of the whole job (e.g. the channels with channel.channelId % NUM_DATA_PROCESSING_THREADS == 0, as determined by a load-balancing scheduler).
Channels are stored in an array and accessed using the channelId as the index of the cell. The array is wrapped by a class that provides methods like register, deregister, getById, size, etc., and its instance is called CHANNEL_STORE in the program. I need to use these methods in the main logic (the data-processing loop) from different threads (at least the dispatcher thread, the data-processing threads, and the control-operation thread that destroys a channel from the GUI), so I need to consider concurrency among these threads. I have several candidate approaches:
Use synchronized or reentrant locks around register, deregister, getById, etc. This is the simplest approach and it is thread-safe, but I have performance concerns about the locking (CAS) mechanisms, since I need to perform operations on the CHANNEL_STORE (especially getById) at a very high frequency.
Delegate the operations on CHANNEL_STORE to a SingleThreadExecutor via executor.execute(runnable) and/or executor.submit(callable). The concern is the cost of creating a Runnable/Callable at each such call site in the data-processing loop: creating the Runnable instance and calling execute. I have no idea whether this will be even more expensive than synchronized or reentrant locks. In reality (so far) there is no post-operation, so the loop only submits the Runnable and does not need to wait for a Callable's result, although a post-operation is needed in the control loop.
Delegate the operations on CHANNEL_STORE to a dedicated thread via a pair of ArrayBlockingQueues instead of an Executor. For each access to CHANNEL_STORE, put a task indicator together with its parameters on the first queue; the dedicated thread loops on that queue with the blocking take method and operates on the CHANNEL_STORE, then puts the result on the second queue so the delegating thread can continue with the post-operation (currently not needed, however). I regard this as the fastest option, assuming the blocking queue in the JVM is lock-free. My concern is that the code becomes very messy and error-prone.
I think the 2nd and 3rd may be called "serialization".
The reason I cannot simply hand the tasks to a thread pool for data processing and forget them is that the TCP stream packets of each channel must not be reordered; they have to be processed serially on a per-channel basis.
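A minimal sketch of what I mean by serial per-channel processing (the class and constant names here are illustrative, not from my real code): each channel is pinned to one worker, so its packets are never reordered.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChannelDispatcher {

    private static final int NUM_DATA_PROCESSING_THREADS = 8; // placeholder

    // One single-threaded executor per lane; a channel always maps to the same
    // lane, so its packets are processed in the order they were submitted.
    private final ExecutorService[] lanes = new ExecutorService[NUM_DATA_PROCESSING_THREADS];

    public ChannelDispatcher() {
        for (int i = 0; i < lanes.length; i++) {
            lanes[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void dispatch(int channelId, Runnable processPacket) {
        lanes[Math.floorMod(channelId, lanes.length)].execute(processPacket);
    }
}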
Questions:
What's the performance of the second approach compared to the first?
What's your suggestion for my situation?
I'm currently using stream I/O for LAN read/write. If I used NIO, the coordination between the NIO thread and the data-processing threads might bring additional complexity (e.g. post-operations). So I think this question is meaningful for time-critical (stream-based, multi-channel network) applications like mine.
If I understand your use case correctly, this is a common problem in concurrent programming. One solution is the ring-buffer approach, which usually offers a good answer to both the synchronization and the excessive-object-creation problems.
You can find a good implementation of this in the LMAX Disruptor library. See https://lmax-exchange.github.io/disruptor/ to learn more. But keep in mind that it is not magic and must be adapted to your use case.
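As a rough, non-authoritative sketch of how this might look with a recent Disruptor 3.x API (the event class, handler body, and buffer size are assumptions, not code from the question):

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class PacketPipeline {

    // Mutable event reused by the ring buffer, so the hot path allocates no new objects.
    static class PacketEvent {
        int channelId;
        byte[] payload;
    }

    public static void main(String[] args) {
        int bufferSize = 1 << 14; // must be a power of two

        Disruptor<PacketEvent> disruptor = new Disruptor<>(
                PacketEvent::new, bufferSize, DaemonThreadFactory.INSTANCE);

        // Consumer side: runs on the Disruptor's thread, in sequence order.
        disruptor.handleEventsWith((EventHandler<PacketEvent>) (event, sequence, endOfBatch) -> {
            // process event.channelId / event.payload here
        });

        RingBuffer<PacketEvent> ringBuffer = disruptor.start();

        // Producer side: claim a slot, fill it in place, then publish it.
        long seq = ringBuffer.next();
        try {
            PacketEvent event = ringBuffer.get(seq);
            event.channelId = 42;
            event.payload = new byte[0];
        } finally {
            ringBuffer.publish(seq);
        }

        disruptor.shutdown(); // waits until published events have been processed
    }
}

The pre-allocated events address the object-creation concern, and sequence-ordered consumption addresses the ordering concern.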

Efficiently insert data into a database in Java [closed]

I'm building an application that has a database used solely for logging purposes. We log the incoming transaction ID and its start and end time. The application itself makes no other use of this database, so I want to execute the insert query as efficiently as possible without affecting the application. My idea is to run the database insert code in a separate thread; that way, the insert runs without interfering with the actual work. I would like to know whether there is any design pattern for this kind of scenario, or whether my thinking is on the right track.
Your thinking is right. Post the data generated by your main thread(s) onto a thread-safe blocking queue, and have the logging thread loop: block until a message appears in the queue, send that message to the database, and repeat.
If there is a chance, however small, that your application may be generating messages faster than your logging thread can process them, then consider giving the queue a maximum capacity, so that the application gets blocked when trying to enqueue a message in the event that the maximum capacity is reached. This will incur a performance penalty, but at least it will be controlled, whereas allowing the queue to grow without a limit may lead to degraded performance in all sorts of other unexpected and nasty ways, and even to out-of-memory errors.
Be advised, however, that plain insert operations (with no cursors and no returned fields) are quite fast as they are, so the gains from using a separate thread might be negligible.
Try running a benchmark while doing your logging a) from a separate logging thread as per your plan, and b) from within your main thread, and see whether it makes any difference. (And post your results here if you can, they would be interesting for others to see.)
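For reference, a minimal sketch of the bounded-queue logging thread described above; the table, column names, and JDBC URL are hypothetical placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncTransactionLogger {

    static final class LogRecord {
        final String txId;
        final long startMillis;
        final long endMillis;
        LogRecord(String txId, long startMillis, long endMillis) {
            this.txId = txId;
            this.startMillis = startMillis;
            this.endMillis = endMillis;
        }
    }

    // Bounded queue: if the logger falls behind, producers block in log()
    // instead of letting the backlog grow without limit.
    private final BlockingQueue<LogRecord> queue = new ArrayBlockingQueue<>(10_000);

    public void log(LogRecord record) throws InterruptedException {
        queue.put(record); // blocks when the queue is full
    }

    public void start() {
        Thread logger = new Thread(() -> {
            // Hypothetical connection and table; replace with your own data source.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:log");
                 PreparedStatement insert = conn.prepareStatement(
                         "INSERT INTO tx_log (tx_id, start_time, end_time) VALUES (?, ?, ?)")) {
                while (true) {
                    LogRecord r = queue.take(); // blocks until a record is available
                    insert.setString(1, r.txId);
                    insert.setLong(2, r.startMillis);
                    insert.setLong(3, r.endMillis);
                    insert.executeUpdate();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "tx-logger");
        logger.setDaemon(true);
        logger.start();
    }
}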
From my point of view, the best idea is a Java + RabbitMQ broker + background process architecture.
For example:
The Java process enqueues a JSON message in a RabbitMQ queue. This step can be done asynchronously through the ExecutorService class if you want a thread pool; it can also be done synchronously, given RabbitMQ's high enqueue speed.
A background process connects to the queue and starts consuming the messages. Its task is to read and interpret the messages one by one and perform the database insert with their content.
This way you have two separate processes, and database operations won't affect the main process.
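To make that concrete, here is a minimal publisher sketch assuming a recent amqp-client (5.x); the queue name, host, and payload are assumptions:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

import java.nio.charset.StandardCharsets;

public class TransactionLogPublisher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Durable queue so log messages survive a broker restart.
            channel.queueDeclare("tx-log", true, false, false, null);

            String json = "{\"txId\":\"42\",\"start\":1,\"end\":2}"; // example payload

            // Persistent delivery mode; a separate background consumer process
            // reads this queue and performs the actual database insert.
            channel.basicPublish("", "tx-log",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    json.getBytes(StandardCharsets.UTF_8));
        }
    }
}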

Does multi-threading improve performance? Java scenario [duplicate]

This question already has answers here:
Does multi-threading improve performance? How?
(2 answers)
I have a List<Object> objectsToProcess. Let's say it contains 1,000,000 items. For all items in the list you then process each one like this:
for (Object object : objectsToProcess) {
    // go to database, retrieve data
    // process
    // save data
}
My question is: would multi-threading improve performance? I would have thought that multiple threads are allocated by default by the processor anyway?
In the described scenario, given that process is a time-consuming task, and given that the CPU has more than one core, multi-threading will indeed improve the performance.
The processor is not the one that allocates the threads. The processor merely provides the resources (virtual CPUs / virtual processors, i.e. more than one execution unit / execution context) that threads can run on. Programs need to create multiple threads themselves in order to utilize multiple CPU cores at the same time.
The two major reasons for multi-threading are:
Making use of multiple CPU cores which would otherwise be unused or at least not contribute to reducing the time it takes to solve a given problem - if the problem can be divided into subproblems which can be processed independently of each other (parallelization possible).
Making the program act and react on multiple things at the same time (i.e. Event Thread vs. Swing Worker).
There are programming languages and execution environments in which threads are created automatically in order to process problems that can be parallelized. Java is not (yet) one of them, but since Java 8 it has been moving in that direction, and Java 9 may bring even more.
Usually you do not want significantly more threads than the CPU provides cores, for the simple reason that thread switching and thread synchronization are overhead that slows things down.
The package java.util.concurrent provides many classes that help with typical multithreading problems. What you want is an ExecutorService to which you assign the tasks that should be run and completed in parallel. The class Executors provides factory methods for creating popular types of ExecutorService. If your problem just needs to be solved in parallel, you might go for Executors.newCachedThreadPool(). If your problem is urgent, you might go for Executors.newWorkStealingPool().
Your code thus could look like this:
final ExecutorService service = Executors.newWorkStealingPool();
for (final Object object : objectsToProcess) {
    service.submit(() -> {
        // go to database, retrieve data
        // process
        // save data
    });
}
Please note that the sequence in which the objects would be processed is no longer guaranteed if you go for this approach of multithreading.
If your objectsToProcess are something which can provide a parallel stream, you could also do this:
objectsToProcess.parallelStream().forEach(object -> {
    // go to database, retrieve data
    // process
    // save data
});
This will leave the decisions about how to handle the threads to the VM, which often will be better than implementing the multi-threading ourselves.
Further reading:
http://docs.oracle.com/javase/tutorial/collections/streams/parallelism.html#executing_streams_in_parallel
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/package-summary.html
Depends on where the time is spent.
If you have a load of calculations to do then allocating work to more threads can help, as you say each thread may execute on a separate CPU. In such a situation there is no value in having more threads than CPUs. As Corbin says you have to figure out how to split the work across the threads and have responsibility for starting the threads, waiting for completion and aggregating the results.
If, as in your case, you are waiting for a database, then there can be additional value in using threads. A database can serve several requests in parallel (the database server itself is multi-threaded), so instead of coding
for (Object object : objectsToProcess) {
    // go to database, retrieve data
    // process
    // save data
}
where you wait for each response before issuing the next, you want to have several worker threads, each performing
Go to database retrieve data.
process
save data
Then you get better throughput. The trick though is not to have too many worker threads. Several reasons for that:
Each thread uses some resources: it has its own stack and its own connection to the database. You would not want 10,000 such threads.
Each request uses resources on the server: each connection uses memory, and each database server will only serve so many requests in parallel. There is no benefit in submitting thousands of simultaneous requests if the server can only serve tens of them in parallel. Also, if the database is shared, you probably don't want to saturate it with your requests; you need to be a "good citizen".
Net: you will almost certainly benefit from having a number of worker threads. How many threads help will be determined by factors such as the number of CPUs you have and the ratio between the amount of processing you do and the response time from the DB. You can only really determine that by experiment, so make the number of threads configurable and investigate. Start with, say, 5, then 10, and keep an eye on the load on the DB as you increase the number of threads.
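As a sketch of that advice (the system property name, the default of 5 workers, and the task body are placeholders), a fixed-size worker pool whose size is configurable:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BatchProcessor {

    public static void processAll(List<Object> objectsToProcess) throws InterruptedException {
        // Make the pool size configurable and tune it by experiment (start small).
        int workers = Integer.getInteger("worker.threads", 5);
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (Object object : objectsToProcess) {
            pool.submit(() -> {
                // go to database, retrieve data for this object
                // process
                // save data
            });
        }

        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for the queued work to finish
    }
}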

How to handle thousands of background jobs? [closed]

We are using Quartz for a background server whose purpose is to systematically aggregate data by applying some business rules. Essentially, we have three background jobs which fire m*n more jobs. Since we are a SaaS application, we have multiple tenants, so we end up with (number of tenants * (3 + m*n)) jobs. These are fired over ten threads, and the triggers repeat indefinitely because the data must be aggregated hourly due to business constraints. Note that once these jobs are fired at server startup the set of jobs remains fixed, i.e. no new jobs are added, so the final number of jobs is as mentioned above.
Each job hits the DB and some of them could take more than a second as well.
Could any of you suggest the best way to scale this? We could consider restructuring the code as well, since this code was more of a POC and we really need to SCALE!!!
----------------------EDIT----------------------
From the responses received so far, I would like to make the question more concise. The approach we followed was: using 10 threads, we scheduled multiple Quartz jobs at server startup and triggered them indefinitely to run every hour. Does anyone have a suggestion on how to approach such problems more efficiently? Is the Quartz scheduler the best approach, or should we use some other tool/framework, maybe Spring Batch?

Which of them requires multiple processors? Multitasking, multiprocessing and multithreading [closed]

1. Is it possible to achieve multithreading with a single processor?
Multiprocessing: several jobs run at the same time (so it requires more than one processor).
Multitasking: sharing of a processor among various tasks, where scheduling algorithms come in to context-switch between tasks (does not necessarily need multiple processors).
Multithreading: a single process broken into subtasks (threads), which lets you execute them like multitasking or multiprocessing and combine their results at the end (does not necessarily need multiple processors).
Links:
http://en.wikipedia.org/wiki/Computer_multitasking#Multithreading
http://en.wikipedia.org/wiki/Multiprocessing
http://en.wikipedia.org/wiki/Multiprogramming#Multiprogramming
Edit: to answer your question, multithreading is quite possible with one processor.
Yes, it is possible.
With a single processor, the threads will take turns executing. Exactly how this is implemented is up to the operating system.
If the work is computation-heavy, you will probably lose more than you gain because of the added scheduling overhead. On the other hand, if there is a lot of waiting, for example for network resources, you can gain a lot from using several threads on a single processor.
Yes, it is possible.
The threads get their turn in time slices, i.e. each thread executes for a particular interval and then another gets its turn.
For more info.
Time-slicing
Preemption
The thread concept is mainly used to achieve multitasking on a single processor; to minimize the idle time of the processor, we use multithreading in Java.
