We are using Quartz for a background server whose purpose is to systematically aggregate data by applying some business rules. Essentially, we have three background jobs, which together fire m*n more jobs. Since we are a SaaS application with multiple tenants, we end up with (number of tenants * (3 + m*n)) jobs. These are spread over ten threads, and the triggers repeat indefinitely because business constraints require hourly aggregation. Note that once these jobs are fired at server startup the set remains fixed, i.e. no new jobs are added, so the final number of jobs is as calculated above.
Each job hits the DB, and some of them can take more than a second.
Could any of you suggest the best way to scale this? We are open to restructuring the code, as it was more of a POC, and we really need to scale.
----------------------EDIT----------------------
Based on the responses received so far, I would like to make the question more concise. The approach we followed was to schedule many Quartz jobs at server startup on a pool of 10 threads, with triggers that repeat indefinitely so each job runs every hour. Does anyone have a suggestion on how to approach such problems more efficiently? Is the Quartz scheduler the best approach, or should we use some other tool/framework, maybe Spring Batch?
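For reference, a minimal sketch of the kind of setup we currently have, using the plain Quartz API with a 10-thread pool and hourly repeating triggers (the AggregationJob class, the group names and the tenant ids are placeholders, not our real code):

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;
import java.util.Properties;

public class HourlyAggregationScheduler {

    // Placeholder job: one hourly aggregation run for a single tenant.
    public static class AggregationJob implements Job {
        @Override
        public void execute(JobExecutionContext ctx) {
            String tenantId = ctx.getMergedJobDataMap().getString("tenantId");
            // ... apply the business rules and aggregate this tenant's data ...
        }
    }

    public static void main(String[] args) throws SchedulerException {
        // RAM-backed scheduler with 10 worker threads, as described above.
        Properties props = new Properties();
        props.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.put("org.quartz.threadPool.threadCount", "10");
        props.put("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");
        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        scheduler.start();

        // One job per tenant (placeholder tenants), repeating every hour forever.
        for (String tenantId : new String[]{"tenant-a", "tenant-b"}) {
            JobDetail job = JobBuilder.newJob(AggregationJob.class)
                    .withIdentity("aggregate-" + tenantId, "aggregation")
                    .usingJobData("tenantId", tenantId)
                    .build();
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("hourly-" + tenantId, "aggregation")
                    .startNow()
                    .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                            .withIntervalInHours(1)
                            .repeatForever())
                    .build();
            scheduler.scheduleJob(job, trigger);
        }
    }
}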
I'm building an application with a database that will be used solely for logging. We log the incoming transaction id and its start and end time. The application itself makes no other use of this database. Hence I want to execute this insert query as efficiently as possible without affecting the application itself. My idea is to run the whole database insert in a separate thread, so the insert happens without interfering with the actual work. I would like to know whether there is any design pattern for this kind of scenario, or whether my thinking is on the right track.
Your thinking is right. Post the generated data from your main thread(s) onto a thread-safe blocking queue, and have the logging thread loop: block waiting for a message to appear in the queue, send that message to the database, and repeat.
If there is a chance, however small, that your application may be generating messages faster than your logging thread can process them, then consider giving the queue a maximum capacity, so that the application gets blocked when trying to enqueue a message in the event that the maximum capacity is reached. This will incur a performance penalty, but at least it will be controlled, whereas allowing the queue to grow without a limit may lead to degraded performance in all sorts of other unexpected and nasty ways, and even to out-of-memory errors.
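A minimal sketch of that pattern, assuming the fields you mentioned (transaction id, start and end time) and a placeholder writeToDatabase method standing in for your actual JDBC insert:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncDbLogger {

    // Placeholder payload: the fields you said you log.
    public record LogEntry(String transactionId, long startMillis, long endMillis) {}

    // Bounded capacity: producers block when the logger falls behind, as described above.
    private final BlockingQueue<LogEntry> queue = new ArrayBlockingQueue<>(10_000);

    public AsyncDbLogger() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    LogEntry entry = queue.take();   // blocks until a message appears
                    writeToDatabase(entry);          // placeholder for your JDBC insert
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // stop when interrupted
            }
        }, "db-logger");
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the main thread(s); blocks only if the queue is full.
    public void log(LogEntry entry) throws InterruptedException {
        queue.put(entry);
    }

    private void writeToDatabase(LogEntry entry) {
        // e.g. INSERT INTO transaction_log (tx_id, start_time, end_time) VALUES (?, ?, ?)
    }
}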
Be advised, however, that plain insert operations (with no cursors and no returned fields) are quite fast as they are, so the gains from using a separate thread might be negligible.
Try running a benchmark while doing your logging a) from a separate logging thread as per your plan, and b) from within your main thread, and see whether it makes any difference. (And post your results here if you can, they would be interesting for others to see.)
From my point of view, the best idea is a Java process + RabbitMQ broker + background consumer process architecture.
For example:
The Java process enqueues a JSON message in a RabbitMQ queue. This step can be done asynchronously through the ExecutorService class if you want a thread pool, but it can also be done synchronously given RabbitMQ's high enqueue speed.
A background process connects to the queue and starts consuming messages. Its task is to read and interpret each message and perform the database insert with its contents.
This way, you have two separate processes, and database operations won't affect the main process.
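A rough sketch of the producing side with the official RabbitMQ Java client (the queue name, host and JSON payload are made up for illustration); the consuming background process would read from the same queue and do the actual insert:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class LogPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");   // assumed broker location
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Durable queue so log messages survive a broker restart.
            channel.queueDeclare("transaction-log", true, false, false, null);
            String json = "{\"txId\":\"42\",\"start\":1000,\"end\":1350}";   // example payload
            channel.basicPublish("", "transaction-log", null, json.getBytes(StandardCharsets.UTF_8));
        }
    }
}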
In a Spring Boot web application, I need to be able to do two tasks.
Tasks
Continuously check the serial port for data to read; somebody may have passed a card over the scanner. I think this task needs to start with the application.
When a new member arrives, I need to scan a card, so task 1 needs to be suspended/stopped... If the card is not assigned to anybody, it is assigned to this member, and then task 1 restarts.
I don't know the best way to implement task 1 so that it facilitates task 2.
I see there are many possibilities: @Scheduled, a TaskScheduler that executes a thread, ...
Any suggestions?
You should create one thread that reads data from the serial port in a loop and, whenever something useful has been read, dispatches it as an event to a service that handles it.
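A rough sketch of that idea for Spring Boot: one reader thread started with the application, publishing each scan as an application event; SerialPortReader is a stand-in interface for whatever serial library you use (RXTX, jSerialComm, ...), and CardScannedEvent is a made-up event class:

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class CardReaderLoop {

    // Stand-in for your serial library; wire in the real thing here.
    public interface SerialPortReader { String readCardId() throws InterruptedException; }

    // Made-up event carrying the scanned card id.
    public record CardScannedEvent(String cardId) {}

    private final ApplicationEventPublisher publisher;
    private final SerialPortReader reader;

    public CardReaderLoop(ApplicationEventPublisher publisher, SerialPortReader reader) {
        this.publisher = publisher;
        this.reader = reader;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void start() {
        Thread loop = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String cardId = reader.readCardId();                    // blocks until a card is scanned
                    publisher.publishEvent(new CardScannedEvent(cardId));   // listeners decide: normal check (task 1) or assignment (task 2)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "card-reader");
        loop.setDaemon(true);
        loop.start();
    }
}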
I have an interesting scenario-based question related to Java threads.
Sam has designed an application. It segregates tasks that are critical and executed frequently from tasks that are non-critical and executed less frequently. He has prioritized these tasks based on their criticality and frequency of execution. After close scrutiny, he finds that the tasks designated as non-critical rarely get executed. What kind of problem is the application suffering from?
I have figured it out as "starvation", but I am still confused about whether I am right or wrong.
Starvation is a reasonable term for what is going on here. However, your lecturer might have something more specific in mind (... I'm not sure what ...) so check your lecture notes and text books.
... I am still confused about whether I am right or wrong.
If you stated why you are confused, we might be able to sort out your confusion.
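For illustration only (a toy sketch, not necessarily what your lecturer has in mind): a single worker draining a priority queue will starve the low-priority task whenever higher-priority tasks keep arriving faster than they are processed:

import java.util.concurrent.PriorityBlockingQueue;

public class StarvationDemo {

    // Lower number = more critical; the queue always hands out the most critical task first.
    record Task(int priority, String name) implements Comparable<Task> {
        public int compareTo(Task other) { return Integer.compare(priority, other.priority); }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();
        queue.add(new Task(10, "non-critical cleanup"));   // may never run

        // Single worker: processes roughly one task every 50 ms.
        new Thread(() -> {
            try {
                while (true) {
                    Task t = queue.take();
                    System.out.println("running " + t.name());
                    Thread.sleep(50);   // simulate work
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();

        // Critical tasks arrive every 10 ms, faster than they are drained,
        // so the non-critical task never reaches the head of the queue: starvation.
        while (true) {
            queue.add(new Task(1, "critical job"));
            Thread.sleep(10);
        }
    }
}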
Is it possible to achieve multithreading with a single processor?
Multiprocessing: several jobs run at the same time (so it requires more than one processor).
Multitasking: a processor is shared among various tasks; scheduling algorithms come in to context-switch between tasks (does not necessarily need multiple processors).
Multithreading: a single process is broken into sub-tasks (threads), which lets you execute them as in multitasking or multiprocessing and combine their results at the end (does not necessarily need multiple processors).
Links:
http://en.wikipedia.org/wiki/Computer_multitasking#Multithreading
http://en.wikipedia.org/wiki/Multiprocessing
http://en.wikipedia.org/wiki/Multiprogramming#Multiprogramming
Edit: to answer your question, multithreading is quite possible with one processor.
Yes, it is possible.
With a single processor, the threads will take turns executing. Exactly how this is implemented is up to the operating system.
If the work is computation-heavy, you will probably lose more than you gain because of the added scheduling overhead. On the other hand, if there is a lot of waiting, for example for network resources, you can gain a lot from using several threads on a single processor.
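A small sketch of the second case: each task simulates a blocking call (the sleep stands in for waiting on the network), so even on a single core eight of them overlap their waiting time and finish in roughly the time of one:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class IoBoundThreads {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Callable<String>> tasks = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            final int id = i;
            tasks.add(() -> { Thread.sleep(200); return "response " + id; });   // stand-in for a network call
        }

        long start = System.nanoTime();
        pool.invokeAll(tasks);   // roughly 200 ms in total, not 8 x 200 ms, even on one core
        System.out.printf("took %d ms%n", (System.nanoTime() - start) / 1_000_000);
        pool.shutdown();
    }
}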
Yes, it is possible.
The threads get their turn via time-slicing, i.e. each thread executes for some particular interval and then another gets its turn.
For more info:
Time-slicing
Preemption
The thread concept is mainly used to achieve multitasking on a single processor; we use multithreading in Java to minimize the idle time of the processor.
I have a requirement to handle millions of threads, and I know this depends heavily on the hardware configuration and the JVM.
I have used executors for the task.
Call flow of my project:
user(mobile)----->Server(Telecom)------>Application----->Server(Telecom)----->User
Code call flow:
A------>B---------->C
//Code snippet of A
public static final int maxPoolSize = 100;
ExecutorService executorCU = Executors.newFixedThreadPool(maxPoolSize); // pool of 100 threads created in A
Runnable handleCalltask = new B(valans, sessionID, msisdn);             // B in turn submits C (see below)
executorCU.execute(handleCalltask);
//Code snippet of B
public static final int maxPoolSize = 10;
ExecutorService executor = Executors.newFixedThreadPool(maxPoolSize);   // a second, nested pool of 10 threads
Runnable handleCalltask = new C(valans, sessionID, msisdn);
executor.execute(handleCalltask);
There are also shared maps, which I implemented as ConcurrentHashMap instances that get loaded when the application starts.
Is my approach correct, and if not, can anybody suggest how I can achieve maximum threading in my web application?
I have tested with JMeter and the results are not at all encouraging.
Thanks.
Is my approach correct
IMO, no, it's definitely not the correct approach.
and if not, can anybody suggest how I can achieve maximum threading in my web application?
Separate receiving messages from the client from processing those messages. That way, you can horizontally scale the two parts independently to meet your requirements without having millions of threads in a single JVM.
A few suggestions:
1) I'd make the web application as light as possible and submit any long running tasks to some sort of backend processor.
Within the same JVM, you could use a ThreadPoolExecutor with an ArrayBlockingQueue (see the sketch after this list).
If you wanted to submit the jobs to another JVM, you could use JMS with competing consumers or something like Apache Kafka.
Again the benefit here is that you can add more nodes to either the backend or frontend of the app as required.
2) If required, make your application server's thread pool larger.
For instance, with Tomcat you'd tweak the parameters described here: http://tomcat.apache.org/tomcat-7.0-doc/config/executor.html. Explaining how to correctly tune these parameters is more than I can describe here. Among other things, the values you select will depend on the average number of concurrent requests, the maximum number of concurrent requests, the time required to serve a single request, and the number of application servers in your pool.
3) You'll get the most scalability by reducing statefulness.
If a request can be dispatched to any front end consumer and then processed by any backend consumer, you can add more instances of either to scale. If one request depends on another, you'll need to synchronize the processing of requests across nodes, which reduces scalability. Design things to be stateless from the start if at all possible.
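For point 1, a minimal sketch of the in-JVM variant: a ThreadPoolExecutor fed by a bounded ArrayBlockingQueue, to which the web layer hands off long-running work (the pool sizes and the CallerRunsPolicy fallback are illustrative, not tuned values):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BackendWorkers {

    // 10 core / 50 max threads and a bound of 1000 queued tasks (illustrative numbers).
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            10, 50,
            60, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1000),
            new ThreadPoolExecutor.CallerRunsPolicy());   // back-pressure: the caller runs the task if the queue is full

    // The web layer submits work here and returns to the client quickly.
    public void submit(Runnable longRunningTask) {
        pool.execute(longRunningTask);
    }

    public void shutdown() {
        pool.shutdown();
    }
}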
I have tested with JMeter and the results are not at all encouraging.
You need to profile your application to determine where the hot spots are. If you follow my recommendations above, you can easily add more horsepower where required.