Every request involves a lot of computation; on average, a reply takes about 10 minutes to produce. If a user sends a new request in the meantime, there is no point in letting the previous request continue.
So I have written code that interrupts the thread executing the previous request. Is that good practice in a Tomcat environment? Is it all right to interrupt Tomcat's threads, or is there a better way to deal with this?
Or should I manage my own thread pool and let the pool do the computation for me?
More Information:
Basically the whole task is wrapped in a FutureTask. For every request, this task is executed and a reference to it is stored in a ConcurrentHashMap. On each new request, all the futures in the map are cancelled before the latest request is executed, thus cancelling the previous requests.
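The question's actual code isn't shown, but a minimal sketch of the described pattern could look like this (all names here are illustrative):

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

public class LatestRequestWins {
    // Illustrative names; the question does not show the original code.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<Long, Future<String>> inFlight = new ConcurrentHashMap<>();
    private final AtomicLong ids = new AtomicLong();

    public Future<String> submitLatest(Callable<String> heavyComputation) {
        // Cancel every outstanding request, interrupting any that are running.
        for (Future<String> f : inFlight.values()) {
            f.cancel(true);
        }
        inFlight.clear();
        Future<String> latest = pool.submit(heavyComputation);
        inFlight.put(ids.incrementAndGet(), latest);
        return latest;
    }
}
```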
Q> I basically interrupt the previous thread executing it. Is it good practice in a Tomcat environment?
A> I think it's fine as long as you're happy having HTTP thread(s) blocked for 10 minutes, which means those threads cannot serve any other user's requests in the meantime. Otherwise, create your own thread pool and manage it.
Q> Is it a good practice in tomcat environment?
A> Interrupting Runnables or Callables can be tricky. For example, if your thread is in the middle of an I/O operation, interrupting it can leave the data in a corrupt state. Other than that, it is quite normal practice. I also recommend using your own thread pool so that your server's capacity stays predictable.
Can you break your large task into many smaller ones? Checking a condition between them and exiting early could be a good alternative to interruption.
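For instance, a minimal sketch of that early-exit check, using the interrupt flag as the condition (the chunking and the unit of work are purely illustrative):

```java
import java.util.concurrent.Callable;

// Sketch: cooperative cancellation by checking a condition between small
// units of work, instead of being forcibly interrupted mid-operation.
public class ChunkedComputation implements Callable<Long> {
    @Override
    public Long call() throws InterruptedException {
        long total = 0;
        for (int chunk = 0; chunk < 600; chunk++) {          // ~600 small steps
            if (Thread.currentThread().isInterrupted()) {
                // A newer request made this one obsolete; bail out early.
                throw new InterruptedException("cancelled: newer request arrived");
            }
            total += doOneUnitOfWork(chunk);                 // hypothetical unit of work
        }
        return total;
    }

    private long doOneUnitOfWork(int chunk) {
        return chunk; // stand-in for the real computation
    }
}
```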
Alternatively, would it make sense in your environment to wait for the first task to finish the operation and have the other requests simply return the same value? If so, I'd prefer that over your approach. The LoadingCache in the Guava library does exactly that.
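A sketch of that request-coalescing behaviour with Guava (the key and the computation are made up for illustration):

```java
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class CoalescingCacheExample {
    private static final LoadingCache<String, String> CACHE = CacheBuilder.newBuilder()
            .expireAfterWrite(10, TimeUnit.MINUTES)
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String key) {
                    return expensiveComputation(key); // hypothetical long-running job
                }
            });

    static String expensiveComputation(String key) {
        return "result-for-" + key; // stand-in for the 10-minute computation
    }

    public static void main(String[] args) throws Exception {
        // Concurrent get() calls for the same key block on a single
        // computation and all observe the same value.
        System.out.println(CACHE.get("report-42"));
    }
}
```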
I am not sure I understand your question completely...
But if you are talking about running a thread pool within a Tomcat application and cancelling its future tasks, I see no problem with that.
I would not interrupt a thread allocated by Tomcat unless I had written the code that deals with the interrupt myself (such as within the servlet class).
Related
I get that with non-blocking threads we don't need thread sprawl proportional to N concurrent requests; instead, in the reactive web programming pattern, we put our tasks on a single event loop.
Yes, that can help, but since the event loop is a queue, what if the first task to be processed blocks forever? Then the event loop never progresses, and that is the end of responses and of any processing other than queueing more tasks. Yes, timeouts are probably possible, but I can't wrap my head around how the event loop can be a good solution.
Say you have 3 tasks that each wait 3 seconds on I/O before executing, and they get submitted to the event queue. Then they will still take 9 seconds in total to be processed and executed once the I/O resolves. With blocking threads, this would have finished in 3 seconds, since they run concurrently.
Where I can see a benefit is if the event loop is not really a queue and, upon a signal that a task is ready, it dispatches that task to be processed. In that case, though, the order of task execution would not be maintained, and each task would still need a running thread to be able to tell when its I/O has resolved.
Maybe I am not understanding the event loop and thread handling correctly. Can someone correct me, please? It seems like this Reactor pattern could make things worse.
Lastly, given X requests in Spring Reactor, does only 1 thread get created to run the handlers, instead of the traditional X threads? In that case, if someone accidentally wrote blocking code, doesn't that mean each subsequent request gets queued?
It is not a good idea to use the event loop for long-running tasks; this is considered an anti-pattern. Usually the loop is only used for quickly picking up incoming events, not for doing the work associated with those events if that work would block the loop noticeably. You would want a separate thread pool for executing long-running tasks. The event loop then only initiates work using asynchronous, non-blocking constructs (or does the work itself only if it can be done very quickly), and passes the heavier, possibly blocking tasks to a separate thread pool (for CPU-intensive computations) or to the operating system (such as data buffers to be sent over the network).
Also, don't be fooled by the fact that only one thread is dealing with the events: it is very fast and is usually enough even for demanding applications. Platforms like NodeJS and frameworks like Netty (used in Akka, Play Framework, Apache Cassandra, etc.) use an event loop at their heart with great success. One should just be aware that performing blocking operations inside the event loop is generally a bad idea.
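To make the offloading concrete, here is a minimal Project Reactor sketch (assuming Reactor 3; `blockingCall` is a made-up stand-in for any blocking operation):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class OffloadBlockingWork {
    // Keep the event loop free: run the blocking call on a separate
    // scheduler backed by its own thread pool.
    static Mono<String> fetch() {
        return Mono.fromCallable(OffloadBlockingWork::blockingCall)
                .subscribeOn(Schedulers.boundedElastic()); // not on the event loop
    }

    static String blockingCall() {
        return "done"; // stand-in for JDBC, file I/O, or any blocking operation
    }

    public static void main(String[] args) {
        System.out.println(fetch().block()); // block() only for this demo
    }
}
```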
Please have a look at some of these posts for more information:
The reactor pattern and non blocking IO
Unix Network Programming
Kotlin Webflux
Slightly off topic but still a very prominent example: Don't Block the Event Loop (NodeJS)
I am in a weird situation. In my web server (Tomcat), on a web request, I basically need to cancel the previous request. I have a reference to the thread that was executing the previous request, so I can directly interrupt that thread and the node will do the rest.
I know you are not supposed to interrupt a thread you do not own, but is it safe to interrupt a Tomcat thread in this case? What other way is there? Maintaining my own thread pool seems a waste of resources and overhead.
Maintaining your own thread pool costs some resources, but it is a gain in every other respect, such as the stability of your application server. So you need to decide what is more important: a few thousand bytes of memory and some CPU cycles, or a stable, reliable application.
The problem with interrupting another thread is that you usually can't know for sure where in the code that other thread is. You might want to use locking for this:
Thread A locks something while it is safe to interrupt; thread B checks the lock and, if it can't get it, interrupts A.
But what happens when A is just about to give up the lock: B checks the lock, A unlocks and starts its cleanup, and then B sends the interrupt?
So you should really use your own thread pool.
I would not do it. I'm not 100% sure why, but I think those threads come from Tomcat's own worker thread pool, and killing them one by one could eventually leave you with a non-responding Tomcat instance (this is just a hypothesis).
I would argue against the claim that "maintaining your own thread pool is a waste of resources and overhead". It is a minor cost; thread pools are great, do not be afraid of them. I do not know the details of your application, but if you measured the overhead with JConsole you could decide where optimization is needed, and it is unlikely that the thread pool would be the bottleneck.
The best thing I can suggest is a complete redesign: use short-returning HTTP requests to start long-running asynchronous operations in the background by submitting tasks to an ExecutorService or similar. That way there is no need to harm Tomcat's own threads, and the overall usability of your application could also improve from a user/client perspective.
To sum up: I think it is not safe to do what you mentioned, and one possible alternative is the redesign described above (sketched below).
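A hedged sketch of that redesign (class, method, and endpoint semantics are all illustrative, not from the answer):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: the HTTP handler returns immediately with a job id; a background
// pool does the long-running work; a second endpoint polls for the result.
public class JobService {
    private final ExecutorService workers = Executors.newFixedThreadPool(8);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Called from a short-lived HTTP request; returns immediately.
    public String startJob(Callable<String> longRunningTask) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, workers.submit(longRunningTask));
        return id; // the client polls using this id
    }

    // Called from a status-polling HTTP request.
    public String poll(String id) throws Exception {
        Future<String> f = jobs.get(id);
        if (f == null) return "unknown job";
        return f.isDone() ? f.get() : "still running";
    }
}
```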
When writing a multithreaded Internet server in Java, the main thread starts new ones to serve incoming requests in parallel.
Is there any problem if the main thread does not wait (with .join()) for them? (It would obviously be absurd to create a new thread and then wait for it.)
I know that, in a practical situation, you should (or must?) implement a pool of threads to reuse them for new requests when they become idle. But for small applications, should we use a pool of threads?
You don't need to wait for threads.
They can either complete running on their own (if they've been spawned to perform one particular task), or run indefinitely (e.g. in a server-type environment).
They should handle interrupts and respond to shutdown requests, however. See this article on how to do this correctly.
If you need a set of threads, I would use a pool and the executor methods, since they'll look after thread resource management for you. If you're writing a multi-threaded network server, then I would investigate using (say) a servlet container or a framework such as Mina.
The only problem in your approach is that it does not scale well beyond a certain request rate. If the requests are coming in faster than your server is able to handle them, the number of threads will rise continuously. As each thread adds some overhead and uses CPU time, the time for handling each request will get longer, so the problem will get worse (because the number of threads rises even faster). Eventually no request will be able to get handled anymore because all of the CPU time is wasted with overhead. Probably your application will crash.
The alternative is to use a ThreadPool with a fixed upper bound of threads (which depends on the power of the hardware). If there are more requests than the threads are able to handle, some requests will have to wait too long in the request queue, and will fail due to a timeout. But the application will still be able to handle the rest of the incoming requests.
Fortunately the Java API already provides a nice and flexible ThreadPool implementation, see ThreadPoolExecutor. Using this is probably even easier than implementing everything with your original approach, so no reason not to use it.
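A minimal sketch of such a fixed-bound pool (the sizes are made-up examples to tune for your hardware):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedServerPool {
    // A fixed number of workers plus a bounded queue of waiting requests;
    // anything beyond that is rejected (RejectedExecutionException) instead
    // of piling up until the server collapses.
    static final ExecutorService POOL = new ThreadPoolExecutor(
            16, 16,                          // fixed pool size, tune to the hardware
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(200),   // bounded backlog of waiting requests
            new ThreadPoolExecutor.AbortPolicy());

    public static void main(String[] args) {
        POOL.execute(() -> System.out.println("handling request"));
        POOL.shutdown();
    }
}
```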
Thread.join() lets you wait for the Thread to end, which is mostly the opposite of what you want when starting a new thread. After all, you start the new thread to do work in parallel with the original one.
Only if you really need to wait for the spawned thread to finish should you join() it.
You should wait for your threads if you need their results, or if you need to do some cleanup that is only possible after all of them are dead; otherwise not.
As for the thread pool: I would use one whenever you have a non-fixed number of tasks to run, i.e. when the number depends on the input.
I would like to collect the main ideas of this interesting (for me) question.
1) I can't totally agree with "you don't need to wait for threads", but only in the sense that if you don't join a thread (and don't keep a pointer to it), its resources are freed once the thread is done (right? I'm not sure).
2) The use of a thread pool is only necessary to avoid the overhead of thread creation, because ...
3) You can limit the number of parallel running threads by accounting, with shared variables (and without a thread pool), for how many of them were started but not yet finished (see the sketch below).
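One way to do that accounting is with a counting semaphore as the shared variable; this is only a sketch, and the limit of 16 is an arbitrary example:

```java
import java.util.concurrent.Semaphore;

public class LimitedSpawner {
    // Sketch: cap in-flight threads without a pool, using shared state.
    private static final Semaphore SLOTS = new Semaphore(16); // max parallel threads

    public static void handle(Runnable request) throws InterruptedException {
        SLOTS.acquire();                   // blocks while 16 threads are in flight
        new Thread(() -> {
            try {
                request.run();
            } finally {
                SLOTS.release();           // free the slot when the thread finishes
            }
        }).start();
    }
}
```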
I'm currently working on a daemon that will be doing A LOT of different tasks. It's multi-threaded and is being built to handle almost any kind of internal error without crashing. Well, I'm getting to the point of handling a shutdown request and I'm not sure how I should go about doing it.
I have a shutdown hook setup, and when it's called it sets a variable telling the main daemon loop to stop running. The problem is, this daemon spawns multiple threads and they can take a long time. For instance, one of these threads could be converting a document. Most of them will be quick (I'm guessing under 10 seconds), but there will be threads that can last as long as 10+ minutes.
What I'm thinking of doing right now is, when the shutdown hook fires, to loop for about 5 seconds on ThreadGroup.activeCount() with a 500 ms (or so) sleep (all these threads are in a ThreadGroup), and before this loop to notify all threads that a shutdown request has been made. They will then have to clean up and shut down immediately, no matter what they're doing.
Anyone else have any suggestions? I'm interested in what a daemon like MySQL, for instance, does when it is told to stop: it stops instantly. What happens if, say, 10 very slow queries are running? Does it wait, or does it just end them? I mean, servers are really quick, so there really isn't any kind of operation that I shouldn't be able to do in less than a second. You can do A LOT in 1000 ms these days.
Thanks
The java.util.concurrent package provides a number of utilities, such as ThreadPoolExecutor (along with various specialized Executor implementations available from the Executors class) and ThreadPoolExecutor.awaitTermination(), which you might want to look into, as they provide exactly the functionality you are trying to implement. This way you can concentrate on the actual functionality of your application/tasks instead of worrying about things like thread and task scheduling.
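For reference, the usual two-phase shutdown idiom with awaitTermination() looks roughly like this (the timeouts are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    // Stop accepting work, give running tasks a grace period,
    // then interrupt whatever is left.
    static void shutdownAndAwait(ExecutorService pool) {
        pool.shutdown();                   // no new tasks accepted
        try {
            if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
                pool.shutdownNow();        // interrupt still-running tasks
                if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
                    System.err.println("Pool did not terminate");
                }
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();            // re-cancel if we were interrupted
            Thread.currentThread().interrupt();
        }
    }
}
```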
Are your thread jobs amenable to interruption via Thread#interrupt()? Do they mostly call on functions that themselves advertise throwing InterruptedException? If so, then the aforementioned java.util.concurrent.ExecutorService#shutdownNow() is the way to go. It will interrupt any running threads and return the list of jobs that were never started.
Similarly, if you hang on to the Futures produced by ExecutorService#submit(), you can use Future#cancel(boolean) and pass true to request that a running job be interrupted.
Unless you're calling on code out of your control that swallows interrupt signals (say, by catching InterruptedException without calling Thread.currentThread().interrupt()), using the built-in cooperative interruption facility is a better choice than introducing your own flags to approximate what's already there.
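Put together, a small sketch of the Future#cancel(boolean) route (the sleep stands in for real work):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CancelWithInterrupt {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> job = pool.submit(() -> {
            try {
                Thread.sleep(600_000);        // stand-in for a long-running job
            } catch (InterruptedException e) {
                System.out.println("interrupted; cleaning up and exiting");
            }
        });

        Thread.sleep(100);   // give the job a moment to start (demo only)
        job.cancel(true);    // true = interrupt the thread running the job
        pool.shutdown();
    }
}
```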
I'm using a java.util.concurrent.ExecutorService that I obtained by calling Executors.newSingleThreadExecutor(). This ExecutorService can sometimes stop processing tasks, even though it has not been shut down, and it continues to accept new tasks without throwing exceptions. Eventually it builds up enough of a queue that my app dies with OutOfMemoryError.
The documentation seems to indicate that this single-thread executor should survive task processing errors by firing up a new worker thread if necessary to replace one that has died. Am I missing something?
It sounds like you have two different issues:
1) You're over-feeding the work queue. You can't just keep stuffing new tasks into the queue with no regard for the consumption rate of the task executors. You need to figure out some logic for knowing when to block new additions to the work queue (a sketch follows this list).
2) Any uncaught exception in a task's thread can completely kill the thread. When that happens, the ExecutorService spins up a new thread to replace it. But that doesn't mean you can ignore whatever problem is causing the thread to die in the first place! Find those uncaught exceptions and catch them!
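For issue 1, a hedged sketch of a single-threaded executor with a bounded queue and built-in backpressure (the queue size is an arbitrary example):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedSingleThreadExecutor {
    // A single worker, like Executors.newSingleThreadExecutor(), but with a
    // bounded queue: when it fills up, the submitting thread runs the task
    // itself, throttling producers instead of growing until OutOfMemoryError.
    static final ExecutorService EXEC = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(1_000),             // bounded backlog (example size)
            new ThreadPoolExecutor.CallerRunsPolicy());  // backpressure on overflow

    public static void main(String[] args) {
        EXEC.execute(() -> System.out.println("task ran"));
        EXEC.shutdown();
    }
}
```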
This is just a hunch (because there's not enough info in your post to know otherwise), but I don't think your problem is that the task executor stops processing tasks. My guess is that it just doesn't process tasks as fast as you're creating them. (And the fact that your tasks sometimes die prematurely is probably orthogonal to the problem.)
At least, that's been my experience working with thread pools and task executors.
Okay, here's another possibility that sounds feasible based on your comment (that everything will run smoothly for hours until suddenly coming to a crashing halt)...
You might have a rare deadlock between your task threads. Most of the time, you get lucky, and the deadlock doesn't manifest itself. But occasionally, two or more of your task threads get into a state where they're waiting for the release of a lock held by the other thread. At that point, no more task processing can take place, and your work queue will pile up until you get the OutOfMemoryError.
Here's how I'd diagnose that problem:
Eliminate ALL shared state between your task threads. At first, this might require each task thread making a defensive copy of all shared data structures it requires. Once you've done that, it should be completely impossible to experience a deadlock.
At this point, gradually reintroduce the shared data structures, one at a time (with appropriate synchronization). Re-run your application after each small modification to test for the deadlock. When you hit that crashing situation again, take a close look at the access patterns for the shared resource and determine whether you really need to share it.
As for me, whenever I write code that processes parallel tasks with thread pools and executors, I always try to eliminate ALL shared state between those tasks. As far as the application is concerned, they may as well be completely autonomous applications. Hunting down deadlocks is a drag, and in my experience, the best way to eliminate deadlocks is for each thread to have its own local state rather than sharing any state with other task threads.
Good luck!
My guess would be that your tasks are blocking indefinitely rather than dying. Do you have evidence, such as a log statement at the end of each task, suggesting that your tasks are successfully completing?
This could be a deadlock, or an interaction with some external process that is blocking.
Although you don't leave enough detail to be sure, the first thing I'd try is to have your tasks catch "Exception" at the top level and log the message.
I know it doesn't seem right, but occasionally (depending on a lot of variables) I've worked on code where stuff happening in a thread throws an exception that is never logged, or just doesn't show up on the console--yet the "executing" code exits its top-level loop or whatever code is causing your task to run.
I guess I'm just saying, make sure your tasks are not throwing an exception out.
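A minimal sketch of that top-level catch (`doRealWork` is a made-up placeholder for the task body):

```java
public class GuardedTask implements Runnable {
    // Catch Exception at the top level of the task so nothing escapes
    // silently and kills (or stalls) the worker thread unnoticed.
    @Override
    public void run() {
        try {
            doRealWork();                 // hypothetical task body
        } catch (Exception e) {
            e.printStackTrace();          // or your logger of choice
        }
    }

    private void doRealWork() throws Exception {
        // the actual task logic goes here
    }
}
```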