What are Thread Groups in Java?

I am wondering why there is so little documentation about Thread Groups on the internet.
Are they still used, or are they a stale concept?
Can someone explain:
What they are.
What they are used for.
If they are still used, where?
Give some real application examples (web servers, maybe).

They are used to group threads. In a simple application you only need one, but in a more complex application server it makes sense to have one for each application.
why there is so little documentation about Thread Groups on the internet?
I guess some assume it's a pretty simple idea. Not sure what is missing about it.
Are they still used or are they a stale concept?
I would imagine most developers never think about Thread Groups, but they are useful in certain situations. We have a library where we use a custom ThreadGroup for resetting thread affinity.
Can someone explain what they are, what they are used for, whether they are still used, and give an example.
Mostly in application servers: each application has its own collection of threads, which can be managed collectively. If you want to monitor or shut down an application, you need to know which threads the application started.
If you start off a thread in a ThreadGroup, every Thread it creates will also be in that thread group. Without this feature, you would have a hard time assigning threads to applications.
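For illustration, a minimal sketch of putting a thread into a group (the group and thread names here are arbitrary):

public class GroupedThreads {
    public static void main(String[] args) {
        ThreadGroup appGroup = new ThreadGroup("my-application");
        // a thread created with an explicit group belongs to that group...
        Thread worker = new Thread(appGroup, () -> System.out.println("working"), "worker-1");
        worker.start();
        // ...and any thread the worker itself creates inherits the group by default,
        // so all of the application's threads can be reached via appGroup
        System.out.println(worker.getThreadGroup().getName()); // prints "my-application"
    }
}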
From #biziclop: How do you reliably enumerate threads in a group?
You can get the count of active threads and enumerate them while holding the group's lock, as the ThreadGroup locks on this (for better or worse):
synchronized (threadGroup) {
    Thread[] threads = new Thread[threadGroup.activeCount()];
    threadGroup.enumerate(threads);
    // use threads before the lock is released or the snapshot could be out of date
}

Related

Should I implement the consumer/producer pattern in my java video app, and if yes, how?

I built a small video frame analysis app with desktop Java 8. On each frame, I extract data (5 doubles now, but could expand to a 1920x1080x3 OpenCV Mat in the future). I would like to store this data into a database (Java DB, for example) to perform some time-series analysis, and periodically return the results to the user.
I am worried about hard-drive access times if I write to the database and run the app on a single thread, and the best solution that occurred to me would be to implement the producer/consumer pattern with multithreading. The examples I found all implement 3 threads:
the main thread
the producer thread
the consumer thread
Is there an advantage in doing that compared to a 2 thread implementation?
main and producer thread
consumer thread
And is that the right way to handle real-time data with a database?
It's limiting to use a fixed number of threads. My PC has (only) 8 cores; your intensive-sounding app is not going to use half of them, and probably only the consumer is the intensive one, so maybe 12.5%. You'd have to run several of each kind of thread to get the most out of the CPU, and then you'd spend a lot of effort managing threads.
The alternative is to use one of the various existing facilities for executing work in the background, for example a ThreadPoolExecutor. With that you can just throw lots of work (Runnables) at it, it will queue the work up, and execution can be scaled to suit the hardware it's running on by customizing the number of worker threads.
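A minimal sketch of that approach (the frame count and the work done per task are placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FrameProcessing {
    public static void main(String[] args) {
        // size the pool to the machine instead of hard-coding a thread count
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (int frame = 0; frame < 100; frame++) {
            final int f = frame;
            // each Runnable is one unit of work, e.g. analysing and storing one frame
            pool.execute(() -> System.out.println("processed frame " + f));
        }
        pool.shutdown(); // stop accepting new work and let queued tasks finish
    }
}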
Or if you're using Swing, then SwingWorker. The advantage of this is you can do some work on a background thread and post the results on the foreground (main/UI) thread easily.
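A hedged sketch of the SwingWorker variant (the class name and the analysis method are made up):

import javax.swing.SwingWorker;

public class FrameAnalysisWorker extends SwingWorker<Double, Void> {
    @Override
    protected Double doInBackground() {
        // runs off the Event Dispatch Thread; do the slow analysis/database work here
        return analyseFrames();
    }

    @Override
    protected void done() {
        // runs back on the EDT; safe to update Swing components here
        try {
            double result = get();
            System.out.println("result: " + result); // e.g. update a label instead
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private double analyseFrames() {
        return 0.0; // placeholder for the real computation
    }
}

// usage: new FrameAnalysisWorker().execute();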
Your question is rather conceptual, so I think it belongs here: Programmers
But as one short hint from my experience: you separate the producer from the main thread because your main control may freeze if something goes wrong with the producer. Frozen forms, unresponsive controls, etc. may be the result. Give your system a chance to recover on command.

Concurrency : Handling multiple submits in a web application

This is a recent interview question to my friend:
How would you handle a situation where users enter some data on the screen and, let's say, 5 of them click the Submit button at the same time?
(By "same time", the interviewer insisted they are the same down to the nanosecond.)
My answer was just to make the method that handles the request synchronized and only one request can acquire the lock on the method at a given time.
But it looks like the interviewer kept insisting there was a "better way" to handle it.
One other approach is to handle locking at the database level, but I don't think that is "better".
Are there any other approaches? This seems to be a fairly common problem.
If you have only one network card, you can only have one request coming down it at once. ;)
The answer he is probably looking for is something like:
Make the servlet stateless so requests can be executed concurrently.
Use components which allow thread-safe concurrent access, like the Atomic* or Concurrent* classes (see the sketch after this list).
Use locks only where you absolutely have to.
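To illustrate the second point, a hedged sketch using Atomic*/Concurrent* classes (the class and field names are invented):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SubmissionTracker {
    // lock-free counter: safe under simultaneous submits without synchronized
    private final AtomicInteger total = new AtomicInteger();
    // thread-safe map, keyed by user id
    private final ConcurrentMap<String, Integer> perUser = new ConcurrentHashMap<>();

    public int record(String userId) {
        perUser.merge(userId, 1, Integer::sum); // atomic read-modify-write per key
        return total.incrementAndGet();
    }
}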
What I prefer to do is to make the service so fast it can respond before the next request can come in. ;) Though I don't have the overhead of Java EE or databases to worry about.
Does it matter that they click at the same time e.g. are they both updating the same record on a database?
A synchronized method will not cut it, especially if it's a webapp distributed amongst multiple JVMs. Also the synchronized method may block, but then the other threads would just fire after the first completes and you'd have lost writes.
So locking at the database level seems to be the option here, i.e. if the record has already been updated, report an error back to the users whose updates were serviced after the first.
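One common way to do that is optimistic locking with a version column; a hedged sketch (the table and column names are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdate {
    // returns false if another request updated the record first
    public boolean update(Connection con, long id, String newValue, int expectedVersion)
            throws SQLException {
        String sql = "UPDATE record SET value = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, newValue);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            // 0 rows updated means we lost the race: report a conflict to the user
            return ps.executeUpdate() == 1;
        }
    }
}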
You do not have to worry about this, as the web server launches each request in its own thread and manages it.
But if you have some shared resource, like a file used for logging, then you need to handle concurrency and lock it within a request and across requests.

What type of thread method should I use for continuous work?

My application is currently using ordinary threads to run servers, clients and even a thread which swaps WiFi networks and restores the previous one. Those threads run in the background and don't have any impact on the UI, which is what I was looking for, but the problem is that when I re-enter the application all those threads are recreated. Is it possible to create a singleton thread which can be controlled when we reopen the application?
Android offers some classes also:
Service: but it uses the UI thread...
AsyncTask: probably a better candidate
IntentService: has a worker thread which could be manipulated? Probably the best option of the above.
Any thoughts/opinions will be highly appreciated. :)
EDIT:
Another reason I would want to change my ordinary threads to some other mechanism is that Android will prioritize ordinary threads to be killed.
Thread call hierarchy:
MainActivity -> NetworkSwap (infinite process which is scanning, connecting and swapping WiFi networks), ServerTCP (infinitely listening for connections), ServerUDP (infinitely listening for connections)
NetworkSwap -> ClientUDP (sends broadcast request to ServerUDP and ends)
ServerUDP -> ClientTCP (sends request to ServerTCP and ends)
It's still not entirely clear to me what you're using these threads for. From the title it seems you're doing ongoing work, but in the description it sounds like sometimes you do smaller discrete chunks of work. It's also not clear whether these types of work are related.
That said, for ongoing work I'd move your currently existing thread to be managed by a regular Service, which gives it a lifetime that is independent of activities so it can do ongoing background work. For smaller discrete chunks of work, IntentService is a better match. If you have both types of work and they're not very related, you could even consider having both kinds of services (it sounds like you have multiple threads as it is anyway).
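A minimal sketch of the IntentService variant (the class name and the work inside onHandleIntent are placeholders):

import android.app.IntentService;
import android.content.Intent;

public class NetworkSwapService extends IntentService {
    public NetworkSwapService() {
        super("NetworkSwapService"); // names the worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // runs on a worker thread managed by the framework, independent of any Activity;
        // the service stops itself automatically once the queued intents are handled
        // ... scan / connect / swap WiFi networks here ...
    }
}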

Manually Increasing the Amount of CPU a Java Application Uses

I've just made a program with Eclipse that takes a really long time to execute. It's taking even longer because it's loading my CPU to 25% only (I'm assuming that is because I'm using a quad-core and the program is only using one core). Is there any way to make the program use all 4 cores to max it out? Java is supposed to be natively multi-threaded, so I don't understand why it would only use 25%.
You still have to create and manage threads manually in your application. Java can't determine that two tasks can run asynchronously and automatically split the work into several threads.
This is a pretty vague question because we don't know much about what your program does. If your program is single-threaded, then no number of cores on your machine is going to make it run any faster. Java does have threading support, but it won't automatically parallelize your code for you. To speed it up, you'll need to identify parts of the computation that can be run in parallel with one another and add code as appropriate to split up and reconstitute the work. Without more info on what your program does, I can't help you out.
Another important detail to note is that Java threads are not the same as system threads. The JVM often has its own thread scheduler that tries to put Java threads onto actual system threads in a way that's fair, but there's no actual guarantee that it will do so.
Yes, Java is multi-threaded, but the multi-threading doesn't happen "by magic".
Have a look either at the Thread class or at the Executor framework. Essentially you need to split your job into "subtasks", each of which can run on a single processor, then do something like this:
Executor ex = Executors.newFixedThreadPool(4);
while (thereAreMoreSubtasksToDo) {
    ex.execute(new Runnable() {
        public void run() {
            // ... do subtask ...
        }
    });
}
Turning a serial routine/algorithm into a parallel one isn't necessarily trivial: you need to know in particular about a range of issues broadly termed "thread-safety". You may be interested in some material I've written about thread-safety in Java, and threading in general if you follow the links: the key thing to bear in mind is that if any data/objects are being shared among the different threads running, then you need to take special precautions. That said, for independent things that you just want to "run at the same time", then the above pattern will get you started.
Java is multi-threaded but if your application runs in only one thread, only one thread will be used. (Apart from the internal threads Java uses for finalization, garbage collection and so on.)
If you want your code to use multiple threads, you have to split it up manually, either by starting threads by yourself or using a third party thread pool. I'd suggest the latter option as it's safer but both can work equally well.
You've got a bit of learning ahead of you (actually, quite a bit of learning) - but it's learning you should do if you are going to be doing any serious programming.
Here's a starting point: http://download.oracle.com/javase/tutorial/essential/concurrency/
But you might want to look into a good book on Java multi-threading (I did this so long ago that any book I could recommend would be out of print). This sort of hard topic is well suited for learning from a text instead of online tutorials.

Multiple SingleThreadExecutors for a given application...a good idea?

This question is about the fallouts of using SingleThreadExecutor (JDK 1.6). Related questions have been asked and answered in this forum before, but I believe the situation I am facing, is a bit different.
Various components of the application (let's call the components C1, C2, C3, etc.) generate (outbound) messages, mostly in response to (inbound) messages that they receive from other components. These outbound messages are kept in queues which are usually ArrayBlockingQueue instances - fairly standard practice perhaps. However, the outbound messages must be processed in the order they are added. I guess use of a SingleThreadExecutor is the obvious answer here. We end up having a 1:1 situation - one SingleThreadExecutor for one queue (which is dedicated to messages emanating from one component).
Now, the number of components (C1,C2,C3...) is unknown at a given moment. They will come into existence depending on the need of the users (and will be eventually disposed of too). We are talking about 200-300 such components at the peak load. Following the 1:1 design principle stated above, we are going to arrange for 200 SingleThreadExecutors. This is the source of my query here.
I am uncomfortable with the thought of having to create so many SingleThreadExecutors. I would rather try and use a pool of SingleThreadExecutors, if that makes sense and is plausible (any ready-made, seen-before classes/patterns?). I have read many posts on recommended use of SingleThreadExecutor here, but what about a pool of the same?
What do learned women and men here think? I would like to be directed, corrected or simply, admonished :-).
If your requirement is that the messages be processed in the order that they're posted, then you want one and only one SingleThreadExecutor. If you have multiple executors, then messages will be processed out-of-order across the set of executors.
If messages need only be processed in the order that they're received for a single producer, then it makes sense to have one executor per producer. If you try pooling executors, then you're going to have to put a lot of work into ensuring affinity between producer and executor.
Since you indicate that your producers will have defined lifetimes, one thing that you have to ensure is that you properly shut down your executors when they're done.
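A hedged sketch of that 1:1 arrangement with explicit shutdown (the class and method names are illustrative; putIfAbsent is used since the question mentions JDK 1.6):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerComponentExecutors {
    private final ConcurrentMap<String, ExecutorService> executors =
            new ConcurrentHashMap<String, ExecutorService>();

    // all messages from one component go to the same single-threaded executor,
    // so they are processed in the order they were submitted
    public void submit(String componentId, Runnable message) {
        ExecutorService ex = executors.get(componentId);
        if (ex == null) {
            ExecutorService fresh = Executors.newSingleThreadExecutor();
            ex = executors.putIfAbsent(componentId, fresh);
            if (ex == null) {
                ex = fresh;          // we won the race and registered our executor
            } else {
                fresh.shutdown();    // lost the race; discard the extra executor
            }
        }
        ex.execute(message);
    }

    // call when the component is disposed of, so its thread is released
    public void dispose(String componentId) {
        ExecutorService ex = executors.remove(componentId);
        if (ex != null) {
            ex.shutdown(); // lets already-queued messages finish first
        }
    }
}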
Messaging and batch jobs are problems that have been solved time and time again. I suggest not attempting to solve them again. Instead, look into Quartz, which maintains thread pools, persists tasks in a database, etc. Or, maybe even better, look into JMS/ActiveMQ. But at the very least look into Quartz, if you have not already. Oh, and Spring makes working with Quartz so much easier...
I don't see any problem there. Essentially you have independent queues and each has to be drained sequentially; one thread for each is a natural design. Anything else you can come up with is essentially the same. As an example, when Java NIO first came out, frameworks were written trying to take advantage of it and get away from the thread-per-request model. In the end some authors admitted that, to provide a good programming model, they were just reimplementing threading all over again.
It's impossible to say whether 300 or even 3000 threads will cause any issues without knowing more about your application. I strongly recommend that you profile your application before adding more complexity.
The first thing to check is that the number of concurrently running threads is not much higher than the number of cores available to run them. The more active threads you have, the more time is wasted managing those threads (context switches are expensive) and the less work gets done.
The easiest way to limit the number of running threads is to use a semaphore: acquire it before starting work and release it after the work is done.
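A minimal sketch of that idea (the permit count is arbitrary; pick something close to the core count):

import java.util.concurrent.Semaphore;

public class ThrottledWork {
    // allow at most 8 units of work to run at once
    private static final Semaphore PERMITS = new Semaphore(8);

    public static void doWork(Runnable task) throws InterruptedException {
        PERMITS.acquire();          // blocks until a slot is free
        try {
            task.run();
        } finally {
            PERMITS.release();      // always return the slot
        }
    }
}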
Unfortunately, limiting the number of running threads may not be enough. While it may help, the overhead may still be too great if the time spent per context switch is a major part of the total cost of one unit of work. In this scenario, often the most efficient way is to have a fixed number of queues; a component gets a queue from the global pool when it initializes, using an algorithm such as round-robin for queue selection.
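A hedged sketch of the fixed-queue idea, using one single-threaded executor per queue and round-robin assignment (the pool size and names are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class QueuePool {
    private final ExecutorService[] queues;
    private final AtomicInteger next = new AtomicInteger();

    public QueuePool(int size) {
        queues = new ExecutorService[size];
        for (int i = 0; i < size; i++) {
            queues[i] = Executors.newSingleThreadExecutor();
        }
    }

    // a component calls this once when it initializes and keeps the returned queue;
    // all of its messages then stay in order on that queue
    public ExecutorService assignQueue() {
        int i = Math.abs(next.getAndIncrement() % queues.length);
        return queues[i];
    }
}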
If you are in one of those unfortunate cases where the most obvious solutions do not work, I would start with something relatively simple: one thread pool, one concurrent queue, a lock, a list of queues and a temporary queue for each thread in the pool.
Posting work to the queue is simple: add the payload and the identity of the producer.
Processing is relatively straightforward as well. First you take the next item from the queue, then you acquire the lock. While you hold the lock, you check whether any other thread is running a task for the same producer. If not, you register the thread by adding a temporary queue to the list of queues; otherwise you add the task to the existing temporary queue. Finally you release the lock. Now you either run the task or poll for the next one and start over, depending on whether the current thread was registered to run tasks. After running the task, you take the lock again and see if there is more work in the temporary queue. If not, you remove the queue from the list; otherwise you take the next task. Finally you release the lock and again choose whether to run the task or to start over.
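A hedged sketch of that scheme, simplified a bit: one pool, one lock, and a per-producer pending queue that keeps each producer's tasks in order (all names are illustrative):

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class KeySerialDispatcher {
    private final ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
    private final Object lock = new Object();
    // producers that currently have a task running, each with its pending work
    private final Map<String, Queue<Runnable>> pending = new HashMap<String, Queue<Runnable>>();

    public void submit(final String producer, final Runnable task) {
        synchronized (lock) {
            Queue<Runnable> q = pending.get(producer);
            if (q != null) {
                q.add(task);   // a task for this producer is already running; queue behind it
                return;
            }
            pending.put(producer, new ArrayDeque<Runnable>()); // register this producer
        }
        pool.execute(new Runnable() {
            public void run() {
                drain(producer, task);
            }
        });
    }

    private void drain(String producer, Runnable first) {
        Runnable current = first;
        while (current != null) {
            current.run();
            synchronized (lock) {
                Queue<Runnable> q = pending.get(producer);
                current = q.poll();
                if (current == null) {
                    pending.remove(producer); // nothing left for this producer; deregister
                }
            }
        }
    }
}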
