I have two threads: one to take orders and the other to distribute them.
My first thread must read the orders one by one from a file and display each in JTextArea-1; while this is happening, the other thread should remove one order at a time from JTextArea-1 and paste it into JTextArea-2.
I have created the two threads and used the MVC pattern (with two views and one model). Both threads are displaying values in the JTextAreas at the same time, which is not acceptable. How can I solve this issue?
There should be a minimum 5-second delay between removing an order from JTextArea-1 and pasting it into JTextArea-2. Kindly help.
Fundamental lack of comprehension of concurrency alert!
This is not the appropriate way to deal with your problem, which probably doesn't even need threads in the first place.
But if you insist on using threads, what you want is a semaphore to block one thread until a condition exists for the other thread to act. A FIFO blocking queue would be appropriate here as well.
Using pauses and timing with concurrency is not effective, because the system is non-deterministic; sleeps only lead to even more subtle failure scenarios.
You could simply add a synchronized method that appends the text, so that only one thread can perform that operation at a time.
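For example, here is a minimal sketch of the blocking-queue approach. The producer/consumer split, the in-memory order list, and the line-by-line JTextArea handling are assumptions; adapt them to your MVC model:

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.swing.JTextArea;
import javax.swing.SwingUtilities;

public class OrderPipeline {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Producer: reads orders (here, an in-memory list standing in for the
    // file) and shows each one in area1 before handing it to the consumer.
    void startProducer(List<String> orders, JTextArea area1) {
        new Thread(() -> {
            try {
                for (String order : orders) {
                    SwingUtilities.invokeLater(() -> area1.append(order + "\n"));
                    pending.put(order);          // hand the order to the consumer
                    TimeUnit.SECONDS.sleep(1);   // "gradually, one by one"
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "producer").start();
    }

    // Consumer: take() blocks until an order is available, so no polling
    // or manual timing coordination is needed.
    void startConsumer(JTextArea area1, JTextArea area2) {
        new Thread(() -> {
            try {
                while (true) {
                    String order = pending.take();   // blocks when queue is empty
                    TimeUnit.SECONDS.sleep(5);       // the required 5-second delay
                    SwingUtilities.invokeLater(() -> {
                        // remove the oldest line from area1, append it to area2
                        String text = area1.getText();
                        int nl = text.indexOf('\n');
                        area1.setText(nl >= 0 ? text.substring(nl + 1) : "");
                        area2.append(order + "\n");
                    });
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // exit cleanly on shutdown
            }
        }, "consumer").start();
    }
}

Note that all JTextArea updates happen on the event dispatch thread via invokeLater; only the queue is shared between the two worker threads.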
EDIT: This question might apply to other languages as well - the overall theory behind it seems mostly language agnostic. However, as this will run in a JVM, I'm sure there are differences between JVM overheads/threading and those of other environments.
EDIT 2: To clarify a little better, I guess the main question is which is better for scalability: to have smaller threads that can return quicker to enable processing other chunks of work for other workloads, or try to get a single workload through as quickly as possible? The workloads are sequential and multithreading won't help speed up a single unit of work in this case; it's more in hopes of increasing the throughput of the system overall (thanks to Uri for leading me towards the clarification).
I'm working on a system that's replacing an existing system; the current system has a pretty heavy load, so we already know the replacement needs to be highly scalable. It communicates with several outside processes, such as email, other services, databases, etc., and I'm already planning on making it multithreaded to help with scaling. I've worked on multithreaded apps before, just nothing with this high of a performance/scalability requirement, so I don't have much experience when it comes to getting the absolute most out of concurrency.
The question I have is what's the best way to divide the work up between threads? I'm looking at two different versions, one that creates a single thread for the full workflow, and another that creates a thread for each of the individual steps, continuing on to the next step (in a new/different thread) when the previous step completes - probably with a NodeJS-style callback system, but not terribly concerned about the direct implementation details.
I don't know much about the nitty-gritty details of multithreading - things like context switching, for example - so I don't know if the overhead of multiple threads would swamp the execution time in each of the threads. On one hand, the single thread model seems like it would be fastest for an individual work flow compared to the multiple threads; however, it would also tie up a single thread for the entire workflow, whereas the multiple threads would be shorter lived and would return to the pool quicker (I imagine, at least).
Hopefully the underlying concept is easy enough to understand; here's a contrived pseudo-code example though:
// Single-thread approach: run the whole workflow on one thread
foo();
bar();
baz();

Or:

// Multiple-thread approach (sketched with CompletableFuture; 'pool' is
// any Executor): each step runs on a possibly different pool thread,
// continuing when the previous step completes
CompletableFuture
        .runAsync(this::foo, pool)
        .thenRunAsync(this::bar, pool)
        .thenRunAsync(this::baz, pool);
UPDATE: Completely forgot. The reason I'm considering the multithreaded approach is the (possibly mistaken) belief that, since the threads will have smaller execution times, they'll be available for other instances of the overall workload. If each operation takes, say, 5 seconds, then the single-thread version locks up a thread for 15 seconds; the multiple-thread version would lock up a single thread for only 5 seconds, after which it can be used for another process.
Any ideas? If there's anything similar out there in the interwebs, I'd love even a link - I couldn't think of how to search for this (I blame Monday for that, but it would probably be the same tomorrow).
Multithreading is not a silver bullet. It's a means to an end.
Before making any changes, you need to ask yourself where your bottlenecks are, and what you're really trying to parallelize. I'm not sure we can give good advice here without more information.
If foo, bar, and baz are part of a pipeline, you're not necessarily going to improve the overall latency of a single sequence by using multiple threads.
What you might be able to do is increase your throughput by letting multiple executions of the pipeline over different input pieces work in parallel, letting later items travel through the pipeline while earlier items are blocked on something (e.g., I/O). For instance, if bar() for a particular input is blocked waiting on a notification, you could do computationally heavy operations on another input, or devote CPU resources to foo(). A particularly important question is whether any of the external dependencies act as a limited shared resource: if one thread is accessing system X, is another thread going to be affected?
Threads are also very effective if you want to divide and conquer your problem - splitting your input into smaller parts, running each part through the pipeline, and then waiting on all the pieces to be ready. Is that possible with the kind of workflow you're looking at?
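As a sketch of that divide-and-conquer shape (the pool size, the String part type, and the process step are placeholders for the real pipeline):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class DivideAndConquer {
    // Split the input into parts, run each part through the pipeline in
    // parallel, and wait for all of them before combining the results.
    static List<String> runAll(List<String> parts) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String part : parts) {
                futures.add(pool.submit(() -> process(part))); // per-part pipeline
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // blocks until that part is done
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    private static String process(String part) {
        return part.toUpperCase(); // stand-in for foo/bar/baz on one part
    }
}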
If you need to first do foo, then do bar, and then do baz, you should have one thread do each of these steps in sequence. This is simple and makes obvious sense.
The most common case where you're better off with the assembly line approach is when keeping the code in cache is more important than keeping the data in cache. In this case, having one thread that does foo over and over can keep the code for this step in cache, keep branch prediction information around, and so on. However, you will have data cache misses when you hand the results of foo to the thread that does bar.
This is more complex and should only be attempted if you have good reason to think it will work better.
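If you do attempt it, the usual shape is one thread per stage with bounded queues handing items between them - a minimal sketch, where the stage functions and queue capacities are placeholders:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.UnaryOperator;

class AssemblyLine {
    // Bounded queues pass items from one stage thread to the next; the
    // bound also caps how many in-flight items need memory at once.
    final BlockingQueue<String> input = new ArrayBlockingQueue<>(10);
    final BlockingQueue<String> afterFoo = new ArrayBlockingQueue<>(10);
    final BlockingQueue<String> output = new ArrayBlockingQueue<>(10);

    void start(UnaryOperator<String> foo, UnaryOperator<String> bar) {
        new Thread(() -> pump(input, afterFoo, foo), "foo-stage").start();
        new Thread(() -> pump(afterFoo, output, bar), "bar-stage").start();
    }

    // Each stage thread runs the same tight loop, keeping its code hot in cache.
    static void pump(BlockingQueue<String> in, BlockingQueue<String> out,
                     UnaryOperator<String> stage) {
        try {
            while (true) {
                out.put(stage.apply(in.take()));
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // allow shutdown
        }
    }
}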
Use a single thread for the full workflow.
Dividing up the workflow can't improve the completion time for one piece of work: since the parts of the workflow have to be done sequentially anyway, only one thread can work on the piece of work at a time. However, breaking up the stages can delay the completion time for one piece of work, because a processor which could have picked up the last part of one piece of work might instead pick up the first part of another piece of work.
Breaking up the stages into multiple threads is also unlikely to improve the time to completion of all your work, relative to executing all the stages in one thread, since ultimately you still have to execute all the stages for all the pieces of work.
Here's an example. If you have 200 of these pieces of work, each requiring three 5 second stages, and say a thread pool of two threads running on two processors, keeping the entire workflow in a single thread results in your first two results after 15 seconds. It will take 1500 seconds to get all your results, but you only need the working memory for two of the pieces of work at a time. If you break up the stages, then it may take a lot longer than 15 seconds to get your first results, and you potentially may need memory for all 200 pieces of work proceeding in parallel if you still want to get all the results in 1500 seconds.
In most cases, there are no efficiency advantages to breaking up sequential stages into different threads, and there may be substantial disadvantages. Threads are generally only useful when you can use them to do work in parallel, which does not seem to be the case for your work stages.
However, there is a huge disadvantage to breaking up the stages into separate threads. That disadvantage is that you now need to write multithreaded code that manages the stages. It's extremely easy to write bugs in such code, and such bugs can be very difficult to catch prior to production deployment.
The way to avoid such bugs is to keep the threading code as simple as possible given your requirements. In the case of your work stages, the simplest possible threading code is none at all.
I am wondering how I would go about doing this. Say I load a list of 1,000 words, and for each word a thread is created that does a Google search on that word. The problem here is obvious: I can't have 1,000 threads, can I? Keep in mind I am extremely new to threads and synchronization. So basically I am wondering how to use fewer threads. I assume I have to set the thread count to a fixed number and synchronize the threads. I was wondering how to do this with Apache HttpClient, using a GetThread and then running it; in run() I get the data from the web page, turn it into a String, and then check whether it contains a certain word.
Surely you can have as many threads as you want. But in general it is not recommended to use more threads than there are processing cores on your computer, at least for CPU-bound work.
And don't forget that creating 1000 internet sessions at once affects your networking. A single Google results page is nearly 0.3 megabytes; are you really going to download 300 megabytes of data at once?
By the way, there is a funny thing about concurrency.
Some people say: "synchronization is like concurrency". It is not true.
Synchronization is the opposite of concurrency.
Concurrency is when lots of things happen in parallel.
Synchronization is when I am blocking you.
(Joshua Bloch)
Maybe you can look at this problem this way.
You have 1000 words, and for each word you are going to carry out a search. In other words, there are 1000 tasks to be executed, and they are not related to each other, so there is no need for synchronization in this problem, as per the following definition from Wikipedia:
"In computer science, synchronization refers to one of two distinct but related concepts: synchronization of processes, and synchronization of data. Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of action. Data Synchronization refers to the idea of keeping multiple copies of a dataset in coherence with one another, or to maintain data integrity"
So in this problem you do not have to synchronize the 1000 processes which execute the word searches, since they can run independently and don't need to join forces. So it is not process synchronization.
It is not data synchronization either, since the data of each search is independent of the other 999 searches.
Hence, when Joshua says "synchronization is when I am blocking you", there is no need for blocking in this case.
Yes, all the tasks can be executed concurrently in different threads.
Of course, your system may not have the resources to run 1000 threads concurrently (that is, at the same time).
So you need a concept like a thread pool, where a pool has a certain number of threads. Say it has 10 threads; then those 10 will start 10 independent searches on 10 words from your list. Whenever one of them finishes its task, it takes up the next available word-search task, and the process goes on, as in the sketch below.
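A minimal sketch of that pool (the pool size, the one-hour wait, and the search body are placeholders; the real task would do the HTTP fetch and the contains-word check):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class WordSearcher {
    // A pool of 10 threads works through all 1000 words; each thread picks
    // up the next word as soon as it finishes the previous one.
    static void searchAll(List<String> words) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (String word : words) {
            pool.submit(() -> search(word)); // queued until a thread is free
        }
        pool.shutdown();                          // accept no new tasks
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for the queue to drain
    }

    // Placeholder for the actual HTTP fetch + "page contains word" check.
    static void search(String word) {
        System.out.println("searching for " + word);
    }
}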
I have more than just a basic understanding of locks and synchronized blocks. However, as far as I know, they just ensure that a single thread has access to some data at a time.
What I need is a little bit different. Let's say I have a String. One and only one thread may change it (assign some value to it), and let's assume this operation takes a lot of time - 3 seconds.
A lot of other threads need to read the value of this String and do some operations (execute a block of code) under the assumption that the value remains unchanged until the whole block executes. Let's assume that this block takes 5 seconds to execute.
So, the "writer" thread must not update the String while there are threads using/reading it, and the "reader" threads must wait for an update (by the "writer") to finish before starting to use it.
I could solve this with a lock if there were only one "writer" and one "reader", but how can I do this now that there is one "writer" and N "readers"? Note that many "readers" may access the data at the same time.
By the way, it is kind of hard to provide code, since it is hundreds of lines and includes internet access, database access, and so on.
I know I can avoid this problem by changing my implementation, but I have reasons to prefer this one - and it's not that I'm too lazy to change it.
What you are looking for is a ReadWriteLock, which lets multiple threads read at the same time. A write is blocked until the reading threads finish, and reads are blocked until the writing thread finishes.
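A minimal sketch with ReentrantReadWriteLock (the class and field names are placeholders; your 3-second update and 5-second read block would run inside the locked regions):

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SharedValue {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private String value = "initial";

    // Many readers may hold the read lock at once; a writer waits for all
    // of them, and readers wait while the writer holds the write lock.
    String readAndUse() {
        lock.readLock().lock();
        try {
            return value; // plus the ~5s block of work that relies on it
        } finally {
            lock.readLock().unlock();
        }
    }

    void update(String newValue) {
        lock.writeLock().lock();
        try {
            value = newValue; // the ~3s update runs with all readers excluded
        } finally {
            lock.writeLock().unlock();
        }
    }
}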
If operations take that long, you are doing something wrong. The right method is to calculate the new data, however long that takes, without affecting the old data, and then replace the old with the new data in one go, destroying the old data only when everyone using it is finished with it.
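In Java that pattern falls out naturally, because String is immutable and the garbage collector handles "destroy only when everyone is finished". A sketch using AtomicReference (class and method names are hypothetical):

import java.util.concurrent.atomic.AtomicReference;

class SnapshotValue {
    // Readers grab an immutable snapshot and work with it for as long as
    // they like; the writer computes the new value off to the side and
    // publishes it in a single atomic swap. Nobody ever blocks.
    private final AtomicReference<String> current = new AtomicReference<>("initial");

    void writer() {
        String next = expensiveComputation(); // the slow 3-second part
        current.set(next);                    // instant publish
    }

    void reader() {
        String snapshot = current.get(); // stays valid for this whole block
        // ... 5 seconds of work against 'snapshot'; later writes can't
        // change the String object this reference points to ...
        System.out.println(snapshot);
    }

    private String expensiveComputation() {
        return "new value"; // placeholder
    }
}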
I have a scenario in which I have to register a number of users and then run as many parallel threads as there are registered users, with all the users executing the same set of actions in parallel. For this I have a jmx with a few actions that should happen only once (in a setUp thread group with a thread count of one) and another thread group that runs with, say, 5 threads - the number of previously registered users - and executes some operations using these users.
Now I want to execute this whole scenario in parallel using 5 threads.
How do I go about doing this?
I used the Include Controller, but the thread groups are not executed as expected: I don't get 25 iterations for the actions that happen in the 5-thread group in the included jmx.
I'm not precisely sure what you're doing, and I know little of jmx, but here are a couple of ideas. One (or both) might be relevant.
The first one is that your threads might be sharing an instance field. If they have a common counter, for instance, you will do something 5 times rather than 25 times. Make sure your common variables (instance and class fields) are properly synchronized. Use local variables whenever possible. You must use them when their value applies to each thread rather than all threads.
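For instance, here is the classic form of that shared-counter bug and one way to fix it (a sketch, not your code):

import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    // Broken: 5 threads doing counter++ on a plain int can lose updates,
    // because ++ is a read-modify-write, not a single atomic step:
    // private int counter;  void hit() { counter++; }

    // Safe: an atomic field makes each increment indivisible.
    private final AtomicInteger counter = new AtomicInteger();
    void hit() { counter.incrementAndGet(); }
    int total() { return counter.get(); }
}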
The second is that you might be displaying results - or even stopping the program - before all threads have done their work. It's worst on single-core machines, but threads can and do run in any order imaginable, and in a few orders that are not. They can run one at a time, with the one started last running first. One can stop in the middle, let all the others run to completion, then start up again. A bunch can run simultaneously (on different cores, or by swapping rapidly) while others do nothing.
I'd suggest putting in a bunch of logging/output statements (System.out.println is good enough) and seeing for yourself what's happening. It'll take you a while to make sense of your output, but once you do, you'll be able to start bringing things under control.
I have multiple threads, each with its own private concurrent queue, and all they do is run an infinite loop retrieving messages from it. It can happen that one of the queues doesn't receive messages for a period of time (maybe a couple of seconds), but they can also come in big bursts where fast processing is necessary.
I would like to know what would be most appropriate in the first case: use a blocking queue and block the thread until I have more input, or do a Thread.yield()?
I want to have as many CPU resources available as possible at a given time, as the number of concurrent threads may increase over time, but I also don't want the message processing to fall behind, as there is no guarantee of when the thread will be rescheduled for execution after a yield(). I know that hardware, operating system and other factors play an important role here, but setting that aside and looking at it from a Java (JVM?) point of view, what would be optimal?
Always just block on the queues. Java parks the waiting thread inside the queue's blocking call, so it consumes no CPU.
In other words: You cannot get any performance benefit in the other threads if you yield in one of them rather than just block.
You certainly want to use a blocking queue - they are designed for exactly this purpose (you want your threads to not use CPU time when there is no work to do).
Thread.yield() is an extremely temperamental beast - the scheduler plays a large role in exactly what it does; and one simple but valid implementation is to simply do nothing.
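For illustration, the typical worker shape with a blocking queue (the class, the message type, and the process step are placeholders):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class Worker implements Runnable {
    private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

    @Override
    public void run() {
        try {
            while (true) {
                // take() parks the thread (using no CPU) until a message
                // arrives, then returns immediately, even mid-burst.
                String message = inbox.take();
                process(message);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // allow clean shutdown
        }
    }

    void deliver(String message) { inbox.add(message); }

    private void process(String message) { /* handle it */ }
}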
Alternatively, consider converting your implementation to use one of the managed ExecutorService implementations - probably ThreadPoolExecutor.
This may not be appropriate for your use case, but if it is, it removes the whole burden of worrying about thread management from your own code - and these questions about yielding or not simply vanish.
In addition, if better thread management algorithms emerge in future - for example, something akin to Apple's Grand Central Dispatch - you may be able to convert your application to use it with almost no effort.
Another thing you could do is use a ConcurrentHashMap for your queue. When you do a read, it gives you a reference to the object you were looking for, so it is possible you may miss a message that was just put into the queue. But if all this is doing is listening for messages, you will catch it on the next iteration. It would be different if the messages could be updated by other threads. But there doesn't really seem to be a reason to block that I can see.