I've been looking for a way to increase the speed of processing messages received from a RabbitMQ queue. The only way I've found is to run more than one thread doing the same thing - receiving and processing. That gave me some profit: after I created 4 threads the speed quadrupled. As I have an 8-core processor I decided to increase the number of threads to 8, but this gives no performance increase. YourKit shows that only 50% of the CPU is used. Somebody could say that my app is lightweight, and so it is, but I can say it can't do more work than it does, no matter how much more I give it to do. Why doesn't this work?
There are many different issues that can constrain the maximum speed of an application on a given system. For example, it can be limited by memory bandwidth, by Amdahl's Law effects (time needed for non-parallel code, including synchronized blocks), by I/O bandwidth, or by cache space.
If you want further improvement you need to do some measurements and profiling to find where the time is going, and then work on that.
The short (and not particularly helpful) answer is "overheads and bottlenecks".
For instance:
Creating threads in Java is relatively expensive. If the amount of work done by a thread isn't large, the overhead of creating the thread can outweigh the benefits; reusing a pool of threads (see the sketch below these points) avoids paying that cost per task.
Context switching between threads is relatively expensive, especially when you take into account memory-related overheads such as cache misses and TLB misses. (These overheads actually hit when a native thread is assigned to a core. If the OS can somehow keep a native thread on a single core continuously (i.e. with no other threads on the same core), then it can use spinlocking ... and avoid the context switch. But the more Java threads you have, the less likely it is that the OS can do this.)
The threads may be spending a large proportion of their time waiting for I/O to complete. The I/O system's throughput or the speed / latency of some external service can be a bottleneck.
You may have contention over data structures; e.g. threads requiring exclusive access to safely read or update (say) a shared Map. If threads regularly need to wait for others to release locks, then you have a bottleneck.
Your computation may be dominated by the costs of "feeding" the threads. For example, if there is a single master thread that hands out "work" to worker threads, then the master thread's activities could be the bottleneck; i.e. it may not be able to provide enough work to keep the workers busy.
Since your tags imply that you are using a message queue, it is possible that that is the bottleneck, especially if the messages are big or the "work" done on each one is relatively small.
(Using a separate message queue service is liable to increase context switches, add I/O latency, add protocol overheads and so on. It's not an automatic route to performance improvement for small-scale systems.)
It is also possible that you have "hyperthreaded" cores rather than real cores, or that the operating system is stopping your JVM from using all of the cores.
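To make the thread-creation and "feeding the workers" points concrete, here is a minimal sketch (not taken from your code) that reuses a fixed pool of consumer threads instead of spawning a new thread per message; process() is a hypothetical stand-in for the real message handling:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PooledConsumers {
        public static void main(String[] args) throws InterruptedException {
            // Reuse a fixed pool instead of creating a thread per message,
            // so the thread-creation cost is paid only once per worker.
            int workers = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(workers);

            for (int i = 0; i < 1_000; i++) {
                final int messageId = i;                // stand-in for a received message
                pool.submit(() -> process(messageId));  // hand the work to an idle pooled thread
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }

        private static void process(int messageId) {
            // placeholder for the real message handling
        }
    }

If the loop that submits the work cannot keep all the workers busy, adding more workers will not help, which is the "feeding" bottleneck described above.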
If CPU or waiting for IO is your bottleneck, adding independent threads can make a big difference.
If a shared resource is your bottleneck, e.g. your L3 cache, your network adapter, or your kernel, adding threads won't help because CPU is not the problem. In fact it can often make things worse by adding overhead.
my app is lightweight
In which case CPU is unlikely to be your issue, and you are doing well to see a speed-up with more than 1 CPU. Most likely you are speeding up the CPU used by RabbitMQ. Ideally it should be more efficient, so this shouldn't really help much. IMHO, more efficient messaging solutions don't gain much from multiple CPUs, as they are not bottlenecked on CPU.
One way or another, you're only using 4 cores. There's a lot that can stop you from doubling your performance by doubling your threads, but from your 4-thread success you've gotten past all that. I'm guessing there's a bug in your code to start 8 threads and it's only firing up 4. (Even with hyperthreading, you'd get some improvement. Even with every possible problem, you'd get some improvement.) Otherwise, I'll go with T.J. Crowder and Stephen C: I don't think you really have 8 cores.
I'd try using different numbers of threads: 3, 5, 6. See what changes. I think you'll stumble on the problem soon enough.
To be fair to Java: if you write thread-safe code and avoid bottlenecks, it handles threads really well, as you've noticed going from one thread to 4. I've always found the overhead costs to be trivial.
Your application does not have linear speedup, and therefore, it does not have good scalability.
In order to keep increasing the number of threads you need to ensure the amount of data being handled grows accordingly. For a fixed amount of data, increasing the number of threads (and/or cores) will hit diminishing returns at some point, since the overhead of creating threads will outweigh each thread's compute time.
Make sure to look up the following links:
Amdahl's law
Gustafson's law
Gustafson's law is a great counterpoint to Amdahl's law, so I highly recommend understanding that article.
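For reference, Amdahl's law bounds the speedup of a program with parallel fraction p running on n cores at 1 / ((1 - p) + p/n). A tiny illustrative snippet (the 0.9 figure is only an example, not a measurement):

    public class AmdahlBound {
        // Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
        // where p is the fraction of the work that can run in parallel.
        static double speedup(double parallelFraction, int cores) {
            return 1.0 / ((1.0 - parallelFraction) + parallelFraction / cores);
        }

        public static void main(String[] args) {
            // With 90% parallel code, 8 cores give at most about 4.7x, not 8x.
            System.out.println(speedup(0.9, 8));
        }
    }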
Related
I'm new to Java multi-threaded programming. The question that has come to my mind is: how many threads can I run according to the number of my CPU cores? And if I run more threads than there are CPU cores, will it be an overhead for the machine to run the app? For example, when we have a server machine with server software that runs 2 threads (main thread + developer thread), will it be an overhead for the server when more simultaneous clients make socket connections to it, or not?
Thanks.
The number of threads a system can execute simultaneously is (of course) identical to the number of cores in the system.
The number of threads that can exist on the system is limited by the available memory (each thread requires a stack and a structure used by the OS to manage the thread), and possibly by a limit on how many threads the OS allows (this depends on the OS architecture; some OSes may use a fixed-size table, and once it's full no more threads can be created).
Commonly, today's computers can handle hundreds to thousands of threads.
The reason why more threads are used than there are cores in the system is: most threads will inevitably spend much of their time waiting for some event (for example, a word processor waiting for the user to type on the keyboard). The OS ensures that threads which wait in such a manner do not consume CPU time.
The idea behind it is: don't let your CPU sleep, but don't load it so heavily that it wastes most of its time on thread switching.
It's helpful to check "Tuning the pool size" in IBM's paper.
The idea is that it depends on the nature of the task. If it is all in-memory computation you can use N+1 threads (N being the number of cores, hyper-threading included).
Or
We need to profile the application and find out the waiting time (WT) and service time (ST) for a typical request; then approximately N * (1 + WT/ST) is the optimal number of threads, assuming 100% utilization of the CPU.
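As a rough illustration of that heuristic (the wait and service times below are made-up placeholders; in practice you would measure them with a profiler):

    public class PoolSizing {
        // Sizing heuristic from above: threads ~ N * (1 + WT / ST), where N is the
        // number of cores, WT the average wait time and ST the average service
        // (compute) time per request.
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            double waitTimeMs = 50;     // hypothetical time spent blocked (I/O, locks)
            double serviceTimeMs = 10;  // hypothetical time spent computing

            int threads = (int) Math.ceil(cores * (1 + waitTimeMs / serviceTimeMs));
            System.out.println("Suggested pool size: " + threads);
        }
    }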
That depends on what the threads are doing. The CPU is only able to do X things at once, where X is the number of cores it has. That means X threads at most can be active at any one time - however the other threads can wait their turn and the CPU will process them at appropriate moments.
You should also consider that a lot of the time threads are waiting for a response, or waiting for data to load, or a network message to arrive, etc so are not actually trying to do anything. These idle/waiting threads have very little load on the system.
Don't worry about having a higher number of threads than CPU cores; that is actually not in your hands, but in the OS's.
Assuming the JVM maps your java threads over OS threads (which is fairly normal these days), it depends on the thread management your OS does. There you rely on how smart the kernel implementation is to get performance out of your cores.
What you must keep in mind is that your design must be sustainable. For example, application servers are built on a thread pool full of worker threads. Those threads are woken up to serve requests. Do you want a thread for each request? Then you will surely have a problem - requests can arrive at the server in the thousands, and that could be a problem for the kernel to manage. Actually the thread pool size should be limited (between 1 and X, and easily changed even at run time), threads should get work from a concurrent queue (Java gives you some excellent classes for that), and each one should handle requests sequentially.
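A minimal sketch of such a bounded pool, assuming a plain ThreadPoolExecutor with a bounded queue; the sizes, timeout and rejection policy below are example choices, not recommendations for any particular server:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BoundedWorkerPool {
        public static void main(String[] args) {
            // Between 4 and 16 workers feeding off a bounded queue, so thousands of
            // requests cannot each claim a thread of their own.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    4, 16,                                      // core and maximum pool size
                    60, TimeUnit.SECONDS,                       // idle workers above core size time out
                    new ArrayBlockingQueue<>(1_000),            // pending requests wait here
                    new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure when the queue is full

            // The size can even be changed at run time, as mentioned above:
            // pool.setMaximumPoolSize(32);

            for (int i = 0; i < 100; i++) {
                final int requestId = i;                        // stand-in for an incoming request
                pool.submit(() -> handle(requestId));
            }
            pool.shutdown();
        }

        private static void handle(int requestId) {
            // placeholder request handler
        }
    }

With CallerRunsPolicy, a full queue makes the submitting thread run the task itself, which naturally slows the producers down instead of dropping work.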
I hope that is of help.
Having less threads than CPUs can mean you are not using all the CPUs in your system. Having more threads might improve throughput if CPU is your bottleneck.
Having more threads than CPUs does introduce an overhead, and if CPU is your bottleneck this can hurt performance. However, if network IO is your bottleneck, this overhead is a price worth paying, as it usually allows you to handle many more connections, e.g. you can have 1000 TCP connections each with their own thread.
There doesn't have to be any relation. A computer can have any number of cores; a process can have any number of threads.
There are several different reasons that processes utilize threading, including:
Programming abstraction. Dividing up work and assigning each division to a unit of execution (a thread) is a natural approach to many problems. Programming patterns that utilize this approach include the reactor, thread-per-connection, and thread pool patterns. Some, however, view threads as an anti-pattern. The inimitable Alan Cox summed this up well with the quote, "threads are for people who can't program state machines."
Blocking I/O. Without threads, blocking I/O halts the whole process. This can be detrimental to both throughput and latency. In a multithreaded process, individual threads may block, waiting on I/O, while other threads make forward progress. Blocking I/O via threads is thus an alternative to asynchronous & non-blocking I/O.
Memory savings. Threads provide an efficient way to share memory yet utilize multiple units of execution. In this manner they are an alternative to multiple processes.
Parallelism. In machines with multiple processors, threads provide an efficient way to achieve true parallelism. As each thread receives its own virtualized processor and is an independently schedulable entity, multiple threads may run on multiple processors at the same time, improving a system's throughput. To the extent that threads are used to achieve parallelism—that is, there are no more threads than processors—the "threads are for people who can't program state machines" quote does not apply.
The first three bullets utilize threads with no relationship to cores. If you are using threads as a programming abstraction to handle UI elements, for example, you'll have one thread per UI element (or whatever) regardless of whether you have 1 core or 12. Similarly, if you were using threads to perform blocking I/O, you'd scale your thread count with your I/O capacity, not your processing power.
The fourth bullet, however, does relate threads to cores. If the goal of threading is parallelism, then the number of threads should scale linearly with the number of cores. For example, if you double the number of cores in a system, then you would double the number of threads in your application. This is true for cores in the logical sense—that is, including SMT.
When threading is used to achieve parallelism—and this is both a common and the best use of threading—you will often have, say, one or two threads per core. Oftentimes, applications are written so as to dynamically size thread pools off the number of available cores. A single thread per core is ideal, but applications often use a larger multiplier, such as two threads per core, due to bugs and inefficiencies in their code, such as operations that block when none should.
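In Java that dynamic sizing usually comes down to something like the following; the 2x multiplier is just the kind of fudge factor described above, not a universal rule:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CoreSizedPools {
        public static void main(String[] args) {
            // availableProcessors() reports logical cores, i.e. SMT/hyper-threads included.
            int cores = Runtime.getRuntime().availableProcessors();

            // One thread per logical core for pure computation; a small multiplier
            // is sometimes used when tasks occasionally block despite the intent.
            ExecutorService computePool = Executors.newFixedThreadPool(cores);
            ExecutorService mixedPool   = Executors.newFixedThreadPool(2 * cores);

            computePool.shutdown();
            mixedPool.shutdown();
        }
    }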
Best performance is when the number of cores (NOC) equals the number of threads (NOT), because if NOT > NOC then the processor has to switch context, or the OS will try to do that work, which is an expensive enough operation. But you have to understand that it is impossible to have NOC = NOT on web servers, because you can't predict how many clients will be connected at the same time. Take a look at the load-balancing concept to solve this issue in the best way.
Can you explain this nonsense to me?
I have a method that basically fills up an array with mathematical operations; there's no I/O involved or anything. Now, this method takes about 50 seconds to run, and the code is perfectly scalable (theoretically 100%), so I split it up into 4 threads, wait for them to complete, and reassemble the 4 arrays. Now, I run the program on a quad-core processor, expecting it to take about 15 seconds, and it actually takes 58 seconds. That's right: it takes longer! I see the CPU working at 100%, and I know that each thread does 1/4 of the calculations, and creating the threads and reassembling the arrays takes about 1-2 ms in total.
What's causing such a loss of performance? What the hell is the CPU doing all that time?
CODE: http://pastebin.com/cFUgiysw
Threads don't work that way.
Threads are still part of the same process (depending on the OS), so in terms of the operating system - CPU time will be scheduled the same for 4 threads in 1 process as it is for 1 thread in 1 process.
Also, with such a small number of values, you won't see the scalability in the midst of the overhead. Re-assembling the arrays in java will be costly.
Check out things like "context switching overhead" - things like that always mess you up when you try to map theory to practice :P
I would stick to the single-threaded way :)
~ Dan
http://en.wikipedia.org/wiki/Context_switch
A lot depends on what you are doing and how you are dividing the work. There are many possible causes for this problem.
The most likely cause is that you are using all the bandwidth of the bus from your CPU to main memory with one thread. This can happen if your data set is larger than your CPU cache, especially if you have some random-access behaviour. You could consider reusing the original array, rather than taking multiple copies, to reduce cache churn (see the sketch below these points).
Your locking overhead is greater than the performance gain. I suspect you have used very coarse locking, so this shouldn't be an issue.
Starting and stopping threads takes too long. As your code runs for multiple seconds, I doubt this too.
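A sketch of the "reuse the original array" idea (your actual code is only in the pastebin, so this is generic): each thread works on its own disjoint slice of a single shared array, so nothing is copied and no locking is needed:

    import java.util.ArrayList;
    import java.util.List;

    public class RangeSplit {
        public static void main(String[] args) throws InterruptedException {
            double[] data = new double[1_000_000];
            int parts = 4;
            int chunk = data.length / parts;

            List<Thread> threads = new ArrayList<>();
            for (int p = 0; p < parts; p++) {
                final int from = p * chunk;
                final int to = (p == parts - 1) ? data.length : from + chunk;
                Thread t = new Thread(() -> {
                    // Each thread writes only its own slice of the shared array.
                    for (int i = from; i < to; i++) {
                        data[i] = Math.sin(i) * Math.cos(i); // stand-in computation
                    }
                });
                threads.add(t);
                t.start();
            }
            for (Thread t : threads) {
                t.join();
            }
        }
    }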
There is a cost associated with starting new threads. I don't think it should be as much as 8 seconds, but it depends on which threads you are using. Some threads need to create a copy of the data they are handling in order to be thread safe, and that can take some time. This cost is commonly referred to as overhead. If part of the work cannot run in parallel - for instance it reads the same file or needs access to a shared resource - the threads may need to wait on each other; this can take some time, and under sub-optimal conditions it can take more time than serial execution. My tip is to check for these serial sections and remove them from the threaded part if possible. Also try using a lower number of threads; 4 threads for 4 CPUs is not always optimal.
Hope it helps.
Unless you are constantly creating and killing threads, the thread overhead shouldn't be a problem. Four threads running simultaneously is no big deal for the scheduler.
As Peter Lawrey suggested, memory bandwidth could be the problem. Your 50-second code is running on a Java engine, and they both compete for the available memory bandwidth: the Java engine needs memory bandwidth to execute your code, and your code needs it to do its calculations.
You write "perfectly scalable" which would be the case if your code was compiled. Since it runs on a java engine this is not the case. So the 16% increase in overall time could be seen as the difference between the smoothness of one thread vs the chaos of four colliding over memory accesses.
The task: I need to process multiple I/O streams (HTTP downloads) with some CPU-heavy operation. Ideally I would like to have full bandwidth and the CPU 100% used. Of course, heavy CPU processing is slower than the internet download. Unprocessed data could be cached to disk. Are there any existing Executors in ASF or other components providing this functionality? If not, what's the best way to achieve this? I'm thinking of having 2 thread pools, one for Internet-To-Disk and the other for Disk-To-CPU-To-Disk operations.
EDITED:
I'll clarify my question:
2 thread pools (Internet-To-Disk and Disk-To-CPU-To-Disk) is a producer/consumer approach in itself. The question was HOW to make sure I've selected the right number of threads for producers and consumers. The same code will run simultaneously on different boxes and architectures with different numbers of cores and different bandwidth. How do I make sure I've chosen the right number of threads so that 100% of the bandwidth and 100% of the CPU are consumed?
Assuming that CPU processing is going to be the main bottleneck of your system, the number of threads for CPU processing should be, at the least, set to the number of CPUs or cores available.
The I/O part is probably not going to use much CPU at all, but you may want to allocate a fixed pool of a few threads (equal to, or fewer than, the number of cores) to prevent excessive thread context switching for simultaneous I/O streams.
You may also set the number of threads for CPU processing to a number slightly bigger than the number of cores, if your CPU processing threads do not always use 100% of CPU from start to finish. For example, if they may do some I/O or access some shared resource in the middle of processing.
But as with any system, the ideal number of threads will greatly depend on the nature of your program. You can use tools like VisualVM (bundled with the JDK) to analyse how threads are utilised in your program, and try different thread-setting variations.
You can use the producer-consumer pattern for this purpose. Use as many producers and consumers as are needed to fulfil the requirements.
If your CPU stage is more intensive than the download time, why not just download the data as you are able to process it. That way you can have multiple Internet-To-CPU-To-Disk processes. By skipping a stage it may be faster, and it will certainly be simpler.
I'd go for a producer-consumer architecture : one thread pool to process the data (managed by an ExecutorService), and one or more threads to download the data from the internet.
The data to be processed would be put into a bounded blocking queue (e.g. LinkedBlockingQueue), so that the downloading threads would only fetch data when required (that is, when a computing thread is able to process new data). Plus, this structure guarantees thread safety and memory publication.
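A minimal sketch of that arrangement; fetchNextChunk() and process() are hypothetical stand-ins for the real download and CPU work, and the pool sizes and queue capacity are example values to be tuned:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;

    public class DownloadProcessPipeline {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();

            // Bounded hand-off queue: downloaders block on put() once processing
            // falls behind, which throttles the producers automatically.
            BlockingQueue<byte[]> downloaded = new LinkedBlockingQueue<>(64);

            ExecutorService downloaders = Executors.newFixedThreadPool(2);     // I/O-bound side
            ExecutorService processors  = Executors.newFixedThreadPool(cores); // CPU-bound side

            downloaders.submit(() -> {
                try {
                    while (true) {
                        byte[] chunk = fetchNextChunk();  // hypothetical download step
                        downloaded.put(chunk);            // blocks when the queue is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            for (int i = 0; i < cores; i++) {
                processors.submit(() -> {
                    try {
                        while (true) {
                            byte[] chunk = downloaded.take(); // blocks until data is available
                            process(chunk);                   // hypothetical CPU-heavy step
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
            // The pools run until their threads are interrupted; shutdownNow() would stop them.
        }

        private static byte[] fetchNextChunk() { return new byte[0]; } // placeholder
        private static void process(byte[] chunk) { }                  // placeholder
    }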
I am fairly new to concurrent programming and I am learning it.
I am implementing a quicksort in Java JDK 7 (Fork/Join API) to sort a list of objects (100K).
While using this recursive piece of code without concurrency, I observe no memory explosion; everything is fine.
I just added the code to use it on multiple cores (by extending the class RecursiveAction) and then the memory usage jumped very high until it reached its limits. By doing some profiling I observe a high creation rate of threads, and I think that's to be expected.
But is a Java thread by itself much more memory-demanding, or am I missing something here?
Quicksort must require a lot of threads, but surely not much more memory than regular objects.
Should I stop creating RecursiveAction threads when I reach a threshold and then just switch to a sequential piece of code (no more threads)?
Thank you very much.
Java threads usually take 256k/512k (depending on the OS, JDK version, etc.) of stack space alone, by default.
You're wasting huge resources and speed if you're running more threads than you have processors/cores for a CPU intensive process such as doing quicksort, so try to not run more threads than you have cores.
Yes, switching over to sequential code is a good idea when the unit of work is in the region of ca. 10,000-100,000 operations. This is just a rule of thumb. So, for quick sort, I'd drop out to sequential execution when the size to be sorted is less than say 10-20,000 elements, depending upon the complexity of the comparison operation.
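To make the cut-off concrete, here is a sketch of a RecursiveAction quicksort that drops to Arrays.sort below a threshold; the threshold value and the simple Lomuto partition are illustrative choices, not a drop-in replacement for your code:

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;

    public class ParallelQuicksort extends RecursiveAction {
        private static final int THRESHOLD = 10_000; // below this, sort sequentially

        private final int[] data;
        private final int lo, hi; // inclusive bounds of the slice to sort

        ParallelQuicksort(int[] data, int lo, int hi) {
            this.data = data;
            this.lo = lo;
            this.hi = hi;
        }

        @Override
        protected void compute() {
            if (hi - lo < THRESHOLD) {
                Arrays.sort(data, lo, hi + 1); // sequential base case: no more tasks
                return;
            }
            int p = partition(data, lo, hi);
            invokeAll(new ParallelQuicksort(data, lo, p - 1),
                      new ParallelQuicksort(data, p + 1, hi));
        }

        private static int partition(int[] a, int lo, int hi) {
            int pivot = a[hi];
            int i = lo - 1;
            for (int j = lo; j < hi; j++) {
                if (a[j] <= pivot) {
                    i++;
                    int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                }
            }
            int tmp = a[i + 1]; a[i + 1] = a[hi]; a[hi] = tmp;
            return i + 1;
        }

        public static void main(String[] args) {
            int[] data = new java.util.Random(42).ints(100_000).toArray();
            new ForkJoinPool().invoke(new ParallelQuicksort(data, 0, data.length - 1));
            System.out.println("sorted: " + isSorted(data));
        }

        private static boolean isSorted(int[] a) {
            for (int i = 1; i < a.length; i++) {
                if (a[i - 1] > a[i]) return false;
            }
            return true;
        }
    }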
What's the size of the ForkJoinPool? Usually it's set to create the same number of threads as processors, so you shouldn't be seeing too many threads. If you've manually set the parallelism to be high (say, in the hundreds or thousands) then you will see high (virtual) memory use, since each thread allocates space for its stack (256K by default on 32-bit Windows and Linux).
As a rule of thumb for a CPU bound computation, once your number of threads exceeds the number of available cores, adding more threads is not going to speed things up. In fact, it will probably slow you down due to the overheads of creating the threads, the resources tied down by each thread (e.g. the thread stacks), and the cost of synchronizing.
Indeed, even if you had an infinite number of cores, it would not be worth creating threads to do small tasks. Even with thread pools and other clever tricks, if the amount of work to be done in a task is too small the overheads of using a thread will exceed any savings. (It is difficult to predict exactly where that threshold is, and it certainly depends on the nature of the task as well as platform-related factors.)
I changed my code and so far I have better results. I invoke the main task in the ForkJoinPool; inside the tasks, I don't create more subtasks if there are many more active threads than available cores in the ForkJoinPool.
I don't synchronise through the join() method, so a parent task dies as soon as it has created its offspring. In the main function that invoked the root task, I wait for the tasks to be completed, i.e. until there are no more active threads. It seems to work fine: the memory stays normal and I gained a lot of time over the same piece of code executed sequentially.
I am going to learn more.
Thank you all !
I am testing my java application for any performance bottlenecks. The application uses concurrent.jar for locking purposes.
I have a high computation call which calls lock and unlock functions for its operations.
On removing the lock-unlock mechanism from the code, I saw the performance degrade by multiple folds, contrary to my expectations. Among other things, I observed an increase in CPU consumption, which made me feel that the program was running faster, but actually it was not.
Q1. What can be the reason for this degradation in performance when we remove locks?
Best Regards !!!
This can be quite a usual finding, depending on what you're doing and what you're using as an alternative to Locks.
Essentially, what happens is that constructs such as ReentrantLock have some logic built into them that knows "when to back off" when they realistically can't acquire the lock. This reduces the amount of CPU that's burnt just in the logic of repeatedly trying to acquire the lock, which can happen if you use simpler locking constructs.
As an example, have a look at the graph I've hurriedly put up here. It shows the throughput of threads continually accessing random elements of an array, using different constructs as the locking mechanism. Along the X axis is the number of threads; the Y axis is throughput. The blue line is a ReentrantLock; the yellow, green and brown lines use variants of a spinlock. Notice how with low numbers of threads the spinlock gives higher throughput, as you might expect, but as the number of threads ramps up, the back-off logic of ReentrantLock kicks in and it ends up doing better, while with high contention the spinlocks just sit burning CPU.
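For orientation, this is not the benchmark behind the graph, just an illustration of the two kinds of construct being compared: a naive test-and-set spinlock burns CPU while it waits, whereas ReentrantLock parks contended threads:

    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.locks.ReentrantLock;

    public class LockFlavours {
        // Naive test-and-set spinlock: under contention every waiter keeps
        // burning CPU in the while loop instead of backing off.
        static class SpinLock {
            private final AtomicBoolean locked = new AtomicBoolean(false);

            void lock()   { while (!locked.compareAndSet(false, true)) { /* spin */ } }
            void unlock() { locked.set(false); }
        }

        private static final SpinLock SPIN = new SpinLock();
        private static final ReentrantLock LOCK = new ReentrantLock();
        private static long counter = 0;

        static void withSpinLock() {
            SPIN.lock();
            try { counter++; } finally { SPIN.unlock(); }
        }

        static void withReentrantLock() {
            // ReentrantLock parks the thread when the lock is unavailable, so
            // contended waiters stop consuming CPU until they are woken up.
            LOCK.lock();
            try { counter++; } finally { LOCK.unlock(); }
        }

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> { for (int i = 0; i < 100_000; i++) withReentrantLock(); };
            Thread a = new Thread(task), b = new Thread(task);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(counter); // 200000: no increments are lost
        }
    }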
By the way, this was really a trial run done on a dual-processor machine; I also ran it in the Amazon cloud (effectively an 8-way Xeon) but I've, ahem, mislaid the file. I'll either find it or run the experiment again soon and post an update. But you get an essentially similar pattern, as I recall.
Update: whether it's in locking code or not, a phenomenon that can happen on some multiprocessor architectures is that as the multiple processors do a high volume of memory accesses, you can end up flooding the memory bus, and in effect the processors slow each other down. (It's a bit like with ethernet-- the more machines you add to the network, the more chance of collisions as they send data.)
Profile it. Anything else here will be just a guess and an uninformed one at that.
Using a profiler like YourKit will not only tell you which methods are "hot spots" in terms of CPU time, but it will also tell you where threads are spending most of their time BLOCKED or WAITING.
Is it still performing correctly? For instance, there was a case in an app server where an unsynchronised HashMap caused an occasional infinite loop. It is not too difficult to see how work could simply be repeated.
The most likely culprit for seeing performance decline and CPU usage increase when you remove shared memory protection is a race condition. Two or more threads could be continually flipping a state flag back and forth on a shared object.
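As a generic illustration of how removing shared-memory protection can raise CPU usage while lowering useful throughput (this is not your code): an unsynchronised check-then-act lets two threads both decide that the same work still needs doing:

    import java.util.HashSet;
    import java.util.Set;

    public class LostUpdateRace {
        private static final Set<Integer> processed = new HashSet<>(); // not thread-safe

        // Without a lock, two threads can both see "not yet processed" for the same
        // id and both do the expensive work: more CPU burnt, no extra progress.
        static void handle(int id) {
            if (!processed.contains(id)) {   // check ...
                expensiveWork(id);           // ... then act: the gap is the race window
                processed.add(id);
            }
        }

        private static void expensiveWork(int id) { /* placeholder */ }

        public static void main(String[] args) throws InterruptedException {
            Runnable worker = () -> { for (int id = 0; id < 1_000; id++) handle(id); };
            Thread a = new Thread(worker), b = new Thread(worker);
            a.start(); b.start();
            a.join(); b.join();
            // Work may have been duplicated, and the unsynchronised HashSet itself
            // may even have been corrupted, echoing the HashMap example above.
            System.out.println("distinct ids recorded: " + processed.size());
        }
    }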
More description of the purpose of your application would help with diagnosis.