What is the difference between concurrency and parallelism?
Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant. For example, multitasking on a single-core machine.
Parallelism is when tasks literally run at the same time, e.g., on a multicore processor.
Quoting Sun's Multithreaded Programming Guide:
Concurrency: A condition that exists when at least two threads are making progress. A more generalized form of parallelism that can include time-slicing as a form of virtual parallelism.
Parallelism: A condition that arises when at least two threads are executing simultaneously.
Why the Confusion Exists
Confusion exists because dictionary meanings of both these words are almost the same:
Concurrent: existing, happening, or done at the same time (dictionary.com)
Parallel: very similar and often happening at the same time (Merriam-Webster).
Yet the way they are used in computer science and programming is quite different. Here is my interpretation:
Concurrency: Interruptability
Parallelism: Independentability
So what do I mean by the above definitions?
I will clarify with a real-world analogy. Let’s say you have to get 2 very important tasks done in one day:
Get a passport
Get a presentation done
Now, the problem is that task-1 requires you to go to an extremely bureaucratic government office that makes you wait for 4 hours in a line to get your passport. Meanwhile, task-2 is required by your office, and it is a critical task. Both must be finished on a specific day.
Case 1: Sequential Execution
Ordinarily, you would drive to the passport office for 2 hours, wait in line for 4 hours, get the task done, drive back for 2 hours, go home, stay awake 5 more hours, and get the presentation done.
Case 2: Concurrent Execution
But you’re smart. You plan ahead. You carry a laptop with you, and while waiting in line, you start working on your presentation. This way, once you get back home, you just need to work 1 extra hour instead of 5.
In this case, both tasks are done by you, just in pieces. You interrupted the passport task while waiting in line and worked on the presentation. When your number was called, you interrupted the presentation task and switched to the passport task. The saving in time was essentially possible due to the interruptability of both tasks.
Concurrency, IMO, can be understood as the "isolation" property in ACID. Two database transactions are considered isolated if sub-transactions can be performed in each and any interleaved way and the final result is the same as if the two tasks were done sequentially. Remember that for both the passport and presentation tasks, you are the sole executor.
Case 3: Parallel Execution
Now, since you are such a smart fella, you’re obviously a higher-up, and you have got an assistant. So, before you leave to start the passport task, you call him and tell him to prepare the first draft of the presentation. You spend your entire day and finish the passport task, come back, check your mail, and find the presentation draft. He has done a pretty solid job, and with some edits over 2 more hours, you finalize it.
Now, since your assistant is just as smart as you, he was able to work on it independently, without needing to constantly ask you for clarifications. Thus, due to the independentability of the tasks, they were performed at the same time by two different executors.
Still with me? Alright...
Case 4: Concurrent But Not Parallel
Remember your passport task, where you have to wait in the line?
Since it is your passport, your assistant cannot wait in line for you. Thus, the passport task has interruptability (you can stop it while waiting in the line, and resume it later when your number is called), but no independentability (your assistant cannot wait in your stead).
Case 5: Parallel But Not Concurrent
Suppose the government office has a security check to enter the premises. Here, you must remove all electronic devices and submit them to the officers, and they only return your devices after you complete your task.
In this case, the passport task is neither independentable nor interruptible. Even if you are waiting in line, you cannot work on something else because you do not have the necessary equipment.
Similarly, say the presentation is so highly mathematical in nature that you require 100% concentration for at least 5 hours. You cannot do it while waiting in line for passport task, even if you have your laptop with you.
In this case, the presentation task is independentable (either you or your assistant can put in 5 hours of focused effort), but not interruptible.
Case 6: Concurrent and Parallel Execution
Now, say that in addition to assigning your assistant to the presentation, you also carry a laptop with you to the passport task. While waiting in the line, you see that your assistant has created the first 10 slides in a shared deck. You send comments on his work with some corrections. Later, when you arrive back home, instead of needing 2 hours to finalize the draft, you just need 15 minutes.
This was possible because presentation task has independentability (either one of you can do it) and interruptability (you can stop it and resume it later). So you concurrently executed both tasks, and executed the presentation task in parallel.
Let’s say that, in addition to being overly bureaucratic, the government office is corrupt. Thus, you can show your identification, enter it, start waiting in line for your number to be called, bribe a guard and someone else to hold your position in the line, sneak out, come back before your number is called, and resume waiting yourself.
In this case, you can perform both the passport and presentation tasks concurrently and in parallel. You can sneak out, and your position is held by your assistant. Both of you can then work on the presentation, etc.
Back to Computer Science
In the computing world, here are typical example scenarios for each of these cases:
Case 1: Interrupt processing.
Case 2: When there is only one processor, but all executing tasks have wait times due to I/O.
Case 3: Often seen when we are talking about MapReduce or Hadoop clusters.
Case 4: I think Case 4 is rare. It’s uncommon for a task to be concurrent but not parallel. But it could happen. For example, suppose your task requires access to a special computational chip that can be accessed through only processor-1. Thus, even if processor-2 is free and processor-1 is performing some other task, the special computation task cannot proceed on processor-2.
Case 5: also rare, but not quite as rare as Case 4. Non-concurrent code can be a critical region protected by mutexes. Once started, it must execute to completion. However, two different critical regions can progress simultaneously on two different processors.
Case 6: IMO, most discussions about parallel or concurrent programming are basically talking about Case 6. This is a mix and match of both parallel and concurrent executions.
Concurrency and Go
To see why Rob Pike says concurrency is better, you have to understand the reason. You have a really long task in which there are multiple waiting periods where you wait for some external operation like a file read or a network download. In his lecture, all he is saying is, “just break up this long sequential task so that you can do something useful while you wait.” That is why he talks about different organizations with various gophers.
Now the strength of Go comes from making this breaking really easy with the go keyword and channels. Also, there is excellent underlying support in the runtime to schedule these goroutines.
But essentially, is concurrency better than parallelism?
Are apples better than oranges?
I like Rob Pike's talk: Concurrency is not Parallelism (it's better!)
(slides)
(talk)
Rob talks about Go and addresses the question of Concurrency vs Parallelism with a visual and intuitive explanation! Here is a short summary:
Task: Let's burn a pile of obsolete language manuals! One at a time!
Concurrency: There are many concurrent decompositions of the task! One example:
Parallelism: The previous configuration occurs in parallel if there are at least 2 gophers working at the same time.
To add onto what others have said:
Concurrency is like having a juggler juggle many balls. Regardless of how it seems, the juggler is only catching/throwing one ball per hand at a time. Parallelism is having multiple jugglers juggle balls simultaneously.
Say you have a program that has two threads. The program can run in two ways:
     Concurrency                     Concurrency + parallelism
  (Single-Core CPU)                      (Multi-Core CPU)
       ___                                  ___ ___
      |th1|                                |th1|th2|
      |   |                                |   |___|
      |___|___                             |   |___
          |th2|                            |___|th2|
       ___|___|                             ___|___|
      |th1|                                |th1|
      |___|___                             |   |___
          |th2|                            |   |th2|
In both cases we have concurrency from the mere fact that we have more than one thread running.
If we ran this program on a computer with a single CPU core, the OS would be switching between the two threads, allowing one thread to run at a time.
If we ran this program on a computer with a multi-core CPU then we would be able to run the two threads in parallel - side by side at the exact same time.
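Here is a minimal Java sketch of the two-thread program described above. Note that whether the threads merely interleave or truly run side by side is decided by the scheduler and the number of cores, not by the code:

public class TwoThreads {
    public static void main(String[] args) throws InterruptedException {
        // Two threads: concurrent on any machine; parallel only if the OS
        // can schedule them onto two cores at the same instant.
        Thread th1 = new Thread(() -> System.out.println("th1 running"));
        Thread th2 = new Thread(() -> System.out.println("th2 running"));
        th1.start();
        th2.start();
        th1.join();
        th2.join();
    }
}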
Concurrency: If two or more problems are solved by a single processor.
Parallelism: If one problem is solved by multiple processors.
Imagine learning a new programming language by watching a video tutorial. You need to pause the video, apply what has been said in code, then continue watching. That's concurrency.
Now you're a professional programmer. And you enjoy listening to calm music while coding. That's Parallelism.
As Andrew Gerrand said in GoLang Blog
Concurrency is about dealing with lots of things at once. Parallelism
is about doing lots of things at once.
Enjoy.
I will try to explain with an interesting and easy to understand example. :)
Assume that an organization runs a chess tournament where 10 players (with equal chess-playing skills) will challenge a professional champion chess player. And since chess is a 1:1 game, the organizers have to conduct 10 games in a time-efficient manner so that they can finish the whole event as quickly as possible.
Hopefully the following scenarios will easily describe multiple ways of conducting these 10 games:
1) SERIAL - let's say that the professional plays with each person one by one, i.e., starts and finishes a game with one person and then starts the next game with the next person, and so on. In other words, they decided to conduct the games sequentially. So if one game takes 10 mins to complete, then 10 games will take 100 mins. Also assume that the transition from one game to the other takes 6 secs; for the 9 transitions between 10 games that is 54 secs (approx. 1 min).
So the whole event will approximately complete in 101 mins (WORST APPROACH)
2) CONCURRENT - let's say that the professional plays his turn and moves on to the next player, so all 10 games are in progress simultaneously, but the professional is never with two people at a time; he plays his turn and moves on to the next person. Now assume the professional takes 6 secs to play his turn, and his transition time between two players is also 6 secs, so the total transition time to get back to the first player is 1 min (10 x 6 secs). Therefore, by the time he is back to the first person with whom the event was started, 2 mins have passed (10 x time_per_turn_by_champion + 10 x transition_time = 2 mins).
Assuming that all players take 45 secs to complete their turn, then based on 10 mins per game from the SERIAL event, the number of rounds before a game finishes should be 600/(45+6) = 11 rounds (approx.)
So the whole event will approximately complete in 11 x time_per_turn_by_player_&_champion + 11 x transition_time_across_10_players = 11x51 + 11x60 secs = 561 + 660 = 1221 secs = 20.35 mins (approximately)
SEE THE IMPROVEMENT from 101 mins to 20.35 mins (BETTER APPROACH)
3) PARALLEL - let's say the organizers get some extra funds and thus decide to invite two professional champion players (both equally capable), divide the set of the same 10 players (challengers) into two groups of 5 each, and assign one group to each champion. Now the event is progressing in parallel in these two groups, i.e., at least two players (one in each group) are playing against the two professional players in their respective groups.
However, within a group the professional takes one player at a time (i.e., sequentially), so without any calculation you can easily deduce that the whole event will complete in approximately 101/2 = 50.5 mins
SEE THE IMPROVEMENT from 101 mins to 50.5 mins (GOOD APPROACH)
4) CONCURRENT + PARALLEL - In the above scenario, let's say that the two champion players will play concurrently (see point 2) with the 5 players in their respective groups, so now games across groups are running in parallel, but within a group they are running concurrently.
So the games in one group will approximately complete in 11 x time_per_turn_by_player_&_champion + 11 x transition_time_across_5_players = 11x51 + 11x30 = 561 + 330 = 891 secs ≈ 14.85 mins (approximately 15 mins)
So the whole event (involving two such parallel-running groups) will complete in approximately 15 mins
SEE THE IMPROVEMENT from 101 mins to about 15 mins (BEST APPROACH)
NOTE: in the above scenario, if you replace the 10 players with 10 similar jobs and the two professional players with two CPU cores, then the following ordering (by total time taken) will remain true:
SERIAL > PARALLEL > CONCURRENT > CONCURRENT + PARALLEL
(NOTE: this order might change for other scenarios, as it highly depends on the inter-dependency of jobs, communication needs between jobs, and transition overhead between jobs)
Concurrent program execution comes in 2 types: non-parallel concurrent programming and parallel concurrent programming (also known as parallelism).
The key difference is that to the human eye, threads in non-parallel concurrency appear to run at the same time, but in reality they don't. In non-parallel concurrency, threads rapidly switch and take turns using the processor through time-slicing.
While in parallelism there are multiple processors available, so multiple threads can run on different processors at the same time.
Reference: Introduction to Concurrency in Programming Languages
Simple example:
Concurrent is: "Two queues accessing one ATM machine"
Parallel is: "Two queues and two ATM machines"
Parallelism is the simultaneous execution of processes on multiple cores per CPU or on multiple CPUs (on a single motherboard).
Concurrency is when parallelism is simulated on a single core/CPU by scheduling algorithms that divide the CPU’s time (time-slicing). Processes are interleaved.
Units:
1 or many cores in a single CPU (pretty much all modern day processors)
1 or many CPUs on a motherboard (think old school servers)
1 application is 1 program (think Chrome browser)
1 program can have 1 or many processes (think each Chrome browser tab is a process)
1 process can have 1 or many threads from 1 program (a Chrome tab playing a YouTube video in 1 thread, another thread spawned for the comments section, another for user login info)
Thus, 1 program can have 1 or many threads of execution
1 process is thread(s) + memory resources allocated by the OS (heap, registers, stack, class memory)
They solve different problems. Concurrency solves the problem of having scarce CPU resources and many tasks. So, you create threads or independent paths of execution through code in order to share time on the scarce resource. Up until recently, concurrency has dominated the discussion because of CPU availability.
Parallelism solves the problem of finding enough tasks and appropriate tasks (ones that can be split apart correctly) and distributing them over plentiful CPU resources. Parallelism has always been around of course, but it's coming to the forefront because multi-core processors are so cheap.
Concurrency:
multiple execution flows with the potential to share resources
Ex:
two threads competing for an I/O port.
Parallelism:
splitting a problem into multiple similar chunks.
Ex:
parsing a big file by running two processes, one on each half of the file.
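As a rough Java sketch of that last example (the file name and the parse step are placeholders; reading the lines up front keeps the two halves independent):

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class HalfAndHalf {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of("big-file.txt")); // hypothetical input
        int mid = lines.size() / 2;
        // Each thread parses an independent half: no shared mutable state,
        // so no synchronization is needed.
        Thread first = new Thread(() -> parse(lines.subList(0, mid)));
        Thread second = new Thread(() -> parse(lines.subList(mid, lines.size())));
        first.start();
        second.start();
        first.join();
        second.join();
    }

    static void parse(List<String> chunk) {
        chunk.forEach(line -> { /* parse one line */ });
    }
}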
Concurrency => When multiple tasks are performed in overlapping time periods with shared resources (potentially maximizing resource utilization).
Parallelism => When a single task is divided into multiple simple independent sub-tasks that can be performed simultaneously.
Concurrency vs Parallelism
Rob Pike in 'Concurrency Is Not Parallelism'
Concurrency is about dealing with lots of things at once.
Parallelism is about doing lots of things at once.
[Concurrency theory]
Concurrency - handles several tasks at once
Parallelism - handles several threads at once
If at all you want to explain this to a 9-year-old.
Think of it as servicing queues, where a server can only serve the 1st job in a queue.
1 server, 1 job queue (with 5 jobs) -> no concurrency, no parallelism (only one job is being serviced to completion; the next job in the queue has to wait till the serviced job is done, and there is no other server to service it)
1 server, 2 or more different queues (with 5 jobs per queue) -> concurrency (since the server is sharing time among the 1st jobs of all queues, equally or weighted), but still no parallelism, since at any instant there is one and only one job being serviced.
2 or more servers, one queue -> parallelism (2 jobs done at the same instant) but no concurrency (the server is not sharing time; the 3rd job has to wait till one of the servers completes.)
2 or more servers, 2 or more different queues -> concurrency and parallelism
In other words, concurrency is sharing time to complete a job. It MAY take the same total time to complete the job, but at least it gets started early. The important thing is that jobs can be sliced into smaller jobs, which allows interleaving.
Parallelism is achieved with just more CPUs, servers, people, etc. that run in parallel.
Keep in mind, if the resources are shared, pure parallelism cannot be achieved; but this is where concurrency has its best practical use, taking up another job that doesn't need that resource.
I really like Paul Butcher's answer to this question (he's the writer of Seven Concurrency Models in Seven Weeks):
Although they’re often confused, parallelism and concurrency are
different things. Concurrency is an aspect of the problem domain—your
code needs to handle multiple simultaneous (or near simultaneous)
events. Parallelism, by contrast, is an aspect of the solution
domain—you want to make your program run faster by processing
different portions of the problem in parallel. Some approaches are
applicable to concurrency, some to parallelism, and some to both.
Understand which you’re faced with and choose the right tool for the
job.
In electronics, serial and parallel represent a type of static topology, determining the actual behaviour of the circuit. When there is no concurrency, parallelism is deterministic.
In order to describe dynamic, time-related phenomena, we use the terms sequential and concurrent. For example, a certain outcome may be obtained via a certain sequence of tasks (e.g., a recipe). When we are talking with someone, we are producing a sequence of words. However, in reality, many other processes occur at the same moment, and thus concur to the actual result of a certain action. If a lot of people are talking at the same time, concurrent talks may interfere with our sequence, but the outcomes of this interference are not known in advance. Concurrency introduces indeterminacy.
The serial/parallel and sequential/concurrent characterization are orthogonal. An example of this is in digital communication. In a serial adapter, a digital message is temporally (i.e. sequentially) distributed along the same communication line (eg. one wire). In a parallel adapter, this is divided also on parallel communication lines (eg. many wires), and then reconstructed on the receiving end.
Let us imagine a game with 9 children. If we arrange them in a chain, give a message to the first, and receive it at the end, we have serial communication. Several words compose the message, forming a sequence of communication units.
I like ice-cream so much. > X > X > X > X > X > X > X > X > X > ....
This is a sequential process reproduced on a serial infrastructure.
Now, let us imagine dividing the children into groups of 3. We divide the phrase into three parts, give the first to the child at the head of the line to our left, the second to the center line's child, etc.
I like ice-cream so much. > I like      > X > X > X > ....
                          > ice-cream   > X > X > X > ....
                          > so much     > X > X > X > ....
This is a sequential process reproduced on a parallel infrastructure (though still partially serialized).
In both cases, supposing there is perfect communication between the children, the result is determined in advance.
If there are other people talking to the first child at the same time as you, then we will have concurrent processes. We do not know which process will be considered by the infrastructure, so the final outcome is not determined in advance.
I'm going to offer an answer that conflicts a bit with some of the popular answers here. In my opinion, concurrency is a general term that includes parallelism. Concurrency applies to any situation where distinct tasks or units of work overlap in time. Parallelism applies more specifically to situations where distinct units of work are evaluated/executed at the same physical time. The raison d'etre of parallelism is speeding up software that can benefit from multiple physical compute resources. The other major concept that fits under concurrency is interactivity. Interactivity applies when the overlapping of tasks is observable from the outside world. The raison d'etre of interactivity is making software that is responsive to real-world entities like users, network peers, hardware peripherals, etc.
Parallelism and interactivity are almost entirely independent dimensions of concurrency. For a particular project, developers might care about either, both, or neither. They tend to get conflated, not least because the abomination that is threads gives a reasonably convenient primitive to do both.
A little more detail about parallelism:
Parallelism exists at very small scales (e.g. instruction-level parallelism in processors), medium scales (e.g. multicore processors) and large scales (e.g. high-performance computing clusters). Pressure on software developers to expose more thread-level parallelism has increased in recent years, because of the growth of multicore processors. Parallelism is intimately connected to the notion of dependence. Dependences limit the extent to which parallelism can be achieved; two tasks cannot be executed in parallel if one depends on the other (ignoring speculation).
There are lots of patterns and frameworks that programmers use to express parallelism: pipelines, task pools, aggregate operations on data structures ("parallel arrays").
A little more detail about interactivity:
The most basic and common way to do interactivity is with events (i.e. an event loop and handlers/callbacks). For simple tasks events are great. Trying to do more complex tasks with events gets into stack ripping (a.k.a. callback hell; a.k.a. control inversion). When you get fed up with events you can try more exotic things like generators, coroutines (a.k.a. Async/Await), or cooperative threads.
For the love of reliable software, please don't use threads if what you're going for is interactivity.
Curmudgeonliness
I dislike Rob Pike's "concurrency is not parallelism; it's better" slogan. Concurrency is neither better nor worse than parallelism. Concurrency includes interactivity which cannot be compared in a better/worse sort of way with parallelism. It's like saying "control flow is better than data".
From the book Linux System Programming by Robert Love:
Concurrency, Parallelism, and Races
Threads create two related but distinct phenomena: concurrency and
parallelism. Both are bittersweet, touching on the costs of threading
as well as its benefits. Concurrency is the ability of two or more
threads to execute in overlapping time periods. Parallelism is
the ability to execute two or more threads simultaneously.
Concurrency can occur without parallelism: for example, multitasking
on a single processor system. Parallelism (sometimes emphasized as
true parallelism) is a specific form of concurrency requiring multiple processors (or a single processor capable of multiple engines
of execution, such as a GPU). With concurrency, multiple threads make
forward progress, but not necessarily simultaneously. With
parallelism, threads literally execute in parallel, allowing
multithreaded programs to utilize multiple processors.
Concurrency is a programming pattern, a way of approaching problems.
Parallelism is a hardware feature, achievable through concurrency.
Both are useful.
This explanation is consistent with the accepted answer. Actually, the concepts are far simpler than we think. Don't think of them as magic. Concurrency is about a period of time, while parallelism is about exactly the same time, simultaneously.
Concurrency is the generalized form of parallelism. For example, a parallel program can also be called concurrent, but the reverse is not true.
Concurrent execution is possible on a single processor (multiple threads, managed by a scheduler or thread pool)
Parallel execution is not possible on a single processor; it requires multiple processors. (One process per processor)
Distributed computing is also a related topic, and it can also be called concurrent computing, but the reverse is not true, like parallelism.
For details read this research paper
Concepts of Concurrent Programming
I really liked this graphical representation from another answer - I think it answers the question much better than many of the answers above
Parallelism vs Concurrency
When two threads are running in parallel, they are both running at the same time. For example, if we have two threads, A and B, then their parallel execution would look like this:
CPU 1: A ------------------------->
CPU 2: B ------------------------->
When two threads are running concurrently, their execution overlaps. Overlapping can happen in one of two ways: either the threads are executing at the same time (i.e. in parallel, as above), or their executions are being interleaved on the processor, like so:
CPU 1: A -----------> B ----------> A -----------> B ---------->
So, for our purposes, parallelism can be thought of as a special case of concurrency.
Source: Another answer here
Hope that helps.
"Concurrency" is when there are multiple things in progress.
"Parallelism" is when concurrent things are progressing at the same time.
Examples of concurrency without parallelism:
Multiple threads on a single core.
Multiple messages in a Win32 message queue.
Multiple SqlDataReaders on a MARS connection.
Multiple JavaScript promises in a browser tab.
Note, however, that the difference between concurrency and parallelism is often a matter of perspective. The above examples are non-parallel from the perspective of (observable effects of) executing your code. But there is instruction-level parallelism even within a single core. There are pieces of hardware doing things in parallel with the CPU and then interrupting the CPU when done. The GPU could be drawing to the screen while your window procedure or event handler is being executed. The DBMS could be traversing B-trees for the next query while you are still fetching the results of the previous one. The browser could be doing layout or networking while your Promise.resolve() is being executed. Etc., etc...
So there you go. The world is as messy as always ;)
The simplest and most elegant way of understanding the two, in my opinion, is this. Concurrency allows interleaving of execution and so can give the illusion of parallelism. This means that a concurrent system can run your YouTube video alongside you writing up a document in Word, for example. The underlying OS, being a concurrent system, enables those tasks to interleave their execution. Because computers execute instructions so quickly, this gives the appearance of doing two things at once.
Parallelism is when such things really are in parallel. In the example above, you might find the video processing code is being executed on a single core, and the Word application is running on another. Note that this means that a concurrent program can also be in parallel! Structuring your application with threads and processes enables your program to exploit the underlying hardware and potentially be done in parallel.
Why not have everything be parallel then? One reason is that concurrency is a way of structuring programs and is a design decision to facilitate separation of concerns, whereas parallelism is often used in the name of performance. Another is that some things fundamentally cannot be fully done in parallel. An example of this would be adding two things to the back of a queue - you cannot insert both at the same time. Something must go first and the other behind it, or else you mess up the queue. Although we can interleave such execution (and so we get a concurrent queue), you cannot have it happen in parallel.
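A small Java sketch of that queue point, assuming a lock-free queue from java.util.concurrent: the two offers may be attempted concurrently, but one insertion still lands before the other.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        Queue<Integer> queue = new ConcurrentLinkedQueue<>();
        // Two threads enqueue concurrently, but each insertion at the tail
        // is serialized internally: one element must land before the other.
        Thread a = new Thread(() -> queue.offer(1));
        Thread b = new Thread(() -> queue.offer(2));
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(queue); // [1, 2] or [2, 1], never half-inserted
    }
}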
Hope this helps!
"Concurrent" is doing things -- anything -- at the same time. They could be different things, or the same thing. Despite the accepted answer, which is lacking, it's not about "appearing to be at the same time." It's really at the same time. You need multiple CPU cores, either using shared memory within one host, or distributed memory on different hosts, to run concurrent code. Pipelines of 3 distinct tasks that are concurrently running at the same time are an example: Task-level-2 has to wait for units completed by task-level-1, and task-level-3 has to wait for units of work completed by task-level-2. Another example is concurrency of 1-producer with 1-consumer; or many-producers and 1-consumer; readers and writers; et al.
"Parallel" is doing the same things at the same time. It is concurrent, but furthermore it is the same behavior happening at the same time, and most typically on different data. Matrix algebra can often be parallelized, because you have the same operation running repeatedly: For example the column sums of a matrix can all be computed at the same time using the same behavior (sum) but on different columns. It is a common strategy to partition (split up) the columns among available processor cores, so that you have close to the same quantity of work (number of columns) being handled by each processor core. Another way to split up the work is bag-of-tasks where the workers who finish their work go back to a manager who hands out the work and get more work dynamically until everything is done. Ticketing algorithm is another.
Not just numerical code can be parallelized. Files, too, can often be processed in parallel. In a natural language processing application, for each of the millions of document files, you may need to count the number of tokens in the document. This is parallel, because you are counting tokens, which is the same behavior, for every file.
In other words, parallelism is when same behavior is being performed concurrently. Concurrently means at the same time, but not necessarily the same behavior. Parallel is a particular kind of concurrency where the same thing is happening at the same time.
Terms for example will include atomic instructions, critical sections, mutual exclusion, spin-waiting, semaphores, monitors, barriers, message-passing, map-reduce, heart-beat, ring, ticketing algorithms, threads, MPI, OpenMP.
Gregory Andrews' work is a top textbook on it: Multithreaded, Parallel, and Distributed Programming.
Concurrency can involve tasks running simultaneously or not (they can indeed be run on separate processors/cores, but they can as well be run in "ticks"). What is important is that concurrency always refers to doing a piece of one greater task. So basically it's a part of some computation. You have to be smart about what you can do simultaneously, what you cannot, and how to synchronize.
Parallelism means that you're just doing some things simultaneously. They don't need to be a part of solving one problem. Your threads can, for instance, solve a single problem each. Of course synchronization stuff also applies but from different perspective.
Parallelism:
Having multiple threads do similar tasks which are independent of each other in terms of the data and resources they require. E.g.: a Google crawler can spawn thousands of threads, and each thread can do its task independently.
Concurrency:
Concurrency comes into the picture when you have shared data or shared resources among the threads. In a transactional system this means you have to synchronize the critical sections of the code using techniques like locks, semaphores, etc.
Explanation from this source was helpful for me:
Concurrency is related to how an application handles multiple tasks it
works on. An application may process one task at a time
(sequentially) or work on multiple tasks at the same time
(concurrently).
Parallelism on the other hand, is related to how an application
handles each individual task. An application may process the task
serially from start to end, or split the task up into subtasks which
can be completed in parallel.
As you can see, an application can be concurrent, but not parallel.
This means that it processes more than one task at the same time, but
the tasks are not broken down into subtasks.
An application can also be parallel but not concurrent. This means
that the application only works on one task at a time, and this task
is broken down into subtasks which can be processed in parallel.
Additionally, an application can be neither concurrent nor parallel.
This means that it works on only one task at a time, and the task is
never broken down into subtasks for parallel execution.
Finally, an application can also be both concurrent and parallel, in
that it both works on multiple tasks at the same time, and also breaks
each task down into subtasks for parallel execution. However, some of
the benefits of concurrency and parallelism may be lost in this
scenario, as the CPUs in the computer are already kept reasonably busy
with either concurrency or parallelism alone. Combining it may lead to
only a small performance gain or even performance loss.
Concurrent programming regards operations that appear to overlap and is primarily concerned with the complexity that arises due to non-deterministic control flow. The quantitative costs associated with concurrent programs are typically both throughput and latency. Concurrent programs are often IO bound but not always, e.g. concurrent garbage collectors are entirely on-CPU. The pedagogical example of a concurrent program is a web crawler. This program initiates requests for web pages and accepts the responses concurrently as the results of the downloads become available, accumulating a set of pages that have already been visited. Control flow is non-deterministic because the responses are not necessarily received in the same order each time the program is run. This characteristic can make it very hard to debug concurrent programs. Some applications are fundamentally concurrent, e.g. web servers must handle client connections concurrently. Erlang is perhaps the most promising upcoming language for highly concurrent programming.
Parallel programming concerns operations that are overlapped for the specific goal of improving throughput. The difficulties of concurrent programming are evaded by making control flow deterministic. Typically, programs spawn sets of child tasks that run in parallel and the parent task only continues once every subtask has finished. This makes parallel programs much easier to debug. The hard part of parallel programming is performance optimization with respect to issues such as granularity and communication. The latter is still an issue in the context of multicores because there is a considerable cost associated with transferring data from one cache to another. Dense matrix-matrix multiply is a pedagogical example of parallel programming and it can be solved efficiently by using Strassen's divide-and-conquer algorithm and attacking the sub-problems in parallel. Cilk is perhaps the most promising language for high-performance parallel programming on shared-memory computers (including multicores).
Copied from my answer: https://stackoverflow.com/a/3982782
Related
I am trying to make a program that will execute a variable number of possibly (but not certainly) computationally heavy tasks in parallel. These tasks (of Runnable type) will all be submitted at the same time and the thread pool should shut down once all these tasks are complete (in other words, the pool will only need to accept the initial tasks and nothing more).
In most of the answers that I found on this site, the question was about a server-based task (I am running my program on a decent desktop) or a pool that accepts tasks over irregular time intervals. In the questions that were not specific about the use, the answer was usually "it depends."
I have basically zero experience with threads, so I really do not know what is the optimal "thread count to task intensity" ratio.
For context, the program that I am working on deals with collections of matrices (represented by 3D arrays) where each matrix can contain up to 1000x1000 elements. One of the tasks may be to perform a convolution operation, and each task is an operation on one of the matrices in the collection.
Is there a recommendation for this specific type of problem?
The same that you hear when that question gets asked for a server: don't make assumptions, make experiments.
Try to identify (worst case: guess) the typical hardware setup that your users are running your software on. Then make sure you can do nicely automated performance testing. And then see what happens.
But the thing is: that won't help much. You see, when you run your own server, you are (hopefully) in control of the workload that those machines are busy with. For a desktop setup, where remote users run your code on their boxes... you have zero insight into what else is running there. You might find that 16 threads are fine for 50% of the users. But the rest may be doing a lot of other things on their machines, and 16 is already way too much for them.
And that is the real crux. No matter what number you find "good to go" for a specific hardware configuration, you have no control over other workloads.
From that point of view, I would be pretty conservative. For a CPU-intensive workload, "too many" threads isn't helpful anyway, so go with the number of CPUs, or better, the number of cores as a starting point.
Beyond that, what might be really helpful here: add some sort of "data gathering" to your application. Meaning: have it call home regularly, to tell you things like: "this is the hardware I am running on, I am using X threads, and the other workload on the system is Y". That might help you to get to some heuristics to adapt to the most important user setups. But be diligent about what data to collect. Define the questions you want to be answered upfront, and then pull the data you need to answer these questions.
If your workload is computationally intensive (CPU-bound), you might want to look into ForkJoinPool, which implements work stealing.
A ForkJoinPool differs from other kinds of ExecutorService mainly by virtue of employing work-stealing: all threads in the pool attempt to find and execute tasks submitted to the pool and/or created by other active tasks (eventually blocking waiting for work if none exist). This enables efficient processing when most tasks spawn other subtasks (as do most ForkJoinTasks), as well as when many small tasks are submitted to the pool from external clients.
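For the conservative baseline suggested above, a one-shot batch runner might look like this sketch (the task list and the one-hour cap are assumptions for illustration, not recommendations):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BatchRunner {
    // `tasks` is the caller's one-shot batch of Runnables.
    static void runBatch(List<Runnable> tasks) throws InterruptedException {
        // Conservative starting point for CPU-bound work: one thread per core.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        tasks.forEach(pool::submit);
        pool.shutdown();                          // accept no further tasks
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for the batch to drain
    }
}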
EDIT: This question might be appropriate for other languages as well - the overall theory behind it seems mostly language agnostic. However, as this will run in a JVM, I'm sure there's differences between JVM overheads/threading and those of other environments.
EDIT 2: To clarify a little better, I guess the main question is which is better for scalability: to have smaller threads that can return quicker to enable processing other chunks of work for other workloads, or try to get a single workload through as quickly as possible? The workloads are sequential and multithreading won't help speed up a single unit of work in this case; it's more in hopes of increasing the throughput of the system overall (thanks to Uri for leading me towards the clarification).
I'm working on a system that's replacing an existing system; the current system has a pretty heavy load, so we already know the replacement needs to be highly scalable. It communicates with several outside processes, such as email, other services, databases, etc., and I'm already planning on making it multithreaded to help with scaling. I've worked on multithreaded apps before, just nothing with this high of a performance/scalability requirement, so I don't have much experience when it comes to getting the absolute most out of concurrency.
The question I have is what's the best way to divide the work up between threads? I'm looking at two different versions, one that creates a single thread for the full workflow, and another that creates a thread for each of the individual steps, continuing on to the next step (in a new/different thread) when the previous step completes - probably with a NodeJS-style callback system, but not terribly concerned about the direct implementation details.
I don't know much about the nitty-gritty details of multithreading - things like context switching, for example - so I don't know if the overhead of multiple threads would swamp the execution time in each of the threads. On one hand, the single thread model seems like it would be fastest for an individual work flow compared to the multiple threads; however, it would also tie up a single thread for the entire workflow, whereas the multiple threads would be shorter lived and would return to the pool quicker (I imagine, at least).
Hopefully the underlying concept is easy enough to understand; here's a contrived pseudo-code example though:
// Single-thread approach
foo();
bar();
baz();
Or:
// Multiple Thread approach
Thread.run(foo);
when foo.isDone()
    Thread.run(bar);
when bar.isDone()
    Thread.run(baz);
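For reference, one idiomatic Java rendering of that callback-style pseudo-code is a CompletableFuture chain (a sketch; foo/bar/baz are stand-ins for the real steps):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StagedWorkflow {
    static void foo() { /* step 1 */ }
    static void bar() { /* step 2 */ }
    static void baz() { /* step 3 */ }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Each stage runs on a pool thread and releases it before the
        // next stage is scheduled, mirroring the callback pseudo-code.
        CompletableFuture
                .runAsync(StagedWorkflow::foo, pool)
                .thenRunAsync(StagedWorkflow::bar, pool)
                .thenRunAsync(StagedWorkflow::baz, pool)
                .join(); // wait for the whole chain in this demo
        pool.shutdown();
    }
}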
UPDATE: Completely forgot. The reason I'm considering the multithreaded approach is the (possibly mistaken) belief that, since the threads will have smaller execution times, they'll be available for other instances of the overall workload. If each operation takes, say 5 seconds, then the single-thread version locks up a thread for 15 seconds; the multiple thread version would lock up a single thread for 5 seconds, and then it can be used for another process.
Any ideas? If there's anything similar out there in the interwebs, I'd love even a link - I couldn't think of how to search for this (I blame Monday for that, but it would probably be the same tomorrow).
Multithreading is not a silver bullet. It's a means to an end.
Before making any changes, you need to ask yourself where your bottlenecks are, and what you're really trying to parallelize. I'm not sure that without more information that we can give good advice here.
If foo, bar, and baz are part of a pipeline, you're not necessarily going to improve the overall latency of a single sequence by using multiple threads.
What you might be able to do is to increase your throughput by letting multiple executions of the pipeline over different input pieces work in parallel, by letting later items to travel through the pipeline while earlier items are blocked on something (e.g., I/O). For instance, if bar() for a particular input is blocked and waiting on a notification, it's possible that you could do computationally heavy operations on another input, or have CPU resources to devote to foo(). A particularly important question is whether any of the external dependencies act as a limited shared resource. e.g., if one thread is accessing system X, is another thread going to be affected?
Threads are also very effective if you want to divide and conquer your problem - splitting your input into smaller parts, running each part through the pipeline, and then waiting on all the pieces to be ready. Is that possible with the kind of workflow you're looking at?
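A bare-bones sketch of that pipeline idea, with stages handing work through bounded queues (shutdown and error handling are elided, and the stage bodies are placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Pipeline {
    public static void main(String[] args) {
        BlockingQueue<String> fooToBar = new ArrayBlockingQueue<>(100);
        BlockingQueue<String> barToBaz = new ArrayBlockingQueue<>(100);

        // Stage 1 keeps producing later items while earlier items are
        // blocked downstream, which raises throughput (not latency).
        new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) fooToBar.put("item-" + i);
            } catch (InterruptedException e) { /* shutting down */ }
        }).start();

        new Thread(() -> { // Stage 2: transform
            try {
                while (true) barToBaz.put(fooToBar.take().toUpperCase());
            } catch (InterruptedException e) { /* shutting down */ }
        }).start();

        new Thread(() -> { // Stage 3: consume
            try {
                while (true) System.out.println(barToBaz.take());
            } catch (InterruptedException e) { /* shutting down */ }
        }).start();
    }
}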
If you need to first do foo, then do bar, and then do baz, you should have one thread do each of these steps in sequence. This is simple and makes obvious sense.
The most common case where you're better off with the assembly line approach is when keeping the code in cache is more important than keeping the data in cache. In this case, having one thread that does foo over and over can keep the code for this step in cache, keep branch prediction information around, and so on. However, you will have data cache misses when you hand the results of foo to the thread that does bar.
This is more complex and should only be attempted if you have good reason to think it will work better.
Use a single thread for the full workflow.
Dividing up the workflow can't improve the completion time for one piece of work: since the parts of the workflow have to be done sequentially anyway, only one thread can work on the piece of work at a time. However, breaking up the stages can delay the completion time for one piece of work, because a processor which could have picked up the last part of one piece of work might instead pick up the first part of another piece of work.
Breaking up the stages into multiple threads is also unlikely to improve the time to completion of all your work, relative to executing all the stages in one thread, since ultimately you still have to execute all the stages for all the pieces of work.
Here's an example. If you have 200 of these pieces of work, each requiring three 5 second stages, and say a thread pool of two threads running on two processors, keeping the entire workflow in a single thread results in your first two results after 15 seconds. It will take 1500 seconds to get all your results, but you only need the working memory for two of the pieces of work at a time. If you break up the stages, then it may take a lot longer than 15 seconds to get your first results, and you potentially may need memory for all 200 pieces of work proceeding in parallel if you still want to get all the results in 1500 seconds.
In most cases, there are no efficiency advantages to breaking up sequential stages into different threads, and there may be substantial disadvantages. Threads are generally only useful when you can use them to do work in parallel, which does not seem to be the case for your work stages.
However, there is a huge disadvantage to breaking up the stages into separate threads. That disadvantage is that you now need to write multithreaded code that manages the stages. It's extremely easy to write bugs in such code, and such bugs can be very difficult to catch prior to production deployment.
The way to avoid such bugs is to keep the threading code as simple as possible given your requirements. In the case of your work stages, the simplest possible threading code is none at all.
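In code, the "simplest possible threading" shape is just a pool of whole-workflow jobs, something like this sketch (foo/bar/baz stand in for the stages; two workers match the two-processor example above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WholeWorkflow {
    static void foo() { /* stage 1 */ }
    static void bar() { /* stage 2 */ }
    static void baz() { /* stage 3 */ }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Each submitted job runs all three stages in sequence, so the
        // only threading code is the pool itself.
        for (int i = 0; i < 200; i++) {
            pool.submit(() -> { foo(); bar(); baz(); });
        }
        pool.shutdown();
    }
}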
I am wondering how I would go about doing this. Say I load a list of 1,000 words, and for each word a thread is created and does a Google search on that word. The problem here is obvious: I can't have 1k threads, can I? Keep in mind I am extremely new to threads and synchronization. So basically I am wondering how I would go about using fewer threads. I assume I have to set the thread count to a fixed number and synchronize the threads. I was wondering how to do this with Apache HttpClient, using a GetThread and then running it. In run(), I get the data from the webpage, turn it into a String, and then check if it contains a certain word.
Surely you can have as many threads as you want. But in general it is not recommended to use more threads than there are processing cores on your computer.
And don't forget that creating 1000 internet sessions at once affects your networking. The size of a single Google page is nearly 0.3 megabytes. Are you really going to download 300 megabytes of data at once?
By the way,
There is a funny thing about concurrency.
Some people say: "synchronization is like concurrency". It is not true.
Synchronization is the opposite of concurrency.
Concurrency is when lots of things happen in parallel.
Synchronization is when I am blocking you.
(Joshua Bloch)
Maybe you can look at this problem this way.
You have 1000 words and for each word you are going to carry out a search.
In other words there are 1000 tasks to be executed and they are not related
to each other, so there is no need for synchronization in the case of this
problem as per the following definition from Wiki.
"In computer science, synchronization refers to one of two distinct but related concepts: synchronization of processes, and synchronization of data. Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of action. Data Synchronization refers to the idea of keeping multiple copies of a dataset in coherence with one another, or to maintain data integrity"
So in this problem you do not have to synchronize the 1000 processes which execute the word searches, since they can run independently and don't need to join forces. So it is not process synchronization.
It is not data synchronization either, since the data of each search is independent of the other 999 searches.
Hence when Joshua says Synchronization is when I am blocking you, there is no need of blocking in this case.
Yes all tasks can concurrently get executed in different threads.
Of course your system may not have the resources to run 1000 threads concurrently (read: at the same time).
So you need concepts like pools, where a pool has a certain number of threads... say it has 10 threads... then those 10 will start 10 independent searches on 10 words from your list.
If any of them is done with its task, it will take up the next word-search task available, and the process goes on....
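A sketch of that pool-of-10 idea, using the JDK's built-in java.net.http client rather than the Apache HttpClient mentioned in the question (the word list and URL are stand-ins):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WordSearch {
    public static void main(String[] args) {
        List<String> words = List.of("alpha", "beta", "gamma"); // stand-in for the 1,000 words
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(10); // 10 workers, not 1,000 threads

        for (String word : words) {
            pool.submit(() -> {
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://www.google.com/search?q=" + word)).build();
                String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
                System.out.println(word + ": " + body.contains(word));
                return null; // Callable form, so checked exceptions propagate
            });
        }
        pool.shutdown();
    }
}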
I'm writing a simple utility which accepts a collection of Callable tasks, and runs them in parallel. The hope is that the total time taken is little over the time taken by the longest task. The utility also adds some error handling logic - if any task fails, and the failure is something that can be treated as "retry-able" (e.g. a timeout, or a user-specified exception), then we run the task directly.
I've implemented this utility around an ExecutorService. There are two parts:
submit() all the Callable tasks to the ExecutorService, storing the Future objects.
in a for-loop, get() the result of each Future. In case of exceptions, do the "retry-able" logic.
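A sketch of that shape (generics simplified; isRetryable is a placeholder for the user-specified retry policy):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeoutException;

public class ParallelRunner {
    static <T> List<T> runAll(ExecutorService pool, List<Callable<T>> tasks) throws Exception {
        List<Future<T>> futures = new ArrayList<>();
        for (Callable<T> task : tasks) futures.add(pool.submit(task)); // part 1: submit all

        List<T> results = new ArrayList<>();
        for (int i = 0; i < futures.size(); i++) {                     // part 2: collect
            try {
                results.add(futures.get(i).get());
            } catch (ExecutionException e) {
                if (isRetryable(e.getCause())) results.add(tasks.get(i).call()); // run directly
                else throw e;
            }
        }
        return results;
    }

    static boolean isRetryable(Throwable t) { return t instanceof TimeoutException; } // placeholder
}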
I wrote some unit tests to ensure that using this utility is faster than running the tasks in sequence. For each test, I'd generate a certain number of Callables, each essentially performing a Thread.sleep() for a random amount of time within a bound. I experimented with different timeouts, different numbers of tasks, etc., and the utility seemed to outperform sequential execution.
But when I added it to the actual system which needs this kind of utility, I saw results that were very variable - sometimes the parallel execution was faster, sometimes it was slower, and sometimes it was faster, but still took a lot more time than the longest individual task.
Am I just doing it all wrong? I know ExecutorService has invokeAll() but that swallows the underlying exceptions. I also tried using a CompletionService to fetch task results in the order in which they completed, but it exhibited more or less the same behavior. I'm reading up now on latches and barriers - is this the right direction for solving this problem?
I wrote some unit tests to ensure that using this utility is faster than running the tasks in sequence. For each test, I'd generate a certain number of Callable's, each essentially performing a Thread.sleep() for a random amount of time within a bound
Yeah this is certainly not a fair test since it is using neither CPU nor IO. I certainly hope that parallel sleeps would run faster than serial. :-)
But when I added it to the actual system which needs this kind of utility, I saw results that were very variable
Right. Whether or not a threaded application runs faster than a serial one depends a lot on a number of factors. In particular, IO bound applications will not improve in performance since they are bound by the IO channel and really cannot do concurrent operations because of this. The more processing that is needed by the application, the larger the win is to convert it to be multi-threaded.
Am I just doing it all wrong?
Hard to know without more details. You might consider playing around with the number of threads that are running concurrently. If you have a ton of jobs to process, you should not be using Executors.newCachedThreadPool(), and you should tune an Executors.newFixedThreadPool(...) according to the number of CPUs your architecture has.
You also may want to see if you can isolate the IO operations in a few threads and the processing to other threads. Like one input thread reading from a file and one output thread (or a couple) writing to the database or something. So multiple sized pools may do better for different types of tasks instead of using a single thread-pool.
tried using a CompletionService to fetch task results in the order in which they completed
If you are retrying operations, using a CompletionService is exactly the way to go. As jobs finish and throw exceptions (or return failure), they can be restarted and put back into the thread-pool immediately. I don't see any reason why your performance problems would be because of this.
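A sketch of the CompletionService pattern for this (the retry here is naive and unbounded; a real version would cap attempts):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class RetryingRunner {
    static <T> void runWithRetries(ExecutorService pool, List<Callable<T>> tasks)
            throws InterruptedException {
        CompletionService<T> cs = new ExecutorCompletionService<>(pool);
        Map<Future<T>, Callable<T>> inFlight = new HashMap<>();
        for (Callable<T> task : tasks) inFlight.put(cs.submit(task), task);

        while (!inFlight.isEmpty()) {
            Future<T> done = cs.take();              // next finished task, in completion order
            Callable<T> task = inFlight.remove(done);
            try {
                done.get();                          // success: result is available here
            } catch (ExecutionException e) {
                inFlight.put(cs.submit(task), task); // failed: resubmit immediately
            }
        }
    }
}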
Multi-threaded programming doesn't come for free. It has an overhead. The overhead can easily exceed the performance gain, and it usually makes your code more complex.
Additional threads give access to more CPU power (assuming you have spare CPUs), but in general they won't make your HDD spin faster, give you more network bandwidth, or speed up something which is not CPU-bound.
Multiple threads can help give you a greater share of an external resource.
I have a Java program that runs many small simulations. It runs a genetic algorithm, where each fitness function is a simulation using parameters on each chromosome. Each one takes maybe 10 or so seconds if run by itself, and I want to run a pretty big population size (say 100?). I can't start the next round of simulations until the previous one has finished. I have access to a machine with a whack of processors in it and I'm wondering if I need to do anything to make the simulations run in parallel. I've never written anything explicitly for multicore processors before and I understand it's a daunting task.
So this is what I would like to know: To what extent and how well does the JVM parallelize? I have read that it creates low-level threads, but how smart is it? How efficient is it? Would my program run faster if I made each simulation a thread? I know this is a huge topic, but could you point me towards some introductory literature concerning parallel processing and Java?
Thanks very much!
Update:
Ok, I've implemented an ExecutorService and made my small simulations implement Runnable and have run() methods. Instead of writing this:
Simulator sim = new Simulator(args);
sim.play();
return sim.getResults();
I write this in my constructor:
ExecutorService executor = Executors.newFixedThreadPool(32);
And then each time I want to add a new simulation to the pool, I run this:
RunnableSimulator rsim = new RunnableSimulator(args);
executor.execute(rsim);
return rsim.getResults();
The RunnableSimulator::run() method calls the Simulator::play() method, neither have arguments.
I think I am getting thread interference, because now the simulations error out. By erroring out I mean that variables hold values that they really shouldn't. No code from within the simulation was changed, and before, the simulation ran perfectly over many many different arguments. The sim works like this: each turn it's given a game piece and loops through all the locations on the game board. It checks to see if the location given is valid, and if so, commits the piece and measures that board's goodness. Now, obviously invalid locations are being passed to the commit method, resulting in index-out-of-bounds errors all over the place.
Each simulation is its own object right? Based on the code above? I can pass the exact same set of arguments to the RunnableSimulator and Simulator classes and the runnable version will throw exceptions. What do you think might cause this and what can I do to prevent it? Can I provide some code samples in a new question to help?
Java Concurrency Tutorial
If you're just spawning a bunch of stuff off to different threads, and it isn't going to be talking back and forth between different threads, it isn't too hard; just write each in a Runnable and pass them off to an ExecutorService.
You should skim the whole tutorial, but for this particular task, start here.
Basically, you do something like this:
ExecutorService executorService = Executors.newFixedThreadPool(n);
where n is the number of things you want running at once (usually the number of CPUs). Each of your tasks should be an object that implements Runnable, and you then execute it on your ExecutorService:
executorService.execute(new SimulationTask(parameters...));
Executors.newFixedThreadPool(n) will start up n threads, and execute will insert the tasks into a queue that feeds to those threads. When a task finishes, the thread it was running on is no longer busy, and the next task in the queue will start running on it. Execute won't block; it will just put the task into the queue and move on to the next one.
The thing to be careful of is that you really AREN'T sharing any mutable state between tasks. Your task classes shouldn't depend on anything mutable that will be shared among them (i.e. static data). There are ways to deal with shared mutable state (locking), but if you can avoid the problem entirely it will be a lot easier.
EDIT: Reading your edits to your question, it looks like you really want something a little different. Instead of implementing Runnable, implement Callable. Your call() method should be pretty much the same as your current run(), except it should return getResults();. Then, submit() it to your ExecutorService. You will get a Future in return, which you can use to test if the simulation is done, and, when it is, get your results.
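A self-contained sketch of that Callable/Future shape (the Simulator stub stands in for the question's own class):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimulationRunner {
    // Stub standing in for the question's Simulator (assumed API).
    static class Simulator {
        Simulator(int[] args) {}
        void play() { /* run one simulation */ }
        double getResults() { return 0.0; }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        // submit() a Callable instead of execute()-ing a Runnable: the Future
        // blocks until this simulation has really finished, instead of reading
        // results before run() has even executed.
        Future<Double> future = executor.submit(() -> {
            Simulator sim = new Simulator(new int[] {1, 2, 3}); // fresh, unshared state
            sim.play();
            return sim.getResults();
        });
        System.out.println(future.get());
        executor.shutdown();
    }
}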
You can also look at the new fork/join framework by Doug Lea. One of the best books on the subject is certainly Java Concurrency in Practice. I would strongly recommend you take a look at the fork/join model.
Java threads are just too heavyweight. We have implemented parallel branches in Ateji PX as very lightweight scheduled objects. As in Erlang, you can create tens of millions of parallel branches before you start noticing an overhead. But it's still Java, so you don't need to switch to a different language.
If you are doing full-out processing all the time in your threads, you won't benefit from having more threads than processors. If your threads occasionally wait on each other or on the system, then Java scales well up to thousands of threads.
I wrote an app that discovered a class B network (65,000 hosts) in a few minutes by pinging each node, and each ping had retries with an increasing delay. When I put each ping on a separate thread (this was before NIO; I could probably improve it now), I could run about 4000 threads on Windows before things started getting flaky. On Linux the number was nearer 1000 (never figured out why).
No matter what language or toolkit you use, if your data interacts, you will have to pay some attention to those areas where it does. Java uses the synchronized keyword to prevent two threads from accessing a section at the same time. If you write your Java in a more functional manner (making all your members final), you can run without synchronization, but it can be--well, let's just say solving problems takes a different approach that way.
Java has other tools to manage units of independent work, look in the "Concurrent" package for more information.
Java is pretty good at parallel processing, but there are two caveats:
Java threads are relatively heavyweight (compared with e.g. Erlang's), so don't start creating them in the hundreds or thousands. Each thread gets its own stack memory (default: 256KB), and you could run out of memory, among other things.
If you run on a very powerful machine (especially with a lot of CPUs and a large amount of RAM), then the VM's default settings (especially concerning GC) may result in suboptimal performance, and you may have to spend some time tuning them via command-line options. Unfortunately, this is not a simple task and requires a lot of knowledge.