What is the difference between concurrent programming and parallel programming? I asked Google but didn't find anything that helped me understand the difference. Could you give me an example of each?
For now I found this explanation: http://www.linux-mag.com/id/7411 - but "concurrency is a property of the program" vs "parallel execution is a property of the machine" isn't enough for me - I still can't say which is which.
If you program using threads (concurrent programming), it's not necessarily going to be executed as such (parallel execution), since it depends on whether the machine can handle several threads.
Here's a visual example. Threads on a non-threaded machine:

      --  --  --
     /          \
>---- --  --  --  -- ---->>

Threads on a threaded machine:

     ------
    /      \
>-------------->>
The dashes represent executed code. As you can see, they both split up and execute separately, but the threaded machine can execute several separate pieces at once.
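To make the same point in code, here is a minimal sketch using Python threads (the names `worker`, `results`, etc. are just illustrative). The two threads run *concurrently* on any machine; whether their work ever runs in *parallel* depends on the hardware and runtime (for instance, CPython's GIL prevents Python bytecode from two threads executing simultaneously, even on a multi-core machine):

```python
import threading

results = []
lock = threading.Lock()

def worker(name, count):
    # Each thread appends its own items; the lock protects the shared list.
    for i in range(count):
        with lock:
            results.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(results))  # 6 items total, but their interleaving is not deterministic
```

The program is concurrent either way; only the machine (and runtime) decides whether the dashes from both threads ever happen at the same instant.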
Interpreting the original question as parallel/concurrent computation rather than programming:

In concurrent computation, two computations both advance independently of each other. The second computation doesn't have to wait until the first is finished in order to advance. It doesn't say, however, how this is achieved. In a single-core setup, suspending and alternating between threads is required (this is called pre-emptive multithreading).

In parallel computation, two computations both advance simultaneously - that is, literally at the same time. This is not possible with a single CPU core; it requires a multi-core setup instead.
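A toy sketch of the "suspending and alternating" mechanism (cooperative rather than pre-emptive, and the names are mine): two computations advance interleaved on a single thread, each yielding control after one step. Both make progress without either waiting for the other to finish, yet nothing runs at the same instant:

```python
def countdown(name, n):
    # A computation that can be suspended: each yield hands control back.
    while n > 0:
        yield f"{name}:{n}"
        n -= 1

def round_robin(*tasks):
    # A single-core "scheduler": alternate between tasks until all finish.
    trace = []
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            trace.append(next(task))  # let the task advance one step
            queue.append(task)        # not finished: re-queue it
        except StopIteration:
            pass                      # finished: drop it
    return trace

print(round_robin(countdown("A", 2), countdown("B", 2)))
# -> ['A:2', 'B:2', 'A:1', 'B:1']
```

Both countdowns are concurrent (each advances before the other finishes), but execution is strictly one step at a time - the parallel case would need a second core actually running them simultaneously.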
According to: "Parallel vs Concurrent in Node.js".
Source: PThreads Programming - A POSIX Standard for Better Multiprocessing, by Buttlar, Farrell, and Nichols
To understand the difference, I strongly recommend watching this video by Rob Pike (one of the creators of Go): Concurrency Is Not Parallelism.
Although there isn't complete agreement on the distinction between the terms parallel and concurrent, many authors make the following distinctions: in concurrent computing, a program is one in which multiple tasks can be in progress at any instant; in parallel computing, a program is one in which multiple tasks cooperate closely to solve a problem.
So parallel programs are concurrent, but a program such as a multitasking operating system is also concurrent, even when it is run on a machine with only one core, since multiple tasks can be in progress at any instant.
Source: An Introduction to Parallel Programming, Peter Pacheco
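The "multiple tasks in progress at any instant, even on one core" idea can be sketched with `asyncio` (the task names are illustrative): three tasks all *start* before any of them *finishes*, on a single thread with no parallelism at all:

```python
import asyncio

async def task(name, log):
    log.append(f"{name} started")
    await asyncio.sleep(0)        # yield to the event loop mid-task
    log.append(f"{name} finished")

async def main():
    log = []
    # All three tasks are "in progress" at the same instant on one thread.
    await asyncio.gather(task("A", log), task("B", log), task("C", log))
    return log

log = asyncio.run(main())
print(log)
# Every "started" entry appears before any "finished" entry,
# e.g. ['A started', 'B started', 'C started', 'A finished', ...]
```

This is Pacheco's sense of a concurrent program: the tasks overlap in lifetime, even though a single core only ever executes one of them at a time.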
1. Definitions:

Classic scheduling of tasks can be SERIAL, PARALLEL or CONCURRENT.

SERIAL: Analysis shows that the tasks MUST BE executed one after the other in a known, strict sequence, OR it will not work. I.e.: easy enough, we can live with this.

PARALLEL: Analysis shows that the tasks MUST BE executed at the same time, OR it will not work. I.e.: try to avoid this, or we will have tears by tea time.

CONCURRENT: Analysis shows that we NEED NOT CARE. We are not careless: we have analysed it and it does not matter; we can therefore execute any task using any available facility at any time. I.e.: HAPPY DAYS.
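Here is a small Python sketch of the three cases (the function names are illustrative, not from any particular system). SERIAL: each step needs the previous step's result, so the order is forced. PARALLEL: the tasks must overlap in time, modelled here by a rendezvous barrier where neither thread can proceed until both have arrived. CONCURRENT: the tasks are independent, so any schedule will do:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def step(x):
    return x * 2

# SERIAL: a known, strict sequence - step N consumes step N-1's output.
value = 1
for _ in range(3):
    value = step(value)
print(value)  # 8

# PARALLEL: the tasks MUST be in progress at the same time - here neither
# thread can pass the barrier until the other has reached it too.
barrier = threading.Barrier(2)
met = []
def meet(name):
    barrier.wait()   # blocks until both threads are here at once
    met.append(name)
a = threading.Thread(target=meet, args=("A",))
b = threading.Thread(target=meet, args=("B",))
a.start(); b.start()
a.join(); b.join()
print(sorted(met))  # ['A', 'B']

# CONCURRENT: independent tasks - we need not care when or where each runs.
with ThreadPoolExecutor() as pool:
    print(list(pool.map(step, [1, 2, 3])))  # [2, 4, 6]
```

Note that if you ran the PARALLEL example with only one thread, it would deadlock at the barrier - that is exactly the "MUST BE executed at the same time OR it will not work" case.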
Often the scheduling that is available changes at known events, which I call a state change.
2. This is not a { Software | Programming } feature but a Systems Design approach:

People often think this is about software, but it is in fact a systems-design concept that pre-dates computers. Software systems were a little slow in the uptake; very few programming languages even attempt to address the problem.
You might try looking up the TRANSPUTER language occam if you are interested in a good attempt. (occam has many innovative, if not second-to-none, features, including explicit language support for PAR and SEQ code-execution constructors, which most other languages still lack. In the recent era of massively parallel processor arrays, hardware is re-inventing the wheel that InMOS Transputers used more than 35 years ago!)

3. What a good Systems Design takes care to cover:
Succinctly, systems design addresses the following:
THE VERB - What are you doing. (operation or algorithm)

THE NOUN - What are you doing it to. (data or interface)

WHEN - Initiation, schedule, state changes; SERIAL, PARALLEL or CONCURRENT.

WHERE - Once you know when things happen, you can say where they can happen, and not before.

WHY - Is this the way to do it? Are there other ways? Is there a best way?

.. and last but not least .. WHAT HAPPENS IF YOU DO NOT DO IT?
4. Visual examples of PARALLEL vs. SERIAL approaches:
Recent parallel architectures, available in 2014, in action on arrays of 16-, 64- and 1024-core parallel RISC microprocessors.

A quarter of a century back - a piece of true parallel history, with an InMOS Transputer CPU demo video from the early 1990s.
Good luck