
These examples show that it is important to separate the idea of a "task"
from the idea of a "thread". We can "schedule" many tasks onto a single
thread.
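One way to see the separation is to express each task as a generator and let a tiny round-robin scheduler run all of them on a single thread. This is a minimal sketch (the `task`/`run` names and the `log` list are mine, for illustration), not a production scheduler:

```python
from collections import deque

log = []

def task(name, steps):
    # A "task" is just a generator: each yield is a point where
    # the scheduler may switch to another task.
    for i in range(steps):
        log.append(f"{name}{i}")
        yield

def run(tasks):
    # Schedule many tasks onto the single calling thread,
    # round-robin, until every task has finished.
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)          # run the task up to its next yield
            ready.append(t)  # not finished: put it back in line
        except StopIteration:
            pass             # this task is done

run([task("A", 2), task("B", 3)])
print(log)  # the two tasks' steps interleave on one thread
```

The two tasks make progress in alternation even though only one thread ever exists: concurrency without parallelism.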

We would like our solutions to have "maximal parallelism" while using the
minimum number of threads. But a solution with the minimum number of
threads can be quite complex to write and to reason about.

If we implement a dataflow graph using "message passing", then we can
always guarantee a solution with maximal parallelism that is easy to
write and easy to read. But a message-passing solution will not use the
minimum possible number of threads: in the simplest implementation,
every node of the graph gets a thread of its own.
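As a sketch of the one-thread-per-node approach (the `node` helper and the particular pipeline are mine, for illustration): each node is a thread that reads values from its input queue, applies its function, and forwards results downstream, with `None` as an end-of-stream sentinel.

```python
import queue
import threading

def node(fn, inbox, outbox):
    # Each dataflow node is its own thread: receive on the input
    # queue, apply fn, forward the result downstream.
    while True:
        item = inbox.get()
        if item is None:       # sentinel: propagate and stop
            outbox.put(None)
            return
        outbox.put(fn(item))

# A two-node pipeline: double, then add one.
q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=node, args=(lambda x: 2 * x, q0, q1)),
    threading.Thread(target=node, args=(lambda x: x + 1, q1, q2)),
]
for t in threads:
    t.start()

for x in [1, 2, 3]:
    q0.put(x)
q0.put(None)

results = []
while (item := q2.get()) is not None:
    results.append(item)
for t in threads:
    t.join()

print(results)  # [3, 5, 7]
```

Both stages run concurrently on different inputs, so the parallelism is maximal for this graph, but the thread count equals the node count rather than the minimum needed.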

We can always "linearize" these dataflow graphs and use a single thread.
(Our dataflow graphs are DAGs, directed acyclic graphs, and linearizing
a DAG is an application of the "topological sort" algorithm.) If we linearize
a dataflow graph, then we have no parallelism and no concurrency, but the
solution is easy to write and guaranteed to be free of deadlocks and race
conditions.
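The linearized version can be sketched with the standard library's `graphlib` module, which performs the topological sort for us. The small diamond-shaped DAG and its node functions here are invented for illustration:

```python
from graphlib import TopologicalSorter

# A diamond-shaped dataflow DAG: b and c both depend on a,
# and d depends on both b and c.
deps = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}

# Each node computes its value from the values of its inputs.
funcs = {
    "a": lambda env: 1,
    "b": lambda env: env["a"] + 10,
    "c": lambda env: env["a"] * 2,
    "d": lambda env: env["b"] + env["c"],
}

# Topologically sort the DAG into one sequential schedule.
order = list(TopologicalSorter(deps).static_order())

# Run every node, one after another, on this single thread.
env = {}
for name in order:
    env[name] = funcs[name](env)

print(order, env["d"])
```

Because every node runs only after all of its inputs are ready, there is nothing to synchronize: no locks, no queues, and therefore no possibility of deadlock or a race.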
