diff --git a/docs/users_guide/parallel.xml b/docs/users_guide/parallel.xml
index fc7ca94..37cafd2 100644
--- a/docs/users_guide/parallel.xml
+++ b/docs/users_guide/parallel.xml
@@ -5,7 +5,7 @@
GHC implements some major extensions to Haskell to support
- concurrent and parallel programming. Let us first etablish terminology:
+ concurrent and parallel programming. Let us first establish terminology:
Parallelism means running
a Haskell program on multiple processors, with the goal of improving
@@ -33,12 +33,12 @@
url="http://research.microsoft.com/copyright/accept.asp?path=/users/simonpj/papers/concurrent-haskell.ps.gz">
Concurrent Haskell paper is still an excellent
resource, as is Tackling
+ url="http://research.microsoft.com/%7Esimonpj/papers/marktoberdorf/">Tackling
the awkward squad.
To the programmer, Concurrent Haskell introduces no new language constructs;
rather, it appears simply as a library,
+ url="../libraries/base/Control-Concurrent.html">
Control.Concurrent. The functions exported by this
library include:
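As a minimal illustration (not part of the original text), a sketch of forking a lightweight thread and handing a result back to the parent over an MVar, using only functions exported by Control.Concurrent:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  box <- newEmptyMVar            -- empty MVar: the child thread will fill it
  _ <- forkIO $ do               -- fork a lightweight Haskell thread
    let answer = sum [1 .. 100 :: Int]
    putMVar box answer           -- hand the result back to the parent
  result <- takeMVar box         -- blocks until the child has put a value
  print result                   -- prints 5050
```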
@@ -62,7 +62,7 @@ the FFI with concurrency.
it. The main library you need to use STM is
+ url="../libraries/stm/Control-Concurrent-STM.html">
Control.Concurrent.STM. The main features supported are these:
Atomic blocks.
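As a hedged sketch of an atomic block (the helper name transfer is ours, not the guide's), moving a quantity between two TVar cells via Control.Concurrent.STM; the whole transaction commits atomically, so no other thread can observe an intermediate state:

```haskell
import Control.Concurrent.STM
  (TVar, atomically, newTVarIO, readTVar, writeTVar)

-- Move an amount between two accounts inside a single atomic block.
transfer :: Int -> TVar Int -> TVar Int -> IO ()
transfer amount from to = atomically $ do
  f <- readTVar from
  t <- readTVar to
  writeTVar from (f - amount)
  writeTVar to   (t + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  transfer 30 a b
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances                 -- prints (70,30)
```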
@@ -83,7 +83,7 @@ All these features are described in the papers mentioned earlier.
By default GHC runs your program on one processor; if you
want it to run in parallel you must link your program
with the , and run it with the RTS
- option; see ).
+ option; see ).
The runtime will
schedule the running Haskell threads among the available OS
threads, running as many in parallel as you specified with the
@@ -110,14 +110,14 @@ All these features are described in the papers mentioned earlier.
linkend="concurrent-haskell"/>), but the simplest mechanism for extracting parallelism from pure code is
to use the par combinator, which is closely related to (and often used
with) seq. Both of these are available from Control.Parallel:
+ url="../libraries/parallel/Control-Parallel.html">Control.Parallel:
infixr 0 `par`
-infixr 1 `seq`
+infixr 1 `pseq`
-par :: a -> b -> b
-seq :: a -> b -> b
+par :: a -> b -> b
+pseq :: a -> b -> b
The expression (x `par` y) sparks the evaluation of x
@@ -136,24 +136,35 @@ import Control.Parallel
nfib :: Int -> Int
nfib n | n <= 1 = 1
- | otherwise = par n1 (seq n2 (n1 + n2 + 1))
+ | otherwise = par n1 (pseq n2 (n1 + n2 + 1))
where n1 = nfib (n-1)
n2 = nfib (n-2)
For values of n greater than 1, we use
par to spark a thread to evaluate nfib (n-1),
- and then we use seq to force the
+ and then we use pseq to force the
parent thread to evaluate nfib (n-2) before going on
to add together these two subexpressions. In this divide-and-conquer
approach, we only spark a new thread for one branch of the computation
(leaving the parent to evaluate the other branch). Also, we must use
- seq to ensure that the parent will evaluate
+ pseq to ensure that the parent will evaluate
n2 before n1
in the expression (n1 + n2 + 1). It is not sufficient
to reorder the expression as (n2 + n1 + 1), because
the compiler may not generate code to evaluate the addends from left to
right.
+
+ Note that we use pseq rather
+ than seq. The two are almost equivalent, but
+ differ in their runtime behaviour in a subtle
+ way: seq can evaluate its arguments in either
+ order, but pseq is required to evaluate its
+ first argument before its second, which makes it more suitable
+ for controlling the evaluation order in conjunction
+ with par.
+
+
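Putting the pieces together, a self-contained version of the nfib example above (the import, module plumbing, and main are added here for illustration; the algorithm is the guide's own):

```haskell
import Control.Parallel (par, pseq)

-- par sparks n1 for parallel evaluation; pseq forces the parent to
-- evaluate n2 first, before combining the two results.
nfib :: Int -> Int
nfib n | n <= 1    = 1
       | otherwise = n1 `par` (n2 `pseq` n1 + n2 + 1)
  where
    n1 = nfib (n - 1)
    n2 = nfib (n - 2)

-- Compile with: ghc -threaded -O2 Nfib.hs
-- Run with:     ./Nfib +RTS -N2
main :: IO ()
main = print (nfib 20)
```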
When using par, the general rule of thumb is that
the sparked computation should be required at a later time, but not too
soon. Also, the sparked computation should not be too small, otherwise
@@ -161,14 +172,26 @@ nfib n | n <= 1 = 1
amount of parallelism gained. Getting these factors right is tricky in
practice.
+ It is possible to glean a little information about how
+ well par is working from the runtime
+ statistics; see .
+
More sophisticated combinators for expressing parallelism are
available from the Control.Parallel.Strategies module.
+ url="../libraries/parallel/Control-Parallel-Strategies.html">Control.Parallel.Strategies module.
This module builds functionality around par,
expressing more elaborate patterns of parallel computation, such as
parallel map.
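For instance, a parallel map can be written with parMap (a sketch assuming the parallel package's Strategies API, in which parMap takes a strategy for each result; rseq evaluates each element to weak head normal form):

```haskell
import Control.Parallel.Strategies (parMap, rseq)

-- A stand-in for some expensive per-element computation.
expensive :: Int -> Int
expensive x = x * x

-- parMap sparks the evaluation of every list element in parallel,
-- forcing each result to WHNF with the rseq strategy.
main :: IO ()
main = print (parMap rseq expensive [1 .. 10])
```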
+Data Parallel Haskell
+ GHC includes experimental support for Data Parallel Haskell (DPH). This code
+ is highly unstable and is only provided as a technology preview. More
+ information can be found on the corresponding DPH
+ wiki page.
+
+