-<Chapter id="sooner-faster-quicker">
-<Title>Advice on: sooner, faster, smaller, stingier
-</Title>
-
-<Para>
-Please advise us of other “helpful hints” that should go here!
-</Para>
-
-<Sect1 id="sooner">
-<Title>Sooner: producing a program more quickly
-</Title>
-
-<Para>
-<IndexTerm><Primary>compiling faster</Primary></IndexTerm>
-<IndexTerm><Primary>faster compiling</Primary></IndexTerm>
-<VariableList>
-
-<VarListEntry>
-<Term>Don't use <Option>-O</Option> or (especially) <Option>-O2</Option>:</Term>
-<ListItem>
-<Para>
-By using them, you are telling GHC that you are willing to suffer
-longer compilation times for better-quality code.
-</Para>
-
-<Para>
-GHC is surprisingly zippy for normal compilations without <Option>-O</Option>!
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Use more memory:</Term>
-<ListItem>
-<Para>
-Within reason, more memory for heap space means less garbage
-collection for GHC, which means less compilation time. If you use
-the <Option>-Rgc-stats</Option> option, you'll get a garbage-collector report.
-(Again, you can use the cheap-and-nasty <Option>-optCrts-Sstderr</Option> option to
-send the GC stats straight to standard error.)
-</Para>
-
-<Para>
-If it says you're using more than 20% of total time in garbage
-collecting, then more memory would help.
-</Para>
-
-<Para>
-If the heap size is approaching the maximum (64M by default), and you
-have lots of memory, try increasing the maximum with the
-<Option>-M<size></Option><IndexTerm><Primary>-M<size> option</Primary></IndexTerm> option, e.g.: <Command>ghc -c -O
--M1024m Foo.hs</Command>.
-</Para>
-
-<Para>
-Increasing the default allocation area size used by the compiler's RTS
-might also help: use the <Option>-A<size></Option><IndexTerm><Primary>-A<size> option</Primary></IndexTerm>
-option.
-</Para>
-
-<Para>
-If GHC persists in being a bad memory citizen, please report it as a
-bug.
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Don't use too much memory!</Term>
-<ListItem>
-<Para>
-As soon as GHC plus its “fellow citizens” (other processes on your
-machine) start using more than the <Emphasis>real memory</Emphasis> on your
-machine, and the machine starts “thrashing,” <Emphasis>the party is
-over</Emphasis>. Compile times will be worse than terrible! Use something
-like the csh-builtin <Command>time</Command> command to get a report on how many page
-faults you're getting.
-</Para>
-
-<Para>
-If you don't know what virtual memory, thrashing, and page faults are,
-or you don't know the memory configuration of your machine,
-<Emphasis>don't</Emphasis> try to be clever about memory use: you'll just make
-your life a misery (and for other people, too, probably).
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Try to use local disks when linking:</Term>
-<ListItem>
-<Para>
-Because Haskell objects and libraries tend to be large, it can take
-many real seconds to slurp the bits to/from a remote filesystem.
-</Para>
-
-<Para>
-It would be quite sensible to <Emphasis>compile</Emphasis> on a fast machine using
-remotely-mounted disks; then <Emphasis>link</Emphasis> on a slow machine that had
-your disks directly mounted.
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Don't derive/use <Function>Read</Function> unnecessarily:</Term>
-<ListItem>
-<Para>
-It's ugly and slow.
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>GHC compiles some program constructs slowly:</Term>
-<ListItem>
-<Para>
-Deeply-nested list comprehensions seem to be one such; in the past,
-very large constant tables were bad, too.
-</Para>
-
-<Para>
-We'd rather you reported such behaviour as a bug, so that we can try
-to correct it.
-</Para>
-
-<Para>
-The parts of the compiler that seem most prone to wandering off for a
-long time are the abstract interpreters (strictness and update
-analysers). You can turn these off individually with
-<Option>-fno-strictness</Option><IndexTerm><Primary>-fno-strictness anti-option</Primary></IndexTerm> and
-<Option>-fno-update-analysis</Option>.<IndexTerm><Primary>-fno-update-analysis anti-option</Primary></IndexTerm>
-</Para>
-
-<Para>
-To figure out which part of the compiler is badly behaved, the
-<Option>-dshow-passes</Option><IndexTerm><Primary>-dshow-passes option</Primary></IndexTerm> option is your
-friend.
-</Para>
-
-<Para>
-If your module has big wads of constant data, GHC may produce a huge
-basic block that will cause the native-code generator's register
-allocator to founder. Bring on <Option>-fvia-C</Option><IndexTerm><Primary>-fvia-C option</Primary></IndexTerm>
-(not that GCC will be that quick about it, either).
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Avoid the consistency-check on linking:</Term>
-<ListItem>
-<Para>
-Use <Option>-no-link-chk</Option><IndexTerm><Primary>-no-link-chk</Primary></IndexTerm>; saves effort. This is
-probably safe in a I-only-compile-things-one-way setup.
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Explicit <Literal>import</Literal> declarations:</Term>
-<ListItem>
-<Para>
-Instead of saying <Literal>import Foo</Literal>, say <Literal>import Foo (...stuff I want...)</Literal>.
-</Para>
-
-<Para>
-Truthfully, the reduction on compilation time will be very small.
-However, judicious use of <Literal>import</Literal> declarations can make a
-program easier to understand, so it may be a good idea anyway.
-</Para>
-</ListItem>
-</VarListEntry>
-</VariableList>
-</Para>
-
-</Sect1>
-
-<Sect1 id="faster">
-<Title>Faster: producing a program that runs quicker
-</Title>
-
-<Para>
-<IndexTerm><Primary>faster programs, how to produce</Primary></IndexTerm>
-</Para>
-
-<Para>
-The key tool to use in making your Haskell program run faster are
-GHC's profiling facilities, described separately in <XRef LinkEnd="profiling">. There is <Emphasis>no substitute</Emphasis> for
-finding where your program's time/space is <Emphasis>really</Emphasis> going, as
-opposed to where you imagine it is going.
-</Para>
-
-<Para>
-Another point to bear in mind: By far the best way to improve a
-program's performance <Emphasis>dramatically</Emphasis> is to use better
-algorithms. Once profiling has thrown the spotlight on the guilty
-time-consumer(s), it may be better to re-think your program than to
-try all the tweaks listed below.
-</Para>
-
-<Para>
-Another extremely efficient way to make your program snappy is to use
-library code that has been Seriously Tuned By Someone Else. You
-<Emphasis>might</Emphasis> be able to write a better quicksort than the one in the
-HBC library, but it will take you much longer than typing <Literal>import
-QSort</Literal>. (Incidentally, it doesn't hurt if the Someone Else is Lennart
-Augustsson.)
-</Para>
-
-<Para>
-Please report any overly-slow GHC-compiled programs. The current
-definition of “overly-slow” is “the HBC-compiled version ran
-faster”…
-</Para>
-
-<Para>
-<VariableList>
-
-<VarListEntry>
-<Term>Optimise, using <Option>-O</Option> or <Option>-O2</Option>:</Term>
-<ListItem>
-<Para>
-This is the most basic way
-to make your program go faster. Compilation time will be slower,
-especially with <Option>-O2</Option>.
-</Para>
-
-<Para>
-At present, <Option>-O2</Option> is nearly indistinguishable from <Option>-O</Option>.
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Compile via C and crank up GCC:</Term>
-<ListItem>
-<Para>
-Even with <Option>-O</Option>, GHC tries to
-use a native-code generator, if available. But the native
-code-generator is designed to be quick, not mind-bogglingly clever.
-Better to let GCC have a go, as it tries much harder on register
-allocation, etc.
-</Para>
-
-<Para>
-So, when we want very fast code, we use: <Option>-O -fvia-C -O2-for-C</Option>.
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Overloaded functions are not your friend:</Term>
-<ListItem>
-<Para>
-Haskell's overloading (using type classes) is elegant, neat, etc.,
-etc., but it is death to performance if left to linger in an inner
-loop. How can you squash it?
-</Para>
-
-<Para>
-<VariableList>
-
-<VarListEntry>
-<Term>Give explicit type signatures:</Term>
-<ListItem>
-<Para>
-Signatures are the basic trick; putting them on exported, top-level
-functions is good software-engineering practice, anyway. (Tip: using
-<Option>-fwarn-missing-signatures</Option><IndexTerm><Primary>-fwarn-missing-signatures
-option</Primary></IndexTerm> can help enforce good signature-practice).
-</Para>
-
-<Para>
-The automatic specialisation of overloaded functions (with <Option>-O</Option>)
-should take care of overloaded local and/or unexported functions.
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Use <Literal>SPECIALIZE</Literal> pragmas:</Term>
-<ListItem>
-<Para>
-<IndexTerm><Primary>SPECIALIZE pragma</Primary></IndexTerm>
-<IndexTerm><Primary>overloading, death to</Primary></IndexTerm>
-</Para>
-
-<Para>
-Specialize the overloading on key functions in your program. See
-<XRef LinkEnd="specialize-pragma"> and
-<XRef LinkEnd="specialize-instance-pragma">.
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>“But how do I know where overloading is creeping in?”:</Term>
-<ListItem>
-<Para>
-A low-tech way: grep (search) your interface files for overloaded
-type signatures; e.g.,:
-
-<ProgramListing>
-% egrep '^[a-z].*::.*=>' *.hi
-</ProgramListing>
-
-</Para>
-</ListItem>
-</VarListEntry>
-</VariableList>
-</Para>
-</ListItem>
-</VarListEntry>
-<VarListEntry>
-<Term>Strict functions are your dear friends:</Term>
-<ListItem>
-<Para>
-and, among other things, lazy pattern-matching is your enemy.
-</Para>
-
-<Para>
-(If you don't know what a “strict function” is, please consult a
-functional-programming textbook. A sentence or two of
-explanation here probably would not do much good.)
-</Para>
-
-<Para>
-Consider these two code fragments:
-
-<ProgramListing>
+<chapter id="sooner-faster-quicker">
+<title>Advice on: sooner, faster, smaller, thriftier</title>
+
+<para>Please advise us of other “helpful hints” that
+should go here!</para>
+
+<sect1 id="sooner">
+<title>Sooner: producing a program more quickly
+</title>
+
+<indexterm><primary>compiling faster</primary></indexterm>
+<indexterm><primary>faster compiling</primary></indexterm>
+
+ <variablelist>
+ <varlistentry>
+ <term>Don't use <option>-O</option> or (especially) <option>-O2</option>:</term>
+ <listitem>
+ <para>By using them, you are telling GHC that you are
+ willing to suffer longer compilation times for
+ better-quality code.</para>
+
+ <para>GHC is surprisingly zippy for normal compilations
+ without <option>-O</option>!</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>Use more memory:</term>
+ <listitem>
+        <para>Within reason, more memory for heap space means less
+        garbage collection for GHC, which means less compilation
+        time.  If you use the <option>-Rghc-timing</option> option,
+        you'll get a brief timing and garbage-collection report.
+        (You can also use the cheap-and-nasty <option>+RTS -Sstderr
+        -RTS</option> options to send detailed GC statistics
+        straight to standard error.)</para>
+
+ <para>If it says you're using more than 20% of total
+ time in garbage collecting, then more memory would
+ help.</para>
+
+        <para>If the heap size is approaching the maximum (64M by
+        default), and you have lots of memory, try increasing the
+        maximum with the
+        <option>-M&lt;size&gt;</option><indexterm><primary>-M&lt;size&gt;
+        option</primary></indexterm> option, e.g.: <command>ghc -c
+        -O -M1024m Foo.hs</command>.</para>
+
+        <para>Increasing the default allocation area size used by
+        the compiler's RTS might also help: use the
+        <option>-A&lt;size&gt;</option><indexterm><primary>-A&lt;size&gt;
+        option</primary></indexterm> option.</para>
+
+ <para>If GHC persists in being a bad memory citizen, please
+ report it as a bug.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>Don't use too much memory!</term>
+ <listitem>
+        <para>As soon as GHC plus its “fellow citizens”
+        (other processes on your machine) start using more than the
+        <emphasis>real memory</emphasis> on your machine, and the
+        machine starts “thrashing,” <emphasis>the party
+        is over</emphasis>.  Compile times will be worse than
+        terrible!  Use something like the csh builtin
+        <command>time</command> command to get a report on how many
+        page faults you're getting.</para>
+
+        <para>If you don't know what virtual memory, thrashing, and
+        page faults are, or you don't know the memory configuration
+        of your machine, <emphasis>don't</emphasis> try to be clever
+        about memory use: you'll just make your life a misery (and
+        probably other people's, too).</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>Try to use local disks when linking:</term>
+ <listitem>
+ <para>Because Haskell objects and libraries tend to be
+ large, it can take many real seconds to slurp the bits
+ to/from a remote filesystem.</para>
+
+        <para>It would be quite sensible to
+        <emphasis>compile</emphasis> on a fast machine using
+        remotely-mounted disks, then <emphasis>link</emphasis> on a
+        slow machine that has your disks directly mounted.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+        <term>Don't derive/use <function>Read</function> unnecessarily:</term>
+ <listitem>
+ <para>It's ugly and slow.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>GHC compiles some program constructs slowly:</term>
+ <listitem>
+ <para>Deeply-nested list comprehensions seem to be one such;
+ in the past, very large constant tables were bad,
+ too.</para>
+
+ <para>We'd rather you reported such behaviour as a bug, so
+ that we can try to correct it.</para>
+
+        <para>The part of the compiler that is occasionally prone
+        to wandering off for a long time is the strictness analyser.
+        You can turn it off with
+        <option>-fno-strictness</option>.<indexterm><primary>-fno-strictness
+        anti-option</primary></indexterm></para>
+
+ <para>To figure out which part of the compiler is badly
+ behaved, the
+ <option>-v2</option><indexterm><primary><option>-v</option></primary>
+ </indexterm> option is your friend.</para>
+
+ <para>If your module has big wads of constant data, GHC may
+ produce a huge basic block that will cause the native-code
+ generator's register allocator to founder. Bring on
+ <option>-fvia-C</option><indexterm><primary>-fvia-C
+ option</primary></indexterm> (not that GCC will be that
+ quick about it, either).</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>Explicit <literal>import</literal> declarations:</term>
+ <listitem>
+        <para>Instead of saying <literal>import Foo</literal>, say
+        <literal>import Foo (...stuff I want...)</literal>.  You can
+        get GHC to tell you the minimal set of required imports by
+        using the <option>-ddump-minimal-imports</option> option
+        (see <xref linkend="hi-options">).</para>
+
+ <para>Truthfully, the reduction on compilation time will be
+ very small. However, judicious use of
+ <literal>import</literal> declarations can make a program
+ easier to understand, so it may be a good idea
+ anyway.</para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </sect1>
+
+ <sect1 id="faster">
+ <title>Faster: producing a program that runs quicker</title>
+
+ <indexterm><primary>faster programs, how to produce</primary></indexterm>
+
+    <para>The key tools to use in making your Haskell program run
+    faster are GHC's profiling facilities, described separately in
+    <xref linkend="profiling">.  There is <emphasis>no
+    substitute</emphasis> for finding out where your program's
+    time/space is <emphasis>really</emphasis> going, as opposed to
+    where you imagine it is going.</para>
+
+    <para>Another point to bear in mind: by far the best way to
+    improve a program's performance
+    <emphasis>dramatically</emphasis> is to use better algorithms.
+    Once profiling has thrown the spotlight on the guilty
+    time-consumer(s), it may be better to re-think your program
+    than to try all the tweaks listed below.</para>
+
+    <para>Another extremely efficient way to make your program
+    snappy is to use library code that has been Seriously Tuned By
+    Someone Else.  You <emphasis>might</emphasis> be able to write
+    a better sort than the one in <literal>Data.List</literal>, but
+    it will take you much longer than typing <literal>import
+    Data.List (sort)</literal>.</para>
+
+    <para>Please report any overly-slow GHC-compiled programs.
+    Since GHC doesn't have any credible competition in the
+    performance department these days, it's hard to say what
+    “overly-slow” means, so just use your judgement!  Of
+    course, if a GHC-compiled program runs slower than the same
+    program compiled with NHC or Hugs, then it's definitely a
+    bug.</para>
+
+ <variablelist>
+ <varlistentry>
+ <term>Optimise, using <option>-O</option> or <option>-O2</option>:</term>
+ <listitem>
+ <para>This is the most basic way to make your program go
+ faster. Compilation time will be slower, especially with
+ <option>-O2</option>.</para>
+
+ <para>At present, <option>-O2</option> is nearly
+ indistinguishable from <option>-O</option>.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>Compile via C and crank up GCC:</term>
+ <listitem>
+ <para>The native code-generator is designed to be quick, not
+ mind-bogglingly clever. Better to let GCC have a go, as it
+ tries much harder on register allocation, etc.</para>
+
+        <para>At the moment, if you turn on <option>-O</option>,
+        you get GCC instead.  This may change in the future.</para>
+
+ <para>So, when we want very fast code, we use: <option>-O
+ -fvia-C</option>.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>Overloaded functions are not your friend:</term>
+ <listitem>
+ <para>Haskell's overloading (using type classes) is elegant,
+ neat, etc., etc., but it is death to performance if left to
+ linger in an inner loop. How can you squash it?</para>
+
+ <variablelist>
+ <varlistentry>
+ <term>Give explicit type signatures:</term>
+ <listitem>
+            <para>Signatures are the basic trick; putting them on
+            exported, top-level functions is good
+            software-engineering practice, anyway.  (Tip: using
+            <option>-fwarn-missing-signatures</option><indexterm><primary>-fwarn-missing-signatures
+            option</primary></indexterm> can help enforce good
+            signature practice.)</para>
+
+ <para>The automatic specialisation of overloaded
+ functions (with <option>-O</option>) should take care
+ of overloaded local and/or unexported functions.</para>
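+
+            <para>As a hypothetical illustration (not from the
+            sources), compare an overloaded numeric function with
+            its signature-constrained version; the monomorphic
+            signature removes the run-time class dictionary:</para>
+
+<programlisting>
+-- Without a signature, GHC infers the overloaded type
+--   f :: Num a => a -> a
+-- and passes a Num dictionary at run-time.
+
+-- With a monomorphic signature, the arithmetic is direct:
+f :: Int -> Int
+f x = x * 2 + 1
+</programlisting>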
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>Use <literal>SPECIALIZE</literal> pragmas:</term>
+ <listitem>
+ <indexterm><primary>SPECIALIZE pragma</primary></indexterm>
+ <indexterm><primary>overloading, death to</primary></indexterm>
+
+            <para>Specialize the overloading on key functions in
+            your program.  See <xref linkend="specialize-pragma">
+            and <xref linkend="specialize-instance-pragma">.</para>
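+
+            <para>A sketch of what such a pragma looks like (the
+            lookup function here is only an illustration; adapt the
+            names and types to your own code):</para>
+
+<programlisting>
+hammeredLookup :: Ord key => [(key, value)] -> key -> value
+hammeredLookup []           _ = error "not found"
+hammeredLookup ((k,v):rest) key
+  | key == k  = v
+  | otherwise = hammeredLookup rest key
+
+-- Ask GHC to compile a dictionary-free copy for Int keys:
+{-# SPECIALIZE hammeredLookup :: [(Int, value)] -> Int -> value #-}
+</programlisting>
+
+            <para>GHC then uses the specialised, dictionary-free
+            version at call sites where the key is known to be an
+            <literal>Int</literal>.</para>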
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>“But how do I know where overloading is creeping in?”:</term>
+ <listitem>
+ <para>A low-tech way: grep (search) your interface
+ files for overloaded type signatures. You can view
+ interface files using the
+ <option>--show-iface</option> option (see <xref
+ linkend="hi-options">).
+
+<programlisting>
+% ghc --show-iface Foo.hi | egrep '^[a-z].*::.*=>'
+</programlisting>
+</para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>Strict functions are your dear friends:</term>
+ <listitem>
+ <para>and, among other things, lazy pattern-matching is your
+ enemy.</para>
+
+ <para>(If you don't know what a “strict
+ function” is, please consult a functional-programming
+ textbook. A sentence or two of explanation here probably
+ would not do much good.)</para>
+
+ <para>Consider these two code fragments:
+
+<programlisting>