</Title>
<Para>
-Please advise us of other ``helpful hints'' that should go here!
+Please advise us of other “helpful hints” that should go here!
</Para>
<Sect1 id="sooner">
<VariableList>
<VarListEntry>
-<Term>Don't use <Literal>-O</Literal> or (especially) <Literal>-O2</Literal>:</Term>
+<Term>Don't use <Option>-O</Option> or (especially) <Option>-O2</Option>:</Term>
<ListItem>
<Para>
By using them, you are telling GHC that you are willing to suffer
longer compilation times for better-quality code.
</Para>
<Para>
-GHC is surprisingly zippy for normal compilations without <Literal>-O</Literal>!
+GHC is surprisingly zippy for normal compilations without <Option>-O</Option>!
</Para>
</ListItem>
</VarListEntry>
<Para>
Within reason, more memory for heap space means less garbage
collection for GHC, which means less compilation time. If you use
-the <Literal>-Rgc-stats</Literal> option, you'll get a garbage-collector report.
-(Again, you can use the cheap-and-nasty <Literal>-optCrts-Sstderr</Literal> option to
+the <Option>-Rgc-stats</Option> option, you'll get a garbage-collector report.
+(Again, you can use the cheap-and-nasty <Option>-optCrts-Sstderr</Option> option to
send the GC stats straight to standard error.)
</Para>
<Para>
If the heap size is approaching the maximum (64M by default), and you
have lots of memory, try increasing the maximum with the
-<Literal>-M<size></Literal><IndexTerm><Primary>-M<size> option</Primary></IndexTerm> option, e.g.: <Literal>ghc -c -O
--M1024m Foo.hs</Literal>.
+<Option>-M<size></Option><IndexTerm><Primary>-M<size> option</Primary></IndexTerm> option, e.g.: <Command>ghc -c -O
+-M1024m Foo.hs</Command>.
</Para>
<Para>
Increasing the default allocation area size used by the compiler's RTS
-might also help: use the <Literal>-A<size></Literal><IndexTerm><Primary>-A<size> option</Primary></IndexTerm>
+might also help: use the <Option>-A<size></Option><IndexTerm><Primary>-A<size> option</Primary></IndexTerm>
option.
</Para>
<Term>Don't use too much memory!</Term>
<ListItem>
<Para>
-As soon as GHC plus its ``fellow citizens'' (other processes on your
+As soon as GHC plus its “fellow citizens” (other processes on your
machine) start using more than the <Emphasis>real memory</Emphasis> on your
-machine, and the machine starts ``thrashing,'' <Emphasis>the party is
+machine, and the machine starts “thrashing,” <Emphasis>the party is
over</Emphasis>. Compile times will be worse than terrible! Use something
-like the csh-builtin <Literal>time</Literal> command to get a report on how many page
+like the csh-builtin <Command>time</Command> command to get a report on how many page
faults you're getting.
</Para>
</ListItem>
</VarListEntry>
<VarListEntry>
-<Term>Don't derive/use <Literal>Read</Literal> unnecessarily:</Term>
+<Term>Don't derive/use <Function>Read</Function> unnecessarily:</Term>
<ListItem>
<Para>
It's ugly and slow.
</Para>
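As a sketch of the alternative (the type and parser here are invented purely for illustration), a tiny hand-written parser sidesteps the derived <Function>Read</Function> machinery entirely:

```haskell
-- Hypothetical example: a small enumeration with a direct,
-- one-pass parser instead of a derived Read instance.
data Colour = Red | Green | Blue deriving (Show, Eq)

parseColour :: String -> Maybe Colour
parseColour "Red"   = Just Red
parseColour "Green" = Just Green
parseColour "Blue"  = Just Blue
parseColour _       = Nothing

main :: IO ()
main = print (parseColour "Blue")
```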
<Para>
-The parts of the compiler that seem most prone to wandering off for a
-long time are the abstract interpreters (strictness and update
-analysers). You can turn these off individually with
-<Literal>-fno-strictness</Literal><IndexTerm><Primary>-fno-strictness anti-option</Primary></IndexTerm> and
-<Literal>-fno-update-analysis</Literal>.<IndexTerm><Primary>-fno-update-analysis anti-option</Primary></IndexTerm>
+The part of the compiler that is occasionally prone to wandering off
+for a long time is the strictness analyser. You can turn this off
+individually with <Option>-fno-strictness</Option>.
+<IndexTerm><Primary>-fno-strictness anti-option</Primary></IndexTerm>
</Para>
<Para>
To figure out which part of the compiler is badly behaved, the
-<Literal>-dshow-passes</Literal><IndexTerm><Primary>-dshow-passes option</Primary></IndexTerm> option is your
+<Option>-dshow-passes</Option><IndexTerm><Primary>-dshow-passes option</Primary></IndexTerm> option is your
friend.
</Para>
<Para>
If your module has big wads of constant data, GHC may produce a huge
basic block that will cause the native-code generator's register
-allocator to founder. Bring on <Literal>-fvia-C</Literal><IndexTerm><Primary>-fvia-C option</Primary></IndexTerm>
+allocator to founder. Bring on <Option>-fvia-C</Option><IndexTerm><Primary>-fvia-C option</Primary></IndexTerm>
(not that GCC will be that quick about it, either).
</Para>
</ListItem>
<Term>Avoid the consistency-check on linking:</Term>
<ListItem>
<Para>
-Use <Literal>-no-link-chk</Literal><IndexTerm><Primary>-no-link-chk</Primary></IndexTerm>; saves effort. This is
+Use <Option>-no-link-chk</Option><IndexTerm><Primary>-no-link-chk</Primary></IndexTerm>; saves effort. This is
probably safe in an I-only-compile-things-one-way setup.
</Para>
</ListItem>
<Para>
Please report any overly-slow GHC-compiled programs. The current
-definition of ``overly-slow'' is ``the HBC-compiled version ran
-faster''…
+definition of “overly-slow” is “the HBC-compiled version ran
+faster”…
</Para>
<Para>
<VariableList>
<VarListEntry>
-<Term>Optimise, using <Literal>-O</Literal> or <Literal>-O2</Literal>:</Term>
+<Term>Optimise, using <Option>-O</Option> or <Option>-O2</Option>:</Term>
<ListItem>
<Para>
This is the most basic way
to make your program go faster. Compilation times will be longer,
-especially with <Literal>-O2</Literal>.
+especially with <Option>-O2</Option>.
</Para>
<Para>
-At present, <Literal>-O2</Literal> is nearly indistinguishable from <Literal>-O</Literal>.
+At present, <Option>-O2</Option> is nearly indistinguishable from <Option>-O</Option>.
</Para>
</ListItem>
</VarListEntry>
<Term>Compile via C and crank up GCC:</Term>
<ListItem>
<Para>
-Even with <Literal>-O</Literal>, GHC tries to
+Even with <Option>-O</Option>, GHC tries to
use a native-code generator, if available. But the native
code-generator is designed to be quick, not mind-bogglingly clever.
Better to let GCC have a go, as it tries much harder on register
allocation.
</Para>
<Para>
-So, when we want very fast code, we use: <Literal>-O -fvia-C -O2-for-C</Literal>.
+So, when we want very fast code, we use: <Option>-O -fvia-C -O2-for-C</Option>.
</Para>
</ListItem>
</VarListEntry>
<Para>
Signatures are the basic trick; putting them on exported, top-level
functions is good software-engineering practice, anyway. (Tip: using
-<Literal>-fwarn-missing-signatures</Literal><IndexTerm><Primary>-fwarn-missing-signatures
+<Option>-fwarn-missing-signatures</Option><IndexTerm><Primary>-fwarn-missing-signatures
option</Primary></IndexTerm> can help enforce good signature-practice).
</Para>
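A minimal sketch of the trick (the function name is hypothetical): an explicit monomorphic signature pins the function to <Literal>Int</Literal>, so GHC need not compile the general overloaded version with its dictionary argument.

```haskell
-- Without the signature, this would get the overloaded type
-- Num a => [a] -> a and pay for a class dictionary at each call.
-- The explicit signature makes it monomorphic in Int.
sumInts :: [Int] -> Int
sumInts = foldl (+) 0

main :: IO ()
main = print (sumInts [1 .. 100])
```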
<Para>
-The automatic specialisation of overloaded functions (with <Literal>-O</Literal>)
+The automatic specialisation of overloaded functions (with <Option>-O</Option>)
should take care of overloaded local and/or unexported functions.
</Para>
</ListItem>
</ListItem>
</VarListEntry>
<VarListEntry>
-<Term>``But how do I know where overloading is creeping in?'':</Term>
+<Term>“But how do I know where overloading is creeping in?”:</Term>
<ListItem>
<Para>
A low-tech way: grep (search) your interface files for overloaded
functions.
</Para>
<Para>
-(If you don't know what a ``strict function'' is, please consult a
+(If you don't know what a “strict function” is, please consult a
functional-programming textbook. A sentence or two of
explanation here probably would not do much good.)
</Para>
</ListItem>
</VarListEntry>
<VarListEntry>
-<Term>``How do I find out a function's strictness?''</Term>
+<Term>“How do I find out a function's strictness?”</Term>
<ListItem>
<Para>
Don't guess—look it up.
<Para>
Look for your function in the interface file, then for the third field
-in the pragma; it should say <Literal>_S_ <string></Literal>. The <Literal><string></Literal>
-gives the strictness of the function's arguments. <Literal>L</Literal> is lazy
-(bad), <Literal>S</Literal> and <Literal>E</Literal> are strict (good), <Literal>P</Literal> is ``primitive'' (good),
-<Literal>U(...)</Literal> is strict and
-``unpackable'' (very good), and <Literal>A</Literal> is absent (very good).
+in the pragma; it should say <Literal>__S
+<string></Literal>. The <Literal><string></Literal> gives
+the strictness of the function's arguments. <Function>L</Function> is
+lazy (bad), <Function>S</Function> and <Function>E</Function> are
+strict (good), <Function>P</Function> is “primitive”
+(good), <Function>U(...)</Function> is strict and
+“unpackable” (very good), and <Function>A</Function> is
+absent (very good).
</Para>
<Para>
-For an ``unpackable'' <Literal>U(...)</Literal> argument, the info inside
+For an “unpackable” <Function>U(...)</Function> argument, the info inside
tells the strictness of its components. So, if the argument is a
-pair, and it says <Literal>U(AU(LSS))</Literal>, that means ``the first component of the
+pair, and it says <Function>U(AU(LSS))</Function>, that means “the first component of the
pair isn't used; the second component is itself unpackable, with three
-components (lazy in the first, strict in the second \& third).''
+components (lazy in the first, strict in the second \& third).”
</Para>
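For instance, a function whose argument has roughly that <Function>U(AU(LSS))</Function> shape (invented here for illustration) might look like:

```haskell
-- Hypothetical function matching the example's shape: the pair's
-- first component is never touched, and of the inner triple only
-- the second and third components are demanded.
f :: (Int, (Int, Int, Int)) -> Int
f (_, (_x, y, z)) = y + z

main :: IO ()
main = print (f (undefined, (undefined, 2, 3)))
```

Note that passing <Literal>undefined</Literal> in the unused positions is fine: they are never evaluated.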
<Para>
-If the function isn't exported, just compile with the extra flag <Literal>-ddump-simpl</Literal>;
+If the function isn't exported, just compile with the extra flag <Option>-ddump-simpl</Option>;
next to the signature for any binder, it will print the self-same
pragmatic information as would be put in an interface file.
(Besides, Core syntax is fun to look at!)
<ListItem>
<Para>
(The form in which GHC manipulates your code.) Just run your
-compilation with <Literal>-ddump-simpl</Literal> (don't forget the <Literal>-O</Literal>).
+compilation with <Option>-ddump-simpl</Option> (don't forget the <Option>-O</Option>).
</Para>
<Para>
<ListItem>
<Para>
When you are <Emphasis>really</Emphasis> desperate for speed, and you want to get
-right down to the ``raw bits.'' Please see <XRef LinkEnd="glasgow-unboxed"> for some information about using unboxed
+right down to the “raw bits.” Please see <XRef LinkEnd="glasgow-unboxed"> for some information about using unboxed
types.
</Para>
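A minimal GHC-only sketch of what this looks like (the function name is invented; the <Literal>MagicHash</Literal> extension is required to write <Literal>#</Literal>-suffixed names and literals):

```haskell
{-# LANGUAGE MagicHash #-}
-- Int# is an unboxed machine integer, manipulated directly
-- with primops such as (+#); I# boxes it back up into Int.
import GHC.Exts (Int (I#), (+#))

plusThree :: Int -> Int
plusThree (I# x) = I# (x +# 3#)

main :: IO ()
main = print (plusThree 4)
```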
</ListItem>
</VarListEntry>
<VarListEntry>
-<Term>Use <Literal>_ccall_s</Literal> (a GHC extension) to plug into fast libraries:</Term>
+<Term>Use <Function>_ccall_s</Function> (a GHC extension) to plug into fast libraries:</Term>
<ListItem>
<Para>
This may take real work, but… There exist piles of
massively-tuned library code, and the best thing is not to compete
with it, but link with it.
<Term>Use a bigger heap!</Term>
<ListItem>
<Para>
-If your program's GC stats (<Literal>-S</Literal><IndexTerm><Primary>-S RTS option</Primary></IndexTerm> RTS option)
+If your program's GC stats (<Option>-S</Option><IndexTerm><Primary>-S RTS option</Primary></IndexTerm> RTS option)
indicate that it's doing lots of garbage-collection (say, more than
20% of execution time), more memory might help—with the
-<Literal>-M<size></Literal><IndexTerm><Primary>-M<size> RTS option</Primary></IndexTerm> or
-<Literal>-A<size></Literal><IndexTerm><Primary>-A<size> RTS option</Primary></IndexTerm> RTS options (see
+<Option>-M<size></Option><IndexTerm><Primary>-M<size> RTS option</Primary></IndexTerm> or
+<Option>-A<size></Option><IndexTerm><Primary>-A<size> RTS option</Primary></IndexTerm> RTS options (see
<XRef LinkEnd="rts-options-gc">).
</Para>
</ListItem>
</Para>
<Para>
-Decrease the ``go-for-it'' threshold for unfolding smallish
+Decrease the “go-for-it” threshold for unfolding smallish
expressions. Give a
-<Literal>-funfolding-use-threshold0</Literal><IndexTerm><Primary>-funfolding-use-threshold0
-option</Primary></IndexTerm> option for the extreme case. (``Only unfoldings with
-zero cost should proceed.'') Warning: except in certain specialiised
+<Option>-funfolding-use-threshold0</Option><IndexTerm><Primary>-funfolding-use-threshold0
+option</Primary></IndexTerm> option for the extreme case. (“Only unfoldings with
+zero cost should proceed.”) Warning: except in certain specialised
cases (like Happy parsers) this is likely to actually
<Emphasis>increase</Emphasis> the size of your program, because unfolding
generally enables extra simplifying optimisations to be performed.
</Para>
<Para>
-Avoid <Literal>Read</Literal>.
+Avoid <Function>Read</Function>.
</Para>
<Para>
</Para>
<Para>
-``I think I have a space leak…'' Re-run your program with
-<Literal>+RTS -Sstderr</Literal>,<IndexTerm><Primary>-Sstderr RTS option</Primary></IndexTerm> and remove all doubt!
-(You'll see the heap usage get bigger and bigger…) [Hmmm…this
-might be even easier with the <Literal>-F2s</Literal><IndexTerm><Primary>-F2s RTS option</Primary></IndexTerm> RTS
-option; so… <Literal>./a.out +RTS -Sstderr -F2s</Literal>...]
+“I think I have a space leak…” Re-run your program
+with <Option>+RTS -Sstderr</Option>, and remove all doubt! (You'll
+see the heap usage get bigger and bigger…)
+[Hmmm…this might be even easier with the
+<Option>-G1</Option> RTS option; so… <Command>./a.out +RTS
+-Sstderr -G1</Command>…]
+<IndexTerm><Primary>-G RTS option</Primary></IndexTerm>
+<IndexTerm><Primary>-Sstderr RTS option</Primary></IndexTerm>
</Para>
<Para>
-Once again, the profiling facilities (<XRef LinkEnd="profiling">) are the basic tool for demystifying the space
-behaviour of your program.
+Once again, the profiling facilities (<XRef LinkEnd="profiling">) are
+the basic tool for demystifying the space behaviour of your program.
</Para>
<Para>