Virtually all of the Glasgow extensions serve to give you access to
the underlying facilities with which we implement Haskell. Thus, you
can get at the Raw Iron, if you are willing to write some non-standard
-code at a more primitive level. You need not be ``stuck'' on
+code at a more primitive level. You need not be “stuck” on
performance because of the implementation costs of Haskell's
-``high-level'' features—you can always code ``under'' them. In an
+“high-level” features—you can always code “under” them. In an
extreme case, you can write all your time-critical code in C, and then
just glue it together with Haskell!
</Para>
<ListItem>
<Para>
You can get right down to the raw machine types and operations;
-included in this are ``primitive arrays'' (direct access to Big Wads
+included in this are “primitive arrays” (direct access to Big Wads
of Bytes). Please see <XRef LinkEnd="glasgow-unboxed"> and following.
</Para>
</ListItem>
<Para>
Before you get too carried away working at the lowest level (e.g.,
sloshing <Literal>MutableByteArray#</Literal>s around your program), you may wish to
-check if there are system libraries that provide a ``Haskellised
-veneer'' over the features you want. See <XRef LinkEnd="ghc-prelude">.
+check if there are system libraries that provide a “Haskellised
+veneer” over the features you want. See <XRef LinkEnd="ghc-prelude">.
</Para>
<Sect1 id="glasgow-unboxed">
</Para>
<Para>
-These types correspond to the ``raw machine'' types you would use in
+These types correspond to the “raw machine” types you would use in
C: <Literal>Int#</Literal> (long int), <Literal>Double#</Literal> (double), <Literal>Addr#</Literal> (void *), etc. The
<Emphasis>primitive operations</Emphasis> (PrimOps) on these types are what you
might expect; e.g., <Literal>(+#)</Literal> is addition on <Literal>Int#</Literal>s, and is the
machine addition we all know and love, usually a single instruction.
<Para>
Nevertheless, a numerically-intensive program using unboxed types can
-go a <Emphasis>lot</Emphasis> faster than its ``standard'' counterpart—we saw a
+go a <Emphasis>lot</Emphasis> faster than its “standard” counterpart—we saw a
threefold speedup on one example.
</Para>
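<Para>
To give the flavour of such code, here is a minimal sketch (hedged: written
in the modern spelling, which requires the <Literal>MagicHash</Literal> extension and imports
from <Literal>GHC.Exts</Literal>; the concrete syntax for unboxed types has varied between
GHC releases):
</Para>
<Para>
<ProgramListing>
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int#, (+#), (*#), Int(I#))

-- Compute i + i*i entirely in machine arithmetic on Int#,
-- unpacking and repacking the boxed Int at the boundary.
sumSquare :: Int -> Int
sumSquare (I# i) = I# (i +# (i *# i))
</ProgramListing>
</Para>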
<Para>
This monad underlies our implementation of arrays, mutable and
-immutable, and our implementation of I/O, including ``C calls''.
+immutable, and our implementation of I/O, including “C calls”.
</Para>
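<Para>
A small example of the state-transformer style (hedged: this uses the
interface as it appears in current libraries, <Literal>Control.Monad.ST</Literal> and
<Literal>Data.STRef</Literal>; the original Glasgow names differed slightly):
</Para>
<Para>
<ProgramListing>
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef, readSTRef)

-- Sum a list using an in-place mutable reference;
-- runST guarantees that no mutation escapes.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef acc (+ x)) xs
  readSTRef acc
</ProgramListing>
</Para>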
<Para>
<Term>Immutable:</Term>
<ListItem>
<Para>
-Arrays that do not change (as with ``standard'' Haskell arrays); you
+Arrays that do not change (as with “standard” Haskell arrays); you
can only read from them. Obviously, they do not need the care and
attention of the state-transformer monad.
</Para>
<Term>Mutable:</Term>
<ListItem>
<Para>
-Arrays that may be changed or ``mutated.'' All the operations on them
+Arrays that may be changed or “mutated.” All the operations on them
live within the state-transformer monad and the updates happen
<Emphasis>in-place</Emphasis>.
</Para>
</ListItem>
</VarListEntry>
<VarListEntry>
-<Term>``Static'' (in C land):</Term>
+<Term>“Static” (in C land):</Term>
<ListItem>
<Para>
A C routine may pass an <Literal>Addr#</Literal> pointer back into Haskell land. There
are then primitive operations with which you may merrily grab values
-over in C land, by indexing off the ``static'' pointer.
+over in C land, by indexing off the “static” pointer.
</Para>
</ListItem>
</VarListEntry>
<VarListEntry>
-<Term>``Stable'' pointers:</Term>
+<Term>“Stable” pointers:</Term>
<ListItem>
<Para>
If, for some reason, you wish to hand a Haskell pointer (i.e.,
<Emphasis>not</Emphasis> an unboxed value) to a C routine, you first make the
-pointer ``stable,'' so that the garbage collector won't forget that it
+pointer “stable,” so that the garbage collector won't forget that it
exists. That is, GHC provides a safe way to pass Haskell pointers to
C.
</Para>
</ListItem>
</VarListEntry>
<VarListEntry>
-<Term>``Foreign objects'':</Term>
+<Term>“Foreign objects”:</Term>
<ListItem>
<Para>
-A ``foreign object'' is a safe way to pass an external object (a
+A “foreign object” is a safe way to pass an external object (a
C-allocated pointer, say) to Haskell and have Haskell do the Right
Thing when it no longer references the object. So, for example, C
-could pass a large bitmap over to Haskell and say ``please free this
-memory when you're done with it.''
+could pass a large bitmap over to Haskell and say “please free this
+memory when you're done with it.”
</Para>
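<Para>
In today's terms this corresponds to a <Literal>ForeignPtr</Literal> with a finalizer
(hedged: a sketch against the modern <Literal>Foreign.ForeignPtr</Literal> interface rather
than the original <Literal>ForeignObj</Literal> one; <Function>free_bitmap</Function> is a hypothetical C
routine):
</Para>
<Para>
<ProgramListing>
import Foreign.Ptr (Ptr, FunPtr)
import Foreign.ForeignPtr (ForeignPtr, newForeignPtr)

data Bitmap   -- opaque C-side type

-- Hypothetical C routine that frees the bitmap storage.
foreign import ccall "&amp;free_bitmap"
  c_free_bitmap :: FunPtr (Ptr Bitmap -> IO ())

-- Wrap a C-allocated bitmap; the finalizer runs when the
-- garbage collector finds the ForeignPtr unreferenced.
wrapBitmap :: Ptr Bitmap -> IO (ForeignPtr Bitmap)
wrapBitmap p = newForeignPtr c_free_bitmap p
</ProgramListing>
</Para>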
<Para>
</Para>
<Para>
-The libraries section gives more details on all these ``primitive
-array'' types and the operations on them, <XRef LinkEnd="ghc-prelude">. Some of these extensions
+The libraries section gives more details on all these “primitive
+array” types and the operations on them, <XRef LinkEnd="ghc-prelude">. Some of these extensions
are also supported by Hugs, and the supporting libraries are described
in the <ULink
URL="libs.html"
<ProgramListing>
fooH :: Char -> Int -> Double -> Word -> IO Double
fooH c i d w = _ccall_ fooC (``stdin''::Addr) c i d w
</ProgramListing>
</Para>
import CString
oldGetEnv name
  = _casm_ ``%r = getenv((char *) %0);'' name >>= \ litstring ->
return (
if (litstring == nullAddr) then
Left ("Fail:oldGetEnv:"++name)
<Para>
The first literal-literal argument to a <Function>_casm_</Function> is like a <Function>printf</Function>
-format: <Literal>%r</Literal> is replaced with the ``result,'' <Literal>%0</Literal>–<Literal>%n-1</Literal> are
+format: <Literal>%r</Literal> is replaced with the “result,” <Literal>%0</Literal>–<Literal>%n-1</Literal> are
replaced with the 1st–nth arguments. As you can see above, it is an
easy way to do simple C casting. Everything said about <Function>_ccall_</Function> goes
for <Function>_casm_</Function> as well.
</Para>
</Sect2>
<Sect2 id="glasgow-stablePtrs">
-<Title>Subverting automatic unboxing with ``stable pointers''
+<Title>Subverting automatic unboxing with “stable pointers”
</Title>
<Para>
</Para>
<Para>
-It is possible to subvert the unboxing process by creating a ``stable
-pointer'' to a value and passing the stable pointer instead. For
+It is possible to subvert the unboxing process by creating a “stable
+pointer” to a value and passing the stable pointer instead. For
example, to pass/return an integer lazily to C functions <Function>storeC</Function> and
<Function>fetchC</Function>, one might write:
</Para>
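<Para>
A sketch of what that might look like (hedged: the C-side signatures of
<Function>storeC</Function> and <Function>fetchC</Function> are assumptions, and
<Function>makeStablePtr</Function>/<Function>deRefStablePtr</Function> follow the interface described here):
</Para>
<Para>
<ProgramListing>
-- Assumed C side:
--   void storeC( StgStablePtr sp );
--   StgStablePtr fetchC( void );

storeH :: Int -> IO ()
storeH x = makeStablePtr x >>= \ sp ->
           _ccall_ storeC sp

fetchH :: IO Int
fetchH = _ccall_ fetchC >>= \ sp ->
         deRefStablePtr sp
</ProgramListing>
</Para>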
sincosd :: Double -> (Double, Double)
sincosd x = unsafePerformIO $ do
da <- newDoubleArray (0, 1)
  _casm_ ``sincosd( %0, &((double *)%1[0]), &((double *)%1[1]) );'' x da
s <- readDoubleArray da 0
c <- readDoubleArray da 1
return (s, c)
((_ccall_ PostTraceHook sTDERR{-msg-}):: IO ()) >>
return expr )
where
  sTDERR = (``stderr'' :: Addr)
</ProgramListing>
</Sect2>
<Sect2 id="ccall-gotchas">
-<Title>C-calling ``gotchas'' checklist
+<Title>C-calling “gotchas” checklist
</Title>
<Para>
<ListItem>
<Para>
-To avoid ambiguity, the type after the ``<Literal>::</Literal>'' in a result
+To avoid ambiguity, the type after the “<Literal>::</Literal>” in a result
pattern signature on a lambda or <Literal>case</Literal> must be atomic (i.e. a single
token or a parenthesised type of some sort). To see why,
consider how one would parse this:
<IndexTerm><Primary>pragma, INLINE</Primary></IndexTerm></Title>
<Para>
-GHC (with <Option>-O</Option>, as always) tries to inline (or ``unfold'')
-functions/values that are ``small enough,'' thus avoiding the call
+GHC (with <Option>-O</Option>, as always) tries to inline (or “unfold”)
+functions/values that are “small enough,” thus avoiding the call
overhead and possibly exposing other more-wonderful optimisations.
</Para>
</Para>
<Para>
-Normally, if GHC decides a function is ``too expensive'' to inline, it
+Normally, if GHC decides a function is “too expensive” to inline, it
will not do so, nor will it export that unfolding for other modules to
use.
</Para>
<Para>
The major effect of an <Literal>INLINE</Literal> pragma is to declare a function's
-``cost'' to be very low. The normal unfolding machinery will then be
+“cost” to be very low. The normal unfolding machinery will then be
very keen to inline it.
</Para>
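<Para>
The pragma is written alongside the function's definition, for example:
</Para>
<Para>
<ProgramListing>
key_function :: Int -> String -> (Bool, Double)
{-# INLINE key_function #-}
</ProgramListing>
</Para>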
</ProgramListing>
Notice the <Literal>INLINE</Literal>! That prevents <Literal>(:)</Literal> from being inlined when compiling
-<Literal>PrelBase</Literal>, so that an importing module will ``see'' the <Literal>(:)</Literal>, and can
+<Literal>PrelBase</Literal>, so that an importing module will “see” the <Literal>(:)</Literal>, and can
match it on the LHS of a rule. <Literal>INLINE</Literal> prevents any inlining happening
in the RHS of the <Literal>INLINE</Literal> thing. I regret the delicacy of this.