allows you to define postfix operators. The extension is this: the left section
<programlisting>
(e !)
</programlisting>
is equivalent (from the point of view of both type checking and execution) to the expression
<programlisting>
((!) e)
</programlisting>
(for any expression <literal>e</literal> and operator <literal>(!)</literal>).
The strict Haskell 98 interpretation is that the section is equivalent to
<programlisting>
(\y -> (!) e y)
</programlisting>
That is, the operator must be a function of two arguments. GHC allows it to
take only one argument, and that in turn allows you to write the function
postfix.
extract it on pattern matching.
</para>
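<para>
As a concrete sketch (in recent GHCs this extension is controlled by the
<option>-XPostfixOperators</option> flag; the factorial operator here is hypothetical):
</para>

```haskell
{-# LANGUAGE PostfixOperators #-}

-- A one-argument operator; with the extension, the left section (e !)
-- means ((!) e), so (!) can be written postfix.
(!) :: Integer -> Integer
(!) 0 = 1
(!) n = n * ((n - 1) !)

main :: IO ()
main = print (5 !)  -- 120
```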
</sect3>
<sect3 id="existential-records">
other classes you have to write an explicit instance declaration. For
example, if you define
<programlisting>
newtype Dollars = Dollars Int
</programlisting>
and you want to use arithmetic on <literal>Dollars</literal>, you have to
explicitly define an instance of <literal>Num</literal>:
<programlisting>
instance Num Dollars where
Dollars a + Dollars b = Dollars (a+b)
  ...
</programlisting>
GHC now permits such instances to be derived instead,
using the flag <option>-XGeneralizedNewtypeDeriving</option>,
so one can write
<programlisting>
newtype Dollars = Dollars Int deriving (Eq,Show,Num)
</programlisting>
and the implementation uses the <emphasis>same</emphasis> <literal>Num</literal> dictionary
for <literal>Dollars</literal> as for <literal>Int</literal>. Notionally, the compiler
derives an instance declaration of the form
<programlisting>
instance Num Int => Num Dollars
</programlisting>
which just adds or removes the <literal>newtype</literal> constructor according to the type.
</para>
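<para>
A minimal runnable sketch of the example above (assuming the
<option>-XGeneralizedNewtypeDeriving</option> flag):
</para>

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Eq and Show are derived in the ordinary way;
-- Num reuses Int's dictionary via the newtype-deriving extension.
newtype Dollars = Dollars Int deriving (Eq, Show, Num)

main :: IO ()
main = print (Dollars 3 + Dollars 4)  -- Dollars 7
```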
way. For example, suppose we have implemented state and failure monad
transformers, such that
<programlisting>
instance Monad m => Monad (State s m)
instance Monad m => Monad (Failure m)
</programlisting>
In Haskell 98, we can define a parsing monad by
<programlisting>
type Parser tok m a = State [tok] (Failure m) a
</programlisting>
which is automatically a monad thanks to the instance declarations
above. With the extension, we can make the parser type abstract,
without needing to write an instance of class <literal>Monad</literal>, via
<programlisting>
newtype Parser tok m a = Parser (State [tok] (Failure m) a)
deriving Monad
</programlisting>
In this case the derived instance declaration is of the form
<programlisting>
instance Monad (State [tok] (Failure m)) => Monad (Parser tok m)
</programlisting>
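<para>
The <literal>State</literal> and <literal>Failure</literal> transformers are not
defined here, so the following self-contained sketch uses a hypothetical
<literal>Counter</literal> wrapper around a concrete monad instead; the deriving
mechanism is the same:
</para>

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Wrapping an existing monad (Maybe) and deriving Monad reuses its
-- bind and return wholesale, just as Parser reuses those of the
-- underlying transformer stack.
newtype Counter a = Counter (Maybe a)
  deriving (Show, Functor, Applicative, Monad)

step :: Int -> Counter Int
step n = if n > 0 then Counter (Just (n - 1)) else Counter Nothing

run :: Counter Int
run = step 2 >>= step >>= step   -- the final step fails

main :: IO ()
main = print run  -- Counter Nothing
```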
Notice that, since <literal>Monad</literal> is a constructor class, the
instance is a <emphasis>partial application</emphasis> of the new type, not the
entire left-hand side. We can even derive instances of multi-parameter classes,
provided the newtype is the last class parameter; in that case, a "partial
application" of the class appears in the <literal>deriving</literal>
clause. For example, given the class
<programlisting>
class StateMonad s m | m -> s where ...
instance Monad m => StateMonad s (State s m) where ...
</programlisting>
then we can derive an instance of <literal>StateMonad</literal> for <literal>Parser</literal>s by
<programlisting>
newtype Parser tok m a = Parser (State [tok] (Failure m) a)
deriving (Monad, StateMonad [tok])
</programlisting>
The derived instance is obtained by completing the application of the
class to the new type:
<programlisting>
instance StateMonad [tok] (State [tok] (Failure m)) =>
StateMonad [tok] (Parser tok m)
</programlisting>
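<para>
A self-contained analogue that can actually be compiled (the
<literal>Container</literal> class and <literal>Stack</literal> type are
hypothetical stand-ins for <literal>StateMonad</literal> and
<literal>Parser</literal>; assumes the newtype-deriving, multi-parameter class,
functional dependency, and flexible instance extensions):
</para>

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}

class Container e c | c -> e where
  empty  :: c
  insert :: e -> c -> c

instance Container a [a] where
  empty  = []
  insert = (:)

-- 'Container a' is a partial application of the class; the derived
-- instance completes it with 'Stack a' as the last parameter.
newtype Stack a = Stack [a] deriving (Show, Container a)

main :: IO ()
main = print (insert (1 :: Int) empty :: Stack Int)  -- Stack [1]
```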
Derived instance declarations are constructed as follows. Consider the
declaration (after expansion of any type synonyms)
<programlisting>
newtype T v1...vn = T' (t vk+1...vn) deriving (c1...cm)
</programlisting>
where
<itemizedlist>
</itemizedlist>
Then, for each <literal>ci</literal>, the derived instance
declaration is:
<programlisting>
instance ci t => ci (T v1...vk)
</programlisting>
As an example which does <emphasis>not</emphasis> work, consider
<programlisting>
newtype NonMonad m s = NonMonad (State s m s) deriving Monad
</programlisting>
Here we cannot derive the instance
<programlisting>
instance Monad (State s m) => Monad (NonMonad m)
</programlisting>
because the type variable <literal>s</literal> occurs in <literal>State s m</literal>,
and so cannot be "eta-converted" away. It is a good thing that this
<literal>deriving</literal> clause is rejected, because <literal>NonMonad m</literal>
is not, in fact, a monad. For multi-parameter classes the order of the class
parameters becomes important, since we can only derive instances for the last one. If the
<literal>StateMonad</literal> class above were instead defined as
<programlisting>
class StateMonad m s | m -> s where ...
</programlisting>
<programlisting>
class F a b | a->b
instance F [a] [[a]]
instance (D c, F a c) => D [a] -- 'c' is not mentioned in the head
</programlisting>
Similarly, it can be tempting to lift the coverage condition:
<programlisting>
class Mul a b c | a b -> c where
  ...
</programlisting>
<title>Pattern type signatures</title>
<para>
A type signature may occur in any pattern; this is a <emphasis>pattern type
signature</emphasis>.
For example:
<programlisting>
-- f and g assume that 'a' is already in scope
</programlisting>
signature simply constrains the type of the pattern in the obvious way.
</para>
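<para>
For instance, a pattern type signature on an ordinary function argument (a minimal
sketch; assumes <option>-XScopedTypeVariables</option>, which enables pattern
signatures):
</para>

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- The signature (x :: Int) simply constrains the pattern's type.
f :: Int -> Int
f (x :: Int) = x + 1

main :: IO ()
main = print (f 41)  -- 42
```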
<para>
Unlike expression and declaration type signatures, pattern type signatures are not implicitly generalised.
The pattern in a <emphasis>pattern binding</emphasis> may only mention type variables
that are already in scope. For example:
<programlisting>
  f :: forall a. [a] -> (Int, [a])
  f xs = (n, zs)
    where
      (ys::[a], n) = (reverse xs, length xs)      -- OK
      zs::[a] = xs ++ ys                          -- OK

      Just (v::b) = ...                           -- Not OK; b is not in scope
</programlisting>
Here, the pattern signatures for <literal>ys</literal> and <literal>zs</literal>
are fine, but the one for <literal>v</literal> is not because <literal>b</literal> is
not in scope.
</para>
<para>
However, in all patterns <emphasis>other</emphasis> than pattern bindings, a pattern
type signature may mention a type variable that is not in scope; in this case,
<emphasis>the signature brings that type variable into scope</emphasis>.
This is particularly important for existential data constructors. For example:
<programlisting>
data T = forall a. MkT [a]

k :: T -> T
k (MkT [t::a]) = MkT t3
    where
      t3::[a] = [t,t,t]
</programlisting>
Here, the pattern type signature <literal>(t::a)</literal> mentions a lexical type
variable that is not already in scope. Indeed, it <emphasis>cannot</emphasis> already be in scope,
because it is bound by the pattern match. GHC's rule is that in this situation
(and only then), a pattern type signature can mention a type variable that is
not already in scope; the effect is to bring it into scope, standing for the
existentially-bound type variable.
</para>
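<para>
A compilable variant of the idea (the names are hypothetical; assumes
<option>-XExistentialQuantification</option> and
<option>-XScopedTypeVariables</option>, with a <literal>Show</literal> constraint
added so the result can be displayed):
</para>

```haskell
{-# LANGUAGE ExistentialQuantification, ScopedTypeVariables #-}

data Box = forall a. Show a => Box [a]

-- The pattern signature (xs :: [a]) brings the existentially-bound
-- type variable 'a' into scope, so it can be reused in the
-- signature on ts.
describe :: Box -> String
describe (Box (xs :: [a])) = show (ts :: [a])
  where ts = take 2 xs

main :: IO ()
main = putStrLn (describe (Box [1, 2, 3 :: Int]))  -- [1,2]
```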
<para>
When a pattern type signature binds a type variable in this way, GHC insists that the
type variable is bound to a <emphasis>rigid</emphasis>, or fully-known, type variable.
This means that any user-written type signature always stands for a completely known type.
</para>
<para>
If all this seems a little odd, we think so too. But we must have
<emphasis>some</emphasis> way to bring such type variables into scope, else we
could not name existentially-bound type variables in subsequent type signatures.
</para>
<para>The major effect of an <literal>INLINE</literal> pragma
is to declare a function's “cost” to be very low.
The normal unfolding machinery will then be very keen to
  inline it. However, an <literal>INLINE</literal> pragma for a
  function "<literal>f</literal>" has a number of other effects:
<itemizedlist>
<listitem><para>
No functions are inlined into <literal>f</literal>. Otherwise
GHC might inline a big function into <literal>f</literal>'s right hand side,
making <literal>f</literal> big; and then inline <literal>f</literal> blindly.
</para></listitem>
<listitem><para>
The float-in, float-out, and common-sub-expression transformations are not
applied to the body of <literal>f</literal>.
</para></listitem>
<listitem><para>
An INLINE function is not worker/wrappered by strictness analysis.
It's going to be inlined wholesale instead.
</para></listitem>
</itemizedlist>
All of these effects are aimed at ensuring that what gets inlined is
exactly what you asked for, no more and no less.
</para>
<para>Syntactically, an <literal>INLINE</literal> pragma for a
function can be put anywhere its type signature could be
put.</para>
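<para>
For example (a minimal sketch; the function is hypothetical):
</para>

```haskell
-- The pragma sits anywhere the type signature could, here right beside it.
{-# INLINE double #-}
double :: Int -> Int
double x = x + x

main :: IO ()
main = print (double 21)  -- 42
```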
<programlisting>
h :: Eq a => a -> a -> a
{-# SPECIALISE h :: (Eq a) => [a] -> [a] -> [a] #-}
</programlisting>
The last of these examples will generate a
RULE with a somewhat-complex left-hand side (try it yourself), so it might not fire very
well. If you use this kind of specialisation, let us know how well it works.
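<para>
A runnable sketch of such a constrained specialisation (the function
<literal>h</literal> is hypothetical, mirroring the pragma above):
</para>

```haskell
-- Specialise h at list types while retaining the Eq constraint on the
-- element type; GHC implements the specialisation via a rewrite RULE.
h :: Eq a => a -> a -> a
h x y = if x == y then x else y

{-# SPECIALISE h :: Eq a => [a] -> [a] -> [a] #-}

main :: IO ()
main = print (h [1, 2] [1, 2 :: Int])  -- [1,2]
```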