As with all known Haskell systems, GHC implements some extensions to the language. To use them, you'll need to give a -fglasgow-exts option.

Virtually all of the Glasgow extensions serve to give you access to the underlying facilities with which we implement Haskell. Thus, you can get at the Raw Iron, if you are willing to write some non-standard code at a more primitive level. You need not be “stuck” on performance because of the implementation costs of Haskell's “high-level” features—you can always code “under” them. In an extreme case, you can write all your time-critical code in C, and then just glue it together with Haskell!

Before you get too carried away working at the lowest level (e.g., sloshing MutableByteArray#s around your program), you may wish to check if there are libraries that provide a “Haskellised veneer” over the features you want. The separate libraries documentation describes all the libraries that come with GHC.

Language options

These flags control what variation of the language is permitted. Leaving out all of them gives you standard Haskell 98.

-fglasgow-exts: This simultaneously enables all of the extensions to Haskell 98 described in this section, except where otherwise noted.

-ffi and -fffi: This option enables the language extension defined in the Haskell 98 Foreign Function Interface Addendum plus deprecated syntax of previous versions of the FFI for backwards compatibility.

-fwith: This option enables the deprecated with keyword for implicit parameters; it is merely provided for backwards compatibility. It is independent of the -fglasgow-exts flag.

-fno-monomorphism-restriction: Switch off the Haskell 98 monomorphism restriction. Independent of the -fglasgow-exts flag.

Several further flags, described in the sections that follow (for example -fallow-overlapping-instances and -fallow-undecidable-instances), control individual extensions; some are only relevant if you also use -fglasgow-exts, while others are independent of it.

-fno-implicit-prelude

GHC normally imports Prelude.hi files for you. If you'd rather it didn't, then give it a -fno-implicit-prelude option. The idea is that you can then import a Prelude of your own. (But don't call it Prelude; the Haskell module namespace is flat, and you must not conflict with any Prelude module.)

Even though you have not imported the Prelude, most of the built-in syntax still refers to the built-in Haskell Prelude types and values, as specified by the Haskell Report. For example, the type [Int] still means Prelude.[] Int; tuples continue to refer to the standard Prelude tuples; the translation for list comprehensions continues to use Prelude.map, etc. However, -fno-implicit-prelude does change the handling of certain built-in syntax: see the section on rebindable syntax below.
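By way of illustration, here is a minimal sketch of the idea; MyPrelude and the names imported from it are hypothetical, not a module that comes with GHC:

{-# OPTIONS -fno-implicit-prelude #-}
module Main where

-- Nothing is in scope implicitly, so we say exactly where everything
-- comes from.  MyPrelude is a home-grown module (hypothetical); it
-- must not itself be called Prelude.
import MyPrelude( IO, putStr )

main :: IO ()
main = putStr "hello\n"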
Unboxed types and primitive operations

GHC is built on a raft of primitive data types and operations. While you really can use this stuff to write fast code, we generally find it a lot less painful, and more satisfying in the long run, to use higher-level language features and libraries. With any luck, the code you write will be optimised to the efficient unboxed version in any case. And if it isn't, we'd like to know about it.

We do not currently have good, up-to-date documentation about the primitives, perhaps because they are mainly intended for internal use. There used to be a long section about them here in the User Guide, but it became out of date, and wrong information is worse than none.

The Real Truth about what primitive types there are, and what operations work over those types, is held in the file fptools/ghc/compiler/prelude/primops.txt. This file is used directly to generate GHC's primitive-operation definitions, so it is always correct! It is also intended for processing into text. Indeed, the result of such processing is part of the description of the External Core language, so that document is a good place to look for a type-set version. We would be very happy if someone wanted to volunteer to produce an SGML back end to the program that processes primops.txt so that we could include the results here in the User Guide.

What follows here is a brief summary of some main points.

Unboxed types

Most types in GHC are boxed, which means that values of that type are represented by a pointer to a heap object. The representation of a Haskell Int, for example, is a two-word heap object. An unboxed type, however, is represented by the value itself: no pointers or heap allocation are involved. Unboxed types correspond to the “raw machine” types you would use in C: Int# (long int), Double# (double), Addr# (void *), etc. The primitive operations (PrimOps) on these types are what you might expect; e.g., (+#) is addition on Int#s, and is the machine-addition that we all know and love—usually one instruction.

Primitive (unboxed) types cannot be defined in Haskell, and are therefore built into the language and compiler. Primitive types are always unlifted; that is, a value of a primitive type cannot be bottom. We use the convention that primitive types, values, and operations have a # suffix.

Primitive values are often represented by a simple bit-pattern, such as Int#, Float#, Double#. But this is not necessarily the case: a primitive value might be represented by a pointer to a heap-allocated object. Examples include Array#, the type of primitive arrays. A primitive array is heap-allocated because it is too big a value to fit in a register, and would be too expensive to copy around; in a sense, it is accidental that it is represented by a pointer. If a pointer represents a primitive value, then it really does point to that value: no unevaluated thunks, no indirections… nothing other than the primitive value can be at the other end of the pointer.

There are some restrictions on the use of primitive types, the main one being that you can't pass a primitive value to a polymorphic function or store one in a polymorphic data type. This rules out things like [Int#] (i.e. lists of primitive integers). The reason for this restriction is that polymorphic arguments and constructor fields are assumed to be pointers: if an unboxed integer is stored in one of these, the garbage collector would attempt to follow it, leading to unpredictable space leaks. Or a seq operation on the polymorphic component may attempt to dereference the pointer, with disastrous results. Even worse, the unboxed value might be larger than a pointer (Double# for instance).

Nevertheless, a numerically-intensive program using unboxed types can go a lot faster than its “standard” counterpart—we saw a threefold speedup on one example.
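As a small taste of what coding at this level looks like, here is an accumulating loop written directly over Int#. This is a sketch, not a recommendation; it assumes -fglasgow-exts, and imports the primitives from GHC.Exts:

{-# OPTIONS -fglasgow-exts #-}
module SumTo where

import GHC.Exts

-- Sum 1..n using raw machine integers: no boxing in the inner loop.
sumTo :: Int -> Int
sumTo (I# n) = I# (go 0# 1#)
  where
    go acc i | i ># n    = acc
             | otherwise = go (acc +# i) (i +# 1#)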
Unboxed Tuples

Unboxed tuples aren't really exported by GHC.Exts; they are available by default with -fglasgow-exts. An unboxed tuple looks like this:

(# e_1, ..., e_n #)

where e_1..e_n are expressions of any type (primitive or non-primitive). The type of an unboxed tuple looks the same.

Unboxed tuples are used for functions that need to return multiple values, but they avoid the heap allocation normally associated with using fully-fledged tuples. When an unboxed tuple is returned, the components are put directly into registers or on the stack; the unboxed tuple itself does not have a composite representation. Many of the primitive operations listed in this section return unboxed tuples.

There are some pretty stringent restrictions on the use of unboxed tuples:

Unboxed tuple types are subject to the same restrictions as other unboxed types; i.e. they may not be stored in polymorphic data structures or passed to polymorphic functions.

Unboxed tuples may only be constructed as the direct result of a function, and may only be deconstructed with a case expression. E.g. the following are valid:

f x y = (# x+1, y-1 #)
g x = case f x x of { (# a, b #) -> a + b }

but the following are invalid:

f x y = g (# x, y #)
g (# x, y #) = x + y

No variable can have an unboxed tuple type. This is illegal:

f :: (# Int, Int #) -> (# Int, Int #)
f x = x

because x has an unboxed tuple type.

Note: we may relax some of these restrictions in the future.

The IO and ST monads use unboxed tuples to avoid unnecessary allocation during sequences of operations.
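For instance, here is a sketch of a function that hands back two results at once in an unboxed pair, built from the quotInt# and remInt# primitives (assumes -fglasgow-exts):

{-# OPTIONS -fglasgow-exts #-}
module QuotRem where

import GHC.Exts

-- The components come back in registers; no pair is allocated.
quotRem# :: Int# -> Int# -> (# Int#, Int# #)
quotRem# x y = (# x `quotInt#` y, x `remInt#` y #)

-- An unboxed tuple may only be taken apart with case:
sumQR :: Int -> Int -> Int
sumQR (I# x) (I# y) = case quotRem# x y of
                        (# q, r #) -> I# (q +# r)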
Syntactic extensions

Hierarchical Modules

GHC supports a small extension to the syntax of module names: a module name is allowed to contain a dot ‘.’. This is also known as the “hierarchical module namespace” extension, because it extends the normally flat Haskell module namespace into a more flexible hierarchy of modules.

This extension has very little impact on the language itself; module names are always fully qualified, so you can just think of the fully qualified module name as the module name. In particular, this means that the full module name must be given after the module keyword at the beginning of the module; for example, the module A.B.C must begin

module A.B.C

It is a common strategy to use the as keyword to save some typing when using qualified names with hierarchical modules. For example:

import qualified Control.Monad.ST.Strict as ST

Hierarchical modules have an impact on the way that GHC searches for files; this is described elsewhere in this guide.

GHC comes with a large collection of libraries arranged hierarchically; see the accompanying library documentation. There is an ongoing project to create and maintain a stable set of core libraries used by several Haskell compilers, and the libraries that GHC comes with represent the current status of that project. For more details, see Haskell Libraries.

Pattern guards

The discussion that follows is an abbreviated version of Simon Peyton Jones's original proposal. (Note that the proposal was written before pattern guards were implemented, so refers to them as unimplemented.)

Suppose we have an abstract data type of finite maps, with a lookup operation:

lookup :: FiniteMap -> Int -> Maybe Int

The lookup returns Nothing if the supplied key is not in the domain of the mapping, and (Just v) otherwise, where v is the value that the key maps to. Now consider the following definition:

clunky env var1 var2
  | ok1 && ok2 = val1 + val2
  | otherwise  = var1 + var2
  where
    m1 = lookup env var1
    m2 = lookup env var2
    ok1 = maybeToBool m1
    ok2 = maybeToBool m2
    val1 = expectJust m1
    val2 = expectJust m2

The auxiliary functions are

maybeToBool :: Maybe a -> Bool
maybeToBool (Just x) = True
maybeToBool Nothing  = False

expectJust :: Maybe a -> a
expectJust (Just x) = x
expectJust Nothing  = error "Unexpected Nothing"

What is clunky doing? The guard ok1 && ok2 checks that both lookups succeed, using maybeToBool to convert the Maybe types to booleans. The (lazily evaluated) expectJust calls extract the values from the results of the lookups, and bind the returned values to val1 and val2 respectively. If either lookup fails, then clunky takes the otherwise case and returns the sum of its arguments.

This is certainly legal Haskell, but it is a tremendously verbose and un-obvious way to achieve the desired effect. Arguably, a more direct way to write clunky would be to use case expressions:

clunky env var1 var2
  = case lookup env var1 of
      Nothing   -> fail
      Just val1 -> case lookup env var2 of
                     Nothing   -> fail
                     Just val2 -> val1 + val2
  where
    fail = var1 + var2

This is a bit shorter, but hardly better. Of course, we can rewrite any set of pattern-matching, guarded equations as case expressions; that is precisely what the compiler does when compiling equations! The reason that Haskell provides guarded equations is because they allow us to write down the cases we want to consider, one at a time, independently of each other. This structure is hidden in the case version. Two of the right-hand sides are really the same (fail), and the whole expression tends to become more and more indented.

Here is how I would write clunky:

clunky env var1 var2
  | Just val1 <- lookup env var1
  , Just val2 <- lookup env var2
  = val1 + val2
...other equations for clunky...

The semantics should be clear enough. The qualifiers are matched in order. For a <- qualifier, which I call a pattern guard, the right hand side is evaluated and matched against the pattern on the left. If the match fails then the whole guard fails and the next equation is tried. If it succeeds, then the appropriate binding takes place, and the next qualifier is matched, in the augmented environment. Unlike list comprehensions, however, the type of the expression to the right of the <- is the same as the type of the pattern to its left. The bindings introduced by pattern guards scope over all the remaining guard qualifiers, and over the right hand side of the equation.

Just as with list comprehensions, boolean expressions can be freely mixed among the pattern guards. For example:

f x | [y] <- x
    , y > 3
    , Just z <- h y
    = ...

Haskell's current guards therefore emerge as a special case, in which the qualifier list has just one element, a boolean expression.
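For completeness, here is a self-contained version of clunky that you can actually compile; it uses the Prelude's lookup on an association list in place of the abstract FiniteMap (a sketch, assuming -fglasgow-exts for the pattern guards):

{-# OPTIONS -fglasgow-exts #-}
module Clunky where

-- The Prelude's lookup :: Eq a => a -> [(a,b)] -> Maybe b plays the
-- role of the FiniteMap lookup above.
clunky :: [(Int,Int)] -> Int -> Int -> Int
clunky env var1 var2
  | Just val1 <- lookup var1 env
  , Just val2 <- lookup var2 env
  = val1 + val2
  | otherwise
  = var1 + var2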
The recursive do-notation

The recursive do-notation (also known as mdo-notation) is implemented as described in “A recursive do for Haskell”, Levent Erkok and John Launchbury, Haskell Workshop 2002, pages 29-37, Pittsburgh, Pennsylvania.

The do-notation of Haskell does not allow recursive bindings; that is, the variables bound in a do-expression are visible only in the textually following code block. Compare this to a let-expression, where bound variables are visible in the entire binding group. It turns out that several applications can benefit from recursive bindings in the do-notation, and this extension provides the necessary syntactic support. Here is a simple (yet contrived) example:

import Control.Monad.Fix

justOnes = mdo xs <- Just (1:xs)
               return xs

As you can guess justOnes will evaluate to Just [1,1,1,....

The Control.Monad.Fix library introduces the MonadFix class. Its definition is:

class Monad m => MonadFix m where
   mfix :: (a -> m a) -> m a

The function mfix dictates how the required recursion operation should be performed. If recursive bindings are required for a monad, then that monad must be declared an instance of the MonadFix class. For details, see the above-mentioned reference.

The following instances of MonadFix are automatically provided: List, Maybe, IO. Furthermore, the Control.Monad.ST and Control.Monad.ST.Lazy modules provide the instances of the MonadFix class for Haskell's internal state monad (strict and lazy, respectively).

There are three important points in using the recursive-do notation:

The recursive version of the do-notation uses the keyword mdo (rather than do).

You should import Control.Monad.Fix. (Note: Strictly speaking, this import is required only when you need to refer to the name MonadFix in your program, but the import is always safe, and programmers are encouraged to always import this module when using the mdo-notation.)

As with other extensions, ghc should be given the flag -fglasgow-exts.

The web page http://www.cse.ogi.edu/PacSoft/projects/rmb contains up-to-date information on recursive monadic bindings.

Historical note: The old implementation of the mdo-notation (and most of the existing documents) used the name MonadRec for the class and the corresponding library. This name is not supported by GHC.
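As a slightly more realistic sketch, mdo makes it easy to tie the knot through the IO monad, relying on the IO instance of MonadFix mentioned above (Node and twoCells are hypothetical names):

{-# OPTIONS -fglasgow-exts #-}
module Knot where

import Control.Monad.Fix
import Data.IORef

data Node = Node Int (IORef Node)   -- each node refers to another node

-- Two mutually referring cells: r1's contents mention r2 before r2 is
-- bound, which is exactly what mdo's recursive bindings allow.
twoCells :: IO (IORef Node)
twoCells = mdo r1 <- newIORef (Node 0 r2)
               r2 <- newIORef (Node 1 r1)
               return r1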
Parallel List Comprehensions

Parallel list comprehensions are a natural extension to list comprehensions. List comprehensions can be thought of as a nice syntax for writing maps and filters. Parallel comprehensions extend this to include the zipWith family.

A parallel list comprehension has multiple independent branches of qualifier lists, each separated by a ‘|’ symbol. For example, the following zips together two lists:

[ (x, y) | x <- xs | y <- ys ]

The behavior of parallel list comprehensions follows that of zip, in that the resulting list will have the same length as the shortest branch.

We can define parallel list comprehensions by translation to regular comprehensions. Here's the basic idea: given a parallel comprehension of the form

[ e | p1 <- e11, p2 <- e12, ...
    | q1 <- e21, q2 <- e22, ...
    ...
]

this will be translated to

[ e | ((p1,p2), (q1,q2), ...) <- zipN [(p1,p2) | p1 <- e11, p2 <- e12, ...]
                                      [(q1,q2) | q1 <- e21, q2 <- e22, ...]
                                      ...
]

where ‘zipN’ is the appropriate zip for the given number of branches.
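For instance, numbering the elements of a list needs no explicit call to zip (a small sketch):

-- number "abc" is [(0,'a'), (1,'b'), (2,'c')]; the infinite branch is
-- cut off at the length of the shorter one.
number :: [a] -> [(Int, a)]
number xs = [ (i, x) | i <- [0..] | x <- xs ]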
Rebindable syntax

GHC allows most kinds of built-in syntax to be rebound by the user, to facilitate replacing the Prelude with a home-grown version, for example.

You may want to define your own numeric class hierarchy. It completely defeats that purpose if the literal "1" means "Prelude.fromInteger 1", which is what the Haskell Report specifies. So the -fno-implicit-prelude flag causes the following pieces of built-in syntax to refer to whatever is in scope, not the Prelude versions:

Integer and fractional literals mean "fromInteger 1" and "fromRational 3.2", not the Prelude-qualified versions; both in expressions and in patterns. However, the standard Prelude Eq class is still used for the equality test necessary for literal patterns.

Negation (e.g. "- (f x)") means "negate (f x)" (not Prelude.negate).

In an n+k pattern, the standard Prelude Ord class is still used for comparison, but the necessary subtraction uses whatever "(-)" is in scope (not "Prelude.(-)").

"Do" notation is translated using whatever functions (>>=), (>>), fail, and return are in scope (not the Prelude versions). List comprehensions, and parallel array comprehensions, are unaffected.

Be warned: this is an experimental facility, with fewer checks than usual. In particular, it is essential that the functions GHC finds in scope must have the appropriate types, namely:

fromInteger  :: forall a. (...) => Integer -> a
fromRational :: forall a. (...) => Rational -> a
negate       :: forall a. (...) => a -> a
(-)          :: forall a. (...) => a -> a -> a
(>>=)        :: forall m a. (...) => m a -> (a -> m b) -> m b
(>>)         :: forall m a. (...) => m a -> m b -> m b
return       :: forall m a. (...) => a -> m a
fail         :: forall m a. (...) => String -> m a

(The (...) part can be any context including the empty context; that part is up to you.) If the functions don't have the right type, very peculiar things may happen. Use -dcore-lint to typecheck the desugared program. If Core Lint is happy you should be all right.
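To illustrate the shape of the thing, here is a sketch in which integer literals are rebound to build Strings; this is a toy, and the module name and behaviour are our own invention:

{-# OPTIONS -fno-implicit-prelude #-}
module Lit where

import Prelude( String )
import qualified Prelude as P

-- With -fno-implicit-prelude, an integer literal now means a call to
-- whatever fromInteger is in scope -- this one.
fromInteger :: P.Integer -> String
fromInteger n = P.show n

two :: String
two = 2    -- means: fromInteger 2, i.e. the string "2"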
Type system extensions

Data types with no constructors

With the -fglasgow-exts flag, GHC lets you declare a data type with no constructors. For example:

data S      -- S :: *
data T a    -- T :: * -> *

Syntactically, the declaration lacks the "= constrs" part. The type can be parameterised over types of any kind, but if the kind is not * then an explicit kind annotation must be used (see the section on explicitly-kinded quantification below).

Such data types have only one value, namely bottom. Nevertheless, they can be useful when defining "phantom types".
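For example, a constructor-less type can serve as a compile-time tag that costs nothing at run-time (a sketch; all the names here are hypothetical):

{-# OPTIONS -fglasgow-exts #-}
module Phantom where

data Raw          -- no constructors: pure type-level tags
data Validated

newtype Input state = MkInput String

validate :: Input Raw -> Maybe (Input Validated)
validate (MkInput s) | null s    = Nothing
                     | otherwise = Just (MkInput s)

-- Consumers can insist, in the type, on validated input only:
run :: Input Validated -> String
run (MkInput s) = s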
Infix type constructors

GHC allows type constructors to be operators, and to be written infix, very much like expressions. More specifically:

A type constructor can be an operator, beginning with a colon; e.g. :*:. The lexical syntax is the same as that for data constructors.

Types can be written infix. For example Int :*: Bool.

Back-quotes work as for expressions, both for type constructors and type variables; e.g. Int `Either` Bool, or Int `a` Bool. Similarly, parentheses work the same; e.g. (:*:) Int Bool.

Fixities may be declared for type constructors just as for data constructors. However, one cannot distinguish between the two in a fixity declaration; a fixity declaration sets the fixity for a data constructor and the corresponding type constructor. For example:

infixl 7 T, :*:

sets the fixity for both type constructor T and data constructor T, and similarly for :*:.

Function arrow is infixr with fixity 0. (This might change; I'm not sure what it should be.)

Data type and type-synonym declarations can be written infix. E.g.

data a :*: b = Foo a b
type a :+: b = Either a b

The only thing that differs between operators in types and operators in expressions is that ordinary non-constructor operators, such as + and *, are not allowed in types. Reason: the uniform thing to do would be to make them type variables, but that's not very useful. A less uniform but more useful thing would be to allow them to be type constructors. But that gives trouble in export lists. So for now we just exclude them.

Explicitly-kinded quantification

Haskell infers the kind of each type variable. Sometimes it is nice to be able to give the kind explicitly as (machine-checked) documentation, just as it is nice to give a type signature for a function. On some occasions, it is essential to do so. For example, in his paper "Restricted Data Types in Haskell" (Haskell Workshop 1999) John Hughes had to define the data type:

data Set cxt a = Set [a] | Unused (cxt a -> ())

The only use for the Unused constructor was to force the correct kind for the type variable cxt.

GHC now instead allows you to specify the kind of a type variable directly, wherever a type variable is explicitly bound. Namely:

data declarations:            data Set (cxt :: * -> *) a = Set [a]
type declarations:            type T (f :: * -> *) = f Int
class declarations:           class (Eq a) => C (f :: * -> *) a where ...
forall's in type signatures:  f :: forall (cxt :: * -> *). Set cxt Int

The parentheses are required. Some of the spaces are required too, to separate the lexemes. If you write (f::*->*) you will get a parse error, because "::*->*" is a single lexeme in Haskell.

As part of the same extension, you can put kind annotations in types as well. Thus:

f :: (Int :: *) -> Int
g :: forall a. a -> (a :: *)

The syntax is

atype ::= '(' ctype '::' kind ')'

The parentheses are required.

Class method types

Haskell 98 prohibits class method types to mention constraints on the class type variable, thus:

class Seq s a where
   fromList :: [a] -> s a
   elem     :: Eq a => a -> s a -> Bool

The type of elem is illegal in Haskell 98, because it contains the constraint Eq a, which constrains only the class type variable (in this case a). With the -fglasgow-exts flag, GHC lifts this restriction.
Multi-parameter type classes

This section documents GHC's implementation of multi-parameter type classes. There's lots of background in the paper Type classes: exploring the design space (Simon Peyton Jones, Mark Jones, Erik Meijer).

Types

GHC imposes the following restrictions on the form of a qualified type, whether declared in a type signature or inferred. Consider the type:

forall tv1..tvn (c1, ...,cn) => type

(Here, I write the "foralls" explicitly, although the Haskell source language omits them; in Haskell 1.4, all the free type variables of an explicit source-language type signature are universally quantified, except for the class type variables in a class declaration. However, in GHC, you can give the foralls if you want; see the section on arbitrary-rank polymorphism below.)

Each universally quantified type variable tvi must be reachable from type. A type variable is "reachable" if it is functionally dependent (see the section on functional dependencies below) on the type variables free in type. The reason for this is that a value with a type that does not obey this restriction could not be used without introducing ambiguity. Here, for example, is an illegal type:

forall a. Eq a => Int

When a value with this type was used, the constraint Eq tv would be introduced where tv is a fresh type variable, and (in the dictionary-translation implementation) the value would be applied to a dictionary for Eq tv. The difficulty is that we can never know which instance of Eq to use because we never get any more information about tv.

Every constraint ci must mention at least one of the universally quantified type variables tvi. For example, this type is OK because C a b mentions the universally quantified type variable b:

forall a. C a b => burble

The next type is illegal because the constraint Eq b does not mention a:

forall a. Eq b => burble

The reason for this restriction is milder than the other one. The excluded types are never useful or necessary (because the offending context doesn't need to be witnessed at this point; it can be floated out). Furthermore, floating them out increases sharing. Lastly, excluding them is a conservative choice; it leaves a patch of territory free in case we need it later.

Unlike Haskell 1.4, constraints in types do not have to be of the form (class type-variables). Thus, these type signatures are perfectly OK:

f :: Eq (m a) => [m a] -> [m a]
g :: Eq [a] => ...

This choice recovers principal types, a property that Haskell 1.4 does not have.
Class declarations

Multi-parameter type classes are permitted. For example:

class Collection c a where
   union :: c a -> c a -> c a
   ...etc.

The class hierarchy must be acyclic. However, the definition of "acyclic" involves only the superclass relationships. For example, this is OK:

class C a where
  op :: D b => a -> b -> b

class C a => D a where { ... }

Here, C is a superclass of D, but it's OK for a class operation op of C to mention D. (It would not be OK for D to be a superclass of C.)

There are no restrictions on the context in a class declaration (which introduces superclasses), except that the class hierarchy must be acyclic. So these class declarations are OK:

class Functor (m k) => FiniteMap m k where ...

class (Monad m, Monad (t m)) => Transform t m where
  lift :: m a -> (t m) a

All of the class type variables must be reachable (in the sense mentioned in the section on types above) from the free variables of each method type. For example:

class Coll s a where
  empty  :: s
  insert :: s -> a -> s

is not OK, because the type of empty doesn't mention a. This rule is a consequence of Rule 1(a), above, for types, and has the same motivation. Sometimes, offending class declarations exhibit misunderstandings. For example, Coll might be rewritten

class Coll s a where
  empty  :: s a
  insert :: s a -> a -> s a

which makes the connection between the type of a collection of a's (namely (s a)) and the element type a. Occasionally this really doesn't work, in which case you can split the class like this:

class CollE s where
  empty :: s

class CollE s => Coll s a where
  insert :: s -> a -> s
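Putting these rules together, here is a small but complete multi-parameter class, with an instance for lists, that obeys the reachability rule (a sketch; the names are hypothetical):

{-# OPTIONS -fglasgow-exts #-}
module Bag where

class Bag c a where
  bempty  :: c a               -- mentions both c and a: OK
  binsert :: a -> c a -> c a
  btoList :: c a -> [a]

instance Bag [] a where
  bempty  = []
  binsert = (:)
  btoList xs = xs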
Instance declarations

Instance declarations may not overlap. The two instance declarations

instance context1 => C type1 where ...
instance context2 => C type2 where ...

"overlap" if type1 and type2 unify.

However, if you give the command line option -fallow-overlapping-instances then overlapping instance declarations are permitted, and GHC arranges never to commit to using an instance declaration if another instance declaration also applies, either now or later. The requirement is that EITHER type1 and type2 do not unify, OR type2 is a substitution instance of type1 (but not identical to type1), or vice versa. Notice that these rules

make it clear which instance decl to use (pick the most specific one that matches)

do not mention the contexts context1, context2. Reason: you can pick which instance decl "matches" based on the type.

However the rules are over-conservative. Two instance declarations can overlap, but it can still be clear in particular situations which to use. For example:

instance C (Int,a) where ...
instance C (a,Bool) where ...

These are rejected by GHC's rules, but it is clear what to do when trying to solve the constraint C (Int,Int) because the second instance cannot apply. Yell if this restriction bites you.

GHC is also conservative about committing to an overlapping instance. For example:

class C a where { op :: a -> a }
instance C [Int] where ...
instance C a => C [a] where ...

f :: C b => [b] -> [b]
f x = op x

From the RHS of f we get the constraint C [b]. But GHC does not commit to the second instance declaration, because in a particular call of f, b might be instantiated to Int, so the first instance declaration would be appropriate. So GHC rejects the program. If you add -fallow-overlapping-instances GHC will instead silently pick the second instance, without complaining about the problem of subsequent instantiations.

Regrettably, GHC doesn't guarantee to detect overlapping instance declarations if they appear in different modules. GHC can "see" the instance declarations in the transitive closure of all the modules imported by the one being compiled, so it can "see" all instance decls when it is compiling Main. However, it currently chooses not to look at ones that can't possibly be of use in the module currently being compiled, in the interests of efficiency. (Perhaps we should change that decision, at least for Main.)

There are no restrictions on the type in an instance head, except that at least one must not be a type variable. The instance "head" is the bit after the "=>" in an instance decl. For example, these are OK:

instance C Int a where ...
instance D (Int, Int) where ...
instance E [[a]] where ...

Note that instance heads may contain repeated type variables. For example, this is OK:

instance Stateful (ST s) (MutVar s) where ...

See the section on undecidable instances below for an experimental extension to lift this restriction.

Unlike Haskell 1.4, instance heads may use type synonyms. As always, using a type synonym is just shorthand for writing the RHS of the type synonym definition. For example:

type Point = (Int,Int)
instance C Point   where ...
instance C [Point] where ...

is legal. However, if you added

instance C (Int,Int) where ...

as well, then the compiler will complain about the overlapping (actually, identical) instance declarations. As always, type synonyms must be fully applied. You cannot, for example, write:

type P a = [[a]]
instance Monad P where ...

This design decision is independent of all the others, and easily reversed, but it makes sense to me.

The types in an instance-declaration context must all be type variables. Thus

instance C a b => Eq (a,b) where ...

is OK, but

instance C Int b => Foo b where ...

is not OK. See the section on undecidable instances below for an experimental extension to lift this restriction.
Undecidable instances

The rules for instance declarations state that:

At least one of the types in the head of an instance declaration must not be a type variable.

All of the types in the context of an instance declaration must be type variables.

These restrictions ensure that context reduction terminates: each reduction step removes one type constructor. For example, the following would make the type checker loop if it wasn't excluded:

instance C a => C a where ...

There are two situations in which the rule is a bit of a pain. First, if one allows overlapping instance declarations then it's quite convenient to have a "default instance" declaration that applies if something more specific does not:

instance C a where
  op = ... -- Default

Second, sometimes you might want to use the following to get the effect of a "class synonym":

class (C1 a, C2 a, C3 a) => C a where { }
instance (C1 a, C2 a, C3 a) => C a where { }

This allows you to write shorter signatures:

f :: C a => ...

instead of

f :: (C1 a, C2 a, C3 a) => ...

Voluminous correspondence on the Haskell mailing list has convinced me that it's worth experimenting with more liberal rules. If you use the experimental flag -fallow-undecidable-instances, you can use arbitrary types in both an instance context and instance head. Termination is ensured by having a fixed-depth recursion stack. If you exceed the stack depth you get a sort of backtrace, and the opportunity to increase the stack depth with -fcontext-stackN. I'm on the lookout for a less brutal solution: a simple rule that preserves decidability while allowing these interesting idioms.
Implicit parameters

Implicit parameters are implemented as described in "Implicit parameters: dynamic scoping with static types", J Lewis, MB Shields, E Meijer, J Launchbury, 27th ACM Symposium on Principles of Programming Languages (POPL'00), Boston, Jan 2000. (Most of the following, still rather incomplete, documentation is due to Jeff Lewis.)

A variable is called dynamically bound when it is bound by the calling context of a function and statically bound when bound by the callee's context. In Haskell, all variables are statically bound. Dynamic binding of variables is a notion that goes back to Lisp, but was later discarded in more modern incarnations, such as Scheme. Dynamic binding can be very confusing in an untyped language, and unfortunately, typed languages, in particular Hindley-Milner typed languages like Haskell, only support static scoping of variables.

However, by a simple extension to the type class system of Haskell, we can support dynamic binding. Basically, we express the use of a dynamically bound variable as a constraint on the type. These constraints lead to types of the form (?x::t') => t, which says "this function uses a dynamically-bound variable ?x of type t'". For example, the following expresses the type of a sort function, implicitly parameterized by a comparison function named cmp.

sort :: (?cmp :: a -> a -> Bool) => [a] -> [a]

The dynamic binding constraints are just a new form of predicate in the type class system.

An implicit parameter occurs in an expression using the special form ?x, where x is any valid identifier (e.g. ord ?x is a valid expression). Use of this construct also introduces a new dynamic-binding constraint in the type of the expression. For example, the following definition shows how we can define an implicitly parameterized sort function in terms of an explicitly parameterized sortBy function:

sortBy :: (a -> a -> Bool) -> [a] -> [a]

sort   :: (?cmp :: a -> a -> Bool) => [a] -> [a]
sort    = sortBy ?cmp

Implicit-parameter type constraints

Dynamic binding constraints behave just like other type class constraints in that they are automatically propagated. Thus, when a function is used, its implicit parameters are inherited by the function that called it. For example, our sort function might be used to pick out the least value in a list:

least   :: (?cmp :: a -> a -> Bool) => [a] -> a
least xs = head (sort xs)

Without lifting a finger, the ?cmp parameter is propagated to become a parameter of least as well. With explicit parameters, the default is that parameters must always be explicitly propagated. With implicit parameters, the default is to always propagate them.

An implicit-parameter type constraint differs from other type class constraints in the following way: All uses of a particular implicit parameter must have the same type. This means that the type of (?x, ?x) is (?x::a) => (a,a), and not (?x::a, ?x::b) => (a, b), as would be the case for type class constraints.

You can't have an implicit parameter in the context of a class or instance declaration. For example, both these declarations are illegal:

class (?x::Int) => C a where ...
instance (?x::a) => Foo [a] where ...

Reason: exactly which implicit parameter you pick up depends on exactly where you invoke a function. But the “invocation” of instance declarations is done behind the scenes by the compiler, so it's hard to figure out exactly where it is done. Easiest thing is to outlaw the offending types.

Implicit-parameter constraints do not cause ambiguity. For example, consider:

f :: (?x :: [a]) => Int -> Int
f n = n + length ?x

g :: (Read a, Show a) => String -> String
g s = show (read s)

Here, g has an ambiguous type, and is rejected, but f is fine. The binding for ?x at f's call site is quite unambiguous, and fixes the type a.

Implicit-parameter bindings

An implicit parameter is bound using the standard let or where binding forms. For example, we define the min function by binding cmp.

min :: [a] -> a
min  = let ?cmp = (<=) in least

A group of implicit-parameter bindings may occur anywhere a normal group of Haskell bindings can occur, except at top level. That is, they can occur in a let (including in a list comprehension, or do-notation, or pattern guards), or a where clause. Note the following points:

An implicit-parameter binding group must be a collection of simple bindings to implicit-style variables (no function-style bindings, and no type signatures); these bindings are neither polymorphic nor recursive.

You may not mix implicit-parameter bindings with ordinary bindings in a single let expression; use two nested lets instead. (In the case of where you are stuck, since you can't nest where clauses.)

You may put multiple implicit-parameter bindings in a single binding group; but they are not treated as a mutually recursive group (as ordinary let bindings are). Instead they are treated as a non-recursive group, simultaneously binding all the implicit parameters. The bindings are not nested, and may be re-ordered without changing the meaning of the program. For example, consider:

f t = let { ?x = t; ?y = ?x+(1::Int) } in ?x + ?y

The use of ?x in the binding for ?y does not "see" the binding for ?x, so the type of f is

f :: (?x::Int) => Int -> Int
Linear implicit parameters

Linear implicit parameters are an idea developed by Koen Claessen, Mark Shields, and Simon PJ. They address the long-standing problem that monads seem over-kill for certain sorts of problem, notably:

distributing a supply of unique names
distributing a supply of random numbers
distributing an oracle (as in QuickCheck)

Linear implicit parameters are just like ordinary implicit parameters, except that they are "linear" -- that is, they cannot be copied, and must be explicitly "split" instead. Linear implicit parameters are written '%x' instead of '?x'. (The '/' in the '%' suggests the split!)

For example:

import GHC.Exts( Splittable )

data NameSupply = ...

splitNS :: NameSupply -> (NameSupply, NameSupply)
newName :: NameSupply -> Name

instance Splittable NameSupply where
  split = splitNS

f :: (%ns :: NameSupply) => Env -> Expr -> Expr
f env (Lam x e) = Lam x' (f env e)
  where
    x'   = newName %ns
    env' = extend env x x'
...more equations for f...

Notice that the implicit parameter %ns is consumed

once by the call to newName
once by the recursive call to f

So the translation done by the type checker makes the parameter explicit:

f :: NameSupply -> Env -> Expr -> Expr
f ns env (Lam x e) = Lam x' (f ns1 env e)
  where
    (ns1,ns2) = splitNS ns
    x'   = newName ns2
    env' = extend env x x'

Notice the call to 'split' introduced by the type checker. How did it know to use 'splitNS'? Because what it really did was to introduce a call to the overloaded function 'split', defined by the class Splittable:

class Splittable a where
  split :: a -> (a,a)

The instance for Splittable NameSupply tells GHC how to implement split for name supplies. But we can simply write

g x = (x, %ns, %ns)

and GHC will infer

g :: (Splittable a, %ns :: a) => b -> (b,a,a)

The Splittable class is built into GHC. It's exported by module GHC.Exts.

Other points:

'?x' and '%x' are entirely distinct implicit parameters: you can use them together and they won't interfere with each other.

You can bind linear implicit parameters in 'with' clauses.

You cannot have implicit parameters (whether linear or not) in the context of a class or instance declaration.

Warnings

The monomorphism restriction is even more important than usual. Consider the example above:

f :: (%ns :: NameSupply) => Env -> Expr -> Expr
f env (Lam x e) = Lam x' (f env e)
  where
    x'   = newName %ns
    env' = extend env x x'

If we replaced the two occurrences of x' by (newName %ns), which is usually a harmless thing to do, we get:

f :: (%ns :: NameSupply) => Env -> Expr -> Expr
f env (Lam x e) = Lam (newName %ns) (f env e)
  where
    env' = extend env x (newName %ns)

But now the name supply is consumed in three places (the two calls to newName, and the recursive call to f), so the result is utterly different. Urk! We don't even have the beta rule.

Well, this is an experimental change. With implicit parameters we have already lost beta reduction anyway, and (as John Launchbury puts it) we can't sensibly reason about Haskell programs without knowing their typing.

Recursive functions

Linear implicit parameters can be particularly tricky when you have a recursive function. Consider

foo :: %x::T => Int -> [Int]
foo 0 = []
foo n = %x : foo (n-1)

where T is some type in class Splittable. Do you get a list of all the same T's or all different T's (assuming that split gives two distinct T's back)?

If you supply the type signature, taking advantage of polymorphic recursion, you get what you'd probably expect. Here's the translated term, where the implicit param is made explicit:

foo x 0 = []
foo x n = let (x1,x2) = split x
          in x1 : foo x2 (n-1)

But if you don't supply a type signature, GHC uses the Hindley Milner trick of using a single monomorphic instance of the function for the recursive calls. That is what makes Hindley Milner type inference work. So the translation becomes

foo x = let
          foom 0 = []
          foom n = x : foom (n-1)
        in foom

Result: 'x' is not split, and you get a list of identical T's. So the semantics of the program depends on whether or not foo has a type signature. Yikes!

You may say that this is a good reason to dislike linear implicit parameters and you'd be right. That is why they are an experimental feature.

Functional dependencies

Functional dependencies are implemented as described in “Type Classes with Functional Dependencies”, Mark P. Jones, In Proceedings of the 9th European Symposium on Programming, ESOP 2000, Berlin, Germany, March 2000, Springer-Verlag LNCS 1782.

Functional dependencies are introduced by a vertical bar in the syntax of a class declaration; e.g.

class (Monad m) => MonadState s m | m -> s where ...

class Foo a b c | a b -> c where ...

There should be more documentation, but there isn't (yet). Yell if you need it.
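Meanwhile, here is a small sketch of what a dependency buys you; the names are hypothetical, and -fglasgow-exts is assumed:

{-# OPTIONS -fglasgow-exts #-}
module FD where

class Container c e | c -> e where
  empty  :: c          -- legal: e is functionally dependent on c
  insert :: e -> c -> c
  toList :: c -> [e]

instance Container [a] a where
  empty  = []
  insert = (:)
  toList xs = xs

-- Thanks to c -> e, the element type is determined by the container
-- type, so this function is unambiguous even though its body never
-- pins e down on its own:
doubleton :: Container c e => e -> e -> c
doubleton x y = insert x (insert y empty)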
Arbitrary-rank polymorphism

Haskell type signatures are implicitly quantified. The new keyword forall allows us to say exactly what this means. For example:

g :: b -> b

means this:

g :: forall b. (b -> b)

The two are treated identically.

However, GHC's type system supports arbitrary-rank explicit universal quantification in types. For example, all the following types are legal:

f1 :: forall a b. a -> b -> a
g1 :: forall a b. (Ord a, Eq b) => a -> b -> a

f2 :: (forall a. a->a) -> Int -> Int
g2 :: (forall a. Eq a => [a] -> a -> Bool) -> Int -> Int

f3 :: ((forall a. a->a) -> Int) -> Bool -> Bool

Here, f1 and g1 are rank-1 types, and can be written in standard Haskell (e.g. f1 :: a->b->a). The forall makes explicit the universal quantification that is implicitly added by Haskell.

The functions f2 and g2 have rank-2 types; the forall is on the left of a function arrow. As g2 shows, the polymorphic type on the left of the function arrow can be overloaded.

The function f3 has a rank-3 type; it has a rank-2 type on the left of a function arrow.

GHC allows types of arbitrary rank; you can nest foralls arbitrarily deep in function arrows. (GHC used to be restricted to rank 2, but that restriction has now been lifted.) In particular, a forall-type (also called a "type scheme"), including an operational type class context, is legal:

On the left of a function arrow

On the right of a function arrow (see the section on for-all hoisting below)

As the argument of a constructor, or type of a field, in a data type declaration. For example, any of f1, f2, f3, g1, g2 above would be valid field type signatures.

As the type of an implicit parameter

In a pattern type signature (see the section on scoped type variables below)

There is one place you cannot put a forall: you cannot instantiate a type variable with a forall-type. So you cannot make a forall-type the argument of a type constructor. So these types are illegal:

x1 :: [forall a. a->a]
x2 :: (forall a. a->a, Int)
x3 :: Maybe (forall a. a->a)

Of course forall becomes a keyword; you can't use forall as a type variable any more!

Examples

In a data or newtype declaration one can quantify the types of the constructor arguments. Here are several examples:

data T a = T1 (forall b. b -> b -> b) a

data MonadT m = MkMonad { return :: forall a. a -> m a,
                          bind   :: forall a b. m a -> (a -> m b) -> m b
                        }

newtype Swizzle = MkSwizzle (Ord a => [a] -> [a])

The constructors have rank-2 types:

T1        :: forall a. (forall b. b -> b -> b) -> a -> T a
MkMonad   :: forall m. (forall a. a -> m a)
                    -> (forall a b. m a -> (a -> m b) -> m b)
                    -> MonadT m
MkSwizzle :: (Ord a => [a] -> [a]) -> Swizzle

Notice that you don't need to use a forall if there's an explicit context. For example in the first argument of the constructor MkSwizzle, an implicit "forall a." is prefixed to the argument type. The implicit forall quantifies all type variables that are not already in scope, and are mentioned in the type quantified over.

As for type signatures, implicit quantification happens for non-overloaded types too. So if you write this:

data T a = MkT (Either a b) (b -> b)

it's just as if you had written this:

data T a = MkT (forall b. Either a b) (forall b. b -> b)

That is, since the type variable b isn't in scope, it's implicitly universally quantified. (Arguably, it would be better to require explicit quantification on constructor arguments where that is what is wanted. Feedback welcomed.)

You construct values of types T, MonadT, Swizzle by applying the constructor to suitable values, just as usual. For example,

a1 :: T Int
a1 = T1 (\x y->x) 3

a2, a3 :: Swizzle
a2 = MkSwizzle sort
a3 = MkSwizzle reverse

a4 :: MonadT Maybe
a4 = let r x = Just x
         b m k = case m of
                   Just y  -> k y
                   Nothing -> Nothing
     in MkMonad r b

mkTs :: (forall b. b -> b -> b) -> a -> a -> [T a]
mkTs f x y = [T1 f x, T1 f y]

The type of the argument can, as usual, be more general than the type required, as (MkSwizzle reverse) shows. (reverse does not need the Ord constraint.)

When you use pattern matching, the bound variables may now have polymorphic types. For example:

f :: T a -> a -> (a, Char)
f (T1 w k) x = (w k x, w 'c' 'd')

g :: (Ord a, Ord b) => Swizzle -> [a] -> (a -> b) -> [b]
g (MkSwizzle s) xs f = s (map f (s xs))

h :: MonadT m -> [m a] -> m [a]
h m [] = return m []
h m (x:xs) = bind m x $ \y ->
             bind m (h m xs) $ \ys ->
             return m (y:ys)

In the function h we use the record selectors return and bind to extract the polymorphic bind and return functions from the MonadT data structure, rather than using pattern matching.

Type inference

In general, type inference for arbitrary-rank types is undecidable. GHC uses an algorithm proposed by Odersky and Laufer ("Putting type annotations to work", POPL'96) to get a decidable algorithm by requiring some help from the programmer. We do not yet have a formal specification of "some help" but the rule is this:

For a lambda-bound or case-bound variable, x, either the programmer provides an explicit polymorphic type for x, or GHC's type inference will assume that x's type has no foralls in it.

What does it mean to "provide" an explicit type for x? You can do that by giving a type signature for x directly, using a pattern type signature (see the section on scoped type variables below), thus:

\ f :: (forall a. a->a) -> (f True, f 'c')

Alternatively, you can give a type signature to the enclosing context, which GHC can "push down" to find the type for the variable:

(\ f -> (f True, f 'c')) :: (forall a. a->a) -> (Bool,Char)

Here the type signature on the expression can be pushed inwards to give a type signature for f. Similarly, and more commonly, one can give a type signature for the function itself:

h :: (forall a. a->a) -> (Bool,Char)
h f = (f True, f 'c')

You don't need to give a type signature if the lambda bound variable is a constructor argument. Here is an example we saw earlier:

f :: T a -> a -> (a, Char)
f (T1 w k) x = (w k x, w 'c' 'd')

Here we do not need to give a type signature to w, because it is an argument of constructor T1 and that tells GHC all it needs to know.

Implicit quantification

GHC performs implicit quantification as follows. At the top level (only) of user-written types, if and only if there is no explicit forall, GHC finds all the type variables mentioned in the type that are not already in scope, and universally quantifies them. For example, the following pairs are equivalent:

f :: a -> a
f :: forall a. a -> a

g (x::a) = let
             h :: a -> b -> b
             h x y = y
           in ...
g (x::a) = let
             h :: forall b. a -> b -> b
             h x y = y
           in ...

Notice that GHC does not find the innermost possible quantification point. For example:

f :: (a -> a) -> Int
-- MEANS
f :: forall a. (a -> a) -> Int
-- NOT
f :: (forall a. a -> a) -> Int

g :: (Ord a => a -> a) -> Int
-- MEANS the illegal type
g :: forall a. (Ord a => a -> a) -> Int
-- NOT
g :: (forall a. Ord a => a -> a) -> Int

The latter produces an illegal type, which you might think is silly, but at least the rule is simple. If you want the latter type, you can write your for-alls explicitly. Indeed, doing so is strongly advised for rank-2 types.
Liberalised type synonyms

Type synonyms are like macros at the type level, and GHC does validity checking on types only after expanding type synonyms. That means that GHC can be very much more liberal about type synonyms than Haskell 98:

You can write a forall (including overloading) in a type synonym, thus:

type Discard a = forall b. Show b => a -> b -> (a, String)

f :: Discard a
f x y = (x, show y)

g :: Discard Int -> (Int,String)    -- A rank-2 type
g f = f 3 True

You can write an unboxed tuple in a type synonym:

type Pr = (# Int, Int #)

h :: Int -> Pr
h x = (# x, x #)

You can apply a type synonym to a forall type:

type Foo a = a -> a -> Bool

f :: Foo (forall b. b->b)

After expanding the synonym, f has the legal (in GHC) type:

f :: (forall b. b->b) -> (forall b. b->b) -> Bool

You can apply a type synonym to a partially applied type synonym:

type Generic i o = forall x. i x -> o x
type Id x = x

foo :: Generic Id []

After expanding the synonym, foo has the legal (in GHC) type:

foo :: forall x. x -> [x]

GHC currently does kind checking before expanding synonyms (though even that could be changed).

After expanding type synonyms, GHC does validity checking on types, looking for the following mal-formedness which isn't detected simply by kind checking:

Type constructor applied to a type involving for-alls.
Unboxed tuple on left of an arrow.
Partially-applied type synonym.

So, for example, this will be rejected:

type Pr = (# Int, Int #)

h :: Pr -> Int
h x = ...

because GHC does not allow unboxed tuples on the left of a function arrow.

For-all hoisting

It is often convenient to use generalised type synonyms at the right hand end of an arrow, thus:

type Discard a = forall b. a -> b -> a

g :: Int -> Discard Int
g x y z = x+y

Simply expanding the type synonym would give

g :: Int -> (forall b. Int -> b -> Int)

but GHC "hoists" the forall to give the isomorphic type

g :: forall b. Int -> Int -> b -> Int

In general, the rule is this: to determine the type specified by any explicit user-written type (e.g. in a type signature), GHC expands type synonyms and then repeatedly performs the transformation:

type1 -> forall a1..an. context2 => type2
==>
forall a1..an. context2 => type1 -> type2

(In fact, GHC tries to retain as much synonym information as possible for use in error messages, but that is a usability issue.) This rule applies, of course, whether or not the forall comes from a synonym. For example, here is another valid way to write g's type signature:

g :: Int -> Int -> forall b. b -> Int

When doing this hoisting operation, GHC eliminates duplicate constraints. For example:

type Foo a = (?x::Int) => Bool -> a
g :: Foo (Foo Int)

means

g :: (?x::Int) => Bool -> Bool -> Int
For example:

f (x::a) = let g (y::(a,b)) = fst y
           in g (x,True)

The pattern (x::a) brings the type variable a into scope, as well as the term variable x. The pattern (y::(a,b)) contains an occurrence of the already-in-scope type variable a, and brings into scope the type variable b.

The type variable(s) bound by the pattern have the same scope as the term variable(s) bound by the pattern. For example:

let
  f (x::a) = <...rhs of f...>
  (p::b, q::b) = (1,2)
in <...body of let...>

Here, the type variable a scopes over the right hand side of f, just like x does; while the type variable b scopes over the body of the let, and all the other definitions in the let, just like p and q do. Indeed, the newly bound type variables also scope over any ordinary, separate type signatures in the let group.

The type variables bound by the pattern may be mentioned in ordinary type signatures or pattern type signatures anywhere within their scope.

In ordinary type signatures, any type variable mentioned in the signature that is in scope is not universally quantified.

Ordinary type signatures do not bring any new type variables into scope (except in the type signature itself!). So this is illegal:

f :: a -> a
f x = x::a

It's illegal because a is not in scope in the body of f, so the ordinary signature x::a is equivalent to x::forall a.a; and that is an incorrect typing.

The pattern type signature is a monotype: a pattern type signature cannot contain any explicit forall quantification. The type variables bound by a pattern type signature can only be instantiated to monotypes, not to type schemes. There is no implicit universal quantification on pattern type signatures (in contrast to ordinary type signatures).

The type variables in the head of a class or instance declaration scope over the methods defined in the where part. For example:

class C a where
  op :: [a] -> a

  op xs = let ys::[a]
              ys = reverse xs
          in head ys

(Not implemented in Hugs yet, Dec 98.)

Where a pattern type signature can occur

A pattern type signature can occur in any pattern. For example:

A pattern type signature can be on an arbitrary sub-pattern, not just on a variable:

f ((x,y)::(a,b)) = (y,x) :: (b,a)

Pattern type signatures, including the result part, can be used in lambda abstractions:

(\ (x::a, y) :: a -> x)

Pattern type signatures, including the result part, can be used in case expressions:

case e of { (x::a, y) :: a -> x }

To avoid ambiguity, the type after the "::" in a result pattern signature on a lambda or case must be atomic (i.e. a single token or a parenthesised type of some sort). To see why, consider how one would parse this:

\ x :: a -> b -> x

Pattern type signatures can bind existential type variables. For example:

data T = forall a. MkT [a]

f :: T -> T
f (MkT [t::a]) = MkT t3
    where
      t3::[a] = [t,t,t]

Pattern type signatures can be used in pattern bindings:

f x = let (y, z::a) = x in ...
f1 x = let (y, z::Int) = x in ...
f2 (x::(Int,a)) = let (y, z::a) = x in ...
f3 :: (b->b) = \x -> x

In all such cases, the binding is not generalised over the pattern-bound type variables. Thus f3 is monomorphic; f3 has type b -> b for some type b, and not forall b. b -> b. In contrast, the binding

f4 :: b->b
f4 = \x -> x

makes a polymorphic function, but b is not in scope anywhere in f4's scope.
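To make the non-generalisation concrete, here is a hedged sketch (the definition bad is our own, not part of the specification):

f3 :: (b->b) = \x -> x

-- bad = (f3 True, f3 'a')
-- Rejected if uncommented: f3 is monomorphic, so its type variable b
-- cannot be instantiated to both Bool and Char.

With the polymorphic f4 in place of f3, the same definition should be accepted.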
Result type signatures

The result type of a function can be given a signature, thus:

f (x::a) :: [a] = [x,x,x]

The final :: [a] after all the patterns gives a signature to the result type. Sometimes this is the only way of naming the type variable you want:

f :: Int -> [a] -> [a]
f n :: ([a] -> [a]) = let g (x::a, y::a) = (y,x)
                      in \xs -> map g (reverse xs `zip` xs)

The type variables bound in a result type signature scope over the right hand side of the definition. However, consider this corner-case:

rev1 :: [a] -> [a] = \xs -> reverse xs

foo ys = rev1 (ys::[a])

The signature on rev1 is considered a pattern type signature, not a result type signature, and the type variables it binds have the same scope as rev1 itself (i.e. the right-hand side of rev1 and the rest of the module too). In particular, the expression (ys::[a]) is OK, because the type variable a is in scope (otherwise it would mean (ys::forall a.[a]), which would be rejected).

As mentioned above, rev1 is made monomorphic by this scoping rule. For example, the following program would be rejected, because it claims that rev1 is polymorphic:

rev1 :: [b] -> [b]
rev1 :: [a] -> [a] = \xs -> reverse xs

Result type signatures are not yet implemented in Hugs.

Generalised derived instances for newtypes

When you define an abstract type using newtype, you may want the new type to inherit some instances from its representation. In Haskell 98, you can inherit instances of Eq, Ord, Enum and Bounded by deriving them, but for any other classes you have to write an explicit instance declaration. For example, if you define

newtype Dollars = Dollars Int

and you want to use arithmetic on Dollars, you have to explicitly define an instance of Num:

instance Num Dollars where
  Dollars a + Dollars b = Dollars (a+b)
  ...

All the instance does is apply and remove the newtype constructor. It is particularly galling that, since the constructor doesn't appear at run-time, this instance declaration defines a dictionary which is wholly equivalent to the Int dictionary, only slower!

Generalising the deriving clause

GHC now permits such instances to be derived instead, so one can write

newtype Dollars = Dollars Int deriving (Eq,Show,Num)

and the implementation uses the same Num dictionary for Dollars as for Int. Notionally, the compiler derives an instance declaration of the form

instance Num Int => Num Dollars

which just adds or removes the newtype constructor according to the type.

We can also derive instances of constructor classes in a similar way. For example, suppose we have implemented state and failure monad transformers, such that

instance Monad m => Monad (State s m)
instance Monad m => Monad (Failure m)

In Haskell 98, we can define a parsing monad by

type Parser tok m a = State [tok] (Failure m) a

which is automatically a monad thanks to the instance declarations above. With the extension, we can make the parser type abstract, without needing to write an instance of class Monad, via

newtype Parser tok m a = Parser (State [tok] (Failure m) a)
                         deriving Monad

In this case the derived instance declaration is of the form

instance Monad (State [tok] (Failure m)) => Monad (Parser tok m)

Notice that, since Monad is a constructor class, the instance is a partial application of the new type, not the entire left hand side. We can imagine that the type declaration is "eta-converted" to generate the context of the instance declaration.

We can even derive instances of multi-parameter classes, provided the newtype is the last class parameter. In this case, a "partial application" of the class appears in the deriving clause. For example, given the class

class StateMonad s m | m -> s where ...
instance Monad m => StateMonad s (State s m) where ...

then we can derive an instance of StateMonad for Parsers by

newtype Parser tok m a = Parser (State [tok] (Failure m) a)
                         deriving (Monad, StateMonad [tok])

The derived instance is obtained by completing the application of the class to the new type:

instance StateMonad [tok] (State [tok] (Failure m)) =>
         StateMonad [tok] (Parser tok m)
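As a small end-to-end sketch (hedged; the function price is our own illustration), the Dollars example above now needs no hand-written Num instance at all:

newtype Dollars = Dollars Int deriving (Eq, Show, Num)

price :: Dollars
price = Dollars 2 + Dollars 3   -- (+) is Int's addition, rewrapped for free

Because the derived instance reuses the Int dictionary, this addition should compile to the same code as plain Int addition.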
As a result of this extension, all derived instances in newtype declarations are treated uniformly (and implemented just by reusing the dictionary for the representation type), except Show and Read, which really behave differently for the newtype and its representation.

A more precise specification

Derived instance declarations are constructed as follows. Consider the declaration (after expansion of any type synonyms)

newtype T v1...vn = T' (S t1...tk vk+1...vn) deriving (c1...cm)

where S is a type constructor, t1...tk are types, vk+1...vn are type variables which do not occur in any of the ti, and the ci are partial applications of classes of the form C t1'...tj', where the arity of C is exactly j+1. That is, C lacks exactly one type argument. Then, for each ci, the derived instance declaration is:

instance ci (S t1...tk vk+1...vn) => ci (T v1...vp)

where p is chosen so that T v1...vp is of the right kind for the last parameter of class ci.

As an example which does not work, consider

newtype NonMonad m s = NonMonad (State s m s) deriving Monad

Here we cannot derive the instance

instance Monad (State s m) => Monad (NonMonad m)

because the type variable s occurs in State s m, and so cannot be "eta-converted" away. It is a good thing that this deriving clause is rejected, because NonMonad m is not, in fact, a monad --- for the same reason. Try defining >>= with the correct type: you won't be able to.

Notice also that the order of class parameters becomes important, since we can only derive instances for the last one. If the StateMonad class above were instead defined as

class StateMonad m s | m -> s where ...

then we would not have been able to derive an instance for the Parser type above. We hypothesise that multi-parameter classes usually have one "main" parameter for which deriving new instances is most interesting.

Template Haskell

Template Haskell allows you to do compile-time meta-programming in Haskell. There is a "home page" for Template Haskell at http://www.haskell.org/th/, while the background to the main technical innovations is discussed in "Template Meta-programming for Haskell" (Proc Haskell Workshop 2002). The first example from that paper is set out below as a worked example to help get you started.

The documentation here describes the realisation in GHC. (It's rather sketchy just now; Tim Sheard is going to expand it.)

Syntax

Template Haskell has the following new syntactic constructions. You need to use the flag -fglasgow-exts to switch these syntactic extensions on.

A splice is written $x, where x is an identifier, or $(...), where the "..." is an arbitrary expression. There must be no space between the "$" and the identifier or parenthesis. This use of "$" overrides its meaning as an infix operator, just as "M.x" overrides the meaning of "." as an infix operator. If you want the infix operator, put spaces around it.

A splice can occur in place of:
- an expression; the spliced expression must have type Expr
- a list of top-level declarations; the spliced expression must have type Q [Dec]
- a type; the spliced expression must have type Type

(Note that the syntax for a declaration splice uses "$" not "splice" as in the paper. Also the type of the enclosed expression must be Q [Dec], not [Q Dec] as in the paper.)
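The spacing rule matters in practice. Here is a hedged illustration (f, gen and x are our own placeholder names, with gen imported from another module as required below):

y1 = f $ x      -- spaces around "$": the ordinary infix application operator
y2 = $(gen x)   -- "$(" with no space: splice the result of running gen x
y3 = $gen       -- "$" directly on an identifier: also a splice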
A quotation is written in Oxford brackets, thus:
- [| ... |], where the "..." is an expression; the quotation has type Expr.
- [d| ... |], where the "..." is a list of top-level declarations; the quotation has type Q [Dec].
- [t| ... |], where the "..." is a type; the quotation has type Type.

Reification is written thus:
- reifyDecl T, where T is a type constructor; this expression has type Dec.
- reifyDecl C, where C is a class; has type Dec.
- reifyType f, where f is an identifier; has type Typ.
- Still to come: fixities.

Using Template Haskell

The data types and monadic constructor functions for Template Haskell are in the library Language.Haskell.THSyntax.

You can only run a function at compile time if it is imported from another module. That is, you can't define a function in a module, and call it from within a splice in the same module. (It would make sense to do so, but it's hard to implement.)

The flag -ddump-splices shows the expansion of all top-level splices as they happen.

If you are building GHC from source, you need at least a stage-2 bootstrap compiler to run Template Haskell. A stage-1 compiler will reject the TH constructs. Reason: TH compiles and runs a program, and then looks at the result. So it's important that the program it compiles produces results whose representations are identical to those of the compiler itself.

Template Haskell works in any mode (--make, --interactive, or file-at-a-time). There used to be a restriction to the former two, but that restriction has been lifted.

A Template Haskell Worked Example

To help you get over the confidence barrier, try out this skeletal worked example. First cut and paste the two modules below into "Main.hs" and "Printf.hs":

{- Main.hs -}
module Main where

-- Import our template "pr"
import Printf ( pr )

-- The splice operator $ takes the Haskell source code
-- generated at compile time by "pr" and splices it into
-- the argument of "putStrLn".
main = putStrLn ( $(pr "Hello") )


{- Printf.hs -}
module Printf where

-- Skeletal printf from the paper.
-- It needs to be in a separate module to the one where
-- you intend to use it.

-- Import some Template Haskell syntax
import Language.Haskell.THSyntax

-- Describe a format string
data Format = D | S | L String

-- Parse a format string. This is left largely to you
-- as we are here interested in building our first ever
-- Template Haskell program and not in building printf.
parse :: String -> [Format]
parse s = [ L s ]

-- Generate Haskell source code from a parsed representation
-- of the format string. This code will be spliced into
-- the module which calls "pr", at compile time.
gen :: [Format] -> Expr
gen [D]   = [| \n -> show n |]
gen [S]   = [| \s -> s |]
gen [L s] = string s

-- Here we generate the Haskell code for the splice
-- from an input format string.
pr :: String -> Expr
pr s = gen (parse s)

Now run the compiler (here we are using a "stage three" build of GHC, at a Cygwin prompt on Windows):

ghc/compiler/stage3/ghc-inplace --make -fglasgow-exts -package haskell-src Main.hs -o main.exe

Run "main.exe" and here is your output:

$ ./main
Hello

Arrow notation

Arrows are a generalization of monads introduced by John Hughes. For more details, see

- “Generalising Monads to Arrows”, John Hughes, in Science of Computer Programming 37, pp67–111, May 2000.
- “A New Notation for Arrows”, Ross Paterson, in ICFP, Sep 2001.
- “Arrows and Computation”, Ross Paterson, in The Fun of Programming, Palgrave, 2003,
and the arrows web page at http://www.haskell.org/arrows/. With the appropriate flag enabled, GHC supports the arrow notation described in the second of these papers. What follows is a brief introduction to the notation; it won't make much sense unless you've read Hughes's paper. This notation is translated to ordinary Haskell, using combinators from the Control.Arrow module.

The extension adds a new kind of expression for defining arrows, of the form proc pat -> cmd, where proc is a new keyword. The variables of the pattern are bound in the body of the proc-expression, which is a new sort of thing called a command. The syntax of commands is as follows:

cmd ::= exp1 -< exp2
     |  exp1 -<< exp2
     |  do { cstmt1 .. cstmtn ; cmd }
     |  let decls in cmd
     |  if exp then cmd1 else cmd2
     |  case exp of { calts }
     |  cmd1 qop cmd2
     |  (| aexp cmd1 .. cmdn |)
     |  \ pat1 .. patn -> cmd
     |  cmd aexp
     |  ( cmd )

cstmt ::= let decls
       |  pat <- cmd
       |  rec { cstmt1 .. cstmtn }
       |  cmd

Commands produce values, but (like monadic computations) may yield more than one value, or none, and may do other things as well. For the most part, familiarity with monadic notation is a good guide to using commands. However the values of expressions, even monadic ones, are determined by the values of the variables they contain; this is not necessarily the case for commands.

A simple example of the new notation is the expression

proc x -> f -< x+1

We call this a procedure or arrow abstraction. As with a lambda expression, the variable x is a new variable bound within the proc-expression. It refers to the input to the arrow. In the above example, -< is not an identifier but a new reserved symbol used for building commands from an expression of arrow type and an expression to be fed as input to that arrow. (The weird look will make more sense later.) It may be read as an analogue of application for arrows. The above example is equivalent to the Haskell expression

arr (\ x -> x+1) >>> f

That would make no sense if the expression to the left of -< involved the bound variable x. More generally, the expression to the left of -< may not involve any local variable, i.e. a variable bound in the current arrow abstraction. For such a situation there is a variant -<<, as in

proc x -> f x -<< x+1

which is equivalent to

arr (\ x -> (f x, x+1)) >>> app

so in this case the arrow must belong to the ArrowApply class. Such an arrow is equivalent to a monad, so if you're using this form you may find a monadic formulation more convenient.

do-notation for commands

Another form of command is a form of do-notation. For example, you can write

proc x -> do
        y <- f -< x+1
        g -< 2*y
        let z = x+y
        t <- h -< x*z
        returnA -< t+z

You can read this much like ordinary do-notation, but with commands in place of monadic expressions. The first line sends the value of x+1 as an input to the arrow f, and matches its output against y. In the next line, the output of g is discarded. The arrow returnA is defined in the Control.Arrow module as arr id. The above example is treated as an abbreviation for

arr (\ x -> (x, x)) >>>
        first (arr (\ x -> x+1) >>> f) >>>
        arr (\ (y, x) -> (y, (x, y))) >>>
        first (arr (\ y -> 2*y) >>> g) >>>
        arr snd >>>
        arr (\ (x, y) -> let z = x+y in ((x, z), z)) >>>
        first (arr (\ (x, z) -> x*z) >>> h) >>>
        arr (\ (t, z) -> t+z) >>>
        returnA

Note that variables not used later in the composition are projected out.
After simplification using rewrite rules (see the Rewrite rules section below) defined in the Control.Arrow module, this reduces to

arr (\ x -> (x+1, x)) >>>
        first f >>>
        arr (\ (y, x) -> (2*y, (x, y))) >>>
        first g >>>
        arr (\ (_, (x, y)) -> let z = x+y in (x*z, z)) >>>
        first h >>>
        arr (\ (t, z) -> t+z)

which is what you might have written by hand. With arrow notation, GHC keeps track of all those tuples of variables for you.

Note that although the above translation suggests that let-bound variables like z must be monomorphic, the actual translation produces Core, so polymorphic variables are allowed.

It's also possible to have mutually recursive bindings, using the new rec keyword, as in the following example:

counter :: ArrowCircuit a => a Bool Int
counter = proc reset -> do
        rec output <- returnA -< if reset then 0 else next
            next <- delay 0 -< output+1
        returnA -< output

The translation of such forms uses the loop combinator, so the arrow concerned must belong to the ArrowLoop class.

Conditional commands

In the previous example, we used a conditional expression to construct the input for an arrow. Sometimes we want to conditionally execute different commands, as in

proc (x,y) ->
        if f x y
        then g -< x+1
        else h -< y+2

which is translated to

arr (\ (x,y) -> if f x y then Left x else Right y) >>>
        (arr (\x -> x+1) >>> g) ||| (arr (\y -> y+2) >>> h)

Since the translation uses |||, the arrow concerned must belong to the ArrowChoice class.

There are also case commands, like

case input of
    []       -> f -< ()
    [x]      -> g -< x+1
    x1:x2:xs -> do
        y <- h -< (x1, x2)
        ys <- k -< xs
        returnA -< y:ys

The syntax is the same as for case expressions, except that the bodies of the alternatives are commands rather than expressions. The translation is similar to that of if commands.

Defining your own control structures

As we've seen, arrow notation provides constructs, modelled on those for expressions, for sequencing, value recursion and conditionals. But suitable combinators, which you can define in ordinary Haskell, may also be used to build new commands out of existing ones. The basic idea is that a command defines an arrow from environments to values. These environments assign values to the free local variables of the command. Thus combinators that produce arrows from arrows may also be used to build commands from commands. For example, the ArrowPlus class includes a combinator

ArrowPlus a => (<+>) :: a e c -> a e c -> a e c

so we can use it to build commands:

expr' = proc x ->
                returnA -< x
        <+> do
                symbol Plus -< ()
                y <- term -< ()
                expr' -< x + y
        <+> do
                symbol Minus -< ()
                y <- term -< ()
                expr' -< x - y

This is equivalent to

expr' = (proc x -> returnA -< x)
        <+> (proc x -> do
                symbol Plus -< ()
                y <- term -< ()
                expr' -< x + y)
        <+> (proc x -> do
                symbol Minus -< ()
                y <- term -< ()
                expr' -< x - y)

It is essential that this operator be polymorphic in e (representing the environment input to the command and thence to its subcommands) and satisfy the corresponding naturality property

arr k >>> (f <+> g) = (arr k >>> f) <+> (arr k >>> g)

at least for strict k. (This should be automatic if you're not using seq.) This ensures that environments seen by the subcommands are environments of the whole command, and also allows the translation to safely trim these environments. The operator must also not use any variable defined within the current arrow abstraction.
We could define our own operator

untilA :: ArrowChoice a => a e () -> a e Bool -> a e ()
untilA body cond = proc x -> do
        b <- cond -< x
        if b then returnA -< ()
             else do
                body -< x
                untilA body cond -< x

and use it in the same way. Of course this infix syntax only makes sense for binary operators; there is also a more general syntax involving special brackets:

proc x -> do
        y <- f -< x+1
        (|untilA (increment -< x+y) (within 0.5 -< x)|)

Primitive constructs

Some operators will need to pass additional inputs to their subcommands. For example, in an arrow type supporting exceptions, the operator that attaches an exception handler will wish to pass the exception that occurred to the handler. Such an operator might have a type

handleA :: ... => a e c -> a (e,Ex) c -> a e c

where Ex is the type of exceptions handled. You could then use this with arrow notation by writing a command

body `handleA` \ ex -> handler

so that if an exception is raised in the command body, the variable ex is bound to the value of the exception and the command handler, which typically refers to ex, is entered. Though the syntax here looks like a functional lambda, we are talking about commands, and something different is going on. The input to the arrow represented by a command consists of values for the free local variables in the command, plus a stack of anonymous values. In all the prior examples, this stack was empty. In the second argument to handleA, this stack consists of one value, the value of the exception. The command form of lambda merely gives this value a name.

More concretely, the values on the stack are paired to the right of the environment. So when designing operators like handleA that pass extra inputs to their subcommands, the type of each argument of the operator (and of its result) should have the form

a (...(e,t1), ... tn) t

where e is a polymorphic variable (representing the environment) and ti are the types of the values on the stack, with t1 being the top. The polymorphic variable e must not occur in a, ti or t. However the arrows involved need not be the same. Here are some more examples of suitable operators:

bracketA  :: ... => a e b -> a (e,b) c -> a (e,c) d -> a e d
runReader :: ... => a e c -> a' (e,State) c
runState  :: ... => a e c -> a' (e,State) (c,State)

We can supply the extra input required by commands built with the last two by applying them to ordinary expressions, as in

proc x -> do
        s <- ...
        (|runReader (do { ... })|) s

which adds s to the stack of inputs to the command built using runReader.

The command versions of lambda abstraction and application are analogous to the expression versions. In particular, the beta and eta rules describe equivalences of commands. These three features (operators, lambda abstraction and application) are the core of the notation; everything else can be built using them, though the results would be somewhat clumsy. For example, we could simulate do-notation by defining

bind :: Arrow a => a e b -> a (e,b) c -> a e c
u `bind` f = returnA &&& u >>> f

bind_ :: Arrow a => a e b -> a e c -> a e c
u `bind_` f = u `bind` (arr fst >>> f)

We could simulate if by defining

cond :: ArrowChoice a => a e b -> a e b -> a (e,Bool) b
cond f g = arr (\ (e,b) -> if b then Left e else Right e) >>> f ||| g

Differences with the paper

Instead of a single form of arrow application (arrow tail) with two translations, the implementation provides two forms -< (first-order) and -<< (higher-order).

User-defined operators are flagged with banana brackets instead of a new form keyword.
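As a compact summary of the notation, here is a hedged sketch (addA is our own illustrative combinator, not part of Control.Arrow): an arrow that feeds the same input to two arrows and adds their outputs, written first in the notation and then in the combinator form it translates to.

import Control.Arrow

addA :: Arrow a => a b Int -> a b Int -> a b Int
addA f g = proc x -> do
        y <- f -< x
        z <- g -< x
        returnA -< y + z

-- Roughly the combinator translation:
-- addA f g = f &&& g >>> arr (\ (y, z) -> y + z)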
Portability

Although only GHC implements arrow notation directly, there is also a preprocessor (available from the arrows web page) that translates arrow notation into Haskell 98 for use with other Haskell systems. You would still want to check arrow programs with GHC; tracing type errors in the preprocessor output is not easy. Modules intended for both GHC and the preprocessor must observe some additional restrictions:

- The module must import Control.Arrow.
- The preprocessor cannot cope with other Haskell extensions. These would have to go in separate modules.
- Because the preprocessor targets Haskell (rather than Core), let-bound variables are monomorphic.

Assertions

If you want to make use of assertions in your standard Haskell code, you could define a function like the following:

assert :: Bool -> a -> a
assert False x = error "assertion failed!"
assert _     x = x

which works, but gives you back a less than useful error message -- an assertion failed, but which and where? One way out is to define an extended assert function that also takes a descriptive string to include in the error message, perhaps combined with a pre-processor that inserts the source location where assert was used.

GHC offers a helping hand here, doing all of this for you. For every use of assert in the user's source, such as

kelvinToC :: Double -> Double
kelvinToC k = assert (k >= 0.0) (k+273.15)

GHC will rewrite the application to also include the source location where the assertion was made:

assert pred val ==> assertError "Main.hs|15" pred val

The rewrite is only performed by the compiler when it spots applications of Control.Exception.assert, so you can still define and use your own versions of assert, should you so wish. If not, import Control.Exception to make use of assert in your code.

To have the compiler ignore uses of assert, use the -fignore-asserts compiler option. That is, expressions of the form assert pred e will be rewritten to e.

Assertion failures can be caught; see the documentation for the Control.Exception library for the details.
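For instance, here is a hedged sketch of catching an assertion failure, assuming the Control.Exception interface of this era (try :: IO a -> IO (Either Exception a)):

import Control.Exception

main :: IO ()
main = do
        -- force the asserted value so the failure is raised inside try
        r <- try (evaluate (assert False (0 :: Int)))
        case r of
            Left err -> putStrLn ("caught: " ++ show err)
            Right n  -> print n

Compiled without -fignore-asserts, this should print a message naming the source location of the failed assertion.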
Pragmas

GHC supports several pragmas, or instructions to the compiler placed in the source code. Pragmas don't normally affect the meaning of the program, but they might affect the efficiency of the generated code.

Pragmas all take the form

{-# word ... #-}

where word indicates the type of pragma, and is followed optionally by information specific to that type of pragma. Case is ignored in word. The various values for word that GHC understands are described in the following sections; any pragma encountered with an unrecognised word is (silently) ignored.

DEPRECATED pragma

The DEPRECATED pragma lets you specify that a particular function, class, or type, is deprecated. There are two forms.

You can deprecate an entire module thus:

module Wibble {-# DEPRECATED "Use Wobble instead" #-} where
  ...

When you compile any module that imports Wibble, GHC will print the specified message.

You can deprecate a function, class, or type, with the following top-level declaration:

{-# DEPRECATED f, C, T "Don't use these" #-}

When you compile any module that imports and uses any of the specified entities, GHC will print the specified message.

You can suppress the warnings with the flag -fno-warn-deprecations.

INLINE and NOINLINE pragmas

These pragmas control the inlining of function definitions.

INLINE pragma

GHC (with optimisation on, as always) tries to inline (or “unfold”) functions/values that are “small enough,” thus avoiding the call overhead and possibly exposing other more-wonderful optimisations. Normally, if GHC decides a function is “too expensive” to inline, it will not do so, nor will it export that unfolding for other modules to use.

The sledgehammer you can bring to bear is the INLINE pragma, used thusly:

key_function :: Int -> String -> (Bool, Double)

#ifdef __GLASGOW_HASKELL__
{-# INLINE key_function #-}
#endif

(You don't need to do the C pre-processor carry-on unless you're going to stick the code through HBC—it doesn't like INLINE pragmas.)

The major effect of an INLINE pragma is to declare a function's “cost” to be very low. The normal unfolding machinery will then be very keen to inline it. Syntactically, an INLINE pragma for a function can be put anywhere its type signature could be put.

INLINE pragmas are a particularly good idea for the then/return (or bind/unit) functions in a monad. For example, in GHC's own UniqueSupply monad code, we have:

#ifdef __GLASGOW_HASKELL__
{-# INLINE thenUs #-}
{-# INLINE returnUs #-}
#endif

See also the NOINLINE pragma, described next.

NOINLINE pragma

The NOINLINE pragma does exactly what you'd expect: it stops the named function from being inlined by the compiler. You shouldn't ever need to do this, unless you're very cautious about code size. NOTINLINE is a synonym for NOINLINE (NOTINLINE is specified by Haskell 98 as the standard way to disable inlining, so it should be used if you want your code to be portable).

Phase control

Sometimes you want to control exactly when in GHC's pipeline the INLINE pragma is switched on. Inlining happens only during runs of the simplifier. Each run of the simplifier has a different phase number; the phase number decreases towards zero. If you ask for verbose simplifier output you'll see the sequence of phase numbers for successive runs of the simplifier. In an INLINE pragma you can optionally specify a phase number, thus:

You can say "inline f in Phase 2 and all subsequent phases":

{-# INLINE [2] f #-}

You can say "inline g in all phases up to, but not including, Phase 3":

{-# INLINE [~3] g #-}

If you omit the phase indicator, you mean "inline in all phases".

You can use a phase number on a NOINLINE pragma too:

You can say "do not inline f until Phase 2; in Phase 2 and subsequently behave as if there was no pragma at all":

{-# NOINLINE [2] f #-}

You can say "do not inline g in Phase 3 or any subsequent phase; before that, behave as if there was no pragma":

{-# NOINLINE [~3] g #-}

If you omit the phase indicator, you mean "never inline this function".

The same phase-numbering control is available for RULES (see the Rewrite rules section below).

LINE pragma

This pragma is similar to C's #line pragma, and is mainly for use in automatically generated Haskell code. It lets you specify the line number and filename of the original code; for example

{-# LINE 42 "Foo.vhs" #-}

if you'd generated the current file from something called Foo.vhs and this line corresponds to line 42 in the original. GHC will adjust its error messages to refer to the line/file named in the LINE pragma.

OPTIONS pragma

The OPTIONS pragma is used to specify additional options that are given to the compiler when compiling this source file. See the documentation of command-line flags for details.
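As a hedged illustration (the choice of flags is ours), an OPTIONS pragma at the top of a source file might look like this:

{-# OPTIONS -fglasgow-exts -fno-implicit-prelude #-}
module Foo where

This is equivalent to passing the same flags on the command line when compiling this particular file.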
RULES pragma

The RULES pragma lets you specify rewrite rules. It is described in the Rewrite rules section below.

SPECIALIZE pragma

(The UK spelling SPECIALISE is also accepted.) For key overloaded functions, you can create extra versions (NB: more code space) specialised to particular types. Thus, if you have an overloaded function

hammeredLookup :: Ord key => [(key, value)] -> key -> value

and it is heavily used on lists with Widget keys, you could specialise it as follows:

{-# SPECIALIZE hammeredLookup :: [(Widget, value)] -> Widget -> value #-}

A SPECIALIZE pragma for a function can be put anywhere its type signature could be put.

A SPECIALIZE pragma has the effect of generating (a) a specialised version of the function and (b) a rewrite rule (see the Rewrite rules section below) that rewrites a call to the un-specialised function into a call to the specialised one.

You can, instead, provide your own specialised function and your own rewrite rule. For example, suppose that

genericLookup :: Ord a => Table a b -> a -> b
intLookup     :: Table Int b -> Int -> b

where intLookup is an implementation of genericLookup that works very fast for keys of type Int. Then you can write the rule

{-# RULES "intLookup" genericLookup = intLookup #-}

(see the Rewrite rules section below). It is Your Responsibility to make sure that intLookup really behaves as a specialised version of genericLookup!!!

An example in which using RULES for specialisation will Win Big:

toDouble :: Real a => a -> Double
toDouble = fromRational . toRational

{-# RULES "toDouble/Int" toDouble = i2d #-}
i2d (I# i) = D# (int2Double# i) -- uses Glasgow prim-op directly

The i2d function is virtually one machine instruction; the default conversion—via an intermediate Rational—is obscenely expensive by comparison.

SPECIALIZE instance pragma

Same idea, except for instance declarations. For example:

instance (Eq a) => Eq (Foo a) where {
  {-# SPECIALIZE instance Eq (Foo [(Int, Bar)]) #-}
  ... usual stuff ...
}

The pragma must occur inside the where part of the instance declaration. Compatible with HBC, by the way, except perhaps in the placement of the pragma.

Rewrite rules

The programmer can specify rewrite rules as part of the source program (in a pragma). GHC applies these rewrite rules wherever it can, provided (a) the -fglasgow-exts flag is on, and (b) rewrite rules have not been switched off by a flag. Here is an example:

{-# RULES
      "map/map"    forall f g xs. map f (map g xs) = map (f.g) xs
  #-}

Syntax

From a syntactic point of view:

There may be zero or more rules in a RULES pragma.

Each rule has a name, enclosed in double quotes. The name itself has no significance at all. It is only used when reporting how many times the rule fired.

A rule may optionally have a phase-control number (see Phase control above), immediately after the name of the rule. Thus:

{-# RULES
      "map/map" [2]  forall f g xs. map f (map g xs) = map (f.g) xs
  #-}

The "[2]" means that the rule is active in Phase 2 and subsequent phases. The inverse notation "[~2]" is also accepted, meaning that the rule is active up to, but not including, Phase 2.

Layout applies in a RULES pragma. Currently no new indentation level is set, so you must lay out your rules starting in the same column as the enclosing definitions.

Each variable mentioned in a rule must either be in scope (e.g. map), or bound by the forall (e.g. f, g, xs). The variables bound by the forall are called the pattern variables. They are separated by spaces, just like in a type forall.
A pattern variable may optionally have a type signature. If the type of the pattern variable is polymorphic, it must have a type signature. For example, here is the foldr/build rule:

"fold/build"  forall k z (g :: forall b. (a->b->b) -> b -> b) .
              foldr k z (build g) = g k z

Since g has a polymorphic type, it must have a type signature.

The left hand side of a rule must consist of a top-level variable applied to arbitrary expressions. For example, this is not OK:

"wrong1"  forall e1 e2.  case True of { True -> e1; False -> e2 } = e1
"wrong2"  forall f.      f True = True

In "wrong1", the LHS is not an application; in "wrong2", the LHS has a pattern variable in the head.

A rule does not need to be in the same module as (any of) the variables it mentions, though of course they need to be in scope.

Rules are automatically exported from a module, just as instance declarations are.

Semantics

From a semantic point of view:

Rules are only applied if you compile with optimisation on.

Rules are regarded as left-to-right rewrite rules. When GHC finds an expression that is a substitution instance of the LHS of a rule, it replaces the expression by the (appropriately-substituted) RHS. By "a substitution instance" we mean that the LHS can be made equal to the expression by substituting for the pattern variables.

The LHS and RHS of a rule are typechecked, and must have the same type.

GHC makes absolutely no attempt to verify that the LHS and RHS of a rule have the same meaning. That is undecidable in general, and infeasible in most interesting cases. The responsibility is entirely the programmer's!

GHC makes no attempt to make sure that the rules are confluent or terminating. For example:

"loop"  forall x y.  f x y = f y x

This rule will cause the compiler to go into an infinite loop.

If more than one rule matches a call, GHC will choose one arbitrarily to apply.

GHC currently uses a very simple, syntactic, matching algorithm for matching a rule LHS with an expression. It seeks a substitution which makes the LHS and expression syntactically equal modulo alpha conversion. The pattern (rule), but not the expression, is eta-expanded if necessary. (Eta-expanding the expression can lead to laziness bugs.) But no beta conversion is performed (that's called higher-order matching).

Matching is carried out on GHC's intermediate language, which includes type abstractions and applications. So a rule only matches if the types match too. See below.

GHC keeps trying to apply the rules as it optimises the program. For example, consider:

let s = map f
    t = map g
in s (t xs)

The expression s (t xs) does not match the rule "map/map", but GHC will substitute for s and t, giving an expression which does match. If s or t was (a) used more than once, and (b) large or a redex, then it would not be substituted, and the rule would not fire.

In the earlier phases of compilation, GHC inlines nothing that appears on the LHS of a rule, because once you have substituted for something you can't match against it (given the simple minded matching). So if you write the rule

"map/map"  forall f g.  map f . map g = map (f.g)

this won't match the expression map f (map g xs). It will only match something written with explicit use of ".". Well, not quite. It will match the expression

wibble f g xs

where wibble is defined:

wibble f g = map f . map g

because wibble will be inlined (it's small). Later on in compilation, GHC starts inlining even things on the LHS of rules, but still leaves the rules enabled. This inlining policy is controlled by a per-simplification-pass flag (taking a phase number n).
All rules are implicitly exported from the module, and are therefore in force in any module that imports the module that defined the rule, directly or indirectly. (That is, if A imports B, which imports C, then C's rules are in force when compiling A.) The situation is very similar to that for instance declarations.

List fusion

The RULES mechanism is used to implement fusion (deforestation) of common list functions. If a "good consumer" consumes an intermediate list constructed by a "good producer", the intermediate list should be eliminated entirely.

The following are good producers:

- List comprehensions
- Enumerations of Int and Char (e.g. ['a'..'z'])
- Explicit lists (e.g. [True, False])
- The cons constructor (e.g. 3:4:[])
- ++
- map
- filter
- iterate, repeat
- zip, zipWith

The following are good consumers:

- List comprehensions
- array (on its second argument)
- length
- ++ (on its first argument)
- foldr
- map
- filter
- concat
- unzip, unzip2, unzip3, unzip4
- zip, zipWith (but on one argument only; if both are good producers, zip will fuse with one but not the other)
- partition
- head
- and, or, any, all
- sequence_
- msum
- sortBy

So, for example, the following should generate no intermediate lists:

array (1,10) [(i,i*i) | i <- map (+ 1) [0..9]]

This list could readily be extended; if there are Prelude functions that you use a lot which are not included, please tell us.

If you want to write your own good consumers or producers, look at the Prelude definitions of the above functions to see how to do so.

Specialisation

Rewrite rules can be used to get the same effect as a feature present in earlier versions of GHC:

{-# SPECIALIZE fromIntegral :: Int8 -> Int16 = int8ToInt16 #-}

This told GHC to use int8ToInt16 instead of fromIntegral whenever the latter was called with type Int8 -> Int16. That is, rather than specialising the original definition of fromIntegral the programmer is promising that it is safe to use int8ToInt16 instead. This feature is no longer in GHC. But rewrite rules let you do the same thing:

{-# RULES "fromIntegral/Int8/Int16" fromIntegral = int8ToInt16 #-}

This slightly odd-looking rule instructs GHC to replace fromIntegral by int8ToInt16 whenever the types match. Speaking more operationally, GHC adds the type and dictionary applications to get the typed rule

forall (d1::Integral Int8) (d2::Num Int16) .
  fromIntegral Int8 Int16 d1 d2 = int8ToInt16

What is more, this rule does not need to be in the same file as fromIntegral, unlike the SPECIALISE pragmas which currently do (so that they have an original definition available to specialise).

Controlling what's going on

Use -ddump-rules to see what transformation rules GHC is using. Use -ddump-simpl-stats to see what rules are being fired. If you add -dppr-debug you get a more detailed listing.

The definition of (say) build in GHC/Base.lhs looks like this:

build   :: forall a. (forall b. (a -> b -> b) -> b -> b) -> [a]
{-# INLINE build #-}
build g = g (:) []

Notice the INLINE! That prevents (:) from being inlined when compiling PrelBase, so that an importing module will “see” the (:), and can match it on the LHS of a rule. INLINE prevents any inlining from happening in the RHS of the INLINE'd thing. I regret the delicacy of this.

In libraries/base/GHC/Base.lhs look at the rules for map to see how to write rules that will do fusion and yet give an efficient program even if fusion doesn't happen. More rules in GHC/List.lhs.
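Putting the pieces together, here is a hedged sketch (our own example) of a pipeline built entirely from good producers and good consumers; with optimisation on, the intermediate lists should be eliminated, which you can check with the rule-firing output described above:

sumSquares :: Int -> Int
sumSquares n = foldr (+) 0 (map (\x -> x*x) [1..n])
-- [1..n] is a good producer, map is both a good consumer and a good
-- producer, and foldr is a good consumer, so no intermediate list
-- need ever be built.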
CORE pragma

The external core format supports Note annotations; the CORE pragma gives a way to specify what these should be in your Haskell source code. Syntactically, core annotations are attached to expressions and take a Haskell string literal as an argument. The following function definition shows an example:

f x = ({-# CORE "foo" #-} show) ({-# CORE "bar" #-} x)

Semantically, this is equivalent to:

g x = show x

However, when external core is generated, there will be Notes attached to the expressions show and x. The core function declaration for f is:

  f :: %forall a . GHCziShow.ZCTShow a ->
                   a -> GHCziBase.ZMZN GHCziBase.Char =
    \ @ a (zddShow::GHCziShow.ZCTShow a) (eta::a) ->
        (%note "foo"
         %case zddShow %of (tpl::GHCziShow.ZCTShow a)
           {GHCziShow.ZCDShow
            (tpl1::GHCziBase.Int ->
                   a -> GHCziBase.ZMZN GHCziBase.Char -> GHCziBase.ZMZN GHCziBase.Char)
            (tpl2::a -> GHCziBase.ZMZN GHCziBase.Char)
            (tpl3::GHCziBase.ZMZN a ->
                   GHCziBase.ZMZN GHCziBase.Char -> GHCziBase.ZMZN GHCziBase.Char) ->
              tpl2})
        (%note "bar"
         eta);

Here, we can see that the function show (which has been expanded out to a case expression over the Show dictionary) has a %note attached to it, as does the expression eta (which used to be called x).

Generic classes

(Note: support for generic classes is currently broken in GHC 5.02.)

The ideas behind this extension are described in detail in "Derivable type classes", Ralf Hinze and Simon Peyton Jones, Haskell Workshop, Montreal Sept 2000, pp94-105. An example will give the idea:

import Generics

class Bin a where
  toBin   :: a -> [Int]
  fromBin :: [Int] -> (a, [Int])

  toBin {| Unit |}    Unit      = []
  toBin {| a :+: b |} (Inl x)   = 0 : toBin x
  toBin {| a :+: b |} (Inr y)   = 1 : toBin y
  toBin {| a :*: b |} (x :*: y) = toBin x ++ toBin y

  fromBin {| Unit |}    bs     = (Unit, bs)
  fromBin {| a :+: b |} (0:bs) = (Inl x, bs')    where (x,bs') = fromBin bs
  fromBin {| a :+: b |} (1:bs) = (Inr y, bs')    where (y,bs') = fromBin bs
  fromBin {| a :*: b |} bs     = (x :*: y, bs'') where (x,bs' ) = fromBin bs
                                                       (y,bs'') = fromBin bs'

This class declaration explains how toBin and fromBin work for arbitrary data types. They do so by giving cases for unit, product, and sum, which are defined thus in the library module Generics:

data Unit    = Unit
data a :+: b = Inl a | Inr b
data a :*: b = a :*: b

Now you can make a data type into an instance of Bin like this:

instance (Bin a, Bin b) => Bin (a,b)
instance Bin a => Bin [a]

That is, just leave off the "where" clause. Of course, you can put in the where clause and over-ride whichever methods you please.

Using generics

To use generics you need to:

- Use the flags -fglasgow-exts (to enable the extra syntax), -fgenerics (to generate extra per-data-type code), and -package lang (to make the Generics library available).
- Import the module Generics from the lang package. This import brings into scope the data types Unit, :*:, and :+:. (You don't need this import if you don't mention these types explicitly; for example, if you are simply giving instance declarations.)

Changes wrt the paper

Note that the type constructors :+: and :*: can be written infix (indeed, you can now use any operator starting with a colon as an infix type constructor). Also note that the type constructors are not exactly as in the paper (Unit instead of 1, etc). Finally, note that the syntax of the type patterns in the class declaration uses "{|" and "|}" brackets; curly braces alone would be ambiguous when they appear on right hand sides (an extension we anticipate wanting).

Terminology and restrictions

Terminology. A "generic default method" in a class declaration is one that is defined using type patterns as above. A "polymorphic default method" is a default method defined as in Haskell 98.
A "generic class declaration" is a class declaration with at least one generic default method. Restrictions: Alas, we do not yet implement the stuff about constructor names and field labels. A generic class can have only one parameter; you can't have a generic multi-parameter class. A default method must be defined entirely using type patterns, or entirely without. So this is illegal: class Foo a where op :: a -> (a, Bool) op {| Unit |} Unit = (Unit, True) op x = (x, False) However it is perfectly OK for some methods of a generic class to have generic default methods and others to have polymorphic default methods. The type variable(s) in the type pattern for a generic method declaration scope over the right hand side. So this is legal (note the use of the type variable ``p'' in a type signature on the right hand side: class Foo a where op :: a -> Bool op {| p :*: q |} (x :*: y) = op (x :: p) ... The type patterns in a generic default method must take one of the forms: a :+: b a :*: b Unit where "a" and "b" are type variables. Furthermore, all the type patterns for a single type constructor (:*:, say) must be identical; they must use the same type variables. So this is illegal: class Foo a where op :: a -> Bool op {| a :+: b |} (Inl x) = True op {| p :+: q |} (Inr y) = False The type patterns must be identical, even in equations for different methods of the class. So this too is illegal: class Foo a where op1 :: a -> Bool op1 {| a :*: b |} (x :*: y) = True op2 :: a -> Bool op2 {| p :*: q |} (x :*: y) = False (The reason for this restriction is that we gather all the equations for a particular type consructor into a single generic instance declaration.) A generic method declaration must give a case for each of the three type constructors. The type for a generic method can be built only from: Function arrows Type variables Tuples Arbitrary types not involving type variables Here are some example type signatures for generic methods: op1 :: a -> Bool op2 :: Bool -> (a,Bool) op3 :: [Int] -> a -> a op4 :: [a] -> Bool Here, op1, op2, op3 are OK, but op4 is rejected, because it has a type variable inside a list. This restriction is an implementation restriction: we just havn't got around to implementing the necessary bidirectional maps over arbitrary type constructors. It would be relatively easy to add specific type constructors, such as Maybe and list, to the ones that are allowed. In an instance declaration for a generic class, the idea is that the compiler will fill in the methods for you, based on the generic templates. However it can only do so if The instance type is simple (a type constructor applied to type variables, as in Haskell 98). No constructor of the instance type has unboxed fields. (Of course, these things can only arise if you are already using GHC extensions.) However, you can still give an instance declarations for types which break these rules, provided you give explicit code to override any generic default methods. The option dumps incomprehensible stuff giving details of what the compiler does with generic declarations. Another example Just to finish with, here's another example I rather like: class Tag a where nCons :: a -> Int nCons {| Unit |} _ = 1 nCons {| a :*: b |} _ = 1 nCons {| a :+: b |} _ = nCons (bot::a) + nCons (bot::b) tag :: a -> Int tag {| Unit |} _ = 1 tag {| a :*: b |} _ = 1 tag {| a :+: b |} (Inl x) = tag x tag {| a :+: b |} (Inr y) = nCons (bot::a) + tag y