+It really is important to switch off inlining inside such
+expressions.  Consider the following example:
+
+ let f = \pq -> BIG
+ in
+ let g = \y -> f y y
+ {-# INLINE g #-}
+ in ...g...g...g...g...g...
+
+Now, if that's the ONLY occurrence of f, it will be inlined inside g,
+and thence copied multiple times when g is inlined.
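+
+For instance, inlining f into g's INLINE unfolding gives (a
+hypothetical sketch of the result, after beta-reduction):
+
+     let g = \y -> BIG[p:=y, q:=y]
+         {-# INLINE g #-}
+     in ...g...g...g...g...g...
+
+and now BIG is duplicated at all five call sites of g.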
+
+
+This function may be inlined in other modules, so we
+don't want to remove (by inlining) calls to functions that have
+specialisations, or that may have transformation rules in an importing
+scope.
+
+E.g. {-# INLINE f #-}
+ f x = ...g...
+
+and suppose that g is strict *and* has specialisations. If we inline
+g's wrapper, we deny f the chance of getting the specialised version
+of g when f is inlined at some call site (perhaps in some other
+module).
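+
+Concretely (a hypothetical sketch; the SPECIALISE pragma stands for
+any source of specialisations for g):
+
+     {-# INLINE f #-}
+     f :: Num a => a -> a
+     f x = g x + 1
+
+     {-# SPECIALISE g :: Int -> Int #-}
+     g :: Num a => a -> a
+     g y = y * y
+
+If g's wrapper is inlined into f's unfolding, a call like (f (3::Int))
+in another module can no longer be rewritten to use the Int-specialised
+version of g.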
+
+It's also important not to inline a worker back into a wrapper.
+A wrapper looks like
+ wrapper = inline_me (\x -> ...worker... )
+Normally, the inline_me prevents the worker getting inlined into
+the wrapper (initially, the worker's only call site!). But,
+if the wrapper is sure to be called, the strictness analyser will
+mark it 'demanded', so when the RHS is simplified, it'll get an ArgOf
+continuation. That's why the keep_inline predicate returns True for
+ArgOf continuations. It shouldn't do any harm not to dissolve the
+inline-me note under these circumstances.
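+
+For reference, after the worker/wrapper split a strict function looks
+roughly like this (a hypothetical sketch; $wf is the conventional name
+GHC gives the worker):
+
+     f :: Int -> Int                      -- wrapper: small, inlinable
+     f x = case x of I# x# -> I# ($wf x#)
+
+     $wf :: Int# -> Int#                  -- worker: the original big RHS
+     $wf x# = ...
+
+Inlining the worker back into the wrapper would just reconstruct the
+original function and lose the benefit of the split.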
+
+Note that the result is that we do very little simplification
+inside an InlineMe.
+
+ all xs = foldr (&&) True xs
+ any p = all . map p {-# INLINE any #-}
+
+Problem: any won't get deforested, and so if it's exported and the
+importer doesn't use the inlining (e.g. passes it as an argument), then
+we won't get deforestation at all.  We haven't solved this problem yet!
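+
+(For the record, the deforested result we would like is, roughly:
+
+     any p xs = foldr (\x r -> p x && r) True xs
+
+with no intermediate list built by map.)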
+
+
+preInlineUnconditionally
+~~~~~~~~~~~~~~~~~~~~~~~~
+@preInlineUnconditionally@ examines a bndr to see if it is used just
+once in a completely safe way, so that it is safe to discard the
+binding and inline its RHS at the (unique) usage site, REGARDLESS of
+how big the RHS might be.  If this is the case we don't simplify the
+RHS first, but just inline it un-simplified.
+
+This is much better than first simplifying a perhaps-huge RHS and then
+inlining and re-simplifying it. Indeed, it can be at least quadratically
+better. Consider
+
+ x1 = e1
+ x2 = e2[x1]
+ x3 = e3[x2]
+ ...etc...
+ xN = eN[xN-1]
+
+We may end up simplifying e1 N times, e2 N-1 times, e3 N-2 times, and
+so on: roughly N^2/2 simplifications in all.
+This can happen with cascades of functions too:
+
+ f1 = \x1.e1
+ f2 = \x2.e2[f1]
+ f3 = \x3.e3[f2]
+ ...etc...
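+
+preInlineUnconditionally avoids this by substituting each RHS
+un-simplified, so each expression gets simplified once rather than
+repeatedly.  Sketch of the single-occurrence case:
+
+     let x1 = e1 in e2[x1]   ==>   e2[e1]   -- substitute first,
+                                            -- then simplify once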
+
+THE MAIN INVARIANT is this:
+
+ ---- preInlineUnconditionally invariant -----
+ IF preInlineUnconditionally chooses to inline x = <rhs>
+ THEN doing the inlining should not change the occurrence
+ info for the free vars of <rhs>
+ ----------------------------------------------
+
+For example, it's tempting to look at a trivial binding like
+ x = y
+and inline it unconditionally.  But suppose x is used many times,
+while this is the unique occurrence of y.  Then inlining x would
+change y's occurrence info, which breaks the invariant.  It matters:
+y might have a BIG rhs, which would then be dup'd at every occurrence
+of x.
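+
+A sketch of the breakage (BIG stands for some large RHS):
+
+     let y = BIG       -- y occurs once, so it is marked OneOcc
+         x = y         -- x occurs many times
+     in x + x + x
+
+Inlining x gives (y + y + y): y's OneOcc info is now wrong, and a
+later pass may happily duplicate BIG at each of the three occurrences.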
+
+
+Even RHSs labelled InlineMe aren't caught here, because there might be
+no benefit from inlining at the call site.
+
+[Sept 01] Don't unconditionally inline a top-level thing, because that
+can simply make a static thing into something built dynamically. E.g.
+ x = (a,b)
+ main = \s -> h x
+
+[Remember that we treat \s as a one-shot lambda.] No point in
+inlining x unless there is something interesting about the call site.
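+
+(Hypothetically, inlining would give
+
+     main = \s -> h (a,b)
+
+so the pair, previously a static top-level closure, is built afresh
+on the heap when main's lambda is entered.)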
+
+But watch out: if you aren't careful, some useful foldr/build fusion
+can be lost (most notably in spectral/hartel/parstof) because the
+foldr didn't see the build. Doing the dynamic allocation isn't a big
+deal, in fact, but losing the fusion can be. But the right thing here
+seems to be to do a callSiteInline based on the fact that there is
+something interesting about the call site (it's strict). Hmm. That
+seems a bit fragile.
+
+Conclusion: inline top level things gaily until Phase 0 (the last
+phase), at which point don't.
+
+\begin{code}
+preInlineUnconditionally :: SimplEnv -> TopLevelFlag -> InId -> InExpr -> Bool
+preInlineUnconditionally env top_lvl bndr rhs
+ | not active = False
+ | opt_SimplNoPreInlining = False
+ | otherwise = case idOccInfo bndr of
+ IAmDead -> True -- Happens in ((\x.1) v)
+ OneOcc in_lam True int_cxt -> try_once in_lam int_cxt
+ other -> False
+ where
+ phase = getMode env
+ active = case phase of
+ SimplGently -> isAlwaysActive prag
+ SimplPhase n -> isActive n prag
+ prag = idInlinePragma bndr
+
+ try_once in_lam int_cxt -- There's one textual occurrence
+ | not in_lam = isNotTopLevel top_lvl || early_phase
+ | otherwise = int_cxt && canInlineInLam rhs
+
+-- Be very careful before inlining inside a lambda, because (a) we must not
+-- invalidate occurrence information, and (b) we want to avoid pushing a
+-- single allocation (here) into multiple allocations (inside lambda).
+-- Inlining a *function* with a single *saturated* call would be ok, mind you.
+-- || (if is_cheap && not (canInlineInLam rhs) then pprTrace "preinline" (ppr bndr <+> ppr rhs) ok else ok)
+-- where
+-- is_cheap = exprIsCheap rhs
+-- ok = is_cheap && int_cxt
+
+ -- int_cxt The context isn't totally boring
+ -- E.g. let f = \ab.BIG in \y. map f xs
+ -- Don't want to substitute for f, because then we allocate
+ -- its closure every time the \y is called
+ -- But: let f = \ab.BIG in \y. map (f y) xs
+ -- Now we do want to substitute for f, even though it's not
+ -- saturated, because we're going to allocate a closure for
+ -- (f y) every time round the loop anyhow.
+
+ -- canInlineInLam => free vars of rhs are (Once in_lam) or Many,
+ -- so substituting rhs inside a lambda doesn't change the occ info.
+ -- Sadly, not quite the same as exprIsHNF.
+ canInlineInLam (Lit l) = True
+ canInlineInLam (Lam b e) = isRuntimeVar b || canInlineInLam e
+ canInlineInLam (Note _ e) = canInlineInLam e
+ canInlineInLam _ = False
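+ -- For example, a literal or a value lambda like (\y -> BIG) is fine
+ -- to substitute under a lambda, but an unevaluated RHS such as
+ -- (f y) is not: its work (and allocation) would then be done on
+ -- every entry of the enclosing lambda.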
+
+ early_phase = case phase of
+ SimplPhase 0 -> False
+ other -> True
+-- If we don't have this early_phase test, consider
+-- x = length [1,2,3]
+-- The full laziness pass carefully floats all the cons cells to
+-- top level, and preInlineUnconditionally floats them all back in.
+-- Result is (a) static allocation replaced by dynamic allocation
+-- (b) many simplifier iterations because this tickles
+-- a related problem; only one inlining per pass
+--
+-- On the other hand, I have seen cases where top-level fusion is
+-- lost if we don't inline top-level things (e.g. string constants)
+-- Hence the test for phase zero (which is the phase for all the final
+-- simplifications). Until phase zero we take no special notice of
+-- top level things, but then we become more leery about inlining
+-- them.