\documentclass{article}
\usepackage{code,a4wide}
\usepackage{graphics,epsfig,epic,eepic}

\setlength{\parskip}{0.25cm}
\setlength{\parsep}{0.25cm}
\setlength{\topsep}{0cm}
\setlength{\parindent}{0cm}
\renewcommand{\textfraction}{0.2}
\renewcommand{\floatpagefraction}{0.7}

\newcommand{\block}{block}
\newcommand{\Block}{Block}
\newcommand{\segment}{segment}
\newcommand{\Segment}{Segment}
\newcommand{\step}{step}
\newcommand{\Step}{Step}

\newcommand{\note}[1]{{\em $\spadesuit$ #1}}
\title{The GHC Storage Manager}
\author{Simon Peyton Jones and Sungwoo Park}

\section{Introduction}

This document describes the details of the GHC storage manager, including
the interface and implementation of each of its components.
Storage management goals are:
\begin{itemize}
\item Generational collection, supporting multiple generations.
\item The ability to pin the allocation
area into a few pages that we hope will fit entirely in the cache.
\item Objects can age within a generation before getting promoted.
\item The heap can grow as needed, rather than having to be pre-sized.
\item We support mark/sweep/compact collection for older generations.
This is a Good Thing when the live memory approaches the available
physical memory, because it reduces paging.
\item Little OS support is needed. No @mmap()@ etc. All that we require is
the ability to call @malloc()@ to allocate a new chunk of memory.
There can be intervening ``sandbars'' allocated by other programs
(e.g.\ DLLs or other @malloc()@'d structures) between chunks of heap.
\end{itemize}
Language-support goals are:
\begin{itemize}
\item The garbage collector ``shorts out'' indirection objects introduced
by the mutator (notably when overwriting a thunk with an indirection).
\item The garbage collector executes selector thunks.
For example, a thunk for
@(fst x)@ where @x@ is a pointer to a pair @(a,b)@ would be
evaluated by the garbage collector to just @a@. This is an important
strategy for plugging space leaks.
\item The garbage collector traverses the code tree, as well as
the heap data structures, to find which CAFs are live. This is a royal pain.
\item The garbage collector finalises some objects (typically a tiny minority).
At the moment ``finalisation'' means ``call a C routine when this thing
dies'' but it would be more general to schedule a call to a Haskell
function.
\end{itemize}
Instrumentation goals are:
\begin{itemize}
\item The garbage collector can gather heap-census information for profiling.
To this end we can force GC to happen more often than it otherwise would,
and the collector can gather information about the type and cost-centre
associated with each heap object.
\end{itemize}
\section{The architecture of the storage manager}

The storage manager is the component of the GHC system that is responsible
for allocating fresh memory for new objects and reclaiming memory
that is no longer used.
It is built on a layered architecture and consists of four main parts:
the \emph{megablock allocator}, \emph{block allocator}, \emph{heap allocator},
and \emph{garbage collector} (Figure~\ref{fig-architecture}).
The megablock allocator communicates directly with the underlying
operating system and forms the lowest level of the storage manager.
The heap allocator and garbage collector lie in the topmost level of
the storage manager and process requests from
the mutator (the Haskell realm at runtime) and the runtime system.
The block allocator lies between the two levels.
\begin{figure}
\input{architecture.eepic}
\caption{The overall architecture of the storage manager}
\label{fig-architecture}
\end{figure}
\section{The megablock allocator}

The megablock allocator requests a chunk of physical memory of a fixed size,
called a \emph{megablock}, from the operating system and returns it
to the block allocator. A new megablock is not initialized by the
megablock allocator; it is later initialized by the block allocator.
\subsection{Interface}

\begin{description}
\item[@void *getMBlock()@] allocates a single megablock and returns its
starting address. @MBlock.c@.
\item[@void *getMBlocks(nat n)@] allocates @n@ contiguous megablocks
and returns their starting address. @MBlock.c@.
\end{description}
\subsection{Implementation}

Since the megablock allocator communicates directly with the underlying
operating system, its implementation relies on memory allocation functions
provided by the operating system; thus, the implementation varies between
platforms.
However, every megablock is always of a fixed size $2^K$ and aligned on a
$2^K$ boundary, regardless of the platform
(@MBLOCK_SIZE@ in @include/Constants.h@ defines the size of megablocks).
@mblocks_allocated@ in @MBlock.c@ stores the number of megablocks allocated.
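Because every megablock starts on a $2^K$ boundary, the megablock containing any interior address can be found by masking off the low $K$ bits. A minimal C sketch of this arithmetic, with illustrative constant names (not GHC's actual definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values for illustration: MBLOCK_SIZE == 2^MBLOCK_SHIFT. */
#define MBLOCK_SHIFT 20              /* e.g. 1 MB megablocks */
#define MBLOCK_SIZE  (1UL << MBLOCK_SHIFT)
#define MBLOCK_MASK  (MBLOCK_SIZE - 1)

/* Masking off the low K bits of any interior address yields the
   base of the megablock that contains it. */
static uintptr_t mblock_base(uintptr_t addr) {
    return addr & ~MBLOCK_MASK;
}
```

This alignment invariant is what the block allocator later exploits to find block descriptors in constant time.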
For implementation details, see @MBlock.c@, @MBlock.h@, and @include/Block.h@.
\section{The block allocator}

The block allocator divides a megablock returned by the megablock allocator
into a contiguous group of \emph{block descriptors} followed by another
contiguous group of \emph{blocks}.

A block is a contiguous chunk of $2^K$ bytes, starting on
a $2^K$-byte boundary (@BLOCK_SIZE@ in
@include/Constants.h@ defines the size of blocks).
Each block has its own associated block descriptor, which records the
current state of the block.
Both blocks and block descriptors are placed in a common megablock.

Figure~\ref{fig-megablock} shows a megablock after initialization by the
block allocator.
Block descriptors occupy the lower address space and blocks the higher address
space in the megablock.
Since block descriptors are ordered linearly, we can always locate the block
descriptor corresponding to a particular block from the starting address
of the block.
\begin{figure}
\input{megablock.eepic}
\caption{A megablock after initialization}
\label{fig-megablock}
\end{figure}
A block is the unit of allocation for the block allocator.
That is, the block allocator hands over store to the heap allocator in multiples of
one block, where multiple heap objects may be allocated.
A contiguous group of blocks, which is called a \emph{block group}, can be
directly handed over to the heap allocator to reduce inter-block
overhead.
The first block of a block group is called the \emph{group head}.\footnote{
An alternative design has the block descriptor at the start of each block.
This makes it easy to locate the block descriptor corresponding to a particular
block, but is pessimistic for cache locality when fiddling with block descriptors.
It also means that only the first block in a contiguous chunk of blocks can
have a block descriptor. This in turn makes it difficult to achieve an
efficient mostly-copying conservative (MCC) garbage collector.}
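The layout chosen here, with descriptors at the bottom of the megablock and ordered linearly, is what keeps descriptor lookup cheap. A hedged sketch of the address arithmetic behind a @Bdescr()@-style lookup, using toy constants and a toy descriptor struct rather than GHC's real ones:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed sizes for illustration only. */
#define BLOCK_SHIFT  12                      /* 4 KB blocks     */
#define MBLOCK_SHIFT 20                      /* 1 MB megablocks */
#define MBLOCK_MASK  ((1UL << MBLOCK_SHIFT) - 1)

typedef struct { uintptr_t start, free; } bdescr_t;   /* toy descriptor */

/* Descriptors sit at the bottom of the megablock, ordered linearly,
   so block i's descriptor lives at mblock_base + i * sizeof(bdescr_t). */
static bdescr_t *toy_Bdescr(uintptr_t p) {
    uintptr_t mblock   = p & ~MBLOCK_MASK;              /* megablock base */
    uintptr_t block_no = (p & MBLOCK_MASK) >> BLOCK_SHIFT;
    return (bdescr_t *)mblock + block_no;
}
```

The whole lookup is two masks, a shift, and an add, which is why the interface below can afford to expose it as an inline procedure.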
\subsection{Block descriptors}

A block descriptor has the following structure, defined in
@include/Block.h@:

\begin{code}
typedef struct _bdescr {
  StgPtr          start;
  StgPtr          free;
  StgWord32       blocks;
  struct _bdescr *link;
  /* additional fields */
} bdescr;
\end{code}
The fields of a block descriptor have the following purposes:

\begin{description}
\item[@start@] points to the first byte of the corresponding block.
\item[@free@] For a group head, @free@ points to the first free byte in
the block group. For a non-group head, @free@ is set to zero to identify
the corresponding block as a non-group head.
\item[@blocks@] For a group head, @blocks@ stores the number of blocks
in the block group. It is not used for non-group heads.
\item[@link@] For a group head, @link@ is used to chain all individual
blocks or block groups together. For a non-group head, @link@ points
to the block descriptor of the group head.
\end{description}
\subsection{Interface}

\begin{description}
\item[@void initBlockAllocator(void)@] initializes the block allocator.
@BlockAlloc.c@.
\item[@bdescr *allocBlock(void)@] requests a single block and returns
the address of its block descriptor. @BlockAlloc.c@.
\item[@bdescr *allocGroup(nat n)@] allocates a block group of size @n@
and returns the address of the block descriptor for the group head.
@BlockAlloc.c@.
\item[@void freeGroup(bdescr *p)@] frees the block group where @p@ points
to the block descriptor of the group head, and places it in a pool of
free block groups. @BlockAlloc.c@.
\item[@bdescr *Bdescr(StgPtr p)@] takes a pointer @p@ to any byte within
a block and returns a pointer to its block descriptor. It is implemented as
an @inline@ procedure. @include/Block.h@.
\end{description}
\subsection{Implementation}

The block allocator maintains a linked list of free block groups, whose head
is stored in @free_list@ in @BlockAlloc.c@ (Figure~\ref{fig-freelist}).
When @allocBlock()@ or @allocGroup()@ is called, the block allocator
scans the linked list from @free_list@ and finds the first block group
which can handle the request.
If such a block group exists, it takes the requested number of blocks
from the block group, creates a new block group from them,
initializes it if needed, and returns it to the caller.
The rest of the old block group, if any, is linked back to the list of free block
groups as another block group.
If no such block group exists, the block allocator requests a fresh megablock
from the megablock allocator and services the request using the new megablock.
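The scan-and-split policy just described is a first-fit search over a singly linked free list. The node type below is a simplified stand-in for @bdescr@, and the spare node supplied by the caller stands in for carving a descriptor out of the split group; this is a sketch of the idea, not GHC's code:

```c
#include <assert.h>
#include <stddef.h>

typedef struct group {
    size_t blocks;                /* size of this free group, in blocks */
    struct group *link;
} group;

static group *free_list = NULL;

/* First-fit: find the first group with at least n blocks.  An exact fit
   is unlinked whole; a larger group is split and the surplus is linked
   back into the free list (represented here by the caller's spare node). */
static group *alloc_group(size_t n, group *spare) {
    for (group **gp = &free_list; *gp; gp = &(*gp)->link) {
        group *g = *gp;
        if (g->blocks < n) continue;
        if (g->blocks == n) {
            *gp = g->link;                    /* exact fit: unlink */
        } else {
            spare->blocks = g->blocks - n;    /* split: keep remainder */
            spare->link = g->link;
            *gp = spare;
            g->blocks = n;
        }
        g->link = NULL;
        return g;
    }
    return NULL;   /* caller would fetch a fresh megablock here */
}
```

When the scan fails, the real allocator falls back on @getMBlock()@ exactly as the paragraph above describes.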
For implementation details, see @BlockAlloc.c@.
\begin{figure}
\input{freelist.eepic}
\caption{Linked list of free block groups}
\label{fig-freelist}
\end{figure}
\section{Heap allocator}

The role of the heap allocator in the storage manager is to allocate fresh
memory upon requests from the mutator and the runtime system.
Memory allocation takes place frequently during the execution of Haskell
programs, and hence its efficiency is crucial to the overall performance.
To handle requests from the mutator and the runtime system efficiently,
the heap allocator maintains three different memory stores,
each of which has its own purpose.

The first store is the \emph{nursery}, where typical Haskell objects
are allocated.
The mutator itself can allocate fresh memory directly in the nursery
without invoking an interface function:
the configuration of the nursery is always revealed to the mutator and can even
be changed by the mutator when it allocates fresh memory from the nursery.
Thus, although the small overhead in manipulating the nursery results in fast
memory allocation, it is up to the mutator to keep the nursery in a
consistent state.

The second and the third stores are the \emph{small object pool} and the
\emph{large object pool}.
The heap allocator provides a common interface function shared by both stores:
the size of the fresh memory requested, which is passed as an argument to the
interface function, determines which of the two stores is used.
The interface function can be called by both the mutator and the runtime system.
\subsection{Interface}

\begin{description}
\item[@void allocNurseries(void)@] creates and initializes the nursery.
@Storage.c@.
\item[@void resetNurseries(void)@] re-initializes the nursery. @Storage.c@.
\item[@Hp@] points to the byte before the first byte of the current allocation area
(the contiguous free bytes in the nursery).
Defined as @MainRegTable.rHp@ in @include/Regs.h@.
\item[@HpLim@] marks the boundary of the current allocation area:
it points to the last byte in the current allocation area, and thus
all the bytes at memory addresses from @Hp@$ + 1$ to @HpLim@ are free.
The mutator can obtain fresh memory simply by adjusting @Hp@ as long as the new
value of @Hp@ does not exceed @HpLim@. For instance, if the mutator
increases @Hp@ by @n@, it can now store an object of size up to @n@ at the
address pointed to by the old value of @Hp@$ + 1$.
Defined as @MainRegTable.rHpLim@ in @include/Regs.h@.\footnote{@HpLim@ is not always
correct. It may even point to a word in the next block.}
\item[@OpenNursery(hp, hplim)@] opens an allocation area and sets @hp@ and
@hplim@ appropriately. A macro in @include/StgStorage.h@.
\item[@CloseNursery(hp)@] closes the current allocation area beginning at @hp@.
A macro in @include/StgStorage.h@.
\item[@ExtendNursery(hp, hplim)@] closes the current allocation area and
tries to find a new allocation area in the nursery.
If it succeeds, it sets @hp@ and @hplim@ appropriately and returns @rtsTrue@;
otherwise, it returns @rtsFalse@,
which means that the nursery has been exhausted.
The new allocation area is not necessarily contiguous with the old one.
A macro in @Storage.h@.
\item[@StgPtr allocate(nat n)@] allocates @n@ words from either the small
object pool or the large object pool, depending on the argument @n@,
and returns a pointer to the first byte. It always succeeds. @Storage.c@.
\end{description}
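The @Hp@/@HpLim@ discipline amounts to bump-pointer allocation. The following sketch mirrors the convention that @Hp@ points just before the first free word; the names and word-based sizing are assumptions for illustration, not the real register scheme:

```c
#include <assert.h>
#include <stddef.h>

#define AREA_WORDS 1024
static long heap_area[AREA_WORDS];
/* hp_used plays the role of Hp: heap_area + hp_used - 1 is the last
   allocated word, so hp_used == 0 means "Hp points before the area". */
static size_t hp_used = 0;

/* Returns a pointer to n fresh words (i.e. old Hp + 1), or NULL when
   the request would push Hp past HpLim (the end of the area), in which
   case the mutator would call ExtendNursery() or trigger a GC. */
static long *bump_alloc(size_t n) {
    if (hp_used + n > AREA_WORDS) return NULL;
    long *p = heap_area + hp_used;   /* old Hp + 1 */
    hp_used += n;                    /* bump Hp by n words */
    return p;
}
```

The fast path is a single comparison and an addition, which is why the mutator can allocate without calling into the storage manager at all.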
\subsection{Implementation}

The nursery is implemented with a fixed number of blocks (@nursery_blocks@
in @Storage.c@ specifies the number of blocks).
Each of these blocks forms its own block group, and they are all linked together
by @allocNurseries()@ (Figure~\ref{fig-nursery}).
The head of the linked list is stored in @MainRegTable.rNursery@.
These blocks are never freed.
\begin{figure}
\input{nursery.eepic}
\caption{Nursery during mutator time}
\label{fig-nursery}
\end{figure}
A single block, called the \emph{active block}, serves as the current
allocation area.
When the free space left in the block is not enough for a request from
the mutator, the heap allocator sets the @free@ field in the corresponding
block descriptor to the first free byte in the block and moves the allocation
area to the next block.
The block descriptor of the active block is stored in
@MainRegTable.rCurrentNursery@.
@Hp@ and @HpLim@ are kept valid during mutator time so that
the mutator can obtain fresh memory on its own simply by adjusting @Hp@.

The blocks in the nursery are carefully allocated in a contiguous address
range so that they fit next to each other in the cache.
The small object pool is implemented with a linked list of block groups,
each of which consists of a single block (Figure~\ref{fig-smallobjectpool}).
The head of the linked list is stored in @small_alloc_list@ in @Storage.c@.
\begin{figure}
\input{smallobjectpool.eepic}
\caption{Small object pool}
\label{fig-smallobjectpool}
\end{figure}
Allocation in the small object pool is done in the same way as in the
nursery: @alloc_Hp@ and @alloc_HpLim@ (both defined in @Storage.c@)
point to the first free byte and the boundary of the small object pool,
respectively.
Thus, when @allocate()@ is called and the heap allocator decides to
allocate fresh memory in the small object pool, it simply increases @alloc_Hp@
by the size of the memory requested.
If the allocation cannot be done in the current small object pool, the
heap allocator calls @allocBlock()@ to obtain a new block from the block
allocator, puts it at the head of the linked list, and
sets @alloc_Hp@ and @alloc_HpLim@ appropriately.
The large object pool is also implemented with a (doubly) linked list of block
groups (Figure~\ref{fig-largeobjectpool}).
The difference from the small object pool is that each block group stores only
a single object: each time the argument to @allocate()@ is
greater than a threshold value (computed from @LARGE_OBJECT_THRESHOLD@
in @include/Constants.h@), a new block group accommodating the requested size
is created to store a single object.
The new block group is put at the head of the list.
The head of the linked list is available as @g0s0->large_objects@.\footnote{The
meaning of @g0s0@ is explained in the next section.}
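The size threshold makes @allocate()@ effectively a router: requests above the threshold get a dedicated block group sized to fit. A sketch of the routing arithmetic, with assumed constant values (not the real ones):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BLOCK_WORDS 1024                    /* assumed block size in words */
#define LARGE_OBJECT_THRESHOLD_WORDS 512    /* assumed threshold value     */

/* Requests above the threshold go to the large object pool. */
static bool goes_to_large_pool(size_t n_words) {
    return n_words > LARGE_OBJECT_THRESHOLD_WORDS;
}

/* A large object gets a block group just big enough to hold it
   (round the request up to a whole number of blocks). */
static size_t blocks_needed(size_t n_words) {
    return (n_words + BLOCK_WORDS - 1) / BLOCK_WORDS;
}
```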
\begin{figure}
\input{largeobjectpool.eepic}
\caption{Large object pool}
\label{fig-largeobjectpool}
\end{figure}
For implementation details, see @Storage.c@ and @include/StgStorage.h@.
\section{Garbage collector}

The garbage collector finds all the objects unreachable from a given set of
roots and frees the memory allocated to them. By invoking the
garbage collector regularly, the storage manager prevents the heap from
growing indefinitely and allows Haskell programs to be executed at a
reasonable memory cost.

The garbage collector in the storage manager is based upon the generational
garbage collection algorithm.
The storage manager records the age of every object in the heap.
An object surviving one garbage collection grows old by one \emph{step},
and an object surviving a certain number of garbage collections
is promoted to the next \emph{generation}.
That is, a step can be defined as a collection of objects which have survived
the same number of garbage collections (or a collection of objects which are
born at some point between two particular successive garbage collections),
and a generation as a group of steps belonging to a certain range of ages.
Notice that the unit of garbage collection is not the step but the generation:
a garbage collection applies to all the steps in a generation, and we cannot
perform a garbage collection on just part of a generation.
Furthermore, if a particular generation is garbage collected, so are
all the younger generations.\footnote{Some
authors define a generation as the set of
all the objects created between two particular garbage collections,
so that an object cannot change its generation (e.g., 1960's, 1970's, and so on).
In our definition,
an object can change its generation as it survives garbage collections
(e.g., teenagers, 20's, and so on).}
Figure~\ref{fig-generation} illustrates how an object grows old.
Every object is created in step $0$ of generation $0$.
As it survives garbage collections, it is moved to the next step of the
same generation until it is finally promoted to the next generation and
moved to step $0$ of the next generation:
during a garbage collection of generation $g < G$, live objects from
step $s < S$ are moved to step $s + 1$, and live objects from
the last step $S$ are promoted to step $0$ in generation $g + 1$.
Live objects in step $0$ of generation $G$ stay in the same step;
the oldest generation maintains only one step because there is no point
in aging objects in the oldest generation.
In this way, objects are given a decent chance of dying before being
promoted to the next generation.
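The aging rule above fits in a few lines of code. In this sketch $G$ is the index of the oldest generation and $S$ the index of the last step of the younger generations; both are assumed small constants here, not GHC's flag names:

```c
#include <assert.h>

#define G 2   /* index of the oldest generation            */
#define S 1   /* index of the last step in generations < G */

typedef struct { int gen, step; } where;

/* Where a surviving object in (gen, step) ends up after a collection:
   one step older in the same generation, promoted to step 0 of the
   next generation from the last step, or unchanged in the oldest
   generation (which has only one step). */
static where next_home(where w) {
    if (w.gen == G) return w;                       /* oldest gen: stays */
    if (w.step < S) return (where){ w.gen, w.step + 1 };
    return (where){ w.gen + 1, 0 };                 /* promote */
}
```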
\begin{figure}
\input{generation.eepic}
\caption{Evolution of objects through garbage collections}
\label{fig-generation}
\end{figure}
The main reason that we separate steps from generations is to
reduce the cost of maintaining \emph{backward inter-generational pointers},
that is, pointers from older generations to younger generations.
Suppose that a garbage collection applies to all generations $0$
through $g$. If an object @O@ in one of these generations is pointed to
by another object in generation $g' > g$, we cannot free the object @O@
even though generation $g'$ is out of consideration. Consequently,
we have to track backward inter-generational pointers to perform garbage
collections correctly.
Since maintaining backward pointers is costly, we
choose to track backward inter-generational pointers only;
we do not track backward inter-step pointers.

By grouping all the objects created between two successive garbage collections
and grouping multiple age groups into one generation, the garbage
collector makes efficient use of heap memory.
\subsection{Steps}

A step has the following structure, defined in
@include/StgStorage.h@:

\begin{code}
typedef struct _step {
  unsigned int  no;
  bdescr       *blocks;
  unsigned int  n_blocks;
  bdescr       *large_objects;
  /* additional fields */
} step;
\end{code}
The fields of a step have the following purposes (Figure~\ref{fig-step}):

\begin{description}
\item[@no@] indicates the age within its generation:
$0$ indicates the youngest step in a generation.
\item[@blocks@] is a linked list of all the blocks in this step
which contain small objects.
Each block forms its own block group.
\item[@n\_blocks@] is the number of blocks in the linked list @blocks@.
\item[@large\_objects@] is a (doubly) linked list of all the block groups
in this step which contain large objects.
Each block group stores only a single object.
\end{description}
\begin{figure}
\caption{Memory layout of a step}
\label{fig-step}
\end{figure}
The linked list @blocks@ of step $s$ in generation $g$ is created
during a garbage collection
from the live small objects of step $s - 1$ in the same generation
(or the last step in the previous generation if $s = 0$).
The @free@ field in every block descriptor never changes because
no objects are added after the garbage collection; new objects are created
only in step $0$ of generation $0$.
Likewise, the linked list @large_objects@ is created during a
garbage collection from the live large objects of the previous step.
There are three exceptions to the above rules.
First, both @blocks@ and @large_objects@ of
step $0$ in generation $0$ are not filled with new objects during a garbage
collection.
They are simply re-initialized by the garbage collector and
grow during the execution of a program as new objects are
created.
Step $0$ in generation $0$ is accessible via a global variable @g0s0@,
and this is the reason why the large object pool (described in the previous
section) is indeed stored in @g0s0->large_objects@.
For the same reason, @MainRegTable.rNursery@ holds the same address as
@g0s0->blocks@ at all times.
Second, @blocks@ of step $1$ in generation $0$ is created not only from
the nursery (@blocks@ of step $0$ in the same generation) but also from the
small object pool. In other words, all the live small objects created since
the previous garbage collection, either directly by the mutator or indirectly
through @allocate()@, are gathered together in the same linked list.
Finally, step $0$ of the last generation serves as the source for itself during
any garbage collection (because there exists no older step).
\subsection{Generations}

A generation has the following structure, defined in
@include/StgStorage.h@:

\begin{code}
typedef struct _generation {
  unsigned int   no;
  step          *steps;
  unsigned int   n_steps;
  unsigned int   max_blocks;
  StgMutClosure *mut_list;
  StgMutClosure *mut_once_list;
  /* additional fields */
} generation;
\end{code}
The fields of a generation have the following purposes (Figure~\ref{fig-gen}):

\begin{description}
\item[@no@] is the generation number.
\item[@steps@] points to an array of @step@ structures: @steps[@$i$@]@
corresponds to step $i$ in this generation, i.e.,
@steps[@$i$@].no@ is equal to $i$.
\item[@n\_steps@] is the number of @step@ structures in the array pointed to
by @steps@.
\item[@max\_blocks@] is the maximum number of blocks allowed in step $0$ of
this generation. If the number of blocks allocated
in step $0$ exceeds @max_blocks@,
this generation is garbage collected during the next garbage collection.
\item[@mut\_list@] links all mutable objects in this generation, that is,
objects whose contents can be updated and hence may contain pointers to
younger generations.
Every object in this linked list is a dynamic object residing in the heap
and has a structure compatible with @StgMutClosure@.
The structure @StgMutClosure@ (@includes/Closures.h@) has a field
@mut_link@ (called a mutable link field) of type @StgMutClosure *@, which
points to the next object in this linked list.
The end mark of this linked list is a pointer to a statically allocated object
@END_MUT_LIST@ (@StoragePriv.h@).
\item[@mut\_once\_list@] links objects in this generation whose contents
cannot be updated any more but may already have pointers to younger generations.
As with @mut_list@, it links only those objects whose structure is compatible
with @StgMutClosure@ and it ends with @END_MUT_LIST@.
\end{description}
\begin{figure}
\caption{Memory layout of a generation}
\label{fig-gen}
\end{figure}
The garbage collector maintains an array @generations@ of @generation@ structures
(defined in @Storage.c@), whose size is stored in a runtime system flag
(@RtsFlags.GcFlags.generations@).
The generation number of each generation coincides with its index into
the array @generations@, i.e., @generations[@$i$@].no@ is equal to $i$.

As mentioned before, lists of objects which may have pointers to younger
generations are kept per generation, not per step. The youngest generation,
accessible via the global variable @g0@, does not keep such lists because it
has no younger generations.
The oldest generation, accessible via the global variable @oldest_gen@, may
contain static objects (as opposed to dynamic objects residing in the heap)
in its list @mut_once_list@. This happens when a static
thunk, also known as a \emph{constant applicative form} (CAF), is entered.
When a CAF (corresponding to closure type @THUNK_STATIC@, defined
in @includes/ClosureTypes.h@) is entered,
it is first put in the list @mut_once_list@ of the oldest generation
and then overwritten with an appropriate static indirection object
(corresponding to closure type @IND_STATIC@).\footnote{Actually a static
indirection object does not have a @mut\_link@ field.
We use its @static\_link@ field as a substitute for @mut\_link@.
See the structure @StgIndStatic@ in @include/Closures.h@.}\footnote{For
details of this operation, see the macro @UPD\_CAF()@ in @includes/Updates.h@.}
If the CAF is dynamically loaded (e.g., in an interactive environment), it is
instead put in a separate linked list @caf_list@
(declared in @Storage.c@).

The evaluation result of the
CAF is stored in a separate dynamic object in the heap, and the static
indirection object has a pointer to this dynamic object.
Thus, the new static indirection object is put in the list
@mut_once_list@ of the oldest generation (or the list @caf_list@) so that the
dynamic object is not removed during the next garbage collection.
Once it is created, the static indirection object remains unaltered, which
is the reason why it is put in the @mut_once_list@ list, not in the
@mut_list@ list.
Since the static indirection object survives any garbage collection (because
it comes from a static object) and would eventually be moved to the oldest
generation anyway,
we put it in the @mut_once_list@ of the oldest generation as soon
as it is created.
\subsection{Interface}

\begin{description}
%\item[@StgClosure *MarkRoot(StgClosure *root)@] informs the garbage collector
%that @root@ is an object in the root set. It returns the new location of
%the object.
\item[@void *mark\_root(StgClosure **root)@] informs the garbage collector
that @*root@ is an object in the root set. It replaces @*root@ by
the new location of the object. @GC.c@.
\item[@void GarbageCollect(void (*get\_roots)(evac\_fn), rtsBool force\_major\_gc)@]
performs a garbage collection.
@get_roots()@ is a function which is called by the garbage collector when
it wishes to find all the objects in the root set (other than those
it can find by itself).
Therefore it is incumbent on the caller to find the root set.
@force_major_gc@ specifies whether a major garbage collection is required
or not. If a major garbage collection is not required, the garbage collector
decides on its own the oldest generation to garbage collect. @GC.c@.
\item[@rtsBool doYouWantToGC(void)@] returns @rtsTrue@ if the garbage
collector is ready to perform a garbage collection. Specifically, it returns
@rtsTrue@ if the number of blocks allocated since the last garbage collection
(@alloc_blocks@ in @Storage.c@) exceeds an approximate limit
(@alloc_blocks_lim@ in @Storage.c@).
\item[@void recordMutable(StgMutClosure *p)@] informs the garbage collector
that a previously immutable object @p@ has become mutable.
The garbage collector then puts the object @p@ in the list @mut_list@ of the
generation to which it belongs.\footnote{It is easy to
locate the generation to which a dynamic object belongs from its address:
we can identify the block in which the object resides from its address,
and the corresponding block descriptor stores pointers
to the step and the generation (the @gen@ and @step@ fields in the @bdescr@
structure) to which it belongs.}
It suffices to call @recordMutable()@ only once for any object.
An object which is genuinely mutable (e.g., mutable arrays)
is permanently recorded as mutable.
In contrast, an object which is temporarily mutable (e.g., frozen arrays)
can be dropped from the list @mut_list@ once its pointer has been dealt with
during a garbage collection. @Storage.h@.
\item[@void recordOldToNewPtrs(StgMutClosure *p)@] puts the object @p@ in the
list @mut_once_list@ of the generation to which it belongs.
\item[@void newCAF(StgClosure *caf)@] puts the CAF @caf@ either
in the list @caf_list@ or
in the list @mut_once_list@ of the oldest generation,
depending on whether it is dynamically loaded or not.
\end{description}
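The @mut_link@ threading used by @recordMutable()@ is ordinary intrusive-list consing onto the generation's list. A simplified sketch, where the closure struct and sentinel are stand-ins for @StgMutClosure@ and @END_MUT_LIST@:

```c
#include <assert.h>
#include <stddef.h>

typedef struct closure {
    struct closure *mut_link;   /* the mutable link field */
    /* payload elided */
} closure;

static closure end_mut_list;                 /* stand-in sentinel */
static closure *mut_list = &end_mut_list;    /* one generation's list */

/* Thread p onto the front of the generation's mutable list. */
static void record_mutable(closure *p) {
    p->mut_link = mut_list;
    mut_list = p;
}
```

Because the link field lives inside the object itself, recording an object as mutable needs no allocation, which is why it may only be done once per object.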
\subsection{Implementation}

The overall structure of a garbage collection is as follows:

\begin{description}
\item[(1)] Initialize.
\item[(2)] Scavenge the lists @mut_once_list@ and @mut_list@ if necessary.
\item[(3)] Scavenge CAFs.
\item[(4)] Evacuate roots.
\item[(5)] Scavenge objects.
\end{description}
\subsubsection{(1) Initialization}

During initialization, the garbage collector first decides which generations
to garbage collect:
if the argument @force_major_gc@ to @GarbageCollect()@ is @rtsFalse@,
it decides the greatest generation number $N$ such
that the number of blocks allocated in step $0$ of generation $N$ exceeds
@generations[@$N$@].max_blocks@.
If the argument @force_major_gc@ to @GarbageCollect()@ is @rtsTrue@,
$N$ is set to the greatest generation number, namely
$@RtsFlags.GcFlags.generations@ - 1$.
The garbage collector considers up to generation $N$ for garbage collection.
A major garbage collection takes place if $N$ is set to
$@RtsFlags.GcFlags.generations@ - 1$ during this process.
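The choice of $N$ can be sketched as a scan over the generations; the array shapes and names below are illustrative, not the runtime system's actual data structures:

```c
#include <assert.h>
#include <stdbool.h>

#define NGENS 3   /* assumed RtsFlags.GcFlags.generations */

/* N is the greatest generation whose step 0 has overflowed its
   max_blocks limit (defaulting to 0), or the oldest generation
   outright when a major collection is forced. */
static int choose_N(const unsigned n_blocks_step0[NGENS],
                    const unsigned max_blocks[NGENS],
                    bool force_major_gc) {
    if (force_major_gc) return NGENS - 1;
    int N = 0;
    for (int g = 0; g < NGENS; g++)
        if (n_blocks_step0[g] > max_blocks[g]) N = g;
    return N;
}
```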
Then, the garbage collector initializes the \emph{to-space} (as opposed to
the \emph{from-space}) for each step of
each generation, which is complete with an \emph{allocation pointer} and
a \emph{sweep pointer}.
The to-space of a step is the memory to which any object belonging to the
step can be copied when it survives a garbage collection.
For instance, a live object in step $s$ of generation $g$ can first be copied
to the to-space associated with step $s$, which eventually becomes
associated with the next step $s + 1$ (or step $0$ of the next generation)
after the garbage collection.
This operation effectively moves an object to the next step if it survives
a garbage collection.
The allocation pointer points to the next free word in the to-space while
the sweep pointer points to the next object to be considered for scavenging.
During major garbage collections,
the static link field of every static object indicates whether it has
been visited by the garbage collector or not.
Therefore, the static link field of every static object must have
a null value before a major garbage collection starts.
The list @mut_once_list@ of the oldest generation may contain static
indirection objects, and thus
the garbage collector invokes @zero_mutable_list()@ on the list.
Although this breaks up the list, it does not cause any problem because
the list is not employed during major garbage collections.
\subsubsection{\tt evacuate()}

The function @evacuate()@ (defined in @GC.c@), which
is eventually called for every live object
(including even static objects reachable from roots),
evacuates the object to
a safe place so that it is not garbage collected.
Before invoking the function @evacuate()@ on an object @o@, the caller specifies
a \emph{desired generation} for @o@ in a variable @evac_gen@
(declared in @GC.c@).
The desired generation is the youngest generation to which the caller wishes
@o@ to be evacuated; the garbage collector should evacuate @o@ to a
generation no younger than the desired generation.
Depending on @evac_gen@ and the generation $M$ where @o@ currently resides,
@evacuate()@ behaves as follows:
\begin{itemize}
\item If @evac_gen@ $\leq M$ and $N < M$, it does nothing because @o@ is already
in a generation no younger than @evac_gen@.
\item If @evac_gen@ $\leq M \leq N$, it evacuates @o@ to the to-space of the
step to which @o@ currently belongs. @o@ will be moved to the next step later.
@recordMutable()@ may be invoked on @o@ depending on its type (e.g., @MVAR@).
\item If $M <$ @evac_gen@, @o@ is evacuated to the to-space of step $0$
of generation @evac_gen@, which accomplishes the request.
This happens even when $N \leq$ @evac_gen@. Therefore, those generations
which are not considered for garbage collection may still be augmented
with new objects during garbage collection.
@recordMutable()@ may be invoked on @o@ depending on its type.
\end{itemize}
If @o@ has already been evacuated, @evacuate()@ either does nothing (when
@evac_gen@ $\leq M$) or reports
a failure to evacuate @o@ by setting the flag @failed_to_evac@ (declared
in @GC.c@).
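The three-way case analysis above can be sketched as follows (a simplified model, not the actual @evacuate()@ code; generations are plain integers here, with generations $0$ to $N$ being collected):

```c
#include <assert.h>

/* Hypothetical sketch of the case analysis in evacuate().
   'm' is the generation where object o resides, 'n' the oldest
   generation being collected, 'evac_gen' the desired generation. */
typedef enum {
    EVAC_NOTHING,       /* o is already old enough and not collected   */
    EVAC_SAME_STEP,     /* copy o to the to-space of its own step      */
    EVAC_DESIRED_GEN    /* copy o to step 0 of generation evac_gen     */
} EvacAction;

EvacAction evacuate_action(int evac_gen, int m, int n) {
    if (m < evac_gen)
        return EVAC_DESIRED_GEN;  /* M < evac_gen: honour the request   */
    if (m <= n)
        return EVAC_SAME_STEP;    /* evac_gen <= M <= N: normal copy    */
    return EVAC_NOTHING;          /* evac_gen <= M and N < M: untouched */
}
```

The first matching case wins, mirroring the order of the conditions in the list above.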
Evacuating a large object is handled by @evacuate_large()@.
Since it is costly to allocate new memory blocks and copy all the contents
of the object, the garbage collector simply removes the object from
the list @large_alloc_list@ of its step and links it onto another list,
from which it will be scavenged later.
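The list surgery for a large object might be sketched like this (field and function names are hypothetical; GHC's real block descriptors carry more fields):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, stripped-down block descriptor for a large object. */
typedef struct Bdescr {
    struct Bdescr *prev, *next;
    int id;
} Bdescr;

/* Unlink bd from the doubly-linked list headed by *from (its step's
   large-object list), then push it onto the *to list, from which it
   will be scavenged later. No copying takes place. */
void move_large_object(Bdescr **from, Bdescr **to, Bdescr *bd) {
    if (bd->prev) bd->prev->next = bd->next;
    else          *from = bd->next;
    if (bd->next) bd->next->prev = bd->prev;

    bd->prev = NULL;
    bd->next = *to;
    if (*to) (*to)->prev = bd;
    *to = bd;
}
```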
\subsubsection{Set of roots for garbage collection}
Part of the set of roots for garbage collection is obtained indirectly by
invoking the function
@get_roots()@, an argument to @GarbageCollect()@: the garbage collector
invokes @get_roots()@ with @mark_root()@ as an argument, and @get_roots()@
in turn invokes @mark_root()@ on each of the known roots.
The rest of the set of roots is obtained from the lists @mut_list@ and
@mut_once_list@ of generation $N + 1$ through the oldest generation:
any object in these lists may have pointers to objects in generations
$0$ to $N$, and thus must be considered as a root.
If a major garbage collection takes place, no @mut_list@ and @mut_once_list@
lists are considered for scavenging and step (2) is skipped.
The entire set of roots is then specified by @get_roots()@ alone.
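The callback protocol between @GarbageCollect()@ and @get_roots()@ can be sketched as follows (a minimal model with illustrative names; the real collector of course does far more than count the roots):

```c
#include <assert.h>

/* The collector passes a marking function into the client-supplied
   root enumerator, which applies it to each known root. */
typedef void (*evac_fn)(void **root);

static int marked;                      /* roots seen, for the sketch  */
static void mark_root(void **root) { (void)root; marked++; }

/* Client-side enumerator: knows where the roots live. */
static void *r1, *r2;
static void get_roots(evac_fn mark) { mark(&r1); mark(&r2); }

/* Collector side: only the root-enumeration handshake is modelled. */
void garbage_collect(void (*get)(evac_fn)) {
    get(mark_root);                     /* visit every known root      */
}
```

The indirection lets the scheduler own the knowledge of where roots are, while the collector owns what to do with each one.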
\subsubsection{(2) Scavenging lists {\tt mut\_once\_list} and {\tt mut\_list}}

Since the roots obtained from the lists @mut_list@ and @mut_once_list@ are
already in generations $N' > N$, we only have to scavenge them.
That is, it suffices to invoke @evacuate()@ once on each object
which is currently pointed to by an object in these lists.
When scavenging an object @r@ in the list @mut_once_list@ of generation $M$,
the desired generation is set to $M$ for each object @o@ pointed
to by @r@ before invoking @evacuate()@.
The rationale is that the contents of @r@ cannot be updated any more,
and thus @r@ is always survived by @o@; @o@ is live as long as @r@ is.
Therefore, we wish @o@ to be evacuated to the same generation $M$ where @r@
currently resides (not to its next step).
If the evacuation succeeds (indicated by a @rtsFalse@ value of the variable
@failed_to_evac@, declared in @GC.c@) for every object @o@, @r@ is removed
from the list @mut_once_list@ because it does not hold any backward
inter-generational pointers.\footnote{It turns out that @r@ can have only
one such object @o@. The type of @r@ is one of the following:
@IND\_OLDGEN@, @IND\_OLDGEN\_PERM@, @IND\_STATIC@, and @MUT\_VAR@.}
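The list surgery in step (2) can be sketched like this (a hypothetical model: the @evac_failed@ callback stands in for the real @evacuate()@/@failed_to_evac@ protocol, and all names are illustrative, not the @GC.c@ definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified object on a mut_once_list. */
typedef struct MObj { struct MObj *mut_link; int id; } MObj;

/* Scavenge each r on the list; r stays on the list only if evacuating
   its single pointee failed, i.e. r still holds a backward pointer.
   Returns the rebuilt list (order is not significant). */
MObj *scavenge_mut_once_list(MObj *list, int (*evac_failed)(MObj *r)) {
    MObj *kept = NULL;
    while (list) {
        MObj *next = list->mut_link;
        if (evac_failed(list)) {       /* still a backward pointer */
            list->mut_link = kept;
            kept = list;
        }                              /* otherwise drop r from the list */
        list = next;
    }
    return kept;
}

/* Stand-in predicate for the sketch: pretend odd ids fail to evacuate. */
int evac_failed_stub(MObj *r) { return r->id % 2; }
```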
Scavenging a list @mut_list@ is similar to the case of @mut_once_list@.
When scavenging an object @r@ in the list @mut_list@ of generation $M$,
the desired generation is set to $M$ for each object pointed to by @r@
if @r@ is known to be immutable (e.g., @MUT_ARR_PTRS_FROZEN@),
or to $0$ if @r@ is still mutable (e.g., @MUT_ARR_PTRS@, @MUT_VAR@).
The list @mut_list@ is also adjusted if it is safe to remove @r@ from
the list.
\subsubsection{(3) Scavenging CAFs}

When a dynamically loaded CAF is entered, it is first put on the list
@caf_list@ and then overwritten with a static indirection object.
The evaluation result of the CAF is stored in a dynamic object in the heap
and the static indirection object stores a pointer to the dynamic object.
Although the static indirection object (or the CAF) itself is never freed,
it may be removed later from the @caf_list@ when it is reverted to the
original CAF, and the dynamic object may not be live afterwards.
Hence, we treat the dynamic object just as normal dynamic objects and
set the desired generation to $0$.
\subsubsection{(4) Evacuating roots}

Evacuating roots (other than those in the lists @mut_once_list@ and
@mut_list@) is done simply by invoking @get_roots()@ with @mark_root()@
as an argument.
Since these roots are normal dynamic objects, we set the desired generation
to $0$.
\subsubsection{(5) Scavenging}

The garbage collector scavenges all the objects in the to-space of
each step (by invoking @evacuate()@ on each object reachable from them)
until every sweep pointer has reached its corresponding
allocation pointer.
It repeatedly examines all the to-spaces because not only sweep pointers
but also allocation pointers change during scavenging:
when an object @r@ is scavenged, each object reachable from
@r@ is evacuated to a certain to-space, which advances the corresponding
allocation pointer, and
the sweep pointer of the to-space which currently contains @r@
advances as well upon finishing scavenging the object @r@.
Thus, the garbage collector cannot anticipate in advance how many times
it needs to scan through all the to-spaces; it keeps scavenging until
no objects are left to be scavenged.
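This fixpoint can be modelled by the following sketch (hypothetical names and sizes; each "object" here is just a count of how many new, pointer-free objects scavenging it evacuates into one to-space):

```c
#include <assert.h>

/* Hypothetical model of the step-(5) loop: scavenging one object may
   evacuate more objects (bumping an allocation pointer somewhere), so
   the collector rescans all to-spaces until every sweep pointer has
   caught up with its allocation pointer. */
#define NSTEPS 2
#define CAP 64

typedef struct {
    int alloc;       /* allocation pointer */
    int sweep;       /* sweep pointer      */
    int work[CAP];   /* per-object: how many new objects it evacuates */
} Space;

static Space spaces[NSTEPS];

/* Scavenge one object: evacuate its pointees into space 0. */
static void scavenge_one(Space *s) {
    int evacuated = s->work[s->sweep++];
    for (int i = 0; i < evacuated; i++)
        spaces[0].work[spaces[0].alloc++] = 0;
}

/* Returns the number of passes over the to-spaces, including the final
   pass that finds nothing left to scavenge. */
int scavenge_until_done(void) {
    int passes = 0, again = 1;
    while (again) {
        again = 0;
        for (int i = 0; i < NSTEPS; i++)
            while (spaces[i].sweep < spaces[i].alloc) {
                scavenge_one(&spaces[i]);
                again = 1;
            }
        passes++;
    }
    return passes;
}
```

The number of passes cannot be predicted up front; it depends on how the evacuations ripple between the to-spaces.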
\subsubsection{Scavenging static objects}

Since it is possible for dynamic objects to point to static objects,
the garbage collector may invoke @evacuate()@ on static objects
while scavenging dynamic objects in to-spaces.
This complicates the garbage collector because
static objects cannot be evacuated in general, yet
they may have pointers to dynamic objects, which must be evacuated.
Thus the garbage collector needs to at least scavenge live static objects
(as opposed to those static objects currently not reachable from roots).
When a minor garbage collection is performed, any invocation of
@evacuate()@ on static objects is simply ignored.
Furthermore, no static object is considered for scavenging
(except those in the list @mut_once_list@ of the oldest generation,
which are scavenged during step (2)).
Still, all dynamic objects which are marked as live due to static objects
are safely evacuated.
The reason is that we can reach all such dynamic objects from
static indirection objects stored in the list
@mut_once_list@ of the oldest generation, which is scavenged during step (2),
and the list @caf_list@.
In other words, in order to evacuate all such dynamic objects, it is
sufficient to evacuate all dynamic objects reachable from
static indirection objects in
the list @mut_once_list@ of the oldest generation and the list @caf_list@.
The garbage collector may, however, unnecessarily scavenge certain static
indirection objects which are no longer used;
this does not happen during a major garbage collection.
During a major garbage collection,
if an invocation of @evacuate()@ on a static object @r@ is made,
the garbage collector first checks whether @r@ needs to be scavenged or not.
If its SRT (Static Reference Table) is empty and it has no other pointers,
no dynamic objects are reachable from @r@ and the invocation is ignored.\footnote{If
no dynamic objects are reachable from a static object @r@ (even indirectly
via multiple static objects),
@r@ is not stored in \emph{any} SRT table because it would be of no use
to follow any pointers in @r@.}
Otherwise, @r@ is put in the list @static_objects@.
At the beginning of each scavenging loop in step (5),
the garbage collector invokes @scavenge_static()@ if the list @static_objects@
is not empty.
@scavenge_static()@ scavenges the static objects in the list @static_objects@
by invoking @evacuate()@ on every object reachable from them.
The desired generation is set to the oldest generation (because any
dynamic object directly pointed to by a static object lives
as long as the static object itself).
These static objects are then put in another list @scavenged_static_objects@
and removed from the list @static_objects@.
For a static indirection object, if the evacuation
fails, it is put back on the list @mut_once_list@ of the oldest generation;
it can be thought of as a CAF just entered.
After a major garbage collection, therefore, the list @scavenged_static_objects@
links all live static objects except for the static indirection objects put back
on the list @mut_once_list@ of the oldest generation.
Dynamically loaded CAFs are found in the list @caf_list@.
\subsubsection{(6) Tidying up}

The garbage collector tidies up the heap by
moving the to-space of each step to the next step.
It also re-initializes the small object pool (which now does not contain
any live objects), frees any large objects which have not been scavenged,
and invokes @resetNurseries()@.
If a major garbage collection has been performed, it
invokes @zero_static_object_list()@ on the list @scavenged_static_objects@
so that all static objects
(other than those in the list @mut_once_list@ of the oldest generation)
have a null static link field again.
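The effect of @zero_static_object_list()@ can be sketched as follows (a hypothetical model: the list is threaded through the static link field itself, so each link must be saved before it is nulled; the real representation in GHC may differ):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical static object: the list is threaded through the
   static link field that the collector also uses as a visited mark. */
typedef struct SObj { struct SObj *static_link; } SObj;

/* Walk the list, nulling every static link so that the next major
   garbage collection starts with a clean slate. */
void zero_static_object_list(SObj *list) {
    while (list) {
        SObj *next = list->static_link;  /* save before destroying */
        list->static_link = NULL;
        list = next;
    }
}
```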
At this point, both the small allocation pool and the large object pool are
empty. Upon exit from @GarbageCollect()@, however, they may not
be empty any more because the garbage collector invokes @scheduleFinalizer()@
before exiting, which tries to run pending finalizers on dead weak pointers and
may create new objects through @allocate()@.
The nursery still remains intact.
The heap may contain extra objects which are not reachable from the roots
used during the garbage collection: 1) weak pointers; 2) dead
weak pointers. Weak pointers can be tracked from
the list @weak_ptr_list@ (declared in @Weak.c@). However, there is no way
of reaching dead weak pointers; they will be garbage collected during the
next garbage collection.
For implementation details, see @GC.c@.
\section{Miscellaneous notes}

\begin{itemize}
\item To see how to add new fields to Haskell closures,
see the document on the implementation of retainer profiling
(section `Adding Retainer Set Fields').
\item To see how to traverse the graph and visit every live closure,
see the document on the implementation of retainer profiling
(section `Graph Traversal').
\item To see how to linearly scan the heap at any random moment during
program execution, see the document on the implementation of LDVU profiling
(section `Heap Censuses').
\item To see how to linearly scan the from-space during garbage collections,
see the document on the implementation of LDVU profiling
(section `Destruction of Closures').