Reference Cycle Garbage Collection
==================================

Neil Schemenauer <nas@arctrix.com>

Based on a post on the python-dev list.  Ideas from Guido van Rossum,
Eric Tiedemann, and various others.

http://www.arctrix.com/nas/python/gc/

The following mailing list threads provide a historical perspective on
the design of this module.  Note that a fair amount of refinement has
occurred since those discussions.

    http://mail.python.org/pipermail/python-dev/2000-March/002385.html
    http://mail.python.org/pipermail/python-dev/2000-March/002434.html
    http://mail.python.org/pipermail/python-dev/2000-March/002497.html

For a high-level view of the collection process, read the collect
function.
#include "frameobject.h"        /* for PyFrame_ClearFreeList */

/* Get an object's GC head */
#define AS_GC(o) ((PyGC_Head *)(o)-1)

/* Get the object given the GC head */
#define FROM_GC(g) ((PyObject *)(((PyGC_Head *)g)+1))
/*** Global GC state ***/

struct gc_generation {
    PyGC_Head head;
    int threshold; /* collection threshold */
    int count; /* count of allocations or collections of younger
                  generations */
};

#define NUM_GENERATIONS 3
#define GEN_HEAD(n) (&generations[n].head)

/* linked lists of container objects */
static struct gc_generation generations[NUM_GENERATIONS] = {
    /* PyGC_Head,                       threshold,  count */
    {{{GEN_HEAD(0), GEN_HEAD(0), 0}},   700,        0},
    {{{GEN_HEAD(1), GEN_HEAD(1), 0}},   10,         0},
    {{{GEN_HEAD(2), GEN_HEAD(2), 0}},   10,         0},
};

PyGC_Head *_PyGC_generation0 = GEN_HEAD(0);
static int enabled = 1; /* automatic collection enabled? */

/* true if we are currently running the collector */
static int collecting = 0;

/* list of uncollectable objects */
static PyObject *garbage = NULL;

/* Python string to use if unhandled exception occurs */
static PyObject *gc_str = NULL;

/* a list of callbacks to be invoked when collection is performed */
static PyObject *callbacks = NULL;

/* This is the number of objects that survived the last full collection.  It
   approximates the number of long-lived objects tracked by the GC.
   (By "full collection", we mean a collection of the oldest generation.) */
static Py_ssize_t long_lived_total = 0;

/* This is the number of objects that survived all "non-full" collections,
   and are awaiting their first full collection. */
static Py_ssize_t long_lived_pending = 0;
NOTE: about the counting of long-lived objects.

To limit the cost of garbage collection, there are two strategies:
  - make each collection faster, e.g. by scanning fewer objects
  - do fewer collections
This heuristic is about the latter strategy.

In addition to the various configurable thresholds, we only trigger a
full collection if the ratio

    long_lived_pending / long_lived_total

is above a given value (hardwired to 25%).

The reason is that, while "non-full" collections (i.e., collections of
the young and middle generations) will always examine roughly the same
number of objects -- determined by the aforementioned thresholds --,
the cost of a full collection is proportional to the total number of
long-lived objects, which is virtually unbounded.

Indeed, it has been remarked that doing a full collection every
<constant number> of object creations entails a dramatic performance
degradation in workloads which consist of creating and storing lots of
long-lived objects (e.g. building a large list of GC-tracked objects would
show quadratic performance, instead of linear as expected: see issue #4074).

Using the above ratio, instead, yields amortized linear performance in
the total number of objects (the effect of which can be summarized
thusly: "each full garbage collection is more and more costly as the
number of objects grows, but we do fewer and fewer of them").

This heuristic was suggested by Martin von Löwis on python-dev in
June 2008.  His original analysis and proposal can be found at:
    http://mail.python.org/pipermail/python-dev/2008-June/080579.html
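The per-generation thresholds above are exposed at the Python level, so collection frequency (though not the hardwired 25% ratio) can be tuned from Python.  A minimal sketch:

```python
import gc

# The three numbers mirror the `threshold` fields of the three
# generations above (700, 10, 10 by default).
print(gc.get_threshold())

# Raising the generation-0 threshold trades memory for fewer collections;
# the 25% long-lived ratio itself is hardwired and cannot be changed here.
gc.set_threshold(10000, 10, 10)
assert gc.get_threshold() == (10000, 10, 10)

gc.set_threshold(700, 10, 10)   # restore the defaults
```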
NOTE: about untracking of mutable objects.

Certain types of container cannot participate in a reference cycle, and
so do not need to be tracked by the garbage collector.  Untracking these
objects reduces the cost of garbage collections.  However, determining
which objects may be untracked is not free, and the costs must be
weighed against the benefits for garbage collection.

There are two possible strategies for when to untrack a container:

i) When the container is created.
ii) When the container is examined by the garbage collector.

Tuples containing only immutable objects (integers, strings etc, and
recursively, tuples of immutable objects) do not need to be tracked.
The interpreter creates a large number of tuples, many of which will
not survive until garbage collection.  It is therefore not worthwhile
to untrack eligible tuples at creation time.

Instead, all tuples except the empty tuple are tracked when created.
During garbage collection it is determined whether any surviving tuples
can be untracked.  A tuple can be untracked if all of its contents are
already not tracked.  Tuples are examined for untracking in all garbage
collection cycles.  It may take more than one cycle to untrack a tuple.
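The tuple rule above is observable from Python through gc.is_tracked(); a small sketch (the exact pass at which untracking happens is an implementation detail):

```python
import gc

# Built at runtime so it isn't a pre-untracked compile-time constant.
t = tuple([1, 2, 3])
assert gc.is_tracked(t)        # all non-empty tuples start out tracked

gc.collect()                   # the collector examines surviving tuples
assert not gc.is_tracked(t)    # contents are untracked ints, so t was untracked

u = tuple([[]])                # holds a mutable container
gc.collect()
assert gc.is_tracked(u)        # could participate in a cycle: stays tracked
```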
Dictionaries containing only immutable objects also do not need to be
tracked.  Dictionaries are untracked when created.  If a tracked item is
inserted into a dictionary (either as a key or value), the dictionary
becomes tracked.  During a full garbage collection (all generations),
the collector will untrack any dictionaries whose contents are not
tracked.

The module provides the Python function is_tracked(obj), which returns
the CURRENT tracking status of the object.  Subsequent garbage
collections may change the tracking status of the object.

Untracking of certain containers was introduced in issue #4688, and
the algorithm was refined in response to issue #14775.
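The dictionary rules above, seen from the Python side:

```python
import gc

d = {}
assert not gc.is_tracked(d)    # dicts start out untracked

d["a"] = 1
assert not gc.is_tracked(d)    # str/int contents cannot form a cycle

d["b"] = []
assert gc.is_tracked(d)        # inserting a tracked value tracks the dict

del d["b"]
gc.collect()                   # a full collection re-examines dicts
assert not gc.is_tracked(d)    # and untracks those that qualify again
```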
/* set for debugging information */
#define DEBUG_STATS             (1<<0) /* print collection statistics */
#define DEBUG_COLLECTABLE       (1<<1) /* print collectable objects */
#define DEBUG_UNCOLLECTABLE     (1<<2) /* print uncollectable objects */
#define DEBUG_SAVEALL           (1<<5) /* save all garbage in gc.garbage */
#define DEBUG_LEAK              DEBUG_COLLECTABLE | \
                                DEBUG_UNCOLLECTABLE | \
                                DEBUG_SAVEALL
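These bits are exposed as constants on the gc module, and set_debug()/get_debug() manipulate the debug flags; for instance:

```python
import gc

# DEBUG_LEAK is the same OR of the bits defined above.
assert gc.DEBUG_LEAK == (gc.DEBUG_COLLECTABLE |
                         gc.DEBUG_UNCOLLECTABLE |
                         gc.DEBUG_SAVEALL)

gc.set_debug(gc.DEBUG_STATS)   # print stats on each collection (to stderr)
assert gc.get_debug() == gc.DEBUG_STATS
gc.set_debug(0)                # turn debugging back off
```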
static PyObject *tmod = NULL;

/* Running stats per generation */
struct gc_generation_stats {
    /* total number of collections */
    Py_ssize_t collections;
    /* total number of collected objects */
    Py_ssize_t collected;
    /* total number of uncollectable objects (put into gc.garbage) */
    Py_ssize_t uncollectable;
};

static struct gc_generation_stats generation_stats[NUM_GENERATIONS];
/*--------------------------------------------------------------------------
Between collections, every gc'ed object has one of two gc_refs values:

GC_UNTRACKED
    The initial state; objects returned by PyObject_GC_Malloc are in this
    state.  The object doesn't live in any generation list, and its
    tp_traverse slot must not be called.

GC_REACHABLE
    The object lives in some generation list, and its tp_traverse is safe to
    call.  An object transitions to GC_REACHABLE when PyObject_GC_Track
    is called.

During a collection, gc_refs can temporarily take on other states:

>= 0
    At the start of a collection, update_refs() copies the true refcount
    to gc_refs, for each object in the generation being collected.
    subtract_refs() then adjusts gc_refs so that it equals the number of
    times an object is referenced directly from outside the generation
    being collected.
    gc_refs remains >= 0 throughout these steps.

GC_TENTATIVELY_UNREACHABLE
    move_unreachable() then moves objects not reachable (whether directly or
    indirectly) from outside the generation into an "unreachable" set.
    Objects that are found to be reachable have gc_refs set to GC_REACHABLE
    again.  Objects that are found to be unreachable have gc_refs set to
    GC_TENTATIVELY_UNREACHABLE.  It's "tentatively" because the pass doing
    this can't be sure until it ends, and GC_TENTATIVELY_UNREACHABLE may
    transition back to GC_REACHABLE.

    Only objects with GC_TENTATIVELY_UNREACHABLE still set are candidates
    for collection.  If it's decided not to collect such an object (e.g.,
    it has a __del__ method), its gc_refs is restored to GC_REACHABLE again.
----------------------------------------------------------------------------
*/

#define GC_UNTRACKED                    _PyGC_REFS_UNTRACKED
#define GC_REACHABLE                    _PyGC_REFS_REACHABLE
#define GC_TENTATIVELY_UNREACHABLE      _PyGC_REFS_TENTATIVELY_UNREACHABLE

#define IS_TRACKED(o) (_PyGC_REFS(o) != GC_UNTRACKED)
#define IS_REACHABLE(o) (_PyGC_REFS(o) == GC_REACHABLE)
#define IS_TENTATIVELY_UNREACHABLE(o) ( \
    _PyGC_REFS(o) == GC_TENTATIVELY_UNREACHABLE)
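This state machine exists to reclaim cycles that reference counting alone cannot.  A minimal demonstration from Python:

```python
import gc

gc.disable()                  # keep automatic collection out of the way
try:
    cycle = []
    cycle.append(cycle)       # refcount can never drop to 0 on its own
    del cycle                 # now unreachable, but still refcounted
    found = gc.collect()      # the cycle detector finds and frees it
    assert found >= 1         # at least our one-object cycle was unreachable
finally:
    gc.enable()
```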
/*** list functions ***/

gc_list_init(PyGC_Head *list)
    list->gc.gc_prev = list;
    list->gc.gc_next = list;

gc_list_is_empty(PyGC_Head *list)
    return (list->gc.gc_next == list);

/* This became unused after gc_list_move() was introduced. */
/* Append `node` to `list`. */
gc_list_append(PyGC_Head *node, PyGC_Head *list)
    node->gc.gc_next = list;
    node->gc.gc_prev = list->gc.gc_prev;
    node->gc.gc_prev->gc.gc_next = node;
    list->gc.gc_prev = node;

/* Remove `node` from the gc list it's currently in. */
gc_list_remove(PyGC_Head *node)
    node->gc.gc_prev->gc.gc_next = node->gc.gc_next;
    node->gc.gc_next->gc.gc_prev = node->gc.gc_prev;
    node->gc.gc_next = NULL; /* object is not currently tracked */
/* Move `node` from the gc list it's currently in (which is not explicitly
 * named here) to the end of `list`.  This is semantically the same as
 * gc_list_remove(node) followed by gc_list_append(node, list).
 */
gc_list_move(PyGC_Head *node, PyGC_Head *list)
    PyGC_Head *current_prev = node->gc.gc_prev;
    PyGC_Head *current_next = node->gc.gc_next;
    /* Unlink from current list. */
    current_prev->gc.gc_next = current_next;
    current_next->gc.gc_prev = current_prev;
    /* Relink at end of new list. */
    new_prev = node->gc.gc_prev = list->gc.gc_prev;
    new_prev->gc.gc_next = list->gc.gc_prev = node;
    node->gc.gc_next = list;
/* append list `from` onto list `to`; `from` becomes an empty list */
gc_list_merge(PyGC_Head *from, PyGC_Head *to)
    if (!gc_list_is_empty(from)) {
        tail = to->gc.gc_prev;
        tail->gc.gc_next = from->gc.gc_next;
        tail->gc.gc_next->gc.gc_prev = tail;
        to->gc.gc_prev = from->gc.gc_prev;
        to->gc.gc_prev->gc.gc_next = to;

gc_list_size(PyGC_Head *list)
    for (gc = list->gc.gc_next; gc != list; gc = gc->gc.gc_next) {

/* Append objects in a GC list to a Python list.
 * Return 0 if all OK, < 0 if error (out of memory for list).
 */
append_objects(PyObject *py_list, PyGC_Head *gc_list)
    for (gc = gc_list->gc.gc_next; gc != gc_list; gc = gc->gc.gc_next) {
        PyObject *op = FROM_GC(gc);
        if (PyList_Append(py_list, op)) {
            return -1; /* exception */

/*** end of list stuff ***/
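For readers unfamiliar with the pointer gymnastics above: each gc list is circular and doubly linked, with the header node acting as a sentinel.  A rough Python model (names are illustrative, not CPython's):

```python
class Node:
    """Sentinel-style node: an empty list's header points at itself."""
    def __init__(self):
        self.prev = self.next = self   # gc_list_init

def is_empty(head):                    # gc_list_is_empty
    return head.next is head

def append(node, head):                # gc_list_append: splice before header
    node.next = head
    node.prev = head.prev
    node.prev.next = node
    head.prev = node

def move(node, head):                  # gc_list_move: unlink, then append
    node.prev.next = node.next
    node.next.prev = node.prev
    append(node, head)

young, old, a = Node(), Node(), Node()
append(a, young)
assert not is_empty(young)
move(a, old)
assert is_empty(young) and old.next is a
```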
/* Set all gc_refs = ob_refcnt.  After this, gc_refs is > 0 for all objects
 * in containers, and is GC_REACHABLE for all tracked gc objects not in
 * containers.
 */
update_refs(PyGC_Head *containers)
    PyGC_Head *gc = containers->gc.gc_next;
    for (; gc != containers; gc = gc->gc.gc_next) {
        assert(_PyGCHead_REFS(gc) == GC_REACHABLE);
        _PyGCHead_SET_REFS(gc, Py_REFCNT(FROM_GC(gc)));
        /* Python's cyclic gc should never see an incoming refcount
         * of 0:  if something decref'ed to 0, it should have been
         * deallocated immediately at that time.
         * Possible cause (if the assert triggers):  a tp_dealloc
         * routine left a gc-aware object tracked during its teardown
         * phase, and did something -- or allowed something to happen --
         * that called back into Python.  gc can trigger then, and may
         * see the still-tracked dying object.  Before this assert
         * was added, such mistakes went on to allow gc to try to
         * delete the object again.  In a debug build, that caused
         * a mysterious segfault, when _Py_ForgetReference tried
         * to remove the object from the doubly-linked list of all
         * objects a second time.  In a release build, an actual
         * double deallocation occurred, which leads to corruption
         * of the allocator's internal bookkeeping pointers.  That's
         * so serious that maybe this should be a release-build
         * check instead of an assert?
         */
        assert(_PyGCHead_REFS(gc) != 0);
/* A traversal callback for subtract_refs. */
visit_decref(PyObject *op, void *data)
    if (PyObject_IS_GC(op)) {
        PyGC_Head *gc = AS_GC(op);
        /* We're only interested in gc_refs for objects in the
         * generation being collected, which can be recognized
         * because only they have positive gc_refs.
         */
        assert(_PyGCHead_REFS(gc) != 0); /* else refcount was too small */
        if (_PyGCHead_REFS(gc) > 0)
            _PyGCHead_DECREF(gc);

/* Subtract internal references from gc_refs.  After this, gc_refs is >= 0
 * for all objects in containers, and is GC_REACHABLE for all tracked gc
 * objects not in containers.  The ones with gc_refs > 0 are directly
 * reachable from outside containers, and so can't be collected.
 */
subtract_refs(PyGC_Head *containers)
    traverseproc traverse;
    PyGC_Head *gc = containers->gc.gc_next;
    for (; gc != containers; gc = gc->gc.gc_next) {
        traverse = Py_TYPE(FROM_GC(gc))->tp_traverse;
        (void) traverse(FROM_GC(gc),
                        (visitproc)visit_decref,
/* A traversal callback for move_unreachable. */
visit_reachable(PyObject *op, PyGC_Head *reachable)
    if (PyObject_IS_GC(op)) {
        PyGC_Head *gc = AS_GC(op);
        const Py_ssize_t gc_refs = _PyGCHead_REFS(gc);

            /* This is in move_unreachable's 'young' list, but
             * the traversal hasn't yet gotten to it.  All
             * we need to do is tell move_unreachable that it's
             * reachable.
             */
            _PyGCHead_SET_REFS(gc, 1);
        else if (gc_refs == GC_TENTATIVELY_UNREACHABLE) {
            /* This had gc_refs = 0 when move_unreachable got
             * to it, but turns out it's reachable after all.
             * Move it back to move_unreachable's 'young' list,
             * and move_unreachable will eventually get to it
             * again.
             */
            gc_list_move(gc, reachable);
            _PyGCHead_SET_REFS(gc, 1);
        /* Else there's nothing to do.
         * If gc_refs > 0, it must be in move_unreachable's 'young'
         * list, and move_unreachable will eventually get to it.
         * If gc_refs == GC_REACHABLE, it's either in some other
         * generation so we don't care about it, or move_unreachable
         * already dealt with it.
         * If gc_refs == GC_UNTRACKED, it must be ignored.
         */
            || gc_refs == GC_REACHABLE
            || gc_refs == GC_UNTRACKED);
/* Move the unreachable objects from young to unreachable.  After this,
 * all objects in young have gc_refs = GC_REACHABLE, and all objects in
 * unreachable have gc_refs = GC_TENTATIVELY_UNREACHABLE.  All tracked
 * gc objects not in young or unreachable still have gc_refs = GC_REACHABLE.
 * All objects in young after this are directly or indirectly reachable
 * from outside the original young; and all objects in unreachable are
 * not.
 */
move_unreachable(PyGC_Head *young, PyGC_Head *unreachable)
    PyGC_Head *gc = young->gc.gc_next;

    /* Invariants:  all objects "to the left" of us in young have gc_refs
     * = GC_REACHABLE, and are indeed reachable (directly or indirectly)
     * from outside the young list as it was at entry.  All other objects
     * from the original young "to the left" of us are in unreachable now,
     * and have gc_refs = GC_TENTATIVELY_UNREACHABLE.  All objects to the
     * left of us in 'young' now have been scanned, and no objects here
     * or to the right have been scanned yet.
     */

    while (gc != young) {
        if (_PyGCHead_REFS(gc)) {
            /* gc is definitely reachable from outside the
             * original 'young'.  Mark it as such, and traverse
             * its pointers to find any other objects that may
             * be directly reachable from it.  Note that the
             * call to tp_traverse may append objects to young,
             * so we have to wait until it returns to determine
             * the next object to visit.
             */
            PyObject *op = FROM_GC(gc);
            traverseproc traverse = Py_TYPE(op)->tp_traverse;
            assert(_PyGCHead_REFS(gc) > 0);
            _PyGCHead_SET_REFS(gc, GC_REACHABLE);
                            (visitproc)visit_reachable,
            next = gc->gc.gc_next;
            if (PyTuple_CheckExact(op)) {
                _PyTuple_MaybeUntrack(op);
            /* This *may* be unreachable.  To make progress,
             * assume it is.  gc isn't directly reachable from
             * any object we've already traversed, but may be
             * reachable from an object we haven't gotten to yet.
             * visit_reachable will eventually move gc back into
             * young if that's so, and we'll see it again.
             */
            next = gc->gc.gc_next;
            gc_list_move(gc, unreachable);
            _PyGCHead_SET_REFS(gc, GC_TENTATIVELY_UNREACHABLE);
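Taken together, update_refs(), subtract_refs() and move_unreachable() implement the classic trick: an object is garbage iff no reference to it originates outside the candidate set.  A toy Python model of the three passes (dict-based, purely illustrative, not CPython's data layout):

```python
def find_unreachable(refcount, edges):
    """refcount: total refs per object; edges: obj -> list of referents."""
    gc_refs = dict(refcount)               # update_refs: copy the refcounts
    for src, targets in edges.items():     # subtract_refs: drop internal refs
        for t in targets:
            gc_refs[t] -= 1
    # move_unreachable: objects left with gc_refs > 0 have external roots;
    # everything transitively reachable from them survives too.
    reachable = {o for o, n in gc_refs.items() if n > 0}
    stack = list(reachable)
    while stack:
        for t in edges.get(stack.pop(), ()):
            if t not in reachable:
                reachable.add(t)
                stack.append(t)
    return set(refcount) - reachable

# a <-> b is an orphaned cycle; c also has one reference from outside.
edges = {"a": ["b"], "b": ["a"], "c": []}
refcount = {"a": 1, "b": 1, "c": 2}
assert find_unreachable(refcount, edges) == {"a", "b"}
```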
/* Try to untrack all currently tracked dictionaries */
untrack_dicts(PyGC_Head *head)
    PyGC_Head *next, *gc = head->gc.gc_next;
        PyObject *op = FROM_GC(gc);
        next = gc->gc.gc_next;
        if (PyDict_CheckExact(op))
            _PyDict_MaybeUntrack(op);

/* Return true if object has a pre-PEP 442 finalization method. */
has_legacy_finalizer(PyObject *op)
    return op->ob_type->tp_del != NULL;
/* Move the objects in unreachable with tp_del slots into `finalizers`.
 * Objects moved into `finalizers` have gc_refs set to GC_REACHABLE; the
 * objects remaining in unreachable are left at GC_TENTATIVELY_UNREACHABLE.
 */
move_legacy_finalizers(PyGC_Head *unreachable, PyGC_Head *finalizers)
    /* March over unreachable.  Move objects with finalizers into
     * `finalizers`.
     */
    for (gc = unreachable->gc.gc_next; gc != unreachable; gc = next) {
        PyObject *op = FROM_GC(gc);

        assert(IS_TENTATIVELY_UNREACHABLE(op));
        next = gc->gc.gc_next;

        if (has_legacy_finalizer(op)) {
            gc_list_move(gc, finalizers);
            _PyGCHead_SET_REFS(gc, GC_REACHABLE);

/* A traversal callback for move_legacy_finalizer_reachable. */
visit_move(PyObject *op, PyGC_Head *tolist)
    if (PyObject_IS_GC(op)) {
        if (IS_TENTATIVELY_UNREACHABLE(op)) {
            PyGC_Head *gc = AS_GC(op);
            gc_list_move(gc, tolist);
            _PyGCHead_SET_REFS(gc, GC_REACHABLE);

/* Move objects that are reachable from finalizers, from the unreachable set
 * into the `finalizers` set.
 */
move_legacy_finalizer_reachable(PyGC_Head *finalizers)
    traverseproc traverse;
    PyGC_Head *gc = finalizers->gc.gc_next;
    for (; gc != finalizers; gc = gc->gc.gc_next) {
        /* Note that the finalizers list may grow during this. */
        traverse = Py_TYPE(FROM_GC(gc))->tp_traverse;
        (void) traverse(FROM_GC(gc),
                        (visitproc)visit_move,
/* Clear all weakrefs to unreachable objects, and if such a weakref has a
 * callback, invoke it if necessary.  Note that it's possible for such
 * weakrefs to be outside the unreachable set -- indeed, those are precisely
 * the weakrefs whose callbacks must be invoked.  See gc_weakref.txt for
 * overview & some details.  Some weakrefs with callbacks may be reclaimed
 * directly by this routine; the number reclaimed is the return value.  Other
 * weakrefs with callbacks may be moved into the `old` generation.  Objects
 * moved into `old` have gc_refs set to GC_REACHABLE; the objects remaining in
 * unreachable are left at GC_TENTATIVELY_UNREACHABLE.  When this returns,
 * no object in `unreachable` is weakly referenced anymore.
 */
handle_weakrefs(PyGC_Head *unreachable, PyGC_Head *old)
    PyObject *op;               /* generally FROM_GC(gc) */
    PyWeakReference *wr;        /* generally a cast of op */
    PyGC_Head wrcb_to_call;     /* weakrefs with callbacks to call */

    gc_list_init(&wrcb_to_call);

    /* Clear all weakrefs to the objects in unreachable.  If such a weakref
     * also has a callback, move it into `wrcb_to_call` if the callback
     * needs to be invoked.  Note that we cannot invoke any callbacks until
     * all weakrefs to unreachable objects are cleared, lest the callback
     * resurrect an unreachable object via a still-active weakref.  We
     * make another pass over wrcb_to_call, invoking callbacks, after this
     * pass completes.
     */
    for (gc = unreachable->gc.gc_next; gc != unreachable; gc = next) {
        PyWeakReference **wrlist;

        assert(IS_TENTATIVELY_UNREACHABLE(op));
        next = gc->gc.gc_next;

        if (! PyType_SUPPORTS_WEAKREFS(Py_TYPE(op)))

        /* It supports weakrefs.  Does it have any? */
        wrlist = (PyWeakReference **)
                 PyObject_GET_WEAKREFS_LISTPTR(op);

        /* `op` may have some weakrefs.  March over the list, clear
         * all the weakrefs, and move the weakrefs with callbacks
         * that must be called into wrcb_to_call.
         */
        for (wr = *wrlist; wr != NULL; wr = *wrlist) {
            PyGC_Head *wrasgc;      /* AS_GC(wr) */

            /* _PyWeakref_ClearRef clears the weakref but leaves
             * the callback pointer intact.  Obscure:  it also
             * changes *wrlist.
             */
            assert(wr->wr_object == op);
            _PyWeakref_ClearRef(wr);
            assert(wr->wr_object == Py_None);
            if (wr->wr_callback == NULL)
                continue;           /* no callback */

    /* Headache time.  `op` is going away, and is weakly referenced by
     * `wr`, which has a callback.  Should the callback be invoked?  If wr
     * is also trash, no:
     *
     * 1. There's no need to call it.  The object and the weakref are
     *    both going away, so it's legitimate to pretend the weakref is
     *    going away first.  The user has to ensure a weakref outlives its
     *    referent if they want a guarantee that the wr callback will get
     *    invoked.
     *
     * 2. It may be catastrophic to call it.  If the callback is also in
     *    cyclic trash (CT), then although the CT is unreachable from
     *    outside the current generation, CT may be reachable from the
     *    callback.  Then the callback could resurrect insane objects.
     *
     * Since the callback is never needed and may be unsafe in this case,
     * wr is simply left in the unreachable set.  Note that because we
     * already called _PyWeakref_ClearRef(wr), its callback will never
     * trigger.
     *
     * OTOH, if wr isn't part of CT, we should invoke the callback:  the
     * weakref outlived the trash.  Note that since wr isn't CT in this
     * case, its callback can't be CT either -- wr acted as an external
     * root to this generation, and therefore its callback did too.  So
     * nothing in CT is reachable from the callback either, so it's hard
     * to imagine how calling it later could create a problem for us.  wr
     * is moved to wrcb_to_call in this case.
     */
            if (IS_TENTATIVELY_UNREACHABLE(wr))
            assert(IS_REACHABLE(wr));

            /* Create a new reference so that wr can't go away
             * before we can process it again.
             */

            /* Move wr to wrcb_to_call, for the next pass. */
            assert(wrasgc != next); /* wrasgc is reachable, but
                                       next isn't, so they can't
                                       be the same */
            gc_list_move(wrasgc, &wrcb_to_call);
    /* Invoke the callbacks we decided to honor.  It's safe to invoke them
     * because they can't reference unreachable objects.
     */
    while (! gc_list_is_empty(&wrcb_to_call)) {
        gc = wrcb_to_call.gc.gc_next;
        assert(IS_REACHABLE(op));
        assert(PyWeakref_Check(op));
        wr = (PyWeakReference *)op;
        callback = wr->wr_callback;
        assert(callback != NULL);

        /* copy-paste of weakrefobject.c's handle_callback() */
        temp = PyObject_CallFunctionObjArgs(callback, wr, NULL);
            PyErr_WriteUnraisable(callback);

        /* Give up the reference we created in the first pass.  When
         * op's refcount hits 0 (which it may or may not do right now),
         * op's tp_dealloc will decref op->wr_callback too.  Note
         * that the refcount probably will hit 0 now, and because this
         * weakref was reachable to begin with, gc didn't already
         * add it to its count of freed objects.  Example:  a reachable
         * weak value dict maps some key to this reachable weakref.
         * The callback removes this key->weakref mapping from the
         * dict, leaving no other references to the weakref (excepting
         * ours).
         */
        if (wrcb_to_call.gc.gc_next == gc) {
            /* object is still alive -- move it */
            gc_list_move(gc, old);
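The upshot of the policy above, seen from Python: a weakref that lives outside the dying cycle has its callback invoked, and only after every weakref to the trash has been cleared.  A small sketch:

```python
import gc
import weakref

class Box:
    pass

calls = []
box = Box()
box.self_ref = box                 # make box part of a reference cycle
wr = weakref.ref(box, lambda ref: calls.append("died"))  # wr is outside the cycle

del box                            # the cycle is now unreachable
gc.collect()                       # clears wr, then runs its callback
assert calls == ["died"]
assert wr() is None
```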
debug_cycle(char *msg, PyObject *op)
    PySys_FormatStderr("gc: %s <%s %p>\n",
                       msg, Py_TYPE(op)->tp_name, op);

/* Handle uncollectable garbage (cycles with tp_del slots, and stuff reachable
 * only from such cycles).
 * If DEBUG_SAVEALL, all objects in finalizers are appended to the module
 * garbage list (a Python list), else only the objects in finalizers with
 * __del__ methods are appended to garbage.  All objects in finalizers are
 * merged into the old list regardless.
 * Returns 0 if all OK, <0 on error (out of memory to grow the garbage list).
 * The finalizers list is made empty on a successful return.
 */
handle_legacy_finalizers(PyGC_Head *finalizers, PyGC_Head *old)
    PyGC_Head *gc = finalizers->gc.gc_next;

    if (garbage == NULL) {
        garbage = PyList_New(0);
            Py_FatalError("gc couldn't create gc.garbage list");
    for (; gc != finalizers; gc = gc->gc.gc_next) {
        PyObject *op = FROM_GC(gc);
        if ((debug & DEBUG_SAVEALL) || has_legacy_finalizer(op)) {
            if (PyList_Append(garbage, op) < 0)

    gc_list_merge(finalizers, old);
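From Python, the DEBUG_SAVEALL path described above is the usual way to inspect what the collector would have freed; a sketch:

```python
import gc

gc.set_debug(gc.DEBUG_SAVEALL)   # divert would-be-freed objects to gc.garbage
try:
    cycle = []
    cycle.append(cycle)
    del cycle
    gc.collect()
    # Our self-referential list is now sitting in gc.garbage for inspection.
    assert any(isinstance(o, list) and o and o[0] is o for o in gc.garbage)
finally:
    gc.set_debug(0)
    gc.garbage.clear()
```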
finalize_garbage(PyGC_Head *collectable, PyGC_Head *old)
    PyGC_Head *gc = collectable->gc.gc_next;

    for (; gc != collectable; gc = gc->gc.gc_next) {
        PyObject *op = FROM_GC(gc);

        if (!_PyGCHead_FINALIZED(gc) &&
                PyType_HasFeature(Py_TYPE(op), Py_TPFLAGS_HAVE_FINALIZE) &&
                (finalize = Py_TYPE(op)->tp_finalize) != NULL) {
            _PyGCHead_SET_FINALIZED(gc, 1);
            if (Py_REFCNT(op) == 1) {
                /* op will be destroyed */
/* Walk the collectable list and check that they are really unreachable
   from the outside (some objects could have been resurrected by a
   finalizer). */
check_garbage(PyGC_Head *collectable)
    for (gc = collectable->gc.gc_next; gc != collectable;
         gc = gc->gc.gc_next) {
        _PyGCHead_SET_REFS(gc, Py_REFCNT(FROM_GC(gc)));
        assert(_PyGCHead_REFS(gc) != 0);
    subtract_refs(collectable);
    for (gc = collectable->gc.gc_next; gc != collectable;
         gc = gc->gc.gc_next) {
        assert(_PyGCHead_REFS(gc) >= 0);
        if (_PyGCHead_REFS(gc) != 0)

revive_garbage(PyGC_Head *collectable)
    for (gc = collectable->gc.gc_next; gc != collectable;
         gc = gc->gc.gc_next) {
        _PyGCHead_SET_REFS(gc, GC_REACHABLE);
/* Break reference cycles by clearing the containers involved.  This is
 * tricky business as the lists can be changing and we don't know which
 * objects may be freed.  It is possible I screwed something up here.
 */
delete_garbage(PyGC_Head *collectable, PyGC_Head *old)
    while (!gc_list_is_empty(collectable)) {
        PyGC_Head *gc = collectable->gc.gc_next;
        PyObject *op = FROM_GC(gc);

        if (debug & DEBUG_SAVEALL) {
            PyList_Append(garbage, op);
        if ((clear = Py_TYPE(op)->tp_clear) != NULL) {
        if (collectable->gc.gc_next == gc) {
            /* object is still alive, move it, it may die later */
            gc_list_move(gc, old);
            _PyGCHead_SET_REFS(gc, GC_REACHABLE);
/* Clear all free lists.
 * All free lists are cleared during the collection of the highest generation.
 * Allocated items in the free list may keep a pymalloc arena occupied.
 * Clearing the free lists may give back memory to the OS earlier.
 */
clear_freelists(void)
    (void)PyMethod_ClearFreeList();
    (void)PyFrame_ClearFreeList();
    (void)PyCFunction_ClearFreeList();
    (void)PyTuple_ClearFreeList();
    (void)PyUnicode_ClearFreeList();
    (void)PyFloat_ClearFreeList();
    (void)PyList_ClearFreeList();
    (void)PyDict_ClearFreeList();
    (void)PySet_ClearFreeList();
_Py_IDENTIFIER(time);
PyObject *f = _PyObject_CallMethodId(tmod, &PyId_time, NULL);
if (PyFloat_Check(f))
    result = PyFloat_AsDouble(f);
/* This is the main function.  Read this to understand how the
 * collection process works. */
collect(int generation, Py_ssize_t *n_collected, Py_ssize_t *n_uncollectable,
    Py_ssize_t m = 0; /* # objects collected */
    Py_ssize_t n = 0; /* # unreachable objects that couldn't be collected */
    PyGC_Head *young; /* the generation we are examining */
    PyGC_Head *old; /* next older generation */
    PyGC_Head unreachable; /* non-problematic unreachable trash */
    PyGC_Head finalizers; /* objects with, & reachable from, __del__ */
    struct gc_generation_stats *stats = &generation_stats[generation];

    if (debug & DEBUG_STATS) {
        PySys_WriteStderr("gc: collecting generation %d...\n",
        PySys_WriteStderr("gc: objects in each generation:");
        for (i = 0; i < NUM_GENERATIONS; i++)
            PySys_WriteStderr(" %" PY_FORMAT_SIZE_T "d",
                              gc_list_size(GEN_HEAD(i)));
        PySys_WriteStderr("\n");

    /* update collection and allocation counters */
    if (generation+1 < NUM_GENERATIONS)
        generations[generation+1].count += 1;
    for (i = 0; i <= generation; i++)
        generations[i].count = 0;

    /* merge younger generations with the one we are currently collecting */
    for (i = 0; i < generation; i++) {
        gc_list_merge(GEN_HEAD(i), GEN_HEAD(generation));

    /* handy references */
    young = GEN_HEAD(generation);
    if (generation < NUM_GENERATIONS-1)
        old = GEN_HEAD(generation+1);
    /* Using ob_refcnt and gc_refs, calculate which objects in the
     * container set are reachable from outside the set (i.e., have a
     * refcount greater than 0 when all the references within the
     * set are taken into account).
     */
    subtract_refs(young);

    /* Leave everything reachable from outside young in young, and move
     * everything else (in young) to unreachable.
     * NOTE:  This used to move the reachable objects into a reachable
     * set instead.  But most things usually turn out to be reachable,
     * so it's more efficient to move the unreachable things.
     */
    gc_list_init(&unreachable);
    move_unreachable(young, &unreachable);

    /* Move reachable objects to next generation. */
    if (generation == NUM_GENERATIONS - 2) {
        long_lived_pending += gc_list_size(young);
        gc_list_merge(young, old);
        /* We only untrack dicts in full collections, to avoid quadratic
           dict build-up.  See issue #14775. */
        untrack_dicts(young);
        long_lived_pending = 0;
        long_lived_total = gc_list_size(young);

    /* All objects in unreachable are trash, but objects reachable from
     * legacy finalizers (e.g. tp_del) can't safely be deleted.
     */
    gc_list_init(&finalizers);
    move_legacy_finalizers(&unreachable, &finalizers);
    /* finalizers contains the unreachable objects with a legacy finalizer;
     * unreachable objects reachable *from* those are also uncollectable,
     * and we move those into the finalizers list too.
     */
    move_legacy_finalizer_reachable(&finalizers);
994
/* Collect statistics on collectable objects found and print
995
* debugging information.
997
for (gc = unreachable.gc.gc_next; gc != &unreachable;
998
gc = gc->gc.gc_next) {
1000
if (debug & DEBUG_COLLECTABLE) {
1001
debug_cycle("collectable", FROM_GC(gc));
1005
/* Clear weakrefs and invoke callbacks as necessary. */
1006
m += handle_weakrefs(&unreachable, old);
1008
/* Call tp_finalize on objects which have one. */
1009
finalize_garbage(&unreachable, old);
1011
if (check_garbage(&unreachable)) {
1012
revive_garbage(&unreachable);
1013
gc_list_merge(&unreachable, old);
1016
/* Call tp_clear on objects in the unreachable set. This will cause
1017
* the reference cycles to be broken. It may also cause some objects
1018
* in finalizers to be freed.
1020
delete_garbage(&unreachable, old);
1023
    /* Collect statistics on uncollectable objects found and print
     * debugging information. */
    for (gc = finalizers.gc.gc_next;
         gc != &finalizers;
         gc = gc->gc.gc_next) {
        n++;
        if (debug & DEBUG_UNCOLLECTABLE)
            debug_cycle("uncollectable", FROM_GC(gc));
    }
    if (debug & DEBUG_STATS) {
        double t2 = get_time();
        if (m == 0 && n == 0)
            PySys_WriteStderr("gc: done");
        else
            PySys_WriteStderr(
                "gc: done, "
                "%" PY_FORMAT_SIZE_T "d unreachable, "
                "%" PY_FORMAT_SIZE_T "d uncollectable",
                n+m, n);
        PySys_WriteStderr(", %.4fs elapsed", t2-t1);
        PySys_WriteStderr(".\n");
    }
    /* Append instances in the uncollectable set to a Python
     * reachable list of garbage.  The programmer has to deal with
     * this if they insist on creating this type of structure.
     */
    (void)handle_legacy_finalizers(&finalizers, old);

    /* Clear free list only during the collection of the highest
     * generation */
    if (generation == NUM_GENERATIONS-1) {
        clear_freelists();
    }

    if (PyErr_Occurred()) {
        if (nofail) {
            PyErr_Clear();
        }
        else {
            if (gc_str == NULL)
                gc_str = PyUnicode_FromString("garbage collection");
            PyErr_WriteUnraisable(gc_str);
            Py_FatalError("unexpected exception during garbage collection");
        }
    }

    /* Update stats */
    if (n_collected)
        *n_collected = m;
    if (n_uncollectable)
        *n_uncollectable = n;
    stats->collections++;
    stats->collected += m;
    stats->uncollectable += n;
    return n+m;
}
/* Invoke progress callbacks to notify clients that garbage collection
 * is starting or stopping
 */
static void
invoke_gc_callback(const char *phase, int generation,
                   Py_ssize_t collected, Py_ssize_t uncollectable)
{
    Py_ssize_t i;
    PyObject *info = NULL;

    /* we may get called very early */
    if (callbacks == NULL)
        return;
    /* The local variable cannot be rebound, check it for sanity */
    assert(callbacks != NULL && PyList_CheckExact(callbacks));
    if (PyList_GET_SIZE(callbacks) != 0) {
        info = Py_BuildValue("{sisnsn}",
            "generation", generation,
            "collected", collected,
            "uncollectable", uncollectable);
        if (info == NULL) {
            PyErr_WriteUnraisable(NULL);
            return;
        }
    }
    for (i=0; i<PyList_GET_SIZE(callbacks); i++) {
        PyObject *r, *cb = PyList_GET_ITEM(callbacks, i);
        Py_INCREF(cb); /* make sure cb doesn't go away */
        r = PyObject_CallFunction(cb, "sO", phase, info);
        Py_XDECREF(r);
        if (r == NULL)
            PyErr_WriteUnraisable(cb);
        Py_DECREF(cb);
    }
    Py_XDECREF(info);
}
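/* Illustrative sketch (Python level) of how the gc.callbacks list walked
 * above is typically used; the (phase, info) callback signature follows
 * from the "sO" call format:
 *
 *     import gc
 *     def on_gc(phase, info):
 *         # phase is "start" or "stop"; info carries "generation",
 *         # "collected" and "uncollectable".
 *         print(phase, info)
 *     gc.callbacks.append(on_gc)
 */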
/* Perform garbage collection of a generation and invoke
 * progress callbacks.
 */
static Py_ssize_t
collect_with_callback(int generation)
{
    Py_ssize_t result, collected, uncollectable;
    invoke_gc_callback("start", generation, 0, 0);
    result = collect(generation, &collected, &uncollectable, 0);
    invoke_gc_callback("stop", generation, collected, uncollectable);
    return result;
}
static Py_ssize_t
collect_generations(void)
{
    int i;
    Py_ssize_t n = 0;

    /* Find the oldest generation (highest numbered) where the count
     * exceeds the threshold.  Objects in that generation and
     * generations younger than it will be collected. */
    for (i = NUM_GENERATIONS-1; i >= 0; i--) {
        if (generations[i].count > generations[i].threshold) {
            /* Avoid quadratic performance degradation in number
               of tracked objects. See comments at the beginning
               of this file, and issue #4074.
            */
            if (i == NUM_GENERATIONS - 1
                && long_lived_pending < long_lived_total / 4)
                continue;
            n = collect_with_callback(i);
            break;
        }
    }
    return n;
}
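/* Illustrative sketch (Python level) of the thresholds that drive the
 * loop above; the default values come from the generations table:
 *
 *     import gc
 *     gc.get_threshold()      # (700, 10, 10) by default
 *     # generation 0 is collected once net allocations exceed 700;
 *     # generation 1 after 10 generation-0 collections; generation 2
 *     # after 10 generation-1 collections, unless the
 *     # long_lived_pending < long_lived_total / 4 test postpones it.
 */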
PyDoc_STRVAR(gc_enable__doc__,
"enable() -> None\n"
"\n"
"Enable automatic garbage collection.\n");

static PyObject *
gc_enable(PyObject *self, PyObject *noargs)
{
    enabled = 1;
    Py_RETURN_NONE;
}

PyDoc_STRVAR(gc_disable__doc__,
"disable() -> None\n"
"\n"
"Disable automatic garbage collection.\n");

static PyObject *
gc_disable(PyObject *self, PyObject *noargs)
{
    enabled = 0;
    Py_RETURN_NONE;
}

PyDoc_STRVAR(gc_isenabled__doc__,
"isenabled() -> status\n"
"\n"
"Returns true if automatic garbage collection is enabled.\n");

static PyObject *
gc_isenabled(PyObject *self, PyObject *noargs)
{
    return PyBool_FromLong((long)enabled);
}
PyDoc_STRVAR(gc_collect__doc__,
"collect([generation]) -> n\n"
"\n"
"With no arguments, run a full collection.  The optional argument\n"
"may be an integer specifying which generation to collect.  A ValueError\n"
"is raised if the generation number is invalid.\n\n"
"The number of unreachable objects is returned.\n");

static PyObject *
gc_collect(PyObject *self, PyObject *args, PyObject *kws)
{
    static char *keywords[] = {"generation", NULL};
    int genarg = NUM_GENERATIONS - 1;
    Py_ssize_t n;

    if (!PyArg_ParseTupleAndKeywords(args, kws, "|i", keywords, &genarg))
        return NULL;
    else if (genarg < 0 || genarg >= NUM_GENERATIONS) {
        PyErr_SetString(PyExc_ValueError, "invalid generation");
        return NULL;
    }
    else {
        if (collecting)
            n = 0; /* already collecting, don't do anything */
        else {
            collecting = 1;
            n = collect_with_callback(genarg);
            collecting = 0;
        }
    }
    return PyLong_FromSsize_t(n);
}
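/* Usage sketch (Python level; illustrative):
 *
 *     import gc
 *     n = gc.collect()    # full collection; returns unreachable count
 *     gc.collect(0)       # collect only the youngest generation
 *     gc.collect(3)       # raises ValueError: invalid generation
 */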
PyDoc_STRVAR(gc_set_debug__doc__,
"set_debug(flags) -> None\n"
"\n"
"Set the garbage collection debugging flags. Debugging information is\n"
"written to sys.stderr.\n"
"\n"
"flags is an integer and can have the following bits turned on:\n"
"\n"
"  DEBUG_STATS - Print statistics during collection.\n"
"  DEBUG_COLLECTABLE - Print collectable objects found.\n"
"  DEBUG_UNCOLLECTABLE - Print unreachable but uncollectable objects found.\n"
"  DEBUG_SAVEALL - Save objects to gc.garbage rather than freeing them.\n"
"  DEBUG_LEAK - Debug leaking programs (everything but STATS).\n");

static PyObject *
gc_set_debug(PyObject *self, PyObject *args)
{
    if (!PyArg_ParseTuple(args, "i:set_debug", &debug))
        return NULL;
    Py_RETURN_NONE;
}

PyDoc_STRVAR(gc_get_debug__doc__,
"get_debug() -> flags\n"
"\n"
"Get the garbage collection debugging flags.\n");

static PyObject *
gc_get_debug(PyObject *self, PyObject *noargs)
{
    return Py_BuildValue("i", debug);
}
PyDoc_STRVAR(gc_set_thresh__doc__,
"set_threshold(threshold0, [threshold1, threshold2]) -> None\n"
"\n"
"Sets the collection thresholds.  Setting threshold0 to zero disables\n"
"collection.\n");

static PyObject *
gc_set_thresh(PyObject *self, PyObject *args)
{
    int i;
    if (!PyArg_ParseTuple(args, "i|ii:set_threshold",
                          &generations[0].threshold,
                          &generations[1].threshold,
                          &generations[2].threshold))
        return NULL;
    for (i = 2; i < NUM_GENERATIONS; i++) {
        /* generations higher than 2 get the same threshold */
        generations[i].threshold = generations[2].threshold;
    }
    Py_RETURN_NONE;
}

PyDoc_STRVAR(gc_get_thresh__doc__,
"get_threshold() -> (threshold0, threshold1, threshold2)\n"
"\n"
"Return the current collection thresholds.\n");

static PyObject *
gc_get_thresh(PyObject *self, PyObject *noargs)
{
    return Py_BuildValue("(iii)",
                         generations[0].threshold,
                         generations[1].threshold,
                         generations[2].threshold);
}
PyDoc_STRVAR(gc_get_count__doc__,
"get_count() -> (count0, count1, count2)\n"
"\n"
"Return the current collection counts.\n");

static PyObject *
gc_get_count(PyObject *self, PyObject *noargs)
{
    return Py_BuildValue("(iii)",
                         generations[0].count,
                         generations[1].count,
                         generations[2].count);
}
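/* Usage sketch (Python level; illustrative):
 *
 *     import gc
 *     gc.get_count()      # e.g. (381, 2, 1): count0 is net allocations
 *                         # since the last generation-0 collection;
 *                         # count1 and count2 count collections of the
 *                         # next-younger generation.
 */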
/* Is obj in the tuple objs? */
static int
referrersvisit(PyObject* obj, PyObject *objs)
{
    Py_ssize_t i;
    for (i = 0; i < PyTuple_GET_SIZE(objs); i++)
        if (PyTuple_GET_ITEM(objs, i) == obj)
            return 1;
    return 0;
}

static int
gc_referrers_for(PyObject *objs, PyGC_Head *list, PyObject *resultlist)
{
    PyGC_Head *gc;
    PyObject *obj;
    traverseproc traverse;
    for (gc = list->gc.gc_next; gc != list; gc = gc->gc.gc_next) {
        obj = FROM_GC(gc);
        traverse = Py_TYPE(obj)->tp_traverse;
        if (obj == objs || obj == resultlist)
            continue;
        if (traverse(obj, (visitproc)referrersvisit, objs)) {
            if (PyList_Append(resultlist, obj) < 0)
                return 0; /* error */
        }
    }
    return 1; /* no error */
}

PyDoc_STRVAR(gc_get_referrers__doc__,
"get_referrers(*objs) -> list\n\
Return the list of objects that directly refer to any of objs.");

static PyObject *
gc_get_referrers(PyObject *self, PyObject *args)
{
    int i;
    PyObject *result = PyList_New(0);
    if (!result) return NULL;

    for (i = 0; i < NUM_GENERATIONS; i++) {
        if (!(gc_referrers_for(args, GEN_HEAD(i), result))) {
            Py_DECREF(result);
            return NULL;
        }
    }
    return result;
}
/* Append obj to list; return true if error (out of memory), false if OK. */
static int
referentsvisit(PyObject *obj, PyObject *list)
{
    return PyList_Append(list, obj) < 0;
}

PyDoc_STRVAR(gc_get_referents__doc__,
"get_referents(*objs) -> list\n\
Return the list of objects that are directly referred to by objs.");

static PyObject *
gc_get_referents(PyObject *self, PyObject *args)
{
    Py_ssize_t i;
    PyObject *result = PyList_New(0);

    if (result == NULL)
        return NULL;

    for (i = 0; i < PyTuple_GET_SIZE(args); i++) {
        traverseproc traverse;
        PyObject *obj = PyTuple_GET_ITEM(args, i);

        if (! PyObject_IS_GC(obj))
            continue;
        traverse = Py_TYPE(obj)->tp_traverse;
        if (! traverse)
            continue;
        if (traverse(obj, (visitproc)referentsvisit, result)) {
            Py_DECREF(result);
            return NULL;
        }
    }
    return result;
}
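/* Usage sketch (Python level; illustrative):
 *
 *     import gc
 *     a = [1, 2]
 *     d = {"key": a}
 *     a in gc.get_referents(d)    # True: d's tp_traverse visits a
 *     d in gc.get_referrers(a)    # True: d directly refers to a
 */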
PyDoc_STRVAR(gc_get_objects__doc__,
"get_objects() -> [...]\n"
"\n"
"Return a list of objects tracked by the collector (excluding the list\n"
"returned).\n");

static PyObject *
gc_get_objects(PyObject *self, PyObject *noargs)
{
    int i;
    PyObject* result;

    result = PyList_New(0);
    if (result == NULL)
        return NULL;
    for (i = 0; i < NUM_GENERATIONS; i++) {
        if (append_objects(result, GEN_HEAD(i))) {
            Py_DECREF(result);
            return NULL;
        }
    }
    return result;
}
PyDoc_STRVAR(gc_get_stats__doc__,
"get_stats() -> [...]\n"
"\n"
"Return a list of dictionaries containing per-generation statistics.\n");

static PyObject *
gc_get_stats(PyObject *self, PyObject *noargs)
{
    int i;
    PyObject *result;
    struct gc_generation_stats stats[NUM_GENERATIONS], *st;

    /* To get consistent values despite allocations while constructing
       the result list, we use a snapshot of the running stats. */
    for (i = 0; i < NUM_GENERATIONS; i++) {
        stats[i] = generation_stats[i];
    }

    result = PyList_New(0);
    if (result == NULL)
        return NULL;

    for (i = 0; i < NUM_GENERATIONS; i++) {
        PyObject *dict;
        st = &stats[i];
        dict = Py_BuildValue("{snsnsn}",
                             "collections", st->collections,
                             "collected", st->collected,
                             "uncollectable", st->uncollectable
                            );
        if (dict == NULL) {
            Py_DECREF(result);
            return NULL;
        }
        if (PyList_Append(result, dict)) {
            Py_DECREF(dict);
            Py_DECREF(result);
            return NULL;
        }
        Py_DECREF(dict);
    }
    return result;
}
PyDoc_STRVAR(gc_is_tracked__doc__,
"is_tracked(obj) -> bool\n"
"\n"
"Returns true if the object is tracked by the garbage collector.\n"
"Simple atomic objects will return false.\n");

static PyObject *
gc_is_tracked(PyObject *self, PyObject *obj)
{
    PyObject *result;

    if (PyObject_IS_GC(obj) && IS_TRACKED(obj))
        result = Py_True;
    else
        result = Py_False;
    Py_INCREF(result);
    return result;
}
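/* Usage sketch (Python level; illustrative):
 *
 *     import gc
 *     gc.is_tracked(42)       # False: ints are atomic, never tracked
 *     gc.is_tracked([])       # True: containers are tracked
 */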
PyDoc_STRVAR(gc__doc__,
"This module provides access to the garbage collector for reference cycles.\n"
"\n"
"enable() -- Enable automatic garbage collection.\n"
"disable() -- Disable automatic garbage collection.\n"
"isenabled() -- Returns true if automatic collection is enabled.\n"
"collect() -- Do a full collection right now.\n"
"get_count() -- Return the current collection counts.\n"
"set_debug() -- Set debugging flags.\n"
"get_debug() -- Get debugging flags.\n"
"set_threshold() -- Set the collection thresholds.\n"
"get_threshold() -- Return the current collection thresholds.\n"
"get_objects() -- Return a list of all objects tracked by the collector.\n"
"is_tracked() -- Returns true if a given object is tracked.\n"
"get_referrers() -- Return the list of objects that refer to an object.\n"
"get_referents() -- Return the list of objects that an object refers to.\n");
static PyMethodDef GcMethods[] = {
    {"enable",         gc_enable,     METH_NOARGS,  gc_enable__doc__},
    {"disable",        gc_disable,    METH_NOARGS,  gc_disable__doc__},
    {"isenabled",      gc_isenabled,  METH_NOARGS,  gc_isenabled__doc__},
    {"set_debug",      gc_set_debug,  METH_VARARGS, gc_set_debug__doc__},
    {"get_debug",      gc_get_debug,  METH_NOARGS,  gc_get_debug__doc__},
    {"get_count",      gc_get_count,  METH_NOARGS,  gc_get_count__doc__},
    {"set_threshold",  gc_set_thresh, METH_VARARGS, gc_set_thresh__doc__},
    {"get_threshold",  gc_get_thresh, METH_NOARGS,  gc_get_thresh__doc__},
    {"collect",        (PyCFunction)gc_collect,
        METH_VARARGS | METH_KEYWORDS, gc_collect__doc__},
    {"get_objects",    gc_get_objects, METH_NOARGS, gc_get_objects__doc__},
    {"get_stats",      gc_get_stats,  METH_NOARGS,  gc_get_stats__doc__},
    {"is_tracked",     gc_is_tracked, METH_O,       gc_is_tracked__doc__},
    {"get_referrers",  gc_get_referrers, METH_VARARGS,
        gc_get_referrers__doc__},
    {"get_referents",  gc_get_referents, METH_VARARGS,
        gc_get_referents__doc__},
    {NULL,      NULL}           /* Sentinel */
};
static struct PyModuleDef gcmodule = {
    PyModuleDef_HEAD_INIT,
    "gc",              /* m_name */
    gc__doc__,         /* m_doc */
    -1,                /* m_size */
    GcMethods,         /* m_methods */
    NULL,              /* m_reload */
    NULL,              /* m_traverse */
    NULL,              /* m_clear */
    NULL               /* m_free */
};
PyMODINIT_FUNC
PyInit_gc(void)
{
    PyObject *m;
    PyObject *tmod;

    m = PyModule_Create(&gcmodule);
    if (m == NULL)
        return NULL;

    if (garbage == NULL) {
        garbage = PyList_New(0);
        if (garbage == NULL)
            return NULL;
    }
    Py_INCREF(garbage);
    if (PyModule_AddObject(m, "garbage", garbage) < 0)
        return NULL;

    if (callbacks == NULL) {
        callbacks = PyList_New(0);
        if (callbacks == NULL)
            return NULL;
    }
    Py_INCREF(callbacks);
    if (PyModule_AddObject(m, "callbacks", callbacks) < 0)
        return NULL;

    /* Importing can't be done in collect() because collect()
     * can be called via PyGC_Collect() in Py_Finalize().
     * This wouldn't be a problem, except that <initialized> is
     * reset to 0 before calling collect which trips up
     * the import and triggers an assertion.
     */
    tmod = PyImport_ImportModuleNoBlock("time");
    if (tmod == NULL)
        PyErr_Clear();

#define ADD_INT(NAME) if (PyModule_AddIntConstant(m, #NAME, NAME) < 0) return NULL
    ADD_INT(DEBUG_STATS);
    ADD_INT(DEBUG_COLLECTABLE);
    ADD_INT(DEBUG_UNCOLLECTABLE);
    ADD_INT(DEBUG_SAVEALL);
    ADD_INT(DEBUG_LEAK);
#undef ADD_INT
    return m;
}
/* API to invoke gc.collect() from C */
Py_ssize_t
PyGC_Collect(void)
{
    Py_ssize_t n;

    if (collecting)
        n = 0; /* already collecting, don't do anything */
    else {
        collecting = 1;
        n = collect_with_callback(NUM_GENERATIONS - 1);
        collecting = 0;
    }
    return n;
}

Py_ssize_t
_PyGC_CollectNoFail(void)
{
    Py_ssize_t n;

    /* Ideally, this function is only called on interpreter shutdown,
       and therefore not recursively.  Unfortunately, when there are daemon
       threads, a daemon thread can start a cyclic garbage collection
       during interpreter shutdown (and then never finish it).
       See http://bugs.python.org/issue8713#msg195178 for an example.
       */
    if (collecting)
        n = 0;
    else {
        collecting = 1;
        n = collect(NUM_GENERATIONS - 1, NULL, NULL, 1);
        collecting = 0;
    }
    return n;
}
void
_PyGC_DumpShutdownStats(void)
{
    if (!(debug & DEBUG_SAVEALL)
        && garbage != NULL && PyList_GET_SIZE(garbage) > 0) {
        char *message;
        if (debug & DEBUG_UNCOLLECTABLE)
            message = "gc: %zd uncollectable objects at " \
                "shutdown";
        else
            message = "gc: %zd uncollectable objects at " \
                "shutdown; use gc.set_debug(gc.DEBUG_UNCOLLECTABLE) to list them";
        /* PyErr_WarnFormat does too many things and we are at shutdown,
           the warnings module's dependencies (e.g. linecache) may be gone
           already.
           */
        if (PyErr_WarnExplicitFormat(PyExc_ResourceWarning, "gc", 0,
                                     "gc", NULL, message,
                                     PyList_GET_SIZE(garbage)))
            PyErr_WriteUnraisable(NULL);
        if (debug & DEBUG_UNCOLLECTABLE) {
            PyObject *repr = NULL, *bytes = NULL;
            repr = PyObject_Repr(garbage);
            if (!repr || !(bytes = PyUnicode_EncodeFSDefault(repr)))
                PyErr_WriteUnraisable(garbage);
            else {
                PySys_WriteStderr(
                    "      %s\n",
                    PyBytes_AS_STRING(bytes)
                    );
            }
            Py_XDECREF(repr);
            Py_XDECREF(bytes);
        }
    }
}

void
_PyGC_Fini(void)
{
    Py_CLEAR(callbacks);
}
void
_PyGC_Dump(PyGC_Head *g)
{
    _PyObject_Dump(FROM_GC(g));
}

/* extension modules might be compiled with GC support so these
   functions must always be available */

#undef PyObject_GC_Track
#undef PyObject_GC_UnTrack
#undef PyObject_GC_Del
#undef _PyObject_GC_Malloc

void
PyObject_GC_Track(void *op)
{
    _PyObject_GC_TRACK(op);
}

/* for binary compatibility with 2.2 */
void
_PyObject_GC_Track(PyObject *op)
{
    PyObject_GC_Track(op);
}

void
PyObject_GC_UnTrack(void *op)
{
    /* Obscure:  the Py_TRASHCAN mechanism requires that we be able to
     * call PyObject_GC_UnTrack twice on an object.
     */
    if (IS_TRACKED(op))
        _PyObject_GC_UNTRACK(op);
}

/* for binary compatibility with 2.2 */
void
_PyObject_GC_UnTrack(PyObject *op)
{
    PyObject_GC_UnTrack(op);
}
PyObject *
_PyObject_GC_Malloc(size_t basicsize)
{
    PyObject *op;
    PyGC_Head *g;
    if (basicsize > PY_SSIZE_T_MAX - sizeof(PyGC_Head))
        return PyErr_NoMemory();
    g = (PyGC_Head *)PyObject_MALLOC(
        sizeof(PyGC_Head) + basicsize);
    if (g == NULL)
        return PyErr_NoMemory();
    _PyGCHead_SET_REFS(g, GC_UNTRACKED);
    generations[0].count++; /* number of allocated GC objects */
    if (generations[0].count > generations[0].threshold &&
        enabled &&
        generations[0].threshold &&
        !collecting &&
        !PyErr_Occurred()) {
        collecting = 1;
        collect_generations();
        collecting = 0;
    }
    op = FROM_GC(g);
    return op;
}
PyObject *
_PyObject_GC_New(PyTypeObject *tp)
{
    PyObject *op = _PyObject_GC_Malloc(_PyObject_SIZE(tp));
    if (op != NULL)
        op = PyObject_INIT(op, tp);
    return op;
}

PyVarObject *
_PyObject_GC_NewVar(PyTypeObject *tp, Py_ssize_t nitems)
{
    size_t size;
    PyVarObject *op;

    if (nitems < 0) {
        PyErr_BadInternalCall();
        return NULL;
    }
    size = _PyObject_VAR_SIZE(tp, nitems);
    op = (PyVarObject *) _PyObject_GC_Malloc(size);
    if (op != NULL)
        op = PyObject_INIT_VAR(op, tp, nitems);
    return op;
}
PyVarObject *
_PyObject_GC_Resize(PyVarObject *op, Py_ssize_t nitems)
{
    const size_t basicsize = _PyObject_VAR_SIZE(Py_TYPE(op), nitems);
    PyGC_Head *g = AS_GC(op);
    if (basicsize > PY_SSIZE_T_MAX - sizeof(PyGC_Head))
        return (PyVarObject *)PyErr_NoMemory();
    g = (PyGC_Head *)PyObject_REALLOC(g, sizeof(PyGC_Head) + basicsize);
    if (g == NULL)
        return (PyVarObject *)PyErr_NoMemory();
    op = (PyVarObject *) FROM_GC(g);
    Py_SIZE(op) = nitems;
    return op;
}

void
PyObject_GC_Del(void *op)
{
    PyGC_Head *g = AS_GC(op);
    if (IS_TRACKED(op))
        gc_list_remove(g);
    if (generations[0].count > 0) {
        generations[0].count--;
    }
    PyObject_FREE(g);
}