/* ---------- To make a malloc.h, start cutting here ------------ */

  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain. Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.5  Wed Jun 17 15:55:16 1998  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
           ftp://g.oswego.edu/pub/misc/malloc.c
        Check before installing!

  Note: This version differs from 2.6.4 only by correcting a
        statement ordering error that could cause failures only
        when calls to this malloc are interposed with calls to
        other memory allocators.

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html
* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.
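
  As an illustration (not part of this file), typical use of these
  routines looks like the following; the error handling is minimal and
  the sizes are arbitrary:

#if 0  /* example, not compiled */
#include <stdlib.h>

int example_usage(void)
{
  void* aligned;
  char* p = malloc(100);           /* at least 100 usable bytes */
  if (p == NULL) return 1;
  p = realloc(p, 200);             /* may move; old contents preserved */
  if (p == NULL) return 1;
  free(p);                         /* p must not be used afterward */

  aligned = memalign(64, 256);     /* 64-byte-aligned block */
  if (aligned != NULL) free(aligned);
  return 0;
}
#endif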
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design. This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t representation:        4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
                          8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit will return a
       minimum-sized chunk.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
         1. Because requests for zero bytes allocate non-zero space,
            the worst case wastage for a request of zero bytes is 24 bytes.
         2. For requests >= mmap_threshold that are serviced via
            mmap(), the worst case wastage is 8 bytes plus the remainder
            from a system page (the minimal mmap unit); typically 4096 bytes.
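
  For instance, the following (illustrative) fragment shows the padding
  to the minimum allocatable size; the exact numbers assume 4-byte
  pointers and size_t:

#if 0  /* example, not compiled */
#include <stdio.h>
#include <stdlib.h>

void example_overhead(void)
{
  void* p = malloc(1);   /* padded up to the 16-byte minimum chunk */
  /* 16-byte chunk minus 4 bytes of size/status overhead */
  printf("requested 1, usable %lu\n",
         (unsigned long)malloc_usable_size(p));   /* typically 12 */
  free(p);
}
#endif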
  Here are some features that are NOT currently supported

  * No user-definable hooks for callbacks and the like.
  * No automated mechanism for fully checking that all accesses
    to malloced memory stay within their bounds.
  * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C. Among other
    consequences, it uses a lot of macros. Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.
  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY              (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP              (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize       (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T          (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very large chunks.
  INTERNAL_LINUX_C_LIB     (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                    (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H           (default: undefined)
     Define this if your system does not have a <unistd.h>.
  MORECORE                 (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE         (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS          (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
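
  As a sketch of how these options are meant to be used, a port might
  add a block like the following ahead of this file, or pass equivalent
  -D flags to the compiler. The values are hypothetical, not
  recommendations:

#if 0  /* example, not compiled */
#define DEBUG                          /* enable assertion checking     */
#define REALLOC_ZERO_BYTES_FREES       /* make realloc(p, 0) free p     */
#define INTERNAL_SIZE_T unsigned int   /* 32-bit sizes on a 64-bit box  */
#define MORECORE my_sbrk               /* hypothetical MORECORE routine */
#define MORECORE_FAILURE (-1)
#endif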
#endif /*__cplusplus*/

#include <stddef.h>   /* for size_t */
#include <sys/types.h>
#include <stdio.h>    /* needed for malloc_stats */
  Because freed chunks may be overwritten with link fields, this
  malloc will often die when freed memory is overwritten by user
  programs. This can be very effective (albeit in an annoying way)
  in helping track down dangling pointers.

  If you compile with -DDEBUG, a number of assertion checks are
  enabled that will catch more memory errors. You probably won't be
  able to make much sense of the actual assertion errors, but they
  should help you locate incorrectly overwritten memory. The
  checking is fairly extensive, and will slow down execution
  noticeably. Calling malloc_stats or mallinfo with DEBUG set will
  attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries. (By nature, mmapped regions
  cannot be checked very much automatically.)

  Setting DEBUG may also be helpful if you are trying to modify
  this code. The assertions in the check routines spell out in more
  detail the assumptions and invariants underlying the algorithms.

#define assert(x) ((void)0)
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).

/* #define REALLOC_ZERO_BYTES_FREES */
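
  A small illustration of the difference (hypothetical test fragment):

#if 0  /* example, not compiled */
#include <stdlib.h>

void example_realloc_zero(void)
{
  void* p = malloc(10);
  void* q = realloc(p, 0);
  /* Without REALLOC_ZERO_BYTES_FREES: q is a valid minimum-sized chunk
     that must itself be freed.  With it defined: p has been freed and
     q is null. */
  if (q != NULL) free(q);
}
#endif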
  WIN32 causes an emulation of sbrk to be compiled in.
  mmap-based options are not currently supported in WIN32.

#ifdef WIN32
#define MORECORE wsbrk
#endif
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

#if (__STD_C || defined(HAVE_MEMCPY))

void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);

#endif

#if USE_MEMCPY
/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)
#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif /* !USE_MEMCPY */
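
/*
  For reference, a standalone (non-macro) rendering of the same Duff's
  device idea, assuming a nonzero word count as the macros above do:
*/

#if 0  /* example, not compiled */
static void example_zero_words(INTERNAL_SIZE_T* p, long nwords)
{
  long reps = (nwords + 7) / 8;      /* number of loop iterations */
  switch (nwords % 8) {              /* jump into the unrolled loop */
    case 0: do { *p++ = 0;
    case 7:      *p++ = 0;
    case 6:      *p++ = 0;
    case 5:      *p++ = 0;
    case 4:      *p++ = 0;
    case 3:      *p++ = 0;
    case 2:      *p++ = 0;
    case 1:      *p++ = 0;
            } while (--reps > 0);
  }
}
#endif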
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks. These will be returned to the
  operating system immediately after a free().

  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks. This is currently only possible on Linux with
  kernel versions newer than 1.3.77.

#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif

#if HAVE_MMAP

#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      include <sys/param.h>
#      ifdef EXEC_PAGESIZE
#        define malloc_getpagesize EXEC_PAGESIZE
#      else
#        ifdef NBPG
#          ifndef CLSIZE
#            define malloc_getpagesize NBPG
#          else
#            define malloc_getpagesize (NBPG * CLSIZE)
#          endif
#        else
#          ifdef NBPC
#            define malloc_getpagesize NBPC
#          else
#            ifdef PAGESIZE
#              define malloc_getpagesize PAGESIZE
#            else
#              define malloc_getpagesize (4096) /* just guess */
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below. These must be precisely the same for
  mallinfo() to work.
/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};
/* SVID2/XPG mallopt options */

#define M_MXFAST 1 /* UNUSED in this malloc */
#define M_NLBLKS 2 /* UNUSED in this malloc */
#define M_GRAIN  3 /* UNUSED in this malloc */
#define M_KEEP   4 /* UNUSED in this malloc */

#endif /* HAVE_USR_INCLUDE_MALLOC_H */

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD -1
#define M_TOP_PAD        -2
#define M_MMAP_THRESHOLD -3
#define M_MMAP_MAX       -4
#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

  M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
  to keep before releasing via malloc_trim in free().

  Automatic trimming is mainly useful in long-lived programs.
  Because trimming via sbrk can be slow on some systems, and can
  sometimes be wasteful (in cases where programs immediately
  afterward allocate more large chunks) the value should be high
  enough so that your overall system performance would improve by
  releasing this much memory.

  The trim threshold and the mmap control parameters (see below)
  can be traded off with one another. Trimming and mmapping are
  two different ways of releasing unused memory back to the
  system. Between these two, it is often possible to keep
  system-level demands of a long-lived program down to a bare
  minimum. For example, in one test suite of sessions measuring
  the XF86 X server on Linux, using a trim threshold of 128K and a
  mmap threshold of 192K led to near-minimal long term resource
  consumption.

  If you are using this malloc in a long-lived program, it should
  pay to experiment with these values. As a rough guide, you
  might set to a value close to the average size of a process
  (program) running on your system. Releasing this much memory
  would allow such a process to run in memory. Generally, it's
  worth it to tune for trimming rather than memory mapping when a
  program undergoes phases where several large chunks are
  allocated and released in ways that can reuse each other's
  storage, perhaps mixed with phases where there are no such
  chunks at all. And in well-behaved long-lived programs,
  controlling release of large blocks via trimming versus mapping
  is usually faster.

  However, in most programs, these parameters serve mainly as
  protection against the system-level effects of carrying around
  massive amounts of unneeded memory. Since frequent calls to
  sbrk, mmap, and munmap otherwise degrade performance, the default
  parameters are set to relatively high values that serve only as
  safeguards.

  The default trim value is high enough to cause trimming only in
  fairly extreme (by current memory consumption standards) cases.
  It must be greater than page size to have any useful effect. To
  disable trimming completely, you can set to (unsigned long)(-1);
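
  For example, a long-lived program that wants eager trimming might do
  the following early in main(); the value is illustrative only:

#if 0  /* example, not compiled */
  mallopt(M_TRIM_THRESHOLD, 64 * 1024);  /* trim once 64K of top is unused */
  mallopt(M_TRIM_THRESHOLD, -1);         /* or: disable trimming entirely; */
                                         /* -1 converts to the maximum     */
                                         /* unsigned long value            */
#endif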
#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD (0)
#endif

  M_TOP_PAD is the amount of extra `padding' space to allocate or
  retain whenever sbrk is called. It is used in two ways internally:

  * When sbrk is called to extend the top of the arena to satisfy
    a new malloc request, this much padding is added to the sbrk
    request.

  * When malloc_trim is called automatically from free(),
    it is used as the `pad' argument.

  In both cases, the actual amount of padding is rounded
  so that the end of the arena is always a system page boundary.

  The main reason for using padding is to avoid calling sbrk so
  often. Having even a small pad greatly reduces the likelihood
  that nearly every malloc request during program start-up (or
  after trimming) will invoke sbrk, which needlessly wastes
  time.

  Automatic rounding-up to page-size units is normally sufficient
  to avoid measurable overhead, so the default is 0. However, in
  systems where sbrk is relatively slow, it can pay to increase
  this value, at the expense of carrying around more memory than
  is actually needed.
#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

  M_MMAP_THRESHOLD is the request size threshold for using mmap()
  to service a request. Requests of at least this size that cannot
  be allocated using already-existing space will be serviced via mmap.
  (If enough normal freed space already exists it is used instead.)

  Using mmap segregates relatively large chunks of memory so that
  they can be individually obtained and released from the host
  system. A request serviced through mmap is never reused by any
  other request (at least not directly; the system may just so
  happen to remap successive requests to the same locations).

  Segregating space in this way has the benefit that mmapped space
  can ALWAYS be individually released back to the system, which
  helps keep the system level memory demands of a long-lived
  program low. Mapped memory can never become `locked' between
  other chunks, as can happen with normally allocated chunks, which
  means that even trimming via malloc_trim would not release them.

  However, it has the disadvantages that:

   1. The space cannot be reclaimed, consolidated, and then
      used to service later requests, as happens with normal chunks.
   2. It can lead to more wastage because of mmap page alignment
      requirements.
   3. It causes malloc performance to be more dependent on host
      system memory management support routines which may vary in
      implementation quality and may impose arbitrary
      limitations. Generally, servicing a request via normal
      malloc steps is faster than going through a system's mmap.

  All together, these considerations should lead you to use mmap
  only for relatively large requests.
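
  For example, a program that makes many multi-megabyte allocations and
  returns them to the system promptly might use (illustrative values):

#if 0  /* example, not compiled */
  mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  /* mmap only requests >= 1MB  */
  mallopt(M_MMAP_MAX, 256);                /* allow more mmapped regions */
#endif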
#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX (64)
#else
#define DEFAULT_MMAP_MAX (0)
#endif
#endif

  M_MMAP_MAX is the maximum number of requests to simultaneously
  service using mmap. This parameter exists because:

   1. Some systems have a limited number of internal tables for
      use by mmap.
   2. In most systems, overreliance on mmap can degrade overall
      performance.
   3. If a program allocates many large regions, it is probably
      better off using normal sbrk-based allocation routines that
      can reclaim and reallocate normal heap memory. Using a
      small value allows transition into this mode after the
      first few allocations.

  Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
  the default value is 0, and attempts to set it to non-zero values
  in mallopt will fail.
  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications. No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.
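
  If you must call this malloc from several threads outside of Linux
  libc, one workable (if coarse) approach is to wrap every entry point
  with a single lock, as in this hypothetical sketch using POSIX
  threads; free and realloc wrappers would follow the same pattern:

#if 0  /* example, not compiled */
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t malloc_lock = PTHREAD_MUTEX_INITIALIZER;

void* locked_malloc(size_t n)
{
  void* p;
  pthread_mutex_lock(&malloc_lock);
  p = malloc(n);
  pthread_mutex_unlock(&malloc_lock);
  return p;
}
#endif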
/* For jade; need the __morecore hook for ralloc.c... */

#if defined(INTERNAL_LINUX_C_LIB) || defined(JADE)

#define __default_morecore_init __jade_morecore

#if __STD_C
Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
#else
Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;
#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1
#else /* INTERNAL_LINUX_C_LIB */

#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
#if __STD_C
extern Void_t* sbrk(ptrdiff_t);
#else
extern Void_t* sbrk();
#endif
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */
#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc   __libc_calloc
#define fREe     __libc_free
#define mALLOc   __libc_malloc
#define mEMALIGn __libc_memalign
#define rEALLOc  __libc_realloc
#define vALLOc   __libc_valloc
#define pvALLOc  __libc_pvalloc
#define mALLINFo __libc_mallinfo
#define mALLOPt  __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt
#else

#define cALLOc   calloc
#define fREe     free
#define mALLOc   malloc
#define mEMALIGn memalign
#define rEALLOc  realloc
#define vALLOc   valloc
#define pvALLOc  pvalloc
#define mALLINFo mallinfo
#define mALLOPt  mallopt

#endif /* defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__) */
/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);

int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();

int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);

#else

Void_t* mALLOc();
void    fREe();
size_t  malloc_usable_size();
struct mallinfo mALLINFo();

#endif

#ifdef __cplusplus
}; /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */
/*
  Emulation of sbrk for WIN32.
  All code within the ifdef WIN32 is untested by me.
*/

#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
                        ~(malloc_getpagesize-1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
	GmListElement* next;
	void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;
GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this =
		(GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}

void gcleanup ()
{
	BOOL rval;
	ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
							gNextAddress - gAddressBase,
							MEM_DECOMMIT);
		ASSERT (rval);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		ASSERT (rval);
		LocalFree (head);
		head = next;
	}
}
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;
	while ((unsigned long)start_address < TOP_MEMORY)
	{
		VirtualQuery (start_address, &info, sizeof (info));
		if (info.State != MEM_FREE)
			start_address = (char*)info.BaseAddress + info.RegionSize;
		else if (info.RegionSize >= size)
			return start_address;
		else
			start_address = (char*)info.BaseAddress + info.RegionSize;
	}
	return NULL;
}
void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
			gNextAddress = gAddressBase =
				(unsigned int)VirtualAlloc (NULL, gAllocatedSize,
											MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
													  gAllocatedSize))
		{
			long new_size = max (NEXT_SIZE, AlignPage (size));
			void* new_address = (void*)(gAddressBase+gAllocatedSize);
			do
			{
				new_address = findRegion (new_address, new_size);
				if (new_address == 0)
					return (void*)-1;
				gAddressBase = gNextAddress =
					(unsigned int)VirtualAlloc (new_address, new_size,
												MEM_RESERVE, PAGE_NOACCESS);
				// repeat in case of race condition
				// The region that we found has been snagged
				// by another thread
			}
			while (gAddressBase == 0);

			ASSERT (new_address == (void*)gAddressBase);

			gAllocatedSize = new_size;

			if (!makeGmListElement ((void*)gAddressBase))
				return (void*)-1;
		}
		if ((size + gNextAddress) > AlignPage (gNextAddress))
		{
			void* res;
			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
								(size + gNextAddress -
								 AlignPage (gNextAddress)),
								MEM_COMMIT, PAGE_READWRITE);
			if (res == 0)
				return (void*)-1;
		}
		tmp = (void*)gNextAddress;
		gNextAddress = (unsigned int)tmp + size;
		return tmp;
	}
	else if (size < 0)
	{
		unsigned int alignedGoal = AlignPage (gNextAddress + size);
		/* Trim by releasing the virtual memory */
		if (alignedGoal >= gAddressBase)
		{
			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
						 MEM_DECOMMIT);
			gNextAddress = gNextAddress + size;
			return (void*)gNextAddress;
		}
		else
		{
			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
						 MEM_DECOMMIT);
			gNextAddress = gAddressBase;
			return (void*)-1;
		}
	}
	else
	{
		return (void*)gNextAddress;
	}
}

#endif /* WIN32 */
struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;
  malloc_chunk details:

  (The following includes lightly edited explanations by Colin Plumb.)

  Chunks of memory are maintained using a `boundary tag' method as
  described in e.g., Knuth or Standish. (See the paper by Paul
  Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
  survey of such techniques.) Sizes of free chunks are stored both
  in the front of each chunk and at the end. This makes
  consolidating fragmented chunks into bigger chunks very fast. The
  size fields also hold bits representing whether chunks are free or
  in use.

  An allocated chunk looks like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_space() bytes)                     .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Where "chunk" is the front of the chunk for the purpose of most of
  the malloc code, but "mem" is the pointer that is returned to the
  user. "Nextchunk" is the beginning of the next contiguous chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus double-word aligned.

  Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  The P (PREV_INUSE) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk. If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  (The very first chunk allocated always has this bit set,
  preventing access to non-existent (or non-owned) memory.)

  Note that the `foot' of the current chunk is actually represented
  as the prev_size of the NEXT chunk. (This makes it easier to
  deal with alignments etc; see the sketch after the exceptions below.)
  The two exceptions to all this are

  1. The special chunk `top', which doesn't bother using the
     trailing size field since there is no
     next contiguous chunk that would have to index off it. (After
     initialization, `top' is forced to always exist. If it would
     become less than MINSIZE bytes long, it is replenished via
     malloc_extend_top.)

  2. Chunks allocated via mmap, which have the second-lowest-order
     bit (IS_MMAPPED) set in their size fields. Because they are
     never merged or traversed from any other chunk, they have no
     foot size or inuse information.
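
  To make the boundary-tag rules above concrete, here is a small
  illustrative sketch (not part of this malloc) of forward and backward
  traversal using the macros defined further below; `p' is assumed to
  be a valid non-mmapped, non-top chunk:

#if 0  /* example, not compiled */
static mchunkptr example_neighbors(mchunkptr p)
{
  mchunkptr nxt = next_chunk(p);     /* front of next contiguous chunk  */
  if (!prev_inuse(p))                /* P bit clear: prev_size is valid */
  {
    mchunkptr prv = prev_chunk(p);   /* step back over the free chunk   */
    assert(next_chunk(prv) == p);    /* sizes must agree front-to-back  */
  }
  return nxt;
}
#endif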
  Available chunks are kept in any of several places (all declared below):

  * `av': An array of chunks serving as bin headers for consolidated
    chunks. Each bin is doubly linked. The bins are approximately
    proportionally (log) spaced. There are a lot of these bins
    (128). This may look excessive, but works very well in
    practice. All procedures maintain the invariant that no
    consolidated chunk physically borders another one. Chunks in
    bins are kept in size order, with ties going to the
    approximately least recently used chunk.

    The chunks in each bin are maintained in decreasing sorted order by
    size. This is irrelevant for the small bins, which all contain
    the same-sized chunks, but facilitates best-fit allocation for
    larger chunks. (These lists are just sequential. Keeping them in
    order almost never requires enough traversal to warrant using
    fancier ordered data structures.) Chunks of the same size are
    linked with the most recently freed at the front, and allocations
    are taken from the back. This results in LRU or FIFO allocation
    order, which tends to give each chunk an equal opportunity to be
    consolidated with adjacent freed chunks, resulting in larger free
    chunks and less fragmentation.

  * `top': The top-most available chunk (i.e., the one bordering the
    end of available memory) is treated specially. It is never
    included in any bin, is used only if no other chunk is
    available, and is released back to the system if it is very
    large (see M_TRIM_THRESHOLD).

  * `last_remainder': A bin holding only the remainder of the
    most recently split (non-top) chunk. This bin is checked
    before other non-fitting chunks, so as to provide better
    locality for runs of sequentially allocated chunks.

  * Implicitly, through the host system's memory mapping tables.
    If supported, requests greater than a threshold are usually
    serviced via calls to mmap, and then later released via munmap.
/* sizes, alignments */

#define SIZE_SZ           (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT  (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
#define MINSIZE           (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
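
/*
  As a worked example (illustrative only, assuming 4-byte SIZE_SZ and
  hence 8-byte alignment): request2size(0) and request2size(11) both
  yield MINSIZE (16), while request2size(13) = (13 + 4 + 7) & ~7 = 24.
  A hypothetical self-check:
*/

#if 0  /* example, not compiled */
static void example_request2size(void)
{
  assert(request2size(0)  == 16);   /* below MINSIZE: rounded up   */
  assert(request2size(12) == 16);   /* 12+4 = 16, exactly MINSIZE  */
  assert(request2size(13) == 24);   /* 13+4 = 17, rounded up to 24 */
}
#endif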
/* Check if m has acceptable alignment */

#define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
  Physical chunk operations

/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)

/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))

/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
  Dealing with use bits

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p) ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
  Dealing with size fields

/* Get size, ignoring use bits */

#define chunksize(p)        ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)      ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)      (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
  The bins, `av_' are an array of pairs of pointers serving as the
  heads of (initially empty) doubly-linked lists of chunks, laid out
  in a way so that each pair can be treated as if it were in a
  malloc_chunk. (This way, the fd/bk offsets for linking bin heads
  and chunks are the same).

  Bins for sizes < 512 bytes contain chunks of all the same size, spaced
  8 bytes apart. Larger bins are approximately logarithmically
  spaced. (See the table below.) The `av_' array is never mentioned
  directly in the code, but instead via bin access macros.
    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

  There is actually a little bit of slop in the numbers in bin_index
  for the sake of speed. This makes no difference elsewhere.

  The special chunks `top' and `last_remainder' get their own bins,
  (this is implemented via yet more trickery with the av_ array),
  although `top' is never properly linked to its bin since it is
  always handled specially.
#define NAV 128 /* number of bins */

typedef struct malloc_chunk* mbinptr;

#define bin_at(i)   ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
#define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))

  The first 2 bins are never indexed. The corresponding av_ cells are instead
  used for bookkeeping. This is not to save space, but to simplify
  indexing, maintain locality, and avoid some initialization tests.

#define top            (bin_at(0)->fd) /* The topmost chunk */
#define last_remainder (bin_at(1))     /* remainder from last split */
  Because top initially points to its own bin with initial
  zero size, thus forcing extension on the first malloc request,
  we avoid having any special code in malloc to check whether
  it even exists yet. But we still need to in malloc_extend_top.

#define initial_top ((mchunkptr)(bin_at(0)))

/* Helper macro to initialize bins */

#define IAV(i) bin_at(i), bin_at(i)

static mbinptr av_[NAV * 2 + 2] = {
 0, 0,
 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};

/* field-extraction macros */

#define first(b) ((b)->fd)
#define last(b)  ((b)->bk)
#define bin_index(sz)                                                          \
(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
                                          126)

  bins for chunks < 512 are all spaced 8 bytes apart, and hold
  identically sized chunks. This is exploited in malloc.

#define MAX_SMALLBIN       63
#define MAX_SMALLBIN_SIZE 512
#define SMALLBIN_WIDTH      8

#define smallbin_index(sz) (((unsigned long)(sz)) >> 3)

  Requests are `small' if both the corresponding and the next bin are small

#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
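
/*
  Worked examples of the indexing scheme above (illustrative only):
  smallbin_index(40) = 40 >> 3 = 5, so 40-byte chunks live in bin 5;
  for bin_index(600), 600 >> 9 = 1 <= 4, giving 56 + (600 >> 6) = 65.
*/

#if 0  /* example, not compiled */
static void example_bin_index(void)
{
  assert(smallbin_index(40) == 5);   /* small bins: spaced 8 bytes apart */
  assert(bin_index(600)     == 65);  /* 56 + (600 >> 6) = 56 + 9         */
  assert(bin_index(70000)   == 121); /* 70000 >> 9 = 136 <= 340, so      */
}                                    /* 119 + (70000 >> 15) = 121        */
#endif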
  To help compensate for the large number of bins, a one-level index
  structure is used for bin-by-bin searching. `binblocks' is a
  one-word bitvector recording whether groups of BINBLOCKWIDTH bins
  have any (possibly) non-empty bins, so they can be skipped over
  all at once during traversals. The bits are NOT always
  cleared as soon as all bins in a block are empty, but instead only
  when all are noticed to be empty during traversal in malloc.

#define BINBLOCKWIDTH 4 /* bins per block */

#define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */

/* bin<->block macros */

#define idx2binblock(ix)   ((unsigned)1 << (ix / BINBLOCKWIDTH))
#define mark_binblock(ii)  (binblocks |= idx2binblock(ii))
#define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
/* Other static bookkeeping data */

/* variables holding tunable values */

static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
static unsigned long top_pad        = DEFAULT_TOP_PAD;
static unsigned int  n_mmaps_max    = DEFAULT_MMAP_MAX;
static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;

/* The first value returned from sbrk */
static char* sbrk_base = (char*)(-1);

/* The maximum memory obtained from system via sbrk */
static unsigned long max_sbrked_mem = 0;

/* The maximum via either sbrk or mmap */
static unsigned long max_total_mem = 0;

/* internal working copy of mallinfo */
static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

/* The total memory obtained from system via sbrk */
#define sbrked_mem (current_mallinfo.arena)

/* Tracking mmaps */

static unsigned int n_mmaps = 0;
static unsigned int max_n_mmaps = 0;
static unsigned long mmapped_mem = 0;
static unsigned long max_mmapped_mem = 0;
  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;

  /* No checkable chunk is mmapped */
  assert(!chunk_is_mmapped(p));

  /* Check for legal address ... */
  assert((char*)p >= sbrk_base);
  if (p != top)
    assert((char*)p + sz <= (char*)top);
  else
    assert((char*)p + sz <= sbrk_base + sbrked_mem);
}
#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  mchunkptr next = chunk_at_offset(p, sz);

  do_check_chunk(p);

  /* Check whether it claims to be free ... */
  assert(!inuse(p));

  /* Unless a special marker, must have OK fields */
  if ((long)sz >= (long)MINSIZE)
  {
    assert((sz & MALLOC_ALIGN_MASK) == 0);
    assert(aligned_OK(chunk2mem(p)));
    /* ... matching footer field */
    assert(next->prev_size == sz);
    /* ... and is fully consolidated */
    assert(prev_inuse(p));
    assert (next == top || inuse(next));

    /* ... and has minimally sane links */
    assert(p->fd->bk == p);
    assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}
#if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
#else
static void do_check_inuse_chunk(p) mchunkptr p;
#endif
{
  mchunkptr next = next_chunk(p);
  do_check_chunk(p);

  /* Check whether it claims to be in use ... */
  assert(inuse(p));

  /* ... and is surrounded by OK chunks.
    Since more things can be checked with free chunks than inuse ones,
    if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p))
  {
    mchunkptr prv = prev_chunk(p);
    assert(next_chunk(prv) == p);
    do_check_free_chunk(prv);
  }
  if (next == top)
  {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
  else if (!inuse(next))
    do_check_free_chunk(next);
}
#if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  long room = sz - s;

  do_check_inuse_chunk(p);

  /* Legal size ... */
  assert((long)sz >= (long)MINSIZE);
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert(room >= 0);
  assert(room < (long)MINSIZE);

  /* ... and alignment */
  assert(aligned_OK(chunk2mem(p)));

  /* ... and was allocated at front of an available chunk */
  assert(prev_inuse(p));
}

#define check_free_chunk(P)       do_check_free_chunk(P)
#define check_inuse_chunk(P)      do_check_inuse_chunk(P)
#define check_chunk(P)            do_check_chunk(P)
#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
#else
#define check_free_chunk(P)
#define check_inuse_chunk(P)
#define check_chunk(P)
#define check_malloced_chunk(P,N)
#endif
  Macro-based internal utilities

  Linking chunks in bin lists.
  Call these only with variables, not arbitrary expressions, as arguments.

  Place chunk p of size s in its bin, in size order,
  putting it ahead of others of same size.

#define frontlink(P, S, IDX, BK, FD)                                          \
{                                                                             \
  if (S < MAX_SMALLBIN_SIZE)                                                  \
  {                                                                           \
    IDX = smallbin_index(S);                                                  \
    mark_binblock(IDX);                                                       \
    BK = bin_at(IDX);                                                         \
    FD = BK->fd;                                                              \
    P->bk = BK;                                                               \
    P->fd = FD;                                                               \
    FD->bk = BK->fd = P;                                                      \
  }                                                                           \
  else                                                                        \
  {                                                                           \
    IDX = bin_index(S);                                                       \
    BK = bin_at(IDX);                                                         \
    FD = BK->fd;                                                              \
    if (FD == BK) mark_binblock(IDX);                                         \
    else                                                                      \
    {                                                                         \
      while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
      BK = FD->bk;                                                            \
    }                                                                         \
    P->bk = BK;                                                               \
    P->fd = FD;                                                               \
    FD->bk = BK->fd = P;                                                      \
  }                                                                           \
}

/* take a chunk off a list */

#define unlink(P, BK, FD)                                                     \
{                                                                             \
  BK = P->bk;                                                                 \
  FD = P->fd;                                                                 \
  FD->bk = BK;                                                                \
  BK->fd = FD;                                                                \
}

/* Place p as the last remainder */

#define link_last_remainder(P)                                                \
{                                                                             \
  last_remainder->fd = last_remainder->bk = P;                                \
  P->fd = P->bk = last_remainder;                                             \
}

/* Clear the last_remainder bin */

#define clear_last_remainder \
  (last_remainder->fd = last_remainder->bk = last_remainder)
/* Routines dealing with mmap(). */

#if HAVE_MMAP

#if __STD_C
static mchunkptr mmap_chunk(size_t size)
#else
static mchunkptr mmap_chunk(size) size_t size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  mchunkptr p;

#ifndef MAP_ANONYMOUS
  static int fd = -1;
#endif

  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */

  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
   * there is no following chunk whose prev_size field could be used.
   */
  size = (size + SIZE_SZ + page_mask) & ~page_mask;

#ifdef MAP_ANONYMOUS
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
                      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
#else /* !MAP_ANONYMOUS */
  if (fd < 0)
  {
    fd = open("/dev/zero", O_RDWR);
    if(fd < 0) return 0;
  }
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
#endif

  if(p == (mchunkptr)-1) return 0;

  n_mmaps++;
  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;

  /* We demand that eight bytes into a page must be 8-byte aligned. */
  assert(aligned_OK(chunk2mem(p)));

  /* The offset to the start of the mmapped region is stored
   * in the prev_size field of the chunk; normally it is zero,
   * but that can be changed in memalign().
   */
  p->prev_size = 0;
  set_head(p, size|IS_MMAPPED);

  mmapped_mem += size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}
#if __STD_C
static void munmap_chunk(mchunkptr p)
#else
static void munmap_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T size = chunksize(p);
  int ret;

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);

  n_mmaps--;
  mmapped_mem -= (size + p->prev_size);

  ret = munmap((char *)p - p->prev_size, size + p->prev_size);

  /* munmap returns non-zero on failure */
  assert(ret == 0);
}
#if HAVE_MREMAP

#if __STD_C
static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
#else
static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  INTERNAL_SIZE_T offset = p->prev_size;
  INTERNAL_SIZE_T size = chunksize(p);
  char *cp;

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((size + offset) & (malloc_getpagesize-1)) == 0);

  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;

  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);

  if (cp == (char *)-1) return 0;

  p = (mchunkptr)(cp + offset);

  assert(aligned_OK(chunk2mem(p)));

  assert((p->prev_size == offset));
  set_head(p, (new_size - offset)|IS_MMAPPED);

  mmapped_mem -= size + offset;
  mmapped_mem += new_size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}

#endif /* HAVE_MREMAP */

#endif /* HAVE_MMAP */
1865
Extend the top-most chunk by obtaining memory from system.
1866
Main interface to sbrk (but see also malloc_trim).
1870
static void malloc_extend_top(INTERNAL_SIZE_T nb)
1872
static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1875
char* brk; /* return value from sbrk */
1876
INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1877
INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
1878
char* new_brk; /* return of 2nd sbrk call */
1879
INTERNAL_SIZE_T top_size; /* new size of top chunk */
1881
mchunkptr old_top = top; /* Record state of old top */
1882
INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1883
char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
1885
/* Pad request with top_pad plus minimal overhead */
1887
INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
1888
unsigned long pagesz = malloc_getpagesize;
1890
/* If not the first time through, round to preserve page boundary */
1891
/* Otherwise, we need to correct to a page size below anyway. */
1892
/* (We also correct below if an intervening foreign sbrk call.) */
1894
if (sbrk_base != (char*)(-1))
1895
sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
1897
brk = (char*)(MORECORE (sbrk_size));
1899
/* Fail if sbrk failed or if a foreign sbrk call killed our space */
1900
if (brk == (char*)(MORECORE_FAILURE) ||
1901
(brk < old_end && old_top != initial_top))
1904
sbrked_mem += sbrk_size;
1906
if (brk == old_end) /* can just add bytes to current top */
1908
top_size = sbrk_size + old_top_size;
1909
set_head(top, top_size | PREV_INUSE);
1913
if (sbrk_base == (char*)(-1)) /* First time through. Record base */
1915
else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
1916
sbrked_mem += brk - (char*)old_end;
1918
/* Guarantee alignment of first new chunk made from this space */
1919
front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
1920
if (front_misalign > 0)
1922
correction = (MALLOC_ALIGNMENT) - front_misalign;
1928
/* Guarantee the next brk will be at a page boundary */
1929
correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
1931
/* Allocate correction */
1932
new_brk = (char*)(MORECORE (correction));
1933
if (new_brk == (char*)(MORECORE_FAILURE)) return;
1935
sbrked_mem += correction;
1937
top = (mchunkptr)brk;
1938
top_size = new_brk - brk + correction;
1939
set_head(top, top_size | PREV_INUSE);
1941
if (old_top != initial_top)
1944
/* There must have been an intervening foreign sbrk call. */
1945
/* A double fencepost is necessary to prevent consolidation */
1947
/* If not enough space to do this, then user did something very wrong */
1948
if (old_top_size < MINSIZE)
1950
set_head(top, PREV_INUSE); /* will force null return from malloc */
1954
/* Also keep size a multiple of MALLOC_ALIGNMENT */
1955
old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
1956
set_head_size(old_top, old_top_size);
1957
chunk_at_offset(old_top, old_top_size )->size =
1959
chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
1961
/* If possible, release the rest. */
1962
if (old_top_size >= MINSIZE)
1963
fREe(chunk2mem(old_top));
1967
if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
1968
max_sbrked_mem = sbrked_mem;
1969
if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1970
max_total_mem = mmapped_mem + sbrked_mem;
1972
/* We always land on a page boundary */
1973
assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
/* Main public routines */

  Malloc Algorithm:

  The requested size is first converted into a usable form, `nb'.
  This currently means to add 4 bytes overhead plus possibly more to
  obtain 8-byte alignment and/or to obtain a size of at least
  MINSIZE (currently 16 bytes), the smallest allocatable size.
  (All fits are considered `exact' if they are within MINSIZE bytes.)

  From there, the first successful of the following steps is taken:

    1. The bin corresponding to the request size is scanned, and if
       a chunk of exactly the right size is found, it is taken.

    2. The most recently remaindered chunk is used if it is big
       enough. This is a form of (roving) first fit, used only in
       the absence of exact fits. Runs of consecutive requests use
       the remainder of the chunk used for the previous such request
       whenever possible. This limited use of a first-fit style
       allocation strategy tends to give contiguous chunks
       coextensive lifetimes, which improves locality and can reduce
       fragmentation in the long run.

    3. Other bins are scanned in increasing size order, using a
       chunk big enough to fulfill the request, and splitting off
       any remainder. This search is strictly by best-fit; i.e.,
       the smallest (with ties going to approximately the least
       recently used) chunk that fits is selected.

    4. If large enough, the chunk bordering the end of memory
       (`top') is split off. (This use of `top' is in accord with
       the best-fit search rule. In effect, `top' is treated as
       larger (and thus less well fitting) than any other available
       chunk since it can be extended to be as large as necessary
       (up to system limitations).

    5. If the request size meets the mmap threshold and the
       system supports mmap, and there are few enough currently
       allocated mmapped regions, and a call to mmap succeeds,
       the request is allocated via direct memory mapping.

    6. Otherwise, the top of memory is extended by
       obtaining more space from the system (normally using sbrk,
       but definable to anything else via the MORECORE macro).
       Memory is gathered from the system (in system page-sized
       units) in a way that allows chunks obtained across different
       sbrk calls to be consolidated, but does not require
       contiguous memory. Thus, it should be safe to intersperse
       mallocs with other sbrk calls.

  All allocations are made from the `lowest' part of any found
  chunk. (The implementation invariant is that prev_inuse is
  always true of any allocated chunk; i.e., that each allocated
  chunk borders either a previously allocated and still in-use chunk,
  or the base of its memory arena.)
#if __STD_C
Void_t* mALLOc(size_t bytes)
#else
Void_t* mALLOc(bytes) size_t bytes;
#endif
{
  mchunkptr victim;                  /* inspected/selected chunk */
  INTERNAL_SIZE_T victim_size;       /* its size */
  int       idx;                     /* index for bin traversal */
  mbinptr   bin;                     /* associated bin */
  mchunkptr remainder;               /* remainder from a split */
  long      remainder_size;          /* its size */
  int       remainder_index;         /* its bin index */
  unsigned long block;               /* block traverser bit */
  int       startidx;                /* first bin of a traversed block */
  mchunkptr fwd;                     /* misc temp for linking */
  mchunkptr bck;                     /* misc temp for linking */
  mbinptr   q;                       /* misc temp */

  INTERNAL_SIZE_T nb = request2size(bytes);  /* padded request size */

  /* Check for exact match in a bin */

  if (is_small_request(nb))  /* Faster version for small requests */
  {
    idx = smallbin_index(nb);

    /* No traversal or size check necessary for small bins. */

    q = bin_at(idx);
    victim = last(q);

    /* Also scan the next one, since it would have a remainder < MINSIZE */

    if (victim == q)
    {
      q = next_bin(q);
      victim = last(q);
    }
    if (victim != q)
    {
      victim_size = chunksize(victim);
      unlink(victim, bck, fwd);
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
  }
  else
  {
    idx = bin_index(nb);
    bin = bin_at(idx);

    for (victim = last(bin); victim != bin; victim = victim->bk)
    {
      victim_size = chunksize(victim);
      remainder_size = victim_size - nb;

      if (remainder_size >= (long)MINSIZE) /* too big */
      {
        --idx; /* adjust to rescan below after checking last remainder */
        break;
      }
      else if (remainder_size >= 0) /* exact fit */
      {
        unlink(victim, bck, fwd);
        set_inuse_bit_at_offset(victim, victim_size);
        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }
    }

    ++idx;
  }

  /* Try to use the last split-off remainder */

  if ( (victim = last_remainder->fd) != last_remainder)
  {
    victim_size = chunksize(victim);
    remainder_size = victim_size - nb;

    if (remainder_size >= (long)MINSIZE) /* re-split */
    {
      remainder = chunk_at_offset(victim, nb);
      set_head(victim, nb | PREV_INUSE);
      link_last_remainder(remainder);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    clear_last_remainder;

    if (remainder_size >= 0) /* exhaust */
    {
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    /* Else place in bin */

    frontlink(victim, victim_size, remainder_index, bck, fwd);
  }

  /*
     If there are any possibly nonempty big-enough blocks,
     search for best fitting chunk by scanning bins in blockwidth units.
  */

  if ( (block = idx2binblock(idx)) <= binblocks)
  {
    /* Get to the first marked block */

    if ( (block & binblocks) == 0)
    {
      /* force to an even block boundary */
      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
      block <<= 1;
      while ((block & binblocks) == 0)
      {
        idx += BINBLOCKWIDTH;
        block <<= 1;
      }
    }

    /* For each possibly nonempty block ... */
    for (;;)
    {
      startidx = idx;          /* (track incomplete blocks) */
      q = bin = bin_at(idx);

      /* For each bin in this block ... */
      do
      {
        /* Find and use first big enough chunk ... */

        for (victim = last(bin); victim != bin; victim = victim->bk)
        {
          victim_size = chunksize(victim);
          remainder_size = victim_size - nb;

          if (remainder_size >= (long)MINSIZE) /* split */
          {
            remainder = chunk_at_offset(victim, nb);
            set_head(victim, nb | PREV_INUSE);
            unlink(victim, bck, fwd);
            link_last_remainder(remainder);
            set_head(remainder, remainder_size | PREV_INUSE);
            set_foot(remainder, remainder_size);
            check_malloced_chunk(victim, nb);
            return chunk2mem(victim);
          }
          else if (remainder_size >= 0) /* take */
          {
            set_inuse_bit_at_offset(victim, victim_size);
            unlink(victim, bck, fwd);
            check_malloced_chunk(victim, nb);
            return chunk2mem(victim);
          }
        }

        bin = next_bin(bin);

      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);

      /* Clear out the block bit. */

      do /* Possibly backtrack to try to clear a partial block */
      {
        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
        {
          binblocks &= ~block;
          break;
        }
        --startidx;
        q = prev_bin(q);
      } while (first(q) == q);

      /* Get to the next possibly nonempty block */

      if ( (block <<= 1) <= binblocks && (block != 0) )
      {
        while ((block & binblocks) == 0)
        {
          idx += BINBLOCKWIDTH;
          block <<= 1;
        }
      }
      else
        break;
    }
  }

  /* Try to use top chunk */

  /* Require that there be a remainder, ensuring top always exists */
  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
  {
#if HAVE_MMAP
    /* If big and would otherwise need to extend, try to use mmap instead */
    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
        (victim = mmap_chunk(nb)) != 0)
      return chunk2mem(victim);
#endif

    /* Try to extend */
    malloc_extend_top(nb);
    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
      return 0; /* propagate failure */
  }

  victim = top;
  set_head(victim, nb | PREV_INUSE);
  top = chunk_at_offset(victim, nb);
  set_head(top, remainder_size | PREV_INUSE);
  check_malloced_chunk(victim, nb);
  return chunk2mem(victim);
}
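/*
  Illustrative sketch (not part of this allocator): the request-to-nb
  normalization described above behaves roughly like the stand-alone
  helper below. EX_SIZE_SZ, EX_ALIGN, and EX_MINSIZE are assumptions
  standing in for SIZE_SZ (4), MALLOC_ALIGNMENT (8), and MINSIZE (16);
  the real request2size is a macro defined earlier in this file.

    #include <stdio.h>

    #define EX_SIZE_SZ 4UL
    #define EX_ALIGN   8UL
    #define EX_MINSIZE 16UL

    static unsigned long ex_request2size(unsigned long req)
    {
      // add per-chunk overhead, round up to an 8-byte multiple,
      // and never go below the minimum allocatable size
      unsigned long nb = (req + EX_SIZE_SZ + EX_ALIGN - 1) & ~(EX_ALIGN - 1);
      return (nb < EX_MINSIZE) ? EX_MINSIZE : nb;
    }

    int main(void)
    {
      printf("%lu\n", ex_request2size(1));   // 16 (minimum size)
      printf("%lu\n", ex_request2size(20));  // 24 (20 + 4, already aligned)
      printf("%lu\n", ex_request2size(21));  // 32 (21 + 4, rounded up)
      return 0;
    }
*/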
/*

  free() algorithm:

    cases:

       1. free(0) has no effect.

       2. If the chunk was allocated via mmap, it is released via munmap().

       3. If a returned chunk borders the current high end of memory,
          it is consolidated into the top, and if the total unused
          topmost memory exceeds the trim threshold, malloc_trim is
          called.

       4. Other chunks are consolidated as they arrive, and
          placed in corresponding bins. (This includes the case of
          consolidating with the current `last_remainder').

*/
#if __STD_C
void fREe(Void_t* mem)
#else
void fREe(mem) Void_t* mem;
#endif
{
  mchunkptr p;            /* chunk corresponding to mem */
  INTERNAL_SIZE_T hd;     /* its head field */
  INTERNAL_SIZE_T sz;     /* its size */
  int       idx;          /* its bin index */
  mchunkptr next;         /* next contiguous chunk */
  INTERNAL_SIZE_T nextsz; /* its size */
  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
  mchunkptr bck;          /* misc temp for linking */
  mchunkptr fwd;          /* misc temp for linking */
  int       islr;         /* track whether merging with last_remainder */

  if (mem == 0)           /* free(0) has no effect */
    return;

  p = mem2chunk(mem);
  hd = p->size;

#if HAVE_MMAP
  if (hd & IS_MMAPPED)    /* release mmapped memory. */
  {
    munmap_chunk(p);
    return;
  }
#endif

  check_inuse_chunk(p);

  sz = hd & ~PREV_INUSE;
  next = chunk_at_offset(p, sz);
  nextsz = chunksize(next);

  if (next == top)        /* merge with top */
  {
    sz += nextsz;

    if (!(hd & PREV_INUSE)) /* consolidate backward */
    {
      prevsz = p->prev_size;
      p = chunk_at_offset(p, -prevsz);
      sz += prevsz;
      unlink(p, bck, fwd);
    }

    set_head(p, sz | PREV_INUSE);
    top = p;
    if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
      malloc_trim(top_pad);
    return;
  }

  set_head(next, nextsz); /* clear inuse bit */

  islr = 0;

  if (!(hd & PREV_INUSE)) /* consolidate backward */
  {
    prevsz = p->prev_size;
    p = chunk_at_offset(p, -prevsz);
    sz += prevsz;

    if (p->fd == last_remainder) /* keep as last_remainder */
      islr = 1;
    else
      unlink(p, bck, fwd);
  }

  if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
  {
    sz += nextsz;

    if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
    {
      islr = 1;
      link_last_remainder(p);
    }
    else
      unlink(next, bck, fwd);
  }

  set_head(p, sz | PREV_INUSE);
  set_foot(p, sz);
  if (!islr)
    frontlink(p, sz, idx, bck, fwd);
}
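/*
  Usage sketch (illustrative only; exact consolidation depends on heap
  layout, so the comments below describe the typical case, not a
  guarantee):

    #include <stdlib.h>

    int main(void)
    {
      char* a = malloc(100);
      char* b = malloc(100);
      free(0);   // case 1 above: no effect
      free(b);   // b typically borders top and is merged into it
      free(a);   // a then borders top and is merged as well; if the
                 // unused top exceeds trim_threshold, memory may be
                 // handed back to the system via malloc_trim
      return 0;
    }
*/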
/*

  Realloc algorithm:

    Chunks that were obtained via mmap cannot be extended or shrunk
    unless HAVE_MREMAP is defined, in which case mremap is used.
    Otherwise, if their reallocation is for additional space, they are
    copied. If for less, they are just left alone.

    Otherwise, if the reallocation is for additional space, and the
    chunk can be extended, it is, else a malloc-copy-free sequence is
    taken. There are several different ways that a chunk could be
    extended. All are tried:

       * Extending forward into following adjacent free chunk.
       * Shifting backwards, joining preceding adjacent space
       * Both shifting backwards and extending forward.
       * Extending into newly sbrked space

    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
    size argument of zero (re)allocates a minimum-sized chunk.

    If the reallocation is for less space, and the new request is for
    a `small' (<512 bytes) size, then the newly unused space is lopped
    off and freed.

    The old unix realloc convention of allowing the last-free'd chunk
    to be used as an argument to realloc is no longer supported.
    I don't know of any programs still relying on this feature,
    and allowing it would also allow too many other incorrect
    usages of realloc to be sensible.

*/
#if __STD_C
Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
#else
Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;             /* padded request size */

  mchunkptr oldp;                 /* chunk corresponding to oldmem */
  INTERNAL_SIZE_T oldsize;        /* its size */

  mchunkptr newp;                 /* chunk to return */
  INTERNAL_SIZE_T newsize;        /* its size */
  Void_t*   newmem;               /* corresponding user mem */

  mchunkptr next;                 /* next contiguous chunk after oldp */
  INTERNAL_SIZE_T nextsize;       /* its size */

  mchunkptr prev;                 /* previous contiguous chunk before oldp */
  INTERNAL_SIZE_T prevsize;       /* its size */

  mchunkptr remainder;            /* holds split off extra space from newp */
  INTERNAL_SIZE_T remainder_size; /* its size */

  mchunkptr bck;                  /* misc temp for linking */
  mchunkptr fwd;                  /* misc temp for linking */

#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) { fREe(oldmem); return 0; }
#endif

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0) return mALLOc(bytes);

  newp    = oldp    = mem2chunk(oldmem);
  newsize = oldsize = chunksize(oldp);

  nb = request2size(bytes);

#if HAVE_MMAP
  if (chunk_is_mmapped(oldp))
  {
#if HAVE_MREMAP
    newp = mremap_chunk(oldp, nb);
    if(newp) return chunk2mem(newp);
#endif
    /* Note the extra SIZE_SZ overhead. */
    if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
    /* Must alloc, copy, free. */
    newmem = mALLOc(bytes);
    if (newmem == 0) return 0; /* propagate failure */
    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
    munmap_chunk(oldp);
    return newmem;
  }
#endif

  check_inuse_chunk(oldp);

  if ((long)(oldsize) < (long)(nb))
  {
    /* Try expanding forward */

    next = chunk_at_offset(oldp, oldsize);
    if (next == top || !inuse(next))
    {
      nextsize = chunksize(next);

      /* Forward into top only if a remainder */
      if (next == top)
      {
        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
        {
          newsize += nextsize;
          top = chunk_at_offset(oldp, nb);
          set_head(top, (newsize - nb) | PREV_INUSE);
          set_head_size(oldp, nb);
          return chunk2mem(oldp);
        }
      }

      /* Forward into next chunk */
      else if (((long)(nextsize + newsize) >= (long)(nb)))
      {
        unlink(next, bck, fwd);
        newsize += nextsize;
        goto split;
      }
    }
    else
    {
      next = 0;
      nextsize = 0;
    }

    /* Try shifting backwards. */

    if (!prev_inuse(oldp))
    {
      prev = prev_chunk(oldp);
      prevsize = chunksize(prev);

      /* try forward + backward first to save a later consolidation */

      if (next != 0)
      {
        /* into top */
        if (next == top)
        {
          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
          {
            unlink(prev, bck, fwd);
            newp = prev;
            newsize += prevsize + nextsize;
            newmem = chunk2mem(newp);
            MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
            top = chunk_at_offset(newp, nb);
            set_head(top, (newsize - nb) | PREV_INUSE);
            set_head_size(newp, nb);
            return newmem;
          }
        }

        /* into next chunk */
        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
        {
          unlink(next, bck, fwd);
          unlink(prev, bck, fwd);
          newp = prev;
          newsize += nextsize + prevsize;
          newmem = chunk2mem(newp);
          MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
          goto split;
        }
      }

      /* backward only */
      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
      {
        unlink(prev, bck, fwd);
        newp = prev;
        newsize += prevsize;
        newmem = chunk2mem(newp);
        MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
        goto split;
      }
    }

    /* Must allocate */

    newmem = mALLOc (bytes);

    if (newmem == 0) /* propagate failure */
      return 0;

    /* Avoid copy if newp is next chunk after oldp. */
    /* (This can only happen when new chunk is sbrk'ed.) */

    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
    {
      newsize += chunksize(newp);
      newp = oldp;
      goto split;
    }

    /* Otherwise copy, free, and exit */
    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
    fREe(oldmem);
    return newmem;
  }


 split: /* split off extra room in old or expanded chunk */

  if (newsize - nb >= MINSIZE) /* split off remainder */
  {
    remainder = chunk_at_offset(newp, nb);
    remainder_size = newsize - nb;
    set_head_size(newp, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_inuse_bit_at_offset(remainder, remainder_size);
    fREe(chunk2mem(remainder)); /* let free() deal with it */
  }
  else
  {
    set_head_size(newp, newsize);
    set_inuse_bit_at_offset(newp, newsize);
  }

  check_inuse_chunk(newp);
  return chunk2mem(newp);
}
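/*
  Usage sketch (illustrative; standard realloc semantics as described
  above, not specific to this allocator):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
      char* p = malloc(16);
      if (p == 0) return 1;
      strcpy(p, "hello");

      // Grow: contents up to min(old size, new size) are preserved,
      // but the returned pointer may differ from p.
      char* q = realloc(p, 4096);
      if (q == 0) { free(p); return 1; } // p is still valid on failure
      p = q;

      // Shrink: for small new sizes the newly unused tail is lopped
      // off and freed, as described above.
      p = realloc(p, 8);

      free(p);
      return 0;
    }
*/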
/*

  memalign algorithm:

    memalign requests more than enough space from malloc, finds a spot
    within that chunk that meets the alignment request, and then
    possibly frees the leading and trailing space.

    The alignment argument must be a power of two. This property is not
    checked by memalign, so misuse may result in random runtime errors.

    8-byte alignment is guaranteed by normal malloc calls, so don't
    bother calling memalign with an argument of 8 or less.

    Overreliance on memalign is a sure way to fragment space.

*/
#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;       /* padded request size */
  char*     m;              /* memory returned by malloc call */
  mchunkptr p;              /* corresponding chunk */
  char*     brk;            /* alignment point within p */
  mchunkptr newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;  /* its size */
  INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
  mchunkptr remainder;      /* spare room at end to split off */
  long      remainder_size; /* its size */

  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment < MINSIZE) alignment = MINSIZE;

  /* Call malloc with worst case padding to hit alignment. */

  nb = request2size(bytes);
  m  = (char*)(mALLOc(nb + alignment + MINSIZE));

  if (m == 0) return 0; /* propagate failure */

  p = mem2chunk(m);

  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
  {
#if HAVE_MMAP
    if(chunk_is_mmapped(p))
      return chunk2mem(p); /* nothing more to do */
#endif
  }
  else /* misaligned */
  {
    /*
      Find an aligned spot inside chunk.
      Since we need to give back leading space in a chunk of at
      least MINSIZE, if the first calculation places us at
      a spot with less than MINSIZE leader, we can move to the
      next aligned spot -- we've allocated enough total room so that
      this is always possible.
    */

    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
    if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

#if HAVE_MMAP
    if(chunk_is_mmapped(p))
    {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      return chunk2mem(newp);
    }
#endif

    /* give back leader, use the rest */

    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    fREe(chunk2mem(p));
    p = newp;

    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */

  remainder_size = chunksize(p) - nb;

  if (remainder_size >= (long)MINSIZE)
  {
    remainder = chunk_at_offset(p, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_head_size(p, nb);
    fREe(chunk2mem(remainder));
  }

  check_inuse_chunk(p);
  return chunk2mem(p);
}
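/*
  Usage sketch (illustrative): obtaining a 64-byte-aligned block. The
  alignment must be a power of two; values of 8 or less are pointless
  since malloc already guarantees 8-byte alignment. The extern
  declaration stands in for whatever header declares memalign on your
  system.

    #include <stdlib.h>
    #include <assert.h>

    extern void* memalign(size_t alignment, size_t n);

    int main(void)
    {
      void* p = memalign(64, 1000);
      assert(p == 0 || ((unsigned long)p % 64) == 0);
      free(p);
      return 0;
    }
*/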
/*

  valloc just invokes memalign with alignment argument equal
  to the page size of the system (or as near to this as can
  be figured out from all the includes/defines above.)

*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  return mEMALIGn (malloc_getpagesize, bytes);
}
/*

  pvalloc just invokes valloc for the nearest pagesize
  that will accommodate the request.

*/
#if __STD_C
Void_t* pvALLOc(size_t bytes)
#else
Void_t* pvALLOc(bytes) size_t bytes;
#endif
{
  size_t pagesize = malloc_getpagesize;
  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
}
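/*
  Worked example (illustrative): with a 4096-byte page size, the
  rounding expression above maps a request as follows:

    (1    + 4095) & ~4095  ==  4096   // one page
    (4096 + 4095) & ~4095  ==  4096   // exact multiple is unchanged
    (4097 + 4095) & ~4095  ==  8192   // spills into a second page
*/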
/*

  calloc calls malloc, then zeroes out the allocated chunk.

*/
#if __STD_C
Void_t* cALLOc(size_t n, size_t elem_size)
#else
Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
#endif
{
  mchunkptr p;
  INTERNAL_SIZE_T csz;

  INTERNAL_SIZE_T sz = n * elem_size;

  /* check if expand_top called, in which case don't need to clear */
#if MORECORE_CLEARS
  mchunkptr oldtop = top;
  INTERNAL_SIZE_T oldtopsize = chunksize(top);
#endif
  Void_t* mem = mALLOc (sz);

  if (mem == 0)
    return 0;
  else
  {
    p = mem2chunk(mem);

    /* Two optional cases in which clearing not necessary */

#if HAVE_MMAP
    if (chunk_is_mmapped(p)) return mem;
#endif

    csz = chunksize(p);

#if MORECORE_CLEARS
    if (p == oldtop && csz > oldtopsize)
    {
      /* clear only the bytes from non-freshly-sbrked memory */
      csz = oldtopsize;
    }
#endif

    MALLOC_ZERO(mem, csz - SIZE_SZ);
    return mem;
  }
}
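/*
  Usage sketch (illustrative): calloc returns zeroed storage for
  n * elem_size bytes. Note that, as written above, the product is
  computed without an overflow check, so callers should ensure it
  does not wrap.

    #include <stdlib.h>

    int main(void)
    {
      int* counts = calloc(100, sizeof(int)); // 400 zeroed bytes
      if (counts == 0) return 1;
      // counts[0] .. counts[99] are all 0 here
      free(counts);
      return 0;
    }
*/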
/*

  cfree just calls free. It is needed/defined on some systems
  that pair it with calloc, presumably for odd historical reasons.

*/

#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
#if __STD_C
void cfree(Void_t *mem)
#else
void cfree(mem) Void_t *mem;
#endif
{
  fREe(mem);
}
#endif
/*

    Malloc_trim gives memory back to the system (via negative
    arguments to sbrk) if there is unused memory at the `high' end of
    the malloc pool. You can call this after freeing large blocks of
    memory to potentially reduce the system-level memory requirements
    of a program. However, it cannot guarantee to reduce memory. Under
    some allocation patterns, some large free blocks of memory will be
    locked between two used chunks, so they cannot be given back to
    the system.

    The `pad' argument to malloc_trim represents the amount of free
    trailing space to leave untrimmed. If this argument is zero,
    only the minimum amount of memory to maintain internal data
    structures will be left (one page or less). Non-zero arguments
    can be supplied to maintain enough trailing space to service
    future expected allocations without having to re-obtain memory
    from the system.

    Malloc_trim returns 1 if it actually released any memory, else 0.

*/
#if __STD_C
int malloc_trim(size_t pad)
#else
int malloc_trim(pad) size_t pad;
#endif
{
  long  top_size;    /* Amount of top-most memory */
  long  extra;       /* Amount to release */
  char* current_brk; /* address returned by pre-check sbrk call */
  char* new_brk;     /* address returned by negative sbrk call */

  unsigned long pagesz = malloc_getpagesize;

  top_size = chunksize(top);
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;

  if (extra < (long)pagesz) /* Not enough memory to release */
    return 0;

  else
  {
    /* Test to make sure no one else called sbrk */
    current_brk = (char*)(MORECORE (0));
    if (current_brk != (char*)(top) + top_size)
      return 0; /* Apparently we don't own memory; must fail */

    else
    {
      /* Try to release memory */
      new_brk = (char*)(MORECORE (-extra));

      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
      {
        /* Try to figure out what we have */
        current_brk = (char*)(MORECORE (0));
        top_size = current_brk - (char*)top;
        if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
        {
          sbrked_mem = current_brk - sbrk_base;
          set_head(top, top_size | PREV_INUSE);
        }
        check_chunk(top);
        return 0;
      }

      else
      {
        /* Success. Adjust top accordingly. */
        set_head(top, (top_size - extra) | PREV_INUSE);
        sbrked_mem -= extra;
        check_chunk(top);
        return 1;
      }
    }
  }
}
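/*
  Usage sketch (illustrative): after releasing a large working set,
  ask the allocator to hand unused top-most memory back to the system,
  keeping (say) 64K of headroom for upcoming allocations. The helper
  name is hypothetical.

    #include <stdlib.h>

    extern int malloc_trim(size_t pad);

    void release_phase_memory(void** blocks, int n)
    {
      int i;
      for (i = 0; i < n; ++i)
        free(blocks[i]);
      (void) malloc_trim(64 * 1024); // returns 1 only if memory
                                     // was actually released
    }
*/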
/*
  malloc_usable_size:

    This routine tells you how many bytes you can actually use in an
    allocated chunk, which may be more than you requested (although
    often not). You can use this many bytes without worrying about
    overwriting other allocated objects. Not a particularly great
    programming practice, but still sometimes useful.

*/
#if __STD_C
size_t malloc_usable_size(Void_t* mem)
#else
size_t malloc_usable_size(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem == 0)
    return 0;
  else
  {
    p = mem2chunk(mem);
    if(!chunk_is_mmapped(p))
    {
      if (!inuse(p)) return 0;
      check_inuse_chunk(p);
      return chunksize(p) - SIZE_SZ;
    }
    return chunksize(p) - 2*SIZE_SZ;
  }
}
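/*
  Usage sketch (illustrative): the usable size may exceed the request
  because of alignment and minimum-size padding.

    #include <stdio.h>
    #include <stdlib.h>

    extern size_t malloc_usable_size(void* mem);

    int main(void)
    {
      void* p = malloc(13);
      if (p == 0) return 1;
      // With the defaults described above, a 13-byte request is
      // padded for overhead and alignment, so this typically prints
      // a value somewhat larger than 13.
      printf("%lu\n", (unsigned long) malloc_usable_size(p));
      free(p);
      return 0;
    }
*/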
/* Utility to update current_mallinfo for malloc_stats and mallinfo() */

static void malloc_update_mallinfo()
{
  int i;
  mbinptr b;
  mchunkptr p;
#if DEBUG
  mchunkptr q;
#endif

  INTERNAL_SIZE_T avail = chunksize(top);
  int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;

  for (i = 1; i < NAV; ++i)
  {
    b = bin_at(i);
    for (p = last(b); p != b; p = p->bk)
    {
#if DEBUG
      check_free_chunk(p);
      for (q = next_chunk(p);
           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
           q = next_chunk(q))
        check_inuse_chunk(q);
#endif
      avail += chunksize(p);
      navail++;
    }
  }

  current_mallinfo.ordblks = navail;
  current_mallinfo.uordblks = sbrked_mem - avail;
  current_mallinfo.fordblks = avail;
  current_mallinfo.hblks = n_mmaps;
  current_mallinfo.hblkhd = mmapped_mem;
  current_mallinfo.keepcost = chunksize(top);
}
/*

  malloc_stats:

    Prints on stderr the amount of space obtained from the system (both
    via sbrk and mmap), the maximum amount (which may be more than
    current if malloc_trim and/or munmap got called), the maximum
    number of simultaneous mmap regions used, and the current number
    of bytes allocated via malloc (or realloc, etc) but not yet
    freed. (Note that this is the number of bytes allocated, not the
    number requested. It will be larger than the number requested
    because of alignment and bookkeeping overhead.)

*/
void malloc_stats()
{
  malloc_update_mallinfo();
  fprintf(stderr, "max system bytes = %10u\n",
          (unsigned int)(max_total_mem));
  fprintf(stderr, "system bytes     = %10u\n",
          (unsigned int)(sbrked_mem + mmapped_mem));
  fprintf(stderr, "in use bytes     = %10u\n",
          (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
#if HAVE_MMAP
  fprintf(stderr, "max mmap regions = %10u\n",
          (unsigned int)max_n_mmaps);
#endif
}
/*
  mallinfo returns a copy of updated current mallinfo.
*/

struct mallinfo mALLINFo()
{
  malloc_update_mallinfo();
  return current_mallinfo;
}
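/*
  Usage sketch (illustrative): reading the snapshot returned by
  mallinfo. The fields shown are the ones filled in by
  malloc_update_mallinfo above; the declaration is assumed to come
  from this package's header.

    #include <stdio.h>

    extern struct mallinfo mallinfo(void);

    void report(void)
    {
      struct mallinfo mi = mallinfo();
      printf("free chunks:     %d\n", mi.ordblks);
      printf("in-use bytes:    %d\n", mi.uordblks);
      printf("free bytes:      %d\n", mi.fordblks);
      printf("mmapped regions: %d\n", mi.hblks);
      printf("trimmable (top): %d\n", mi.keepcost);
    }
*/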
/*
  mallopt:

    mallopt is the general SVID/XPG interface to tunable parameters.
    The format is to provide a (parameter-number, parameter-value) pair.
    mallopt then sets the corresponding parameter to the argument
    value if it can (i.e., so long as the value is meaningful),
    and returns 1 if successful else 0.

    See descriptions of tunable parameters above.

*/
#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; return 1;
    case M_TOP_PAD:
      top_pad = value; return 1;
    case M_MMAP_THRESHOLD:
      mmap_threshold = value; return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; return 1;
#else
      if (value != 0) return 0; else n_mmaps_max = value; return 1;
#endif

    default:
      return 0;
  }
}
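/*
  Usage sketch (illustrative): raising the trim and mmap thresholds
  for a program that cycles through large buffers. The parameter
  names are the M_* constants handled above; the values shown are
  arbitrary examples, not recommendations.

    extern int mallopt(int param_number, int value);

    int tune_allocator(void)
    {
      int ok = 1;
      ok &= mallopt(M_TRIM_THRESHOLD, 256 * 1024); // trim less eagerly
      ok &= mallopt(M_MMAP_THRESHOLD, 512 * 1024); // mmap only big blocks
      return ok; // 1 if both parameters were accepted
    }
*/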
/*

History:

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/