/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program.  All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below.  Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux).  You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated than were requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed.  This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the checks preventing
       writes to statics that are always on.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()").  You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory.  This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else.  And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32).  This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc.  (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc.  It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org).  Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written.  However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator.  Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order.  (This strategy
  normally maintains low fragmentation.)  However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order.  (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros.  Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types.  All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc.  These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
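
  As a further illustration, here is a minimal sketch of a per-subsystem
  arena that is torn down all at once using destroy_mspace (part of the
  MSPACES API declared later in this file); my_arena, arena_alloc and
  arena_shutdown are placeholder names:
    static mspace my_arena = 0;
    void* arena_alloc(size_t bytes) {
      if (my_arena == 0) my_arena = create_mspace(0, 0);
      return mspace_malloc(my_arena, bytes);
    }
    void arena_shutdown(void) {  // releases every chunk in the arena
      if (my_arena != 0) { destroy_mspace(my_arena); my_arena = 0; }
    }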

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options  ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly casted.
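
For example, a definition written with an explicit cast, in the style
used for the defaults later in this file, is safe on such systems
(the value 16 is only an illustration):

    #define MALLOC_ALIGNMENT ((size_t)16U)

whereas a plain "#define MALLOC_ALIGNMENT 16" might be evaluated at a
narrower precision before being widened.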

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
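
  For example, when this file is compiled with -DUSE_DL_PREFIX, code that
  wants this allocator calls the prefixed names while the rest of the
  program keeps using the system malloc:

    void* p = dlmalloc(100);
    // ... use p ...
    dlfree(p);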

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR         default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE  default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION    default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to be able to
  return memory because there is none available.

HAVE_MORECORE            default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                 default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings. See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.
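
  As a quick illustration only (the fuller guidelines near the end of
  this file take precedence), a minimal sketch of a custom MORECORE that
  hands out space from a statically reserved pool might look like the
  following.  Such a function cannot shrink, so MORECORE_CANNOT_TRIM
  should also be defined; my_pool and my_morecore are placeholder names:

    static char   my_pool[1 << 20];          // illustrative 1MB backing store
    static size_t my_pool_used = 0;
    void* my_morecore(intptr_t increment) {  // intptr_t is from <stdint.h>
      if (increment <= 0)
        return my_pool + my_pool_used;       // report the current break
      if ((size_t)increment > sizeof(my_pool) - my_pool_used)
        return (void*)-1;                    // failure, as with sbrk
      my_pool_used += (size_t)increment;
      return my_pool + (my_pool_used - (size_t)increment);
    }
    // compiled with: -DMORECORE=my_morecore -DMORECORE_CANNOT_TRIM -DHAVE_MMAP=0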

MORECORE_CONTIGUOUS      default: 1 (true)
  If true, take advantage of fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk.  It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves time
  and possibly wasted space it would take to discover this though.

MORECORE_CANNOT_TRIM     default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation.  If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks.  It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system.  Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP              default: 1 on linux, else 0
  If true realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS              default: 1 on unix
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS          default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize       default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM           default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO              default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE      default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  because this malloc returns a unique pointer for malloc(0),
  realloc(p, 0) does too.

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H           default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY      default: page size if MORECORE_CONTIGUOUS,
                                  system_info.dwAllocationGranularity in WIN32,
                                  otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called so
  often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD   default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all.  The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD   default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems.  You can
  disable mmap by setting to MAX_SIZE_T.

*/

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif /* _WIN32 */
#endif /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define MALLOC_FAILURE_ACTION
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
#elif !defined _GNU_SOURCE
/* mremap() on Linux requires this via sys/mman.h
 * See roundup issue 10309
 */
#define _GNU_SOURCE 1
#endif /* WIN32 */

#ifdef __OS2__
#define INCL_DOS
#include <os2.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_SYS_MMAN_H
#endif /* __OS2__ */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
#endif /* HAVE_MORECORE */
#endif /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else /* ONLY_MSPACES */
#define MSPACES 0
#endif /* ONLY_MSPACES */
#endif /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif /* FOOTERS */
#ifndef ABORT
#define ABORT abort()
#endif /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif /* USE_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef __linux__
#define HAVE_MREMAP 1
#else /* linux */
#define HAVE_MREMAP 0
#endif /* linux */
#endif /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION errno = ENOMEM;
#endif /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif /* ONLY_MSPACES */
#endif /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else /* !HAVE_MORECORE */
#ifndef MORECORE
#define MORECORE sbrk
#endif /* MORECORE */
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif /* MORECORE_CONTIGUOUS */
#endif /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if MORECORE_CONTIGUOUS
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif /* MORECORE_CONTIGUOUS */
#endif /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif /* MORECORE_CANNOT_TRIM */
#endif /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif /* MALLINFO_FIELD_TYPE */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo. The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields
  are instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */

/* HP-UX's stdlib.h redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO

struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};

#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
void dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  if n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
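
  For example, because a failed realloc leaves p untouched, the usual
  idiom is to assign through a temporary (grow_buffer and its arguments
  here are placeholder names):

    void* grow_buffer(void* p, size_t newsize) {
      void* q = realloc(p, newsize);
      if (q == 0)
        return 0;   // p is still valid here and must eventually be freed
      return q;
    }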
*/
void* dlrealloc(void*, size_t);

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
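
  For example, to obtain a 64-byte-aligned block for a 1000-byte object
  (both numbers are arbitrary illustrations):

    void* p = memalign(64, 1000);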
*/
void* dlmemalign(size_t, size_t);

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters.  The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default      allowed param values
  M_TRIM_THRESHOLD     -1    2*1024*1024  any   (MAX_SIZE_T disables)
  M_GRANULARITY        -2    page size    any power of 2 >= page size
  M_MMAP_THRESHOLD     -3    256*1024     any   (or 0 if no MMAP support)
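
  For example, a program might tune these at startup; the values shown
  here are only illustrations, not recommendations:

    mallopt(M_TRIM_THRESHOLD, 1024*1024);  // trim when 1MB of top space is unused
    mallopt(M_MMAP_THRESHOLD, 512*1024);   // mmap requests of 512K and larger
    mallopt(M_GRANULARITY,    64*1024);    // obtain system memory in 64K units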
*/
int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
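
  For example, to log how much system memory one phase of a program
  consumed (run_phase and the use of fprintf are placeholders):

    size_t before = malloc_footprint();
    run_phase();
    fprintf(stderr, "footprint grew by %lu bytes\n",
            (unsigned long)(malloc_footprint() - before));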
*/
size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
size_t dlmalloc_max_footprint(void);

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
                than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
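
  For example, to report the fields most often of interest (the fprintf
  call and the casts are only for illustration):

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "in use: %lu  free: %lu  mmapped: %lu\n",
            (unsigned long)mi.uordblks,
            (unsigned long)mi.fordblks,
            (unsigned long)mi.hblkhd);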
*/
struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc.  The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements.  (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

    struct Node { int item; struct Node* next; };

    struct Node* build_list() {
      struct Node** pool;
      int i;
      int n = read_number_of_nodes_needed();
      if (n <= 0) return 0;
      pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
      if (pool == 0) die();
      // organize into a linked list...
      struct Node* first = pool[0];
      for (i = 0; i < n-1; ++i)
        pool[i]->next = pool[i+1];
      free(pool);     // Can now free the array (or not, if it is needed later)
      return first;
    }
*/
void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.  It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

    struct Head { ... }
    struct Foot { ... }

    void send_message(char* msg) {
      int msglen = strlen(msg);
      size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
      void* chunks[3];
      if (independent_comalloc(3, sizes, chunks) == 0)
        die();
      struct Head* head = (struct Head*)(chunks[0]);
      char*        body = (char*)(chunks[1]);
      struct Foot* foot = (struct Foot*)(chunks[2]);
      // ...
    }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
void* dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments.  You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
---|
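  A usage sketch (the buffer and pad value below are illustrative only):

    free(big_buffer);               /* release some large allocations */
    if (malloc_trim(64 * 1024))     /* keep ~64K of trailing slack */
      ; /* some memory was actually returned to the system */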
968 | n/a | */ |
---|
969 | n/a | int dlmalloc_trim(size_t); |
---|
970 | n/a | |
---|
971 | n/a | /* |
---|
972 | n/a | malloc_usable_size(void* p); |
---|
973 | n/a | |
---|
974 | n/a | Returns the number of bytes you can actually use in |
---|
975 | n/a | an allocated chunk, which may be more than you requested (although |
---|
976 | n/a | often not) due to alignment and minimum size constraints. |
---|
977 | n/a | You can use this many bytes without worrying about |
---|
978 | n/a | overwriting other allocated objects. This is not a particularly great |
---|
979 | n/a | programming practice. malloc_usable_size can be more useful in |
---|
980 | n/a | debugging and assertions, for example: |
---|
981 | n/a | |
---|
982 | n/a | p = malloc(n); |
---|
983 | n/a | assert(malloc_usable_size(p) >= 256); |
---|
984 | n/a | */ |
---|
985 | n/a | size_t dlmalloc_usable_size(void*); |
---|
986 | n/a | |
---|
987 | n/a | /* |
---|
988 | n/a | malloc_stats(); |
---|
989 | n/a | Prints on stderr the amount of space obtained from the system (both |
---|
990 | n/a | via sbrk and mmap), the maximum amount (which may be more than |
---|
991 | n/a | current if malloc_trim and/or munmap got called), and the current |
---|
992 | n/a | number of bytes allocated via malloc (or realloc, etc) but not yet |
---|
993 | n/a | freed. Note that this is the number of bytes allocated, not the |
---|
994 | n/a | number requested. It will be larger than the number requested |
---|
995 | n/a | because of alignment and bookkeeping overhead. Because it includes |
---|
996 | n/a | alignment wastage as being in use, this figure may be greater than |
---|
997 | n/a | zero even when no user-level chunks are allocated. |
---|
998 | n/a | |
---|
999 | n/a | The reported current and maximum system memory can be inaccurate if |
---|
1000 | n/a | a program makes other calls to system memory allocation functions |
---|
1001 | n/a | (normally sbrk) outside of malloc. |
---|
1002 | n/a | |
---|
1003 | n/a | malloc_stats prints only the most commonly interesting statistics. |
---|
1004 | n/a | More information can be obtained by calling mallinfo. |
---|
1005 | n/a | */ |
---|
1006 | n/a | void dlmalloc_stats(void); |
---|
1007 | n/a | |
---|
1008 | n/a | #endif /* ONLY_MSPACES */ |
---|
1009 | n/a | |
---|
1010 | n/a | #if MSPACES |
---|
1011 | n/a | |
---|
1012 | n/a | /* |
---|
1013 | n/a | mspace is an opaque type representing an independent |
---|
1014 | n/a | region of space that supports mspace_malloc, etc. |
---|
1015 | n/a | */ |
---|
1016 | n/a | typedef void* mspace; |
---|
1017 | n/a | |
---|
1018 | n/a | /* |
---|
1019 | n/a | create_mspace creates and returns a new independent space with the |
---|
1020 | n/a | given initial capacity, or, if 0, the default granularity size. It |
---|
1021 | n/a | returns null if there is no system memory available to create the |
---|
1022 | n/a | space. If argument locked is non-zero, the space uses a separate |
---|
1023 | n/a | lock to control access. The capacity of the space will grow |
---|
1024 | n/a | dynamically as needed to service mspace_malloc requests. You can |
---|
1025 | n/a | control the sizes of incremental increases of this space by |
---|
1026 | n/a | compiling with a different DEFAULT_GRANULARITY or dynamically |
---|
1027 | n/a | setting with mallopt(M_GRANULARITY, value). |
---|
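  A minimal usage sketch, relying only on the functions declared below:

    mspace arena = create_mspace(0, 0);     /* default capacity, no lock */
    if (arena != 0) {
      void* p = mspace_malloc(arena, 128);  /* allocate within this space */
      mspace_free(arena, p);
      destroy_mspace(arena);                /* return all space at once */
    }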
1028 | n/a | */ |
---|
1029 | n/a | mspace create_mspace(size_t capacity, int locked); |
---|
1030 | n/a | |
---|
1031 | n/a | /* |
---|
1032 | n/a | destroy_mspace destroys the given space, and attempts to return all |
---|
1033 | n/a | of its memory back to the system, returning the total number of |
---|
1034 | n/a | bytes freed. After destruction, the results of access to all memory |
---|
1035 | n/a | used by the space become undefined. |
---|
1036 | n/a | */ |
---|
1037 | n/a | size_t destroy_mspace(mspace msp); |
---|
1038 | n/a | |
---|
1039 | n/a | /* |
---|
1040 | n/a | create_mspace_with_base uses the memory supplied as the initial base |
---|
1041 | n/a | of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this |
---|
1042 | n/a | space is used for bookkeeping, so the capacity must be at least this |
---|
1043 | n/a | large. (Otherwise 0 is returned.) When this initial space is |
---|
1044 | n/a | exhausted, additional memory will be obtained from the system. |
---|
1045 | n/a | Destroying this space will deallocate all additionally allocated |
---|
1046 | n/a | space (if possible) but not the initial base. |
---|
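  For example (the buffer size is illustrative; it must exceed the
  bookkeeping overhead described above):

    static char arena_base[1024 * 1024];
    mspace m = create_mspace_with_base(arena_base, sizeof(arena_base), 0);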
1047 | n/a | */ |
---|
1048 | n/a | mspace create_mspace_with_base(void* base, size_t capacity, int locked); |
---|
1049 | n/a | |
---|
1050 | n/a | /* |
---|
1051 | n/a | mspace_malloc behaves as malloc, but operates within |
---|
1052 | n/a | the given space. |
---|
1053 | n/a | */ |
---|
1054 | n/a | void* mspace_malloc(mspace msp, size_t bytes); |
---|
1055 | n/a | |
---|
1056 | n/a | /* |
---|
1057 | n/a | mspace_free behaves as free, but operates within |
---|
1058 | n/a | the given space. |
---|
1059 | n/a | |
---|
1060 | n/a | If compiled with FOOTERS==1, mspace_free is not actually needed. |
---|
1061 | n/a | free may be called instead of mspace_free because freed chunks from |
---|
1062 | n/a | any space are handled by their originating spaces. |
---|
1063 | n/a | */ |
---|
1064 | n/a | void mspace_free(mspace msp, void* mem); |
---|
1065 | n/a | |
---|
1066 | n/a | /* |
---|
1067 | n/a | mspace_realloc behaves as realloc, but operates within |
---|
1068 | n/a | the given space. |
---|
1069 | n/a | |
---|
1070 | n/a | If compiled with FOOTERS==1, mspace_realloc is not actually |
---|
1071 | n/a | needed. realloc may be called instead of mspace_realloc because |
---|
1072 | n/a | realloced chunks from any space are handled by their originating |
---|
1073 | n/a | spaces. |
---|
1074 | n/a | */ |
---|
1075 | n/a | void* mspace_realloc(mspace msp, void* mem, size_t newsize); |
---|
1076 | n/a | |
---|
1077 | n/a | /* |
---|
1078 | n/a | mspace_calloc behaves as calloc, but operates within |
---|
1079 | n/a | the given space. |
---|
1080 | n/a | */ |
---|
1081 | n/a | void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size); |
---|
1082 | n/a | |
---|
1083 | n/a | /* |
---|
1084 | n/a | mspace_memalign behaves as memalign, but operates within |
---|
1085 | n/a | the given space. |
---|
1086 | n/a | */ |
---|
1087 | n/a | void* mspace_memalign(mspace msp, size_t alignment, size_t bytes); |
---|
1088 | n/a | |
---|
1089 | n/a | /* |
---|
1090 | n/a | mspace_independent_calloc behaves as independent_calloc, but |
---|
1091 | n/a | operates within the given space. |
---|
1092 | n/a | */ |
---|
1093 | n/a | void** mspace_independent_calloc(mspace msp, size_t n_elements, |
---|
1094 | n/a | size_t elem_size, void* chunks[]); |
---|
1095 | n/a | |
---|
1096 | n/a | /* |
---|
1097 | n/a | mspace_independent_comalloc behaves as independent_comalloc, but |
---|
1098 | n/a | operates within the given space. |
---|
1099 | n/a | */ |
---|
1100 | n/a | void** mspace_independent_comalloc(mspace msp, size_t n_elements, |
---|
1101 | n/a | size_t sizes[], void* chunks[]); |
---|
1102 | n/a | |
---|
1103 | n/a | /* |
---|
1104 | n/a | mspace_footprint() returns the number of bytes obtained from the |
---|
1105 | n/a | system for this space. |
---|
1106 | n/a | */ |
---|
1107 | n/a | size_t mspace_footprint(mspace msp); |
---|
1108 | n/a | |
---|
1109 | n/a | /* |
---|
1110 | n/a | mspace_max_footprint() returns the peak number of bytes obtained from the |
---|
1111 | n/a | system for this space. |
---|
1112 | n/a | */ |
---|
1113 | n/a | size_t mspace_max_footprint(mspace msp); |
---|
1114 | n/a | |
---|
1115 | n/a | |
---|
1116 | n/a | #if !NO_MALLINFO |
---|
1117 | n/a | /* |
---|
1118 | n/a | mspace_mallinfo behaves as mallinfo, but reports properties of |
---|
1119 | n/a | the given space. |
---|
1120 | n/a | */ |
---|
1121 | n/a | struct mallinfo mspace_mallinfo(mspace msp); |
---|
1122 | n/a | #endif /* NO_MALLINFO */ |
---|
1123 | n/a | |
---|
1124 | n/a | /* |
---|
1125 | n/a | mspace_malloc_stats behaves as malloc_stats, but reports |
---|
1126 | n/a | properties of the given space. |
---|
1127 | n/a | */ |
---|
1128 | n/a | void mspace_malloc_stats(mspace msp); |
---|
1129 | n/a | |
---|
1130 | n/a | /* |
---|
1131 | n/a | mspace_trim behaves as malloc_trim, but |
---|
1132 | n/a | operates within the given space. |
---|
1133 | n/a | */ |
---|
1134 | n/a | int mspace_trim(mspace msp, size_t pad); |
---|
1135 | n/a | |
---|
1136 | n/a | /* |
---|
1137 | n/a | An alias for mallopt. |
---|
1138 | n/a | */ |
---|
1139 | n/a | int mspace_mallopt(int, int); |
---|
1140 | n/a | |
---|
1141 | n/a | #endif /* MSPACES */ |
---|
1142 | n/a | |
---|
1143 | n/a | #ifdef __cplusplus |
---|
1144 | n/a | } /* end of extern "C" */ |
---|
1145 | n/a | #endif /* __cplusplus */ |
---|
1146 | n/a | |
---|
1147 | n/a | /* |
---|
1148 | n/a | ======================================================================== |
---|
1149 | n/a | To make a fully customizable malloc.h header file, cut everything |
---|
1150 | n/a | above this line, put into file malloc.h, edit to suit, and #include it |
---|
1151 | n/a | on the next line, as well as in programs that use this malloc. |
---|
1152 | n/a | ======================================================================== |
---|
1153 | n/a | */ |
---|
1154 | n/a | |
---|
1155 | n/a | /* #include "malloc.h" */ |
---|
1156 | n/a | |
---|
1157 | n/a | /*------------------------------ internal #includes ---------------------- */ |
---|
1158 | n/a | |
---|
1159 | n/a | #ifdef _MSC_VER |
---|
1160 | n/a | #pragma warning( disable : 4146 ) /* no "unsigned" warnings */ |
---|
1161 | n/a | #endif /* _MSC_VER */ |
---|
1162 | n/a | |
---|
1163 | n/a | #include <stdio.h> /* for printing in malloc_stats */ |
---|
1164 | n/a | |
---|
1165 | n/a | #ifndef LACKS_ERRNO_H |
---|
1166 | n/a | #include <errno.h> /* for MALLOC_FAILURE_ACTION */ |
---|
1167 | n/a | #endif /* LACKS_ERRNO_H */ |
---|
1168 | n/a | #if FOOTERS |
---|
1169 | n/a | #include <time.h> /* for magic initialization */ |
---|
1170 | n/a | #endif /* FOOTERS */ |
---|
1171 | n/a | #ifndef LACKS_STDLIB_H |
---|
1172 | n/a | #include <stdlib.h> /* for abort() */ |
---|
1173 | n/a | #endif /* LACKS_STDLIB_H */ |
---|
1174 | n/a | #ifdef DEBUG |
---|
1175 | n/a | #if ABORT_ON_ASSERT_FAILURE |
---|
1176 | n/a | #define assert(x) if(!(x)) ABORT |
---|
1177 | n/a | #else /* ABORT_ON_ASSERT_FAILURE */ |
---|
1178 | n/a | #include <assert.h> |
---|
1179 | n/a | #endif /* ABORT_ON_ASSERT_FAILURE */ |
---|
1180 | n/a | #else /* DEBUG */ |
---|
1181 | n/a | #define assert(x) |
---|
1182 | n/a | #endif /* DEBUG */ |
---|
1183 | n/a | #ifndef LACKS_STRING_H |
---|
1184 | n/a | #include <string.h> /* for memset etc */ |
---|
1185 | n/a | #endif /* LACKS_STRING_H */ |
---|
1186 | n/a | #if USE_BUILTIN_FFS |
---|
1187 | n/a | #ifndef LACKS_STRINGS_H |
---|
1188 | n/a | #include <strings.h> /* for ffs */ |
---|
1189 | n/a | #endif /* LACKS_STRINGS_H */ |
---|
1190 | n/a | #endif /* USE_BUILTIN_FFS */ |
---|
1191 | n/a | #if HAVE_MMAP |
---|
1192 | n/a | #ifndef LACKS_SYS_MMAN_H |
---|
1193 | n/a | #include <sys/mman.h> /* for mmap */ |
---|
1194 | n/a | #endif /* LACKS_SYS_MMAN_H */ |
---|
1195 | n/a | #ifndef LACKS_FCNTL_H |
---|
1196 | n/a | #include <fcntl.h> |
---|
1197 | n/a | #endif /* LACKS_FCNTL_H */ |
---|
1198 | n/a | #endif /* HAVE_MMAP */ |
---|
1199 | n/a | #if HAVE_MORECORE |
---|
1200 | n/a | #ifndef LACKS_UNISTD_H |
---|
1201 | n/a | #include <unistd.h> /* for sbrk */ |
---|
1202 | n/a | #else /* LACKS_UNISTD_H */ |
---|
1203 | n/a | #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__) |
---|
1204 | n/a | extern void* sbrk(ptrdiff_t); |
---|
1205 | n/a | #endif /* FreeBSD etc */ |
---|
1206 | n/a | #endif /* LACKS_UNISTD_H */ |
---|
1207 | n/a | #endif /* HAVE_MORECORE */ |
---|
1208 | n/a | |
---|
1209 | n/a | #ifndef WIN32 |
---|
1210 | n/a | #ifndef malloc_getpagesize |
---|
1211 | n/a | # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */ |
---|
1212 | n/a | # ifndef _SC_PAGE_SIZE |
---|
1213 | n/a | # define _SC_PAGE_SIZE _SC_PAGESIZE |
---|
1214 | n/a | # endif |
---|
1215 | n/a | # endif |
---|
1216 | n/a | # ifdef _SC_PAGE_SIZE |
---|
1217 | n/a | # define malloc_getpagesize sysconf(_SC_PAGE_SIZE) |
---|
1218 | n/a | # else |
---|
1219 | n/a | # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE) |
---|
1220 | n/a | extern size_t getpagesize(); |
---|
1221 | n/a | # define malloc_getpagesize getpagesize() |
---|
1222 | n/a | # else |
---|
1223 | n/a | # ifdef WIN32 /* use supplied emulation of getpagesize */ |
---|
1224 | n/a | # define malloc_getpagesize getpagesize() |
---|
1225 | n/a | # else |
---|
1226 | n/a | # ifndef LACKS_SYS_PARAM_H |
---|
1227 | n/a | # include <sys/param.h> |
---|
1228 | n/a | # endif |
---|
1229 | n/a | # ifdef EXEC_PAGESIZE |
---|
1230 | n/a | # define malloc_getpagesize EXEC_PAGESIZE |
---|
1231 | n/a | # else |
---|
1232 | n/a | # ifdef NBPG |
---|
1233 | n/a | # ifndef CLSIZE |
---|
1234 | n/a | # define malloc_getpagesize NBPG |
---|
1235 | n/a | # else |
---|
1236 | n/a | # define malloc_getpagesize (NBPG * CLSIZE) |
---|
1237 | n/a | # endif |
---|
1238 | n/a | # else |
---|
1239 | n/a | # ifdef NBPC |
---|
1240 | n/a | # define malloc_getpagesize NBPC |
---|
1241 | n/a | # else |
---|
1242 | n/a | # ifdef PAGESIZE |
---|
1243 | n/a | # define malloc_getpagesize PAGESIZE |
---|
1244 | n/a | # else /* just guess */ |
---|
1245 | n/a | # define malloc_getpagesize ((size_t)4096U) |
---|
1246 | n/a | # endif |
---|
1247 | n/a | # endif |
---|
1248 | n/a | # endif |
---|
1249 | n/a | # endif |
---|
1250 | n/a | # endif |
---|
1251 | n/a | # endif |
---|
1252 | n/a | # endif |
---|
1253 | n/a | #endif |
---|
1254 | n/a | #endif |
---|
1255 | n/a | |
---|
1256 | n/a | /* ------------------- size_t and alignment properties -------------------- */ |
---|
1257 | n/a | |
---|
1258 | n/a | /* The byte and bit size of a size_t */ |
---|
1259 | n/a | #define SIZE_T_SIZE (sizeof(size_t)) |
---|
1260 | n/a | #define SIZE_T_BITSIZE (sizeof(size_t) << 3) |
---|
1261 | n/a | |
---|
1262 | n/a | /* Some constants coerced to size_t */ |
---|
1263 | n/a | /* Annoying but necessary to avoid errors on some platforms */ |
---|
1264 | n/a | #define SIZE_T_ZERO ((size_t)0) |
---|
1265 | n/a | #define SIZE_T_ONE ((size_t)1) |
---|
1266 | n/a | #define SIZE_T_TWO ((size_t)2) |
---|
1267 | n/a | #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1) |
---|
1268 | n/a | #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2) |
---|
1269 | n/a | #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES) |
---|
1270 | n/a | #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U) |
---|
1271 | n/a | |
---|
1272 | n/a | /* The bit mask value corresponding to MALLOC_ALIGNMENT */ |
---|
1273 | n/a | #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE) |
---|
1274 | n/a | |
---|
1275 | n/a | /* True if address a has acceptable alignment */ |
---|
1276 | n/a | #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0) |
---|
1277 | n/a | |
---|
1278 | n/a | /* the number of bytes to offset an address to align it */ |
---|
1279 | n/a | #define align_offset(A)\ |
---|
1280 | n/a | ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\ |
---|
1281 | n/a | ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK)) |
---|
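/* For example, with the default 8-byte MALLOC_ALIGNMENT, align_offset
   yields 3 for an address ending in ...5 (e.g. 13 -> 16) and 0 for an
   address that is already 8-byte aligned. */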
1282 | n/a | |
---|
1283 | n/a | /* -------------------------- MMAP preliminaries ------------------------- */ |
---|
1284 | n/a | |
---|
1285 | n/a | /* |
---|
1286 | n/a | If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and |
---|
1287 | n/a | checks to fail so compiler optimizer can delete code rather than |
---|
1288 | n/a | using so many "#if"s. |
---|
1289 | n/a | */ |
---|
1290 | n/a | |
---|
1291 | n/a | |
---|
1292 | n/a | /* MORECORE and MMAP must return MFAIL on failure */ |
---|
1293 | n/a | #define MFAIL ((void*)(MAX_SIZE_T)) |
---|
1294 | n/a | #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */ |
---|
1295 | n/a | |
---|
1296 | n/a | #if !HAVE_MMAP |
---|
1297 | n/a | #define IS_MMAPPED_BIT (SIZE_T_ZERO) |
---|
1298 | n/a | #define USE_MMAP_BIT (SIZE_T_ZERO) |
---|
1299 | n/a | #define CALL_MMAP(s) MFAIL |
---|
1300 | n/a | #define CALL_MUNMAP(a, s) (-1) |
---|
1301 | n/a | #define DIRECT_MMAP(s) MFAIL |
---|
1302 | n/a | |
---|
1303 | n/a | #else /* HAVE_MMAP */ |
---|
1304 | n/a | #define IS_MMAPPED_BIT (SIZE_T_ONE) |
---|
1305 | n/a | #define USE_MMAP_BIT (SIZE_T_ONE) |
---|
1306 | n/a | |
---|
1307 | n/a | #if !defined(WIN32) && !defined (__OS2__) |
---|
1308 | n/a | #define CALL_MUNMAP(a, s) munmap((a), (s)) |
---|
1309 | n/a | #define MMAP_PROT (PROT_READ|PROT_WRITE) |
---|
1310 | n/a | #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON) |
---|
1311 | n/a | #define MAP_ANONYMOUS MAP_ANON |
---|
1312 | n/a | #endif /* MAP_ANON */ |
---|
1313 | n/a | #ifdef MAP_ANONYMOUS |
---|
1314 | n/a | #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS) |
---|
1315 | n/a | #define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0) |
---|
1316 | n/a | #else /* MAP_ANONYMOUS */ |
---|
1317 | n/a | /* |
---|
1318 | n/a | Nearly all versions of mmap support MAP_ANONYMOUS, so the following |
---|
1319 | n/a | is unlikely to be needed, but is supplied just in case. |
---|
1320 | n/a | */ |
---|
1321 | n/a | #define MMAP_FLAGS (MAP_PRIVATE) |
---|
1322 | n/a | static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */ |
---|
1323 | n/a | #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \ |
---|
1324 | n/a | (dev_zero_fd = open("/dev/zero", O_RDWR), \ |
---|
1325 | n/a | mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \ |
---|
1326 | n/a | mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) |
---|
1327 | n/a | #endif /* MAP_ANONYMOUS */ |
---|
1328 | n/a | |
---|
1329 | n/a | #define DIRECT_MMAP(s) CALL_MMAP(s) |
---|
1330 | n/a | |
---|
1331 | n/a | #elif defined(__OS2__) |
---|
1332 | n/a | |
---|
1333 | n/a | /* OS/2 MMAP via DosAllocMem */ |
---|
1334 | n/a | static void* os2mmap(size_t size) { |
---|
1335 | n/a | void* ptr; |
---|
1336 | n/a | if (DosAllocMem(&ptr, size, OBJ_ANY|PAG_COMMIT|PAG_READ|PAG_WRITE) && |
---|
1337 | n/a | DosAllocMem(&ptr, size, PAG_COMMIT|PAG_READ|PAG_WRITE)) |
---|
1338 | n/a | return MFAIL; |
---|
1339 | n/a | return ptr; |
---|
1340 | n/a | } |
---|
1341 | n/a | |
---|
1342 | n/a | #define os2direct_mmap(n) os2mmap(n) |
---|
1343 | n/a | |
---|
1344 | n/a | /* This function supports releasing coalesced segments */ |
---|
1345 | n/a | static int os2munmap(void* ptr, size_t size) { |
---|
1346 | n/a | while (size) { |
---|
1347 | n/a | ULONG ulSize = size; |
---|
1348 | n/a | ULONG ulFlags = 0; |
---|
1349 | n/a | if (DosQueryMem(ptr, &ulSize, &ulFlags) != 0) |
---|
1350 | n/a | return -1; |
---|
1351 | n/a | if ((ulFlags & PAG_BASE) == 0 ||(ulFlags & PAG_COMMIT) == 0 || |
---|
1352 | n/a | ulSize > size) |
---|
1353 | n/a | return -1; |
---|
1354 | n/a | if (DosFreeMem(ptr) != 0) |
---|
1355 | n/a | return -1; |
---|
1356 | n/a | ptr = ( void * ) ( ( char * ) ptr + ulSize ); |
---|
1357 | n/a | size -= ulSize; |
---|
1358 | n/a | } |
---|
1359 | n/a | return 0; |
---|
1360 | n/a | } |
---|
1361 | n/a | |
---|
1362 | n/a | #define CALL_MMAP(s) os2mmap(s) |
---|
1363 | n/a | #define CALL_MUNMAP(a, s) os2munmap((a), (s)) |
---|
1364 | n/a | #define DIRECT_MMAP(s) os2direct_mmap(s) |
---|
1365 | n/a | |
---|
1366 | n/a | #else /* WIN32 */ |
---|
1367 | n/a | |
---|
1368 | n/a | /* Win32 MMAP via VirtualAlloc */ |
---|
1369 | n/a | static void* win32mmap(size_t size) { |
---|
1370 | n/a | void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_EXECUTE_READWRITE); |
---|
1371 | n/a | return (ptr != 0)? ptr: MFAIL; |
---|
1372 | n/a | } |
---|
1373 | n/a | |
---|
1374 | n/a | /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */ |
---|
1375 | n/a | static void* win32direct_mmap(size_t size) { |
---|
1376 | n/a | void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN, |
---|
1377 | n/a | PAGE_EXECUTE_READWRITE); |
---|
1378 | n/a | return (ptr != 0)? ptr: MFAIL; |
---|
1379 | n/a | } |
---|
1380 | n/a | |
---|
1381 | n/a | /* This function supports releasing coalesced segments */ |
---|
1382 | n/a | static int win32munmap(void* ptr, size_t size) { |
---|
1383 | n/a | MEMORY_BASIC_INFORMATION minfo; |
---|
1384 | n/a | char* cptr = ptr; |
---|
1385 | n/a | while (size) { |
---|
1386 | n/a | if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0) |
---|
1387 | n/a | return -1; |
---|
1388 | n/a | if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr || |
---|
1389 | n/a | minfo.State != MEM_COMMIT || minfo.RegionSize > size) |
---|
1390 | n/a | return -1; |
---|
1391 | n/a | if (VirtualFree(cptr, 0, MEM_RELEASE) == 0) |
---|
1392 | n/a | return -1; |
---|
1393 | n/a | cptr += minfo.RegionSize; |
---|
1394 | n/a | size -= minfo.RegionSize; |
---|
1395 | n/a | } |
---|
1396 | n/a | return 0; |
---|
1397 | n/a | } |
---|
1398 | n/a | |
---|
1399 | n/a | #define CALL_MMAP(s) win32mmap(s) |
---|
1400 | n/a | #define CALL_MUNMAP(a, s) win32munmap((a), (s)) |
---|
1401 | n/a | #define DIRECT_MMAP(s) win32direct_mmap(s) |
---|
1402 | n/a | #endif /* WIN32 */ |
---|
1403 | n/a | #endif /* HAVE_MMAP */ |
---|
1404 | n/a | |
---|
1405 | n/a | #if HAVE_MMAP && HAVE_MREMAP |
---|
1406 | n/a | #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv)) |
---|
1407 | n/a | #else /* HAVE_MMAP && HAVE_MREMAP */ |
---|
1408 | n/a | #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL |
---|
1409 | n/a | #endif /* HAVE_MMAP && HAVE_MREMAP */ |
---|
1410 | n/a | |
---|
1411 | n/a | #if HAVE_MORECORE |
---|
1412 | n/a | #define CALL_MORECORE(S) MORECORE(S) |
---|
1413 | n/a | #else /* HAVE_MORECORE */ |
---|
1414 | n/a | #define CALL_MORECORE(S) MFAIL |
---|
1415 | n/a | #endif /* HAVE_MORECORE */ |
---|
1416 | n/a | |
---|
1417 | n/a | /* mstate bit set if contiguous morecore disabled or failed */ |
---|
1418 | n/a | #define USE_NONCONTIGUOUS_BIT (4U) |
---|
1419 | n/a | |
---|
1420 | n/a | /* segment bit set in create_mspace_with_base */ |
---|
1421 | n/a | #define EXTERN_BIT (8U) |
---|
1422 | n/a | |
---|
1423 | n/a | |
---|
1424 | n/a | /* --------------------------- Lock preliminaries ------------------------ */ |
---|
1425 | n/a | |
---|
1426 | n/a | #if USE_LOCKS |
---|
1427 | n/a | |
---|
1428 | n/a | /* |
---|
1429 | n/a | When locks are defined, there are up to two global locks: |
---|
1430 | n/a | |
---|
1431 | n/a | * If HAVE_MORECORE, morecore_mutex protects sequences of calls to |
---|
1432 | n/a | MORECORE. In many cases sys_alloc requires two calls, that should |
---|
1433 | n/a | not be interleaved with calls by other threads. This does not |
---|
1434 | n/a | protect against direct calls to MORECORE by other threads not |
---|
1435 | n/a | using this lock, so there is still code to cope as best we can with |
---|
1436 | n/a | interference. |
---|
1437 | n/a | |
---|
1438 | n/a | * magic_init_mutex ensures that mparams.magic and other |
---|
1439 | n/a | unique mparams values are initialized only once. |
---|
1440 | n/a | */ |
---|
1441 | n/a | |
---|
1442 | n/a | #if !defined(WIN32) && !defined(__OS2__) |
---|
1443 | n/a | /* By default use posix locks */ |
---|
1444 | n/a | #include <pthread.h> |
---|
1445 | n/a | #define MLOCK_T pthread_mutex_t |
---|
1446 | n/a | #define INITIAL_LOCK(l) pthread_mutex_init(l, NULL) |
---|
1447 | n/a | #define ACQUIRE_LOCK(l) pthread_mutex_lock(l) |
---|
1448 | n/a | #define RELEASE_LOCK(l) pthread_mutex_unlock(l) |
---|
1449 | n/a | |
---|
1450 | n/a | #if HAVE_MORECORE |
---|
1451 | n/a | static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER; |
---|
1452 | n/a | #endif /* HAVE_MORECORE */ |
---|
1453 | n/a | |
---|
1454 | n/a | static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER; |
---|
1455 | n/a | |
---|
1456 | n/a | #elif defined(__OS2__) |
---|
1457 | n/a | #define MLOCK_T HMTX |
---|
1458 | n/a | #define INITIAL_LOCK(l) DosCreateMutexSem(0, l, 0, FALSE) |
---|
1459 | n/a | #define ACQUIRE_LOCK(l) DosRequestMutexSem(*l, SEM_INDEFINITE_WAIT) |
---|
1460 | n/a | #define RELEASE_LOCK(l) DosReleaseMutexSem(*l) |
---|
1461 | n/a | #if HAVE_MORECORE |
---|
1462 | n/a | static MLOCK_T morecore_mutex; |
---|
1463 | n/a | #endif /* HAVE_MORECORE */ |
---|
1464 | n/a | static MLOCK_T magic_init_mutex; |
---|
1465 | n/a | |
---|
1466 | n/a | #else /* WIN32 */ |
---|
1467 | n/a | /* |
---|
1468 | n/a | Because lock-protected regions have bounded times, and there |
---|
1469 | n/a | are no recursive lock calls, we can use simple spinlocks. |
---|
1470 | n/a | */ |
---|
1471 | n/a | |
---|
1472 | n/a | #define MLOCK_T long |
---|
1473 | n/a | static int win32_acquire_lock (MLOCK_T *sl) { |
---|
1474 | n/a | for (;;) { |
---|
1475 | n/a | #ifdef InterlockedCompareExchangePointer |
---|
1476 | n/a | if (!InterlockedCompareExchange(sl, 1, 0)) |
---|
1477 | n/a | return 0; |
---|
1478 | n/a | #else /* Use older void* version */ |
---|
1479 | n/a | if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0)) |
---|
1480 | n/a | return 0; |
---|
1481 | n/a | #endif /* InterlockedCompareExchangePointer */ |
---|
1482 | n/a | Sleep (0); |
---|
1483 | n/a | } |
---|
1484 | n/a | } |
---|
1485 | n/a | |
---|
1486 | n/a | static void win32_release_lock (MLOCK_T *sl) { |
---|
1487 | n/a | InterlockedExchange (sl, 0); |
---|
1488 | n/a | } |
---|
1489 | n/a | |
---|
1490 | n/a | #define INITIAL_LOCK(l) *(l)=0 |
---|
1491 | n/a | #define ACQUIRE_LOCK(l) win32_acquire_lock(l) |
---|
1492 | n/a | #define RELEASE_LOCK(l) win32_release_lock(l) |
---|
1493 | n/a | #if HAVE_MORECORE |
---|
1494 | n/a | static MLOCK_T morecore_mutex; |
---|
1495 | n/a | #endif /* HAVE_MORECORE */ |
---|
1496 | n/a | static MLOCK_T magic_init_mutex; |
---|
1497 | n/a | #endif /* WIN32 */ |
---|
1498 | n/a | |
---|
1499 | n/a | #define USE_LOCK_BIT (2U) |
---|
1500 | n/a | #else /* USE_LOCKS */ |
---|
1501 | n/a | #define USE_LOCK_BIT (0U) |
---|
1502 | n/a | #define INITIAL_LOCK(l) |
---|
1503 | n/a | #endif /* USE_LOCKS */ |
---|
1504 | n/a | |
---|
1505 | n/a | #if USE_LOCKS && HAVE_MORECORE |
---|
1506 | n/a | #define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex); |
---|
1507 | n/a | #define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex); |
---|
1508 | n/a | #else /* USE_LOCKS && HAVE_MORECORE */ |
---|
1509 | n/a | #define ACQUIRE_MORECORE_LOCK() |
---|
1510 | n/a | #define RELEASE_MORECORE_LOCK() |
---|
1511 | n/a | #endif /* USE_LOCKS && HAVE_MORECORE */ |
---|
1512 | n/a | |
---|
1513 | n/a | #if USE_LOCKS |
---|
1514 | n/a | #define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex); |
---|
1515 | n/a | #define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex); |
---|
1516 | n/a | #else /* USE_LOCKS */ |
---|
1517 | n/a | #define ACQUIRE_MAGIC_INIT_LOCK() |
---|
1518 | n/a | #define RELEASE_MAGIC_INIT_LOCK() |
---|
1519 | n/a | #endif /* USE_LOCKS */ |
---|
1520 | n/a | |
---|
1521 | n/a | |
---|
1522 | n/a | /* ----------------------- Chunk representations ------------------------ */ |
---|
1523 | n/a | |
---|
1524 | n/a | /* |
---|
1525 | n/a | (The following includes lightly edited explanations by Colin Plumb.) |
---|
1526 | n/a | |
---|
1527 | n/a | The malloc_chunk declaration below is misleading (but accurate and |
---|
1528 | n/a | necessary). It declares a "view" into memory allowing access to |
---|
1529 | n/a | necessary fields at known offsets from a given base. |
---|
1530 | n/a | |
---|
1531 | n/a | Chunks of memory are maintained using a `boundary tag' method as |
---|
1532 | n/a | originally described by Knuth. (See the paper by Paul Wilson |
---|
1533 | n/a | ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such |
---|
1534 | n/a | techniques.) Sizes of free chunks are stored both in the front of |
---|
1535 | n/a | each chunk and at the end. This makes consolidating fragmented |
---|
1536 | n/a | chunks into bigger chunks fast. The head fields also hold bits |
---|
1537 | n/a | representing whether chunks are free or in use. |
---|
1538 | n/a | |
---|
1539 | n/a | Here are some pictures to make it clearer. They are "exploded" to |
---|
1540 | n/a | show that the state of a chunk can be thought of as extending from |
---|
1541 | n/a | the high 31 bits of the head field of its header through the |
---|
1542 | n/a | prev_foot and PINUSE_BIT bit of the following chunk header. |
---|
1543 | n/a | |
---|
1544 | n/a | A chunk that's in use looks like: |
---|
1545 | n/a | |
---|
1546 | n/a | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1547 | n/a | | Size of previous chunk (if P = 0) | |
---|
1548 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1549 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P| |
---|
1550 | n/a | | Size of this chunk 1| +-+ |
---|
1551 | n/a | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1552 | n/a | | | |
---|
1553 | n/a | +- -+ |
---|
1554 | n/a | | | |
---|
1555 | n/a | +- -+ |
---|
1556 | n/a | | : |
---|
1557 | n/a | +- size - sizeof(size_t) available payload bytes -+ |
---|
1558 | n/a | : | |
---|
1559 | n/a | chunk-> +- -+ |
---|
1560 | n/a | | | |
---|
1561 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1562 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1| |
---|
1563 | n/a | | Size of next chunk (may or may not be in use) | +-+ |
---|
1564 | n/a | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1565 | n/a | |
---|
1566 | n/a | And if it's free, it looks like this: |
---|
1567 | n/a | |
---|
1568 | n/a | chunk-> +- -+ |
---|
1569 | n/a | | User payload (must be in use, or we would have merged!) | |
---|
1570 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1571 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P| |
---|
1572 | n/a | | Size of this chunk 0| +-+ |
---|
1573 | n/a | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1574 | n/a | | Next pointer | |
---|
1575 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1576 | n/a | | Prev pointer | |
---|
1577 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1578 | n/a | | : |
---|
1579 | n/a | +- size - sizeof(struct chunk) unused bytes -+ |
---|
1580 | n/a | : | |
---|
1581 | n/a | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1582 | n/a | | Size of this chunk | |
---|
1583 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1584 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0| |
---|
1585 | n/a | | Size of next chunk (must be in use, or we would have merged)| +-+ |
---|
1586 | n/a | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1587 | n/a | | : |
---|
1588 | n/a | +- User payload -+ |
---|
1589 | n/a | : | |
---|
1590 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1591 | n/a | |0| |
---|
1592 | n/a | +-+ |
---|
1593 | n/a | Note that since we always merge adjacent free chunks, the chunks |
---|
1594 | n/a | adjacent to a free chunk must be in use. |
---|
1595 | n/a | |
---|
1596 | n/a | Given a pointer to a chunk (which can be derived trivially from the |
---|
1597 | n/a | payload pointer) we can, in O(1) time, find out whether the adjacent |
---|
1598 | n/a | chunks are free, and if so, unlink them from the lists that they |
---|
1599 | n/a | are on and merge them with the current chunk. |
---|
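  For example (a sketch using the macros defined later in this file):

    mchunkptr p    = mem2chunk(mem);     /* chunk header from a user pointer */
    mchunkptr next = next_chunk(p);      /* forward, using this chunk's size */
    if (!pinuse(p)) {
      mchunkptr prev = prev_chunk(p);    /* backward, using prev_foot, which
                                            is valid here because the
                                            previous chunk is free */
    }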
1600 | n/a | |
---|
1601 | n/a | Chunks always begin on even word boundaries, so the mem portion |
---|
1602 | n/a | (which is returned to the user) is also on an even word boundary, and |
---|
1603 | n/a | thus at least double-word aligned. |
---|
1604 | n/a | |
---|
1605 | n/a | The P (PINUSE_BIT) bit, stored in the unused low-order bit of the |
---|
1606 | n/a | chunk size (which is always a multiple of two words), is an in-use |
---|
1607 | n/a | bit for the *previous* chunk. If that bit is *clear*, then the |
---|
1608 | n/a | word before the current chunk size contains the previous chunk |
---|
1609 | n/a | size, and can be used to find the front of the previous chunk. |
---|
1610 | n/a | The very first chunk allocated always has this bit set, preventing |
---|
1611 | n/a | access to non-existent (or non-owned) memory. If pinuse is set for |
---|
1612 | n/a | any given chunk, then you CANNOT determine the size of the |
---|
1613 | n/a | previous chunk, and might even get a memory addressing fault when |
---|
1614 | n/a | trying to do so. |
---|
1615 | n/a | |
---|
1616 | n/a | The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of |
---|
1617 | n/a | the chunk size redundantly records whether the current chunk is |
---|
1618 | n/a | inuse. This redundancy enables usage checks within free and realloc, |
---|
1619 | n/a | and reduces indirection when freeing and consolidating chunks. |
---|
1620 | n/a | |
---|
1621 | n/a | Each freshly allocated chunk must have both cinuse and pinuse set. |
---|
1622 | n/a | That is, each allocated chunk borders either a previously allocated |
---|
1623 | n/a | and still in-use chunk, or the base of its memory arena. This is |
---|
1624 | n/a | ensured by making all allocations from the `lowest' part of any |
---|
1625 | n/a | found chunk. Further, no free chunk physically borders another one, |
---|
1626 | n/a | so each free chunk is known to be preceded and followed by either |
---|
1627 | n/a | inuse chunks or the ends of memory. |
---|
1628 | n/a | |
---|
1629 | n/a | Note that the `foot' of the current chunk is actually represented |
---|
1630 | n/a | as the prev_foot of the NEXT chunk. This makes it easier to |
---|
1631 | n/a | deal with alignments etc but can be very confusing when trying |
---|
1632 | n/a | to extend or adapt this code. |
---|
1633 | n/a | |
---|
1634 | n/a | The exceptions to all this are |
---|
1635 | n/a | |
---|
1636 | n/a | 1. The special chunk `top' is the top-most available chunk (i.e., |
---|
1637 | n/a | the one bordering the end of available memory). It is treated |
---|
1638 | n/a | specially. Top is never included in any bin, is used only if |
---|
1639 | n/a | no other chunk is available, and is released back to the |
---|
1640 | n/a | system if it is very large (see M_TRIM_THRESHOLD). In effect, |
---|
1641 | n/a | the top chunk is treated as larger (and thus less well |
---|
1642 | n/a | fitting) than any other available chunk. The top chunk |
---|
1643 | n/a | doesn't update its trailing size field since there is no next |
---|
1644 | n/a | contiguous chunk that would have to index off it. However, |
---|
1645 | n/a | space is still allocated for it (TOP_FOOT_SIZE) to enable |
---|
1646 | n/a | separation or merging when space is extended. |
---|
1647 | n/a | |
---|
1648 | n/a | 2. Chunks allocated via mmap, which have the lowest-order bit |
---|
1649 | n/a | (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set |
---|
1650 | n/a | PINUSE_BIT in their head fields. Because they are allocated |
---|
1651 | n/a | one-by-one, each must carry its own prev_foot field, which is |
---|
1652 | n/a | also used to hold the offset this chunk has within its mmapped |
---|
1653 | n/a | region, which is needed to preserve alignment. Each mmapped |
---|
1654 | n/a | chunk is trailed by the first two fields of a fake next-chunk |
---|
1655 | n/a | for sake of usage checks. |
---|
1656 | n/a | |
---|
1657 | n/a | */ |
---|
1658 | n/a | |
---|
1659 | n/a | struct malloc_chunk { |
---|
1660 | n/a | size_t prev_foot; /* Size of previous chunk (if free). */ |
---|
1661 | n/a | size_t head; /* Size and inuse bits. */ |
---|
1662 | n/a | struct malloc_chunk* fd; /* double links -- used only if free. */ |
---|
1663 | n/a | struct malloc_chunk* bk; |
---|
1664 | n/a | }; |
---|
1665 | n/a | |
---|
1666 | n/a | typedef struct malloc_chunk mchunk; |
---|
1667 | n/a | typedef struct malloc_chunk* mchunkptr; |
---|
1668 | n/a | typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */ |
---|
1669 | n/a | typedef size_t bindex_t; /* Described below */ |
---|
1670 | n/a | typedef unsigned int binmap_t; /* Described below */ |
---|
1671 | n/a | typedef unsigned int flag_t; /* The type of various bit flag sets */ |
---|
1672 | n/a | |
---|
1673 | n/a | /* ------------------- Chunks sizes and alignments ----------------------- */ |
---|
1674 | n/a | |
---|
1675 | n/a | #define MCHUNK_SIZE (sizeof(mchunk)) |
---|
1676 | n/a | |
---|
1677 | n/a | #if FOOTERS |
---|
1678 | n/a | #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES) |
---|
1679 | n/a | #else /* FOOTERS */ |
---|
1680 | n/a | #define CHUNK_OVERHEAD (SIZE_T_SIZE) |
---|
1681 | n/a | #endif /* FOOTERS */ |
---|
1682 | n/a | |
---|
1683 | n/a | /* MMapped chunks need a second word of overhead ... */ |
---|
1684 | n/a | #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES) |
---|
1685 | n/a | /* ... and additional padding for fake next-chunk at foot */ |
---|
1686 | n/a | #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES) |
---|
1687 | n/a | |
---|
1688 | n/a | /* The smallest size we can malloc is an aligned minimal chunk */ |
---|
1689 | n/a | #define MIN_CHUNK_SIZE\ |
---|
1690 | n/a | ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK) |
---|
1691 | n/a | |
---|
1692 | n/a | /* conversion from malloc headers to user pointers, and back */ |
---|
1693 | n/a | #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES)) |
---|
1694 | n/a | #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES)) |
---|
1695 | n/a | /* chunk associated with aligned address A */ |
---|
1696 | n/a | #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A))) |
---|
1697 | n/a | |
---|
1698 | n/a | /* Bounds on request (not chunk) sizes. */ |
---|
1699 | n/a | #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2) |
---|
1700 | n/a | #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE) |
---|
1701 | n/a | |
---|
1702 | n/a | /* pad request bytes into a usable size */ |
---|
1703 | n/a | #define pad_request(req) \ |
---|
1704 | n/a | (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK) |
---|
1705 | n/a | |
---|
1706 | n/a | /* pad request, checking for minimum (but not maximum) */ |
---|
1707 | n/a | #define request2size(req) \ |
---|
1708 | n/a | (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req)) |
---|
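/* For example, assuming 4-byte size_t and pointers, 8-byte alignment, and
   FOOTERS disabled (so CHUNK_OVERHEAD == 4): request2size(1) == 16
   (MIN_CHUNK_SIZE) and request2size(13) == (13 + 4 + 7) & ~7 == 24. */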
1709 | n/a | |
---|
1710 | n/a | |
---|
1711 | n/a | /* ------------------ Operations on head and foot fields ----------------- */ |
---|
1712 | n/a | |
---|
1713 | n/a | /* |
---|
1714 | n/a | The head field of a chunk is or'ed with PINUSE_BIT when the previous |
---|
1715 | n/a | adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in |
---|
1716 | n/a | use. If the chunk was obtained with mmap, the prev_foot field has |
---|
1717 | n/a | IS_MMAPPED_BIT set, and its remaining bits hold the offset of the base |
---|
1718 | n/a | of the chunk from the base of the mmapped region. |
---|
1719 | n/a | */ |
---|
1720 | n/a | |
---|
1721 | n/a | #define PINUSE_BIT (SIZE_T_ONE) |
---|
1722 | n/a | #define CINUSE_BIT (SIZE_T_TWO) |
---|
1723 | n/a | #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT) |
---|
1724 | n/a | |
---|
1725 | n/a | /* Head value for fenceposts */ |
---|
1726 | n/a | #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE) |
---|
1727 | n/a | |
---|
1728 | n/a | /* extraction of fields from head words */ |
---|
1729 | n/a | #define cinuse(p) ((p)->head & CINUSE_BIT) |
---|
1730 | n/a | #define pinuse(p) ((p)->head & PINUSE_BIT) |
---|
1731 | n/a | #define chunksize(p) ((p)->head & ~(INUSE_BITS)) |
---|
1732 | n/a | |
---|
1733 | n/a | #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT) |
---|
1734 | n/a | #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT) |
---|
1735 | n/a | |
---|
1736 | n/a | /* Treat space at ptr +/- offset as a chunk */ |
---|
1737 | n/a | #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s))) |
---|
1738 | n/a | #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s))) |
---|
1739 | n/a | |
---|
1740 | n/a | /* Ptr to next or previous physical malloc_chunk. */ |
---|
1741 | n/a | #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS))) |
---|
1742 | n/a | #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) )) |
---|
1743 | n/a | |
---|
1744 | n/a | /* extract next chunk's pinuse bit */ |
---|
1745 | n/a | #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT) |
---|
1746 | n/a | |
---|
1747 | n/a | /* Get/set size at footer */ |
---|
1748 | n/a | #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot) |
---|
1749 | n/a | #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s)) |
---|
1750 | n/a | |
---|
1751 | n/a | /* Set size, pinuse bit, and foot */ |
---|
1752 | n/a | #define set_size_and_pinuse_of_free_chunk(p, s)\ |
---|
1753 | n/a | ((p)->head = (s|PINUSE_BIT), set_foot(p, s)) |
---|
1754 | n/a | |
---|
1755 | n/a | /* Set size, pinuse bit, foot, and clear next pinuse */ |
---|
1756 | n/a | #define set_free_with_pinuse(p, s, n)\ |
---|
1757 | n/a | (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s)) |
---|
1758 | n/a | |
---|
1759 | n/a | #define is_mmapped(p)\ |
---|
1760 | n/a | (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT)) |
---|
1761 | n/a | |
---|
1762 | n/a | /* Get the internal overhead associated with chunk p */ |
---|
1763 | n/a | #define overhead_for(p)\ |
---|
1764 | n/a | (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD) |
---|
1765 | n/a | |
---|
1766 | n/a | /* Return true if malloced space is not necessarily cleared */ |
---|
1767 | n/a | #if MMAP_CLEARS |
---|
1768 | n/a | #define calloc_must_clear(p) (!is_mmapped(p)) |
---|
1769 | n/a | #else /* MMAP_CLEARS */ |
---|
1770 | n/a | #define calloc_must_clear(p) (1) |
---|
1771 | n/a | #endif /* MMAP_CLEARS */ |
---|
1772 | n/a | |
---|
1773 | n/a | /* ---------------------- Overlaid data structures ----------------------- */ |
---|
1774 | n/a | |
---|
1775 | n/a | /* |
---|
1776 | n/a | When chunks are not in use, they are treated as nodes of either |
---|
1777 | n/a | lists or trees. |
---|
1778 | n/a | |
---|
1779 | n/a | "Small" chunks are stored in circular doubly-linked lists, and look |
---|
1780 | n/a | like this: |
---|
1781 | n/a | |
---|
1782 | n/a | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1783 | n/a | | Size of previous chunk | |
---|
1784 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1785 | n/a | `head:' | Size of chunk, in bytes |P| |
---|
1786 | n/a | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1787 | n/a | | Forward pointer to next chunk in list | |
---|
1788 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1789 | n/a | | Back pointer to previous chunk in list | |
---|
1790 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1791 | n/a | | Unused space (may be 0 bytes long) . |
---|
1792 | n/a | . . |
---|
1793 | n/a | . | |
---|
1794 | n/a | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1795 | n/a | `foot:' | Size of chunk, in bytes | |
---|
1796 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1797 | n/a | |
---|
1798 | n/a | Larger chunks are kept in a form of bitwise digital trees (aka |
---|
1799 | n/a | tries) keyed on chunksizes. Because malloc_tree_chunks are only for |
---|
1800 | n/a | free chunks greater than 256 bytes, their size doesn't impose any |
---|
1801 | n/a | constraints on user chunk sizes. Each node looks like: |
---|
1802 | n/a | |
---|
1803 | n/a | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1804 | n/a | | Size of previous chunk | |
---|
1805 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1806 | n/a | `head:' | Size of chunk, in bytes |P| |
---|
1807 | n/a | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1808 | n/a | | Forward pointer to next chunk of same size | |
---|
1809 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1810 | n/a | | Back pointer to previous chunk of same size | |
---|
1811 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1812 | n/a | | Pointer to left child (child[0]) | |
---|
1813 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1814 | n/a | | Pointer to right child (child[1]) | |
---|
1815 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1816 | n/a | | Pointer to parent | |
---|
1817 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1818 | n/a | | bin index of this chunk | |
---|
1819 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1820 | n/a | | Unused space . |
---|
1821 | n/a | . | |
---|
1822 | n/a | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1823 | n/a | `foot:' | Size of chunk, in bytes | |
---|
1824 | n/a | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
---|
1825 | n/a | |
---|
1826 | n/a | Each tree holding treenodes is a tree of unique chunk sizes. Chunks |
---|
1827 | n/a | of the same size are arranged in a circularly-linked list, with only |
---|
1828 | n/a | the oldest chunk (the next to be used, in our FIFO ordering) |
---|
1829 | n/a | actually in the tree. (Tree members are distinguished by a non-null |
---|
1830 | n/a | parent pointer.) If a chunk with the same size as an existing node |
---|
1831 | n/a | is inserted, it is linked off the existing node using pointers that |
---|
1832 | n/a | work in the same way as fd/bk pointers of small chunks. |
---|
1833 | n/a | |
---|
1834 | n/a | Each tree contains a power of 2 sized range of chunk sizes (the |
---|
1835 | n/a | smallest is 0x100 <= x < 0x180), which is divided in half at each |
---|
1836 | n/a | tree level, with the chunks in the smaller half of the range (0x100 |
---|
1837 | n/a | <= x < 0x140 for the top node) in the left subtree and the larger |
---|
1838 | n/a | half (0x140 <= x < 0x180) in the right subtree. This is, of course, |
---|
1839 | n/a | done by inspecting individual bits. |
---|
1840 | n/a | |
---|
1841 | n/a | Using these rules, each node's left subtree contains all smaller |
---|
1842 | n/a | sizes than its right subtree. However, the node at the root of each |
---|
1843 | n/a | subtree has no particular ordering relationship to either. (The |
---|
1844 | n/a | dividing line between the subtree sizes is based on trie relation.) |
---|
1845 | n/a | If we remove the last chunk of a given size from the interior of the |
---|
1846 | n/a | tree, we need to replace it with a leaf node. The tree ordering |
---|
1847 | n/a | rules permit a node to be replaced by any leaf below it. |
---|
1848 | n/a | |
---|
1849 | n/a | The smallest chunk in a tree (a common operation in a best-fit |
---|
1850 | n/a | allocator) can be found by walking a path to the leftmost leaf in |
---|
1851 | n/a | the tree. Unlike a usual binary tree, where we follow left child |
---|
1852 | n/a | pointers until we reach a null, here we follow the right child |
---|
1853 | n/a | pointer any time the left one is null, until we reach a leaf with |
---|
1854 | n/a | both child pointers null. The smallest chunk in the tree will be |
---|
1855 | n/a | somewhere along that path. |
---|
1856 | n/a | |
---|
1857 | n/a | The worst case number of steps to add, find, or remove a node is |
---|
1858 | n/a | bounded by the number of bits differentiating chunks within |
---|
1859 | n/a | bins. Under current bin calculations, this ranges from 6 up to 21 |
---|
1860 | n/a | (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case |
---|
1861 | n/a | is of course much better. |
---|
1862 | n/a | */ |
---|
1863 | n/a | |
---|
1864 | n/a | struct malloc_tree_chunk { |
---|
1865 | n/a | /* The first four fields must be compatible with malloc_chunk */ |
---|
1866 | n/a | size_t prev_foot; |
---|
1867 | n/a | size_t head; |
---|
1868 | n/a | struct malloc_tree_chunk* fd; |
---|
1869 | n/a | struct malloc_tree_chunk* bk; |
---|
1870 | n/a | |
---|
1871 | n/a | struct malloc_tree_chunk* child[2]; |
---|
1872 | n/a | struct malloc_tree_chunk* parent; |
---|
1873 | n/a | bindex_t index; |
---|
1874 | n/a | }; |
---|
1875 | n/a | |
---|
1876 | n/a | typedef struct malloc_tree_chunk tchunk; |
---|
1877 | n/a | typedef struct malloc_tree_chunk* tchunkptr; |
---|
1878 | n/a | typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */ |
---|
1879 | n/a | |
---|
1880 | n/a | /* A little helper macro for trees */ |
---|
1881 | n/a | #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1]) |
---|
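/* A sketch of the "leftmost path" walk described in the comment above;
   `root' stands for a hypothetical treebin root pointer.  The smallest
   chunk in the tree lies somewhere along the visited path:

     tchunkptr t = root;
     while (leftmost_child(t) != 0)
       t = leftmost_child(t);
*/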
1882 | n/a | |
---|
1883 | n/a | /* ----------------------------- Segments -------------------------------- */ |
---|
1884 | n/a | |
---|
1885 | n/a | /* |
---|
1886 | n/a | Each malloc space may include non-contiguous segments, held in a |
---|
1887 | n/a | list headed by an embedded malloc_segment record representing the |
---|
1888 | n/a | top-most space. Segments also include flags holding properties of |
---|
1889 | n/a | the space. Large chunks that are directly allocated by mmap are not |
---|
1890 | n/a | included in this list. They are instead independently created and |
---|
1891 | n/a | destroyed without otherwise keeping track of them. |
---|
1892 | n/a | |
---|
1893 | n/a | Segment management mainly comes into play for spaces allocated by |
---|
1894 | n/a | MMAP. Any call to MMAP might or might not return memory that is |
---|
1895 | n/a | adjacent to an existing segment. MORECORE normally contiguously |
---|
1896 | n/a | extends the current space, so this space is almost always adjacent, |
---|
1897 | n/a | which is simpler and faster to deal with. (This is why MORECORE is |
---|
1898 | n/a | used preferentially to MMAP when both are available -- see |
---|
1899 | n/a | sys_alloc.) When allocating using MMAP, we don't use any of the |
---|
1900 | n/a | hinting mechanisms (inconsistently) supported in various |
---|
1901 | n/a | implementations of unix mmap, or distinguish reserving from |
---|
1902 | n/a | committing memory. Instead, we just ask for space, and exploit |
---|
1903 | n/a | contiguity when we get it. It is probably possible to do |
---|
1904 | n/a | better than this on some systems, but no general scheme seems |
---|
1905 | n/a | to be significantly better. |
---|
1906 | n/a | |
---|
1907 | n/a | Management entails a simpler variant of the consolidation scheme |
---|
1908 | n/a | used for chunks to reduce fragmentation -- new adjacent memory is |
---|
1909 | n/a | normally prepended or appended to an existing segment. However, |
---|
1910 | n/a | there are limitations compared to chunk consolidation that mostly |
---|
1911 | n/a | reflect the fact that segment processing is relatively infrequent |
---|
1912 | n/a | (occurring only when getting memory from system) and that we |
---|
1913 | n/a | don't expect to have huge numbers of segments: |
---|
1914 | n/a | |
---|
1915 | n/a | * Segments are not indexed, so traversal requires linear scans. (It |
---|
1916 | n/a | would be possible to index these, but is not worth the extra |
---|
1917 | n/a | overhead and complexity for most programs on most platforms.) |
---|
1918 | n/a | * New segments are only appended to old ones when holding top-most |
---|
1919 | n/a | memory; if they cannot be prepended to others, they are held in |
---|
1920 | n/a | different segments. |
---|
1921 | n/a | |
---|
1922 | n/a | Except for the top-most segment of an mstate, each segment record |
---|
1923 | n/a | is kept at the tail of its segment. Segments are added by pushing |
---|
1924 | n/a | segment records onto the list headed by &mstate.seg for the |
---|
1925 | n/a | containing mstate. |
---|
1926 | n/a | |
---|
1927 | n/a | Segment flags control allocation/merge/deallocation policies: |
---|
1928 | n/a | * If EXTERN_BIT set, then we did not allocate this segment, |
---|
1929 | n/a | and so should not try to deallocate or merge with others. |
---|
1930 | n/a | (This currently holds only for the initial segment passed |
---|
1931 | n/a | into create_mspace_with_base.) |
---|
1932 | n/a | * If IS_MMAPPED_BIT set, the segment may be merged with |
---|
1933 | n/a | other surrounding mmapped segments and trimmed/de-allocated |
---|
1934 | n/a | using munmap. |
---|
1935 | n/a | * If neither bit is set, then the segment was obtained using |
---|
1936 | n/a | MORECORE so can be merged with surrounding MORECORE'd segments |
---|
1937 | n/a | and deallocated/trimmed using MORECORE with negative arguments. |
---|
1938 | n/a | */ |
---|
1939 | n/a | |
---|
1940 | n/a | struct malloc_segment { |
---|
1941 | n/a | char* base; /* base address */ |
---|
1942 | n/a | size_t size; /* allocated size */ |
---|
1943 | n/a | struct malloc_segment* next; /* ptr to next segment */ |
---|
1944 | n/a | #if FFI_MMAP_EXEC_WRIT |
---|
1945 | n/a | /* The mmap magic is supposed to store the address of the executable |
---|
1946 | n/a | segment at the very end of the requested block. */ |
---|
1947 | n/a | |
---|
1948 | n/a | # define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t))) |
---|
1949 | n/a | |
---|
1950 | n/a | /* We can only merge segments if their corresponding executable |
---|
1951 | n/a | segments are at identical offsets. */ |
---|
1952 | n/a | # define check_segment_merge(S,b,s) \ |
---|
1953 | n/a | (mmap_exec_offset((b),(s)) == (S)->exec_offset) |
---|
1954 | n/a | |
---|
1955 | n/a | # define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset) |
---|
1956 | n/a | # define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset) |
---|
1957 | n/a | |
---|
1958 | n/a | /* The removal of sflags only works with HAVE_MORECORE == 0. */ |
---|
1959 | n/a | |
---|
1960 | n/a | # define get_segment_flags(S) (IS_MMAPPED_BIT) |
---|
1961 | n/a | # define set_segment_flags(S,v) \ |
---|
1962 | n/a | (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) : \ |
---|
1963 | n/a | (((S)->exec_offset = \ |
---|
1964 | n/a | mmap_exec_offset((S)->base, (S)->size)), \ |
---|
1965 | n/a | (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) != \ |
---|
1966 | n/a | (S)->exec_offset) ? (ABORT, (v)) : \ |
---|
1967 | n/a | (mmap_exec_offset((S)->base, (S)->size) = 0), (v))) |
---|
1968 | n/a | |
---|
1969 | n/a | /* We use an offset here, instead of a pointer, because then, when |
---|
1970 | n/a | base changes, we don't have to modify this. On architectures |
---|
1971 | n/a | with segmented addresses, this might not work. */ |
---|
1972 | n/a | ptrdiff_t exec_offset; |
---|
1973 | n/a | #else |
---|
1974 | n/a | |
---|
1975 | n/a | # define get_segment_flags(S) ((S)->sflags) |
---|
1976 | n/a | # define set_segment_flags(S,v) ((S)->sflags = (v)) |
---|
1977 | n/a | # define check_segment_merge(S,b,s) (1) |
---|
1978 | n/a | |
---|
1979 | n/a | flag_t sflags; /* mmap and extern flag */ |
---|
1980 | n/a | #endif |
---|
1981 | n/a | }; |
---|
1982 | n/a | |
---|
1983 | n/a | #define is_mmapped_segment(S) (get_segment_flags(S) & IS_MMAPPED_BIT) |
---|
1984 | n/a | #define is_extern_segment(S) (get_segment_flags(S) & EXTERN_BIT) |
---|
1985 | n/a | |
---|
1986 | n/a | typedef struct malloc_segment msegment; |
---|
1987 | n/a | typedef struct malloc_segment* msegmentptr; |
---|
1988 | n/a | |
---|
1989 | n/a | /* ---------------------------- malloc_state ----------------------------- */ |
---|
1990 | n/a | |
---|
1991 | n/a | /* |
---|
1992 | n/a | A malloc_state holds all of the bookkeeping for a space. |
---|
1993 | n/a | The main fields are: |
---|
1994 | n/a | |
---|
1995 | n/a | Top |
---|
1996 | n/a | The topmost chunk of the currently active segment. Its size is |
---|
1997 | n/a | cached in topsize. The actual size of topmost space is |
---|
1998 | n/a | topsize+TOP_FOOT_SIZE, which includes space reserved for adding |
---|
1999 | n/a | fenceposts and segment records if necessary when getting more |
---|
2000 | n/a | space from the system. The size at which to autotrim top is |
---|
2001 | n/a | cached from mparams in trim_check, except that it is disabled if |
---|
2002 | n/a | an autotrim fails. |
---|
2003 | n/a | |
---|
2004 | n/a | Designated victim (dv) |
---|
2005 | n/a | This is the preferred chunk for servicing small requests that |
---|
2006 | n/a | don't have exact fits. It is normally the chunk split off most |
---|
2007 | n/a | recently to service another small request. Its size is cached in |
---|
2008 | n/a | dvsize. The link fields of this chunk are not maintained since it |
---|
2009 | n/a | is not kept in a bin. |
---|
2010 | n/a | |
---|
2011 | n/a | SmallBins |
---|
2012 | n/a | An array of bin headers for free chunks. These bins hold chunks |
---|
2013 | n/a | with sizes less than MIN_LARGE_SIZE bytes. Each bin contains |
---|
2014 | n/a | chunks of all the same size, spaced 8 bytes apart. To simplify |
---|
2015 | n/a | use in double-linked lists, each bin header acts as a malloc_chunk |
---|
2016 | n/a | pointing to the real first node, if it exists (else pointing to |
---|
2017 | n/a | itself). This avoids special-casing for headers. But to avoid |
---|
2018 | n/a | waste, we allocate only the fd/bk pointers of bins, and then use |
---|
2019 | n/a | repositioning tricks to treat these as the fields of a chunk. |
---|
2020 | n/a | |
---|
2021 | n/a | TreeBins |
---|
2022 | n/a | Treebins are pointers to the roots of trees holding a range of |
---|
2023 | n/a | sizes. There are 2 equally spaced treebins for each power of two |
---|
2024 | n/a | from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything |
---|
2025 | n/a | larger. |
---|
2026 | n/a | |
---|
2027 | n/a | Bin maps |
---|
2028 | n/a | There is one bit map for small bins ("smallmap") and one for |
---|
2029 | n/a | treebins ("treemap). Each bin sets its bit when non-empty, and |
---|
2030 | n/a | clears the bit when empty. Bit operations are then used to avoid |
---|
2031 | n/a | bin-by-bin searching -- nearly all "search" is done without ever |
---|
2032 | n/a | looking at bins that won't be selected. The bit maps |
---|
2033 | n/a | conservatively use 32 bits per map word, even on 64-bit systems. |
---|
2034 | n/a | For a good description of some of the bit-based techniques used |
---|
2035 | n/a | here, see Henry S. Warren Jr's book "Hacker's Delight" (and |
---|
2036 | n/a | supplement at http://hackersdelight.org/). Many of these are |
---|
2037 | n/a | intended to reduce the branchiness of paths through malloc etc, as |
---|
2038 | n/a | well as to reduce the number of memory locations read or written. |
---|
2039 | n/a | |
---|
2040 | n/a | Segments |
---|
2041 | n/a | A list of segments headed by an embedded malloc_segment record |
---|
2042 | n/a | representing the initial space. |
---|
2043 | n/a | |
---|
2044 | n/a | Address check support |
---|
2045 | n/a | The least_addr field is the least address ever obtained from |
---|
2046 | n/a | MORECORE or MMAP. Attempted frees and reallocs of any address less |
---|
2047 | n/a | than this are trapped (unless INSECURE is defined). |
---|
2048 | n/a | |
---|
2049 | n/a | Magic tag |
---|
2050 | n/a | A cross-check field that should always hold the same value as mparams.magic. |
---|
2051 | n/a | |
---|
2052 | n/a | Flags |
---|
2053 | n/a | Bits recording whether to use MMAP, locks, or contiguous MORECORE |
---|
2054 | n/a | |
---|
2055 | n/a | Statistics |
---|
2056 | n/a | Each space keeps track of current and maximum system memory |
---|
2057 | n/a | obtained via MORECORE or MMAP. |
---|
2058 | n/a | |
---|
2059 | n/a | Locking |
---|
2060 | n/a | If USE_LOCKS is defined, the "mutex" lock is acquired and released |
---|
2061 | n/a | around every public call using this mspace. |
---|
2062 | n/a | */ |
---|
2063 | n/a | |
---|
2064 | n/a | /* Bin types, widths and sizes */ |
---|
2065 | n/a | #define NSMALLBINS (32U) |
---|
2066 | n/a | #define NTREEBINS (32U) |
---|
2067 | n/a | #define SMALLBIN_SHIFT (3U) |
---|
2068 | n/a | #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT) |
---|
2069 | n/a | #define TREEBIN_SHIFT (8U) |
---|
2070 | n/a | #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT) |
---|
2071 | n/a | #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE) |
---|
2072 | n/a | #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD) |
---|
2073 | n/a | |
---|
2074 | n/a | struct malloc_state { |
---|
2075 | n/a | binmap_t smallmap; |
---|
2076 | n/a | binmap_t treemap; |
---|
2077 | n/a | size_t dvsize; |
---|
2078 | n/a | size_t topsize; |
---|
2079 | n/a | char* least_addr; |
---|
2080 | n/a | mchunkptr dv; |
---|
2081 | n/a | mchunkptr top; |
---|
2082 | n/a | size_t trim_check; |
---|
2083 | n/a | size_t magic; |
---|
2084 | n/a | mchunkptr smallbins[(NSMALLBINS+1)*2]; |
---|
2085 | n/a | tbinptr treebins[NTREEBINS]; |
---|
2086 | n/a | size_t footprint; |
---|
2087 | n/a | size_t max_footprint; |
---|
2088 | n/a | flag_t mflags; |
---|
2089 | n/a | #if USE_LOCKS |
---|
2090 | n/a | MLOCK_T mutex; /* locate lock among fields that rarely change */ |
---|
2091 | n/a | #endif /* USE_LOCKS */ |
---|
2092 | n/a | msegment seg; |
---|
2093 | n/a | }; |
---|
2094 | n/a | |
---|
2095 | n/a | typedef struct malloc_state* mstate; |
---|
2096 | n/a | |
---|
2097 | n/a | /* ------------- Global malloc_state and malloc_params ------------------- */ |
---|
2098 | n/a | |
---|
2099 | n/a | /* |
---|
2100 | n/a | malloc_params holds global properties, including those that can be |
---|
2101 | n/a | dynamically set using mallopt. There is a single instance, mparams, |
---|
2102 | n/a | initialized in init_mparams. |
---|
2103 | n/a | */ |
---|
2104 | n/a | |
---|
2105 | n/a | struct malloc_params { |
---|
2106 | n/a | size_t magic; |
---|
2107 | n/a | size_t page_size; |
---|
2108 | n/a | size_t granularity; |
---|
2109 | n/a | size_t mmap_threshold; |
---|
2110 | n/a | size_t trim_threshold; |
---|
2111 | n/a | flag_t default_mflags; |
---|
2112 | n/a | }; |
---|
2113 | n/a | |
---|
2114 | n/a | static struct malloc_params mparams; |
---|
2115 | n/a | |
---|
2116 | n/a | /* The global malloc_state used for all non-"mspace" calls */ |
---|
2117 | n/a | static struct malloc_state _gm_; |
---|
2118 | n/a | #define gm (&_gm_) |
---|
2119 | n/a | #define is_global(M) ((M) == &_gm_) |
---|
2120 | n/a | #define is_initialized(M) ((M)->top != 0) |
---|
2121 | n/a | |
---|
2122 | n/a | /* -------------------------- system alloc setup ------------------------- */ |
---|
2123 | n/a | |
---|
2124 | n/a | /* Operations on mflags */ |
---|
2125 | n/a | |
---|
2126 | n/a | #define use_lock(M) ((M)->mflags & USE_LOCK_BIT) |
---|
2127 | n/a | #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT) |
---|
2128 | n/a | #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT) |
---|
2129 | n/a | |
---|
2130 | n/a | #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT) |
---|
2131 | n/a | #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT) |
---|
2132 | n/a | #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT) |
---|
2133 | n/a | |
---|
2134 | n/a | #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT) |
---|
2135 | n/a | #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT) |
---|
2136 | n/a | |
---|
2137 | n/a | #define set_lock(M,L)\ |
---|
2138 | n/a | ((M)->mflags = (L)?\ |
---|
2139 | n/a | ((M)->mflags | USE_LOCK_BIT) :\ |
---|
2140 | n/a | ((M)->mflags & ~USE_LOCK_BIT)) |
---|
2141 | n/a | |
---|
2142 | n/a | /* page-align a size */ |
---|
2143 | n/a | #define page_align(S)\ |
---|
2144 | n/a | (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE)) |
---|
2145 | n/a | |
---|
2146 | n/a | /* granularity-align a size */ |
---|
2147 | n/a | #define granularity_align(S)\ |
---|
2148 | n/a | (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE)) |
---|
2149 | n/a | |
---|
2150 | n/a | #define is_page_aligned(S)\ |
---|
2151 | n/a | (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0) |
---|
2152 | n/a | #define is_granularity_aligned(S)\ |
---|
2153 | n/a | (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0) |
---|
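/*
  Worked example (not from the original source): with a hypothetical 4096-byte
  page size, page_align rounds a size up to the next page boundary; because it
  adds page_size before masking, an exact multiple is pushed to the following
  page.  granularity_align behaves the same way for mparams.granularity.
*/
#if 0 /* illustrative only */
static void example_alignment(void) {
  /* assumes mparams.page_size == 4096 */
  assert(page_align(1)    == 4096);
  assert(page_align(4095) == 4096);
  assert(page_align(4096) == 8192);   /* exact multiple rounds up a full page */
  assert(is_page_aligned(8192));
  assert(!is_page_aligned(8193));
}
#endif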
2154 | n/a | |
---|
2155 | n/a | /* True if segment S holds address A */ |
---|
2156 | n/a | #define segment_holds(S, A)\ |
---|
2157 | n/a | ((char*)(A) >= S->base && (char*)(A) < S->base + S->size) |
---|
2158 | n/a | |
---|
2159 | n/a | /* Return segment holding given address */ |
---|
2160 | n/a | static msegmentptr segment_holding(mstate m, char* addr) { |
---|
2161 | n/a | msegmentptr sp = &m->seg; |
---|
2162 | n/a | for (;;) { |
---|
2163 | n/a | if (addr >= sp->base && addr < sp->base + sp->size) |
---|
2164 | n/a | return sp; |
---|
2165 | n/a | if ((sp = sp->next) == 0) |
---|
2166 | n/a | return 0; |
---|
2167 | n/a | } |
---|
2168 | n/a | } |
---|
2169 | n/a | |
---|
2170 | n/a | /* Return true if segment contains a segment link */ |
---|
2171 | n/a | static int has_segment_link(mstate m, msegmentptr ss) { |
---|
2172 | n/a | msegmentptr sp = &m->seg; |
---|
2173 | n/a | for (;;) { |
---|
2174 | n/a | if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size) |
---|
2175 | n/a | return 1; |
---|
2176 | n/a | if ((sp = sp->next) == 0) |
---|
2177 | n/a | return 0; |
---|
2178 | n/a | } |
---|
2179 | n/a | } |
---|
2180 | n/a | |
---|
2181 | n/a | #ifndef MORECORE_CANNOT_TRIM |
---|
2182 | n/a | #define should_trim(M,s) ((s) > (M)->trim_check) |
---|
2183 | n/a | #else /* MORECORE_CANNOT_TRIM */ |
---|
2184 | n/a | #define should_trim(M,s) (0) |
---|
2185 | n/a | #endif /* MORECORE_CANNOT_TRIM */ |
---|
2186 | n/a | |
---|
2187 | n/a | /* |
---|
2188 | n/a | TOP_FOOT_SIZE is padding at the end of a segment, including space |
---|
2189 | n/a | that may be needed to place segment records and fenceposts when new |
---|
2190 | n/a | noncontiguous segments are added. |
---|
2191 | n/a | */ |
---|
2192 | n/a | #define TOP_FOOT_SIZE\ |
---|
2193 | n/a | (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE) |
---|
2194 | n/a | |
---|
2195 | n/a | |
---|
2196 | n/a | /* ------------------------------- Hooks -------------------------------- */ |
---|
2197 | n/a | |
---|
2198 | n/a | /* |
---|
2199 | n/a | PREACTION should be defined to return 0 on success, and nonzero on |
---|
2200 | n/a | failure. If you are not using locking, you can redefine these to do |
---|
2201 | n/a | anything you like. |
---|
2202 | n/a | */ |
---|
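/*
  A minimal sketch (not part of this malloc) of the kind of redefinition the
  comment above allows: with USE_LOCKS disabled, PREACTION/POSTACTION could be
  pointed at instrumentation hooks, e.g. via
    -DPREACTION(M)=my_preaction(M) -DPOSTACTION(M)=my_postaction(M)
  The names below are hypothetical.
*/
#if 0 /* illustrative only */
static int  my_call_depth;                        /* hypothetical counter */
static int  my_preaction(mstate m)  { (void)m; ++my_call_depth; return 0; } /* 0 == success */
static void my_postaction(mstate m) { (void)m; --my_call_depth; }
#endif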
2203 | n/a | |
---|
2204 | n/a | #if USE_LOCKS |
---|
2205 | n/a | |
---|
2206 | n/a | /* Ensure locks are initialized */ |
---|
2207 | n/a | #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams()) |
---|
2208 | n/a | |
---|
2209 | n/a | #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0) |
---|
2210 | n/a | #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); } |
---|
2211 | n/a | #else /* USE_LOCKS */ |
---|
2212 | n/a | |
---|
2213 | n/a | #ifndef PREACTION |
---|
2214 | n/a | #define PREACTION(M) (0) |
---|
2215 | n/a | #endif /* PREACTION */ |
---|
2216 | n/a | |
---|
2217 | n/a | #ifndef POSTACTION |
---|
2218 | n/a | #define POSTACTION(M) |
---|
2219 | n/a | #endif /* POSTACTION */ |
---|
2220 | n/a | |
---|
2221 | n/a | #endif /* USE_LOCKS */ |
---|
2222 | n/a | |
---|
2223 | n/a | /* |
---|
2224 | n/a | CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses. |
---|
2225 | n/a | USAGE_ERROR_ACTION is triggered on detected bad frees and |
---|
2226 | n/a | reallocs. The argument p is an address that might have triggered the |
---|
2227 | n/a | fault. It is ignored by the two predefined actions, but might be |
---|
2228 | n/a | useful in custom actions that try to help diagnose errors. |
---|
2229 | n/a | */ |
---|
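/*
  Sketch of a custom action (not part of this malloc): a build could define
    -DUSAGE_ERROR_ACTION(m,p)=report_usage_error(m,p)
  to log the offending address before aborting, as the comment above suggests.
  The function name is hypothetical.
*/
#if 0 /* illustrative only */
static void report_usage_error(mstate m, void* p) {
  fprintf(stderr, "malloc usage error: bad free/realloc of %p (mstate %p)\n",
          p, (void*)m);
  ABORT;
}
#endif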
2230 | n/a | |
---|
2231 | n/a | #if PROCEED_ON_ERROR |
---|
2232 | n/a | |
---|
2233 | n/a | /* A count of the number of corruption errors causing resets */ |
---|
2234 | n/a | int malloc_corruption_error_count; |
---|
2235 | n/a | |
---|
2236 | n/a | /* default corruption action */ |
---|
2237 | n/a | static void reset_on_error(mstate m); |
---|
2238 | n/a | |
---|
2239 | n/a | #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m) |
---|
2240 | n/a | #define USAGE_ERROR_ACTION(m, p) |
---|
2241 | n/a | |
---|
2242 | n/a | #else /* PROCEED_ON_ERROR */ |
---|
2243 | n/a | |
---|
2244 | n/a | #ifndef CORRUPTION_ERROR_ACTION |
---|
2245 | n/a | #define CORRUPTION_ERROR_ACTION(m) ABORT |
---|
2246 | n/a | #endif /* CORRUPTION_ERROR_ACTION */ |
---|
2247 | n/a | |
---|
2248 | n/a | #ifndef USAGE_ERROR_ACTION |
---|
2249 | n/a | #define USAGE_ERROR_ACTION(m,p) ABORT |
---|
2250 | n/a | #endif /* USAGE_ERROR_ACTION */ |
---|
2251 | n/a | |
---|
2252 | n/a | #endif /* PROCEED_ON_ERROR */ |
---|
2253 | n/a | |
---|
2254 | n/a | /* -------------------------- Debugging setup ---------------------------- */ |
---|
2255 | n/a | |
---|
2256 | n/a | #if ! DEBUG |
---|
2257 | n/a | |
---|
2258 | n/a | #define check_free_chunk(M,P) |
---|
2259 | n/a | #define check_inuse_chunk(M,P) |
---|
2260 | n/a | #define check_malloced_chunk(M,P,N) |
---|
2261 | n/a | #define check_mmapped_chunk(M,P) |
---|
2262 | n/a | #define check_malloc_state(M) |
---|
2263 | n/a | #define check_top_chunk(M,P) |
---|
2264 | n/a | |
---|
2265 | n/a | #else /* DEBUG */ |
---|
2266 | n/a | #define check_free_chunk(M,P) do_check_free_chunk(M,P) |
---|
2267 | n/a | #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P) |
---|
2268 | n/a | #define check_top_chunk(M,P) do_check_top_chunk(M,P) |
---|
2269 | n/a | #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N) |
---|
2270 | n/a | #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P) |
---|
2271 | n/a | #define check_malloc_state(M) do_check_malloc_state(M) |
---|
2272 | n/a | |
---|
2273 | n/a | static void do_check_any_chunk(mstate m, mchunkptr p); |
---|
2274 | n/a | static void do_check_top_chunk(mstate m, mchunkptr p); |
---|
2275 | n/a | static void do_check_mmapped_chunk(mstate m, mchunkptr p); |
---|
2276 | n/a | static void do_check_inuse_chunk(mstate m, mchunkptr p); |
---|
2277 | n/a | static void do_check_free_chunk(mstate m, mchunkptr p); |
---|
2278 | n/a | static void do_check_malloced_chunk(mstate m, void* mem, size_t s); |
---|
2279 | n/a | static void do_check_tree(mstate m, tchunkptr t); |
---|
2280 | n/a | static void do_check_treebin(mstate m, bindex_t i); |
---|
2281 | n/a | static void do_check_smallbin(mstate m, bindex_t i); |
---|
2282 | n/a | static void do_check_malloc_state(mstate m); |
---|
2283 | n/a | static int bin_find(mstate m, mchunkptr x); |
---|
2284 | n/a | static size_t traverse_and_check(mstate m); |
---|
2285 | n/a | #endif /* DEBUG */ |
---|
2286 | n/a | |
---|
2287 | n/a | /* ---------------------------- Indexing Bins ---------------------------- */ |
---|
2288 | n/a | |
---|
2289 | n/a | #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS) |
---|
2290 | n/a | #define small_index(s) ((s) >> SMALLBIN_SHIFT) |
---|
2291 | n/a | #define small_index2size(i) ((i) << SMALLBIN_SHIFT) |
---|
2292 | n/a | #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE)) |
---|
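/*
  Example of the mapping (derived from the macros above, not from the original
  comments): with SMALLBIN_SHIFT == 3, a 32-byte chunk lands in smallbin index
  4 and small_index2size(4) recovers 32; is_small(s) holds exactly for chunk
  sizes below MIN_LARGE_SIZE (256).
*/
#if 0 /* illustrative only */
static void example_small_index(void) {
  assert(small_index(32) == 4);
  assert(small_index2size(4) == 32);
  assert(is_small(255) && !is_small(256));
}
#endif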
2293 | n/a | |
---|
2294 | n/a | /* addressing by index. See above about smallbin repositioning */ |
---|
2295 | n/a | #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1]))) |
---|
2296 | n/a | #define treebin_at(M,i) (&((M)->treebins[i])) |
---|
2297 | n/a | |
---|
2298 | n/a | /* assign tree index for size S to variable I */ |
---|
2299 | n/a | #if defined(__GNUC__) && defined(i386) |
---|
2300 | n/a | #define compute_tree_index(S, I)\ |
---|
2301 | n/a | {\ |
---|
2302 | n/a | size_t X = S >> TREEBIN_SHIFT;\ |
---|
2303 | n/a | if (X == 0)\ |
---|
2304 | n/a | I = 0;\ |
---|
2305 | n/a | else if (X > 0xFFFF)\ |
---|
2306 | n/a | I = NTREEBINS-1;\ |
---|
2307 | n/a | else {\ |
---|
2308 | n/a | unsigned int K;\ |
---|
2309 | n/a | __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\ |
---|
2310 | n/a | I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ |
---|
2311 | n/a | }\ |
---|
2312 | n/a | } |
---|
2313 | n/a | #else /* GNUC */ |
---|
2314 | n/a | #define compute_tree_index(S, I)\ |
---|
2315 | n/a | {\ |
---|
2316 | n/a | size_t X = S >> TREEBIN_SHIFT;\ |
---|
2317 | n/a | if (X == 0)\ |
---|
2318 | n/a | I = 0;\ |
---|
2319 | n/a | else if (X > 0xFFFF)\ |
---|
2320 | n/a | I = NTREEBINS-1;\ |
---|
2321 | n/a | else {\ |
---|
2322 | n/a | unsigned int Y = (unsigned int)X;\ |
---|
2323 | n/a | unsigned int N = ((Y - 0x100) >> 16) & 8;\ |
---|
2324 | n/a | unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\ |
---|
2325 | n/a | N += K;\ |
---|
2326 | n/a | N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\ |
---|
2327 | n/a | K = 14 - N + ((Y <<= K) >> 15);\ |
---|
2328 | n/a | I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\ |
---|
2329 | n/a | }\ |
---|
2330 | n/a | } |
---|
2331 | n/a | #endif /* GNUC */ |
---|
2332 | n/a | |
---|
2333 | n/a | /* Bit representing maximum resolved size in a treebin at i */ |
---|
2334 | n/a | #define bit_for_tree_index(i) \ |
---|
2335 | n/a | (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2) |
---|
2336 | n/a | |
---|
2337 | n/a | /* Shift placing maximum resolved bit in a treebin at i as sign bit */ |
---|
2338 | n/a | #define leftshift_for_tree_index(i) \ |
---|
2339 | n/a | ((i == NTREEBINS-1)? 0 : \ |
---|
2340 | n/a | ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2))) |
---|
2341 | n/a | |
---|
2342 | n/a | /* The size of the smallest chunk held in bin with index i */ |
---|
2343 | n/a | #define minsize_for_tree_index(i) \ |
---|
2344 | n/a | ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \ |
---|
2345 | n/a | (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1))) |
---|
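/*
  Worked values (derived from the macros above): the first few treebins cover
  the size ranges [256,384), [384,512), [512,768), [768,1024), ..., and the
  last bin (index NTREEBINS-1 == 31) starts at 0xC00000 and holds everything
  larger.
*/
#if 0 /* illustrative only */
static void example_tree_bins(void) {
  assert(minsize_for_tree_index(0) == 256);
  assert(minsize_for_tree_index(1) == 384);
  assert(minsize_for_tree_index(2) == 512);
  assert(minsize_for_tree_index(3) == 768);
  assert(minsize_for_tree_index(NTREEBINS-1) == 0xC00000);
}
#endif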
2346 | n/a | |
---|
2347 | n/a | |
---|
2348 | n/a | /* ------------------------ Operations on bin maps ----------------------- */ |
---|
2349 | n/a | |
---|
2350 | n/a | /* bit corresponding to given index */ |
---|
2351 | n/a | #define idx2bit(i) ((binmap_t)(1) << (i)) |
---|
2352 | n/a | |
---|
2353 | n/a | /* Mark/Clear bits with given index */ |
---|
2354 | n/a | #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i)) |
---|
2355 | n/a | #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i)) |
---|
2356 | n/a | #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i)) |
---|
2357 | n/a | |
---|
2358 | n/a | #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i)) |
---|
2359 | n/a | #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i)) |
---|
2360 | n/a | #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i)) |
---|
2361 | n/a | |
---|
2362 | n/a | /* index corresponding to given bit */ |
---|
2363 | n/a | |
---|
2364 | n/a | #if defined(__GNUC__) && defined(i386) |
---|
2365 | n/a | #define compute_bit2idx(X, I)\ |
---|
2366 | n/a | {\ |
---|
2367 | n/a | unsigned int J;\ |
---|
2368 | n/a | __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\ |
---|
2369 | n/a | I = (bindex_t)J;\ |
---|
2370 | n/a | } |
---|
2371 | n/a | |
---|
2372 | n/a | #else /* GNUC */ |
---|
2373 | n/a | #if USE_BUILTIN_FFS |
---|
2374 | n/a | #define compute_bit2idx(X, I) I = ffs(X)-1 |
---|
2375 | n/a | |
---|
2376 | n/a | #else /* USE_BUILTIN_FFS */ |
---|
2377 | n/a | #define compute_bit2idx(X, I)\ |
---|
2378 | n/a | {\ |
---|
2379 | n/a | unsigned int Y = X - 1;\ |
---|
2380 | n/a | unsigned int K = Y >> (16-4) & 16;\ |
---|
2381 | n/a | unsigned int N = K; Y >>= K;\ |
---|
2382 | n/a | N += K = Y >> (8-3) & 8; Y >>= K;\ |
---|
2383 | n/a | N += K = Y >> (4-2) & 4; Y >>= K;\ |
---|
2384 | n/a | N += K = Y >> (2-1) & 2; Y >>= K;\ |
---|
2385 | n/a | N += K = Y >> (1-0) & 1; Y >>= K;\ |
---|
2386 | n/a | I = (bindex_t)(N + Y);\ |
---|
2387 | n/a | } |
---|
2388 | n/a | #endif /* USE_BUILTIN_FFS */ |
---|
2389 | n/a | #endif /* GNUC */ |
---|
2390 | n/a | |
---|
2391 | n/a | /* isolate the least set bit of a bitmap */ |
---|
2392 | n/a | #define least_bit(x) ((x) & -(x)) |
---|
2393 | n/a | |
---|
2394 | n/a | /* mask with all bits to left of least bit of x on */ |
---|
2395 | n/a | #define left_bits(x) ((x<<1) | -(x<<1)) |
---|
2396 | n/a | |
---|
2397 | n/a | /* mask with all bits to left of or equal to least bit of x on */ |
---|
2398 | n/a | #define same_or_left_bits(x) ((x) | -(x)) |
---|
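/*
  Sketch (not part of this malloc) of how these bit tricks combine: to find the
  first non-empty small bin at or above a desired index, mask the bin map with
  same_or_left_bits of the target bit, take the least set bit, and convert it
  back to an index.  The helper name and sentinel value are hypothetical.
*/
#if 0 /* illustrative only */
static bindex_t example_first_nonempty_smallbin(mstate m, bindex_t start) {
  binmap_t hits = m->smallmap & same_or_left_bits(idx2bit(start));
  bindex_t i;
  if (hits == 0)
    return NSMALLBINS;                 /* hypothetical "none found" sentinel */
  compute_bit2idx(least_bit(hits), i); /* lowest non-empty bin >= start */
  return i;
}
#endif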
2399 | n/a | |
---|
2400 | n/a | |
---|
2401 | n/a | /* ----------------------- Runtime Check Support ------------------------- */ |
---|
2402 | n/a | |
---|
2403 | n/a | /* |
---|
2404 | n/a | For security, the main invariant is that malloc/free/etc never |
---|
2405 | n/a | writes to a static address other than malloc_state, unless static |
---|
2406 | n/a | malloc_state itself has been corrupted, which cannot occur via |
---|
2407 | n/a | malloc (because of these checks). In essence this means that we |
---|
2408 | n/a | believe all pointers, sizes, maps etc held in malloc_state, but |
---|
2409 | n/a | check all of those linked or offsetted from other embedded data |
---|
2410 | n/a | structures. These checks are interspersed with main code in a way |
---|
2411 | n/a | that tends to minimize their run-time cost. |
---|
2412 | n/a | |
---|
2413 | n/a | When FOOTERS is defined, in addition to range checking, we also |
---|
2414 | n/a | verify footer fields of inuse chunks, which can be used to guarantee |
---|
2415 | n/a | that the mstate controlling malloc/free is intact. This is a |
---|
2416 | n/a | streamlined version of the approach described by William Robertson |
---|
2417 | n/a | et al in "Run-time Detection of Heap-based Overflows" LISA'03 |
---|
2418 | n/a | http://www.usenix.org/events/lisa03/tech/robertson.html The footer |
---|
2419 | n/a | of an inuse chunk holds the xor of its mstate and a random seed, |
---|
2420 | n/a | which is checked upon calls to free() and realloc(). This is |
---|
2421 | n/a | (probabilistically) unguessable from outside the program, but can be |
---|
2422 | n/a | computed by any code successfully malloc'ing any chunk, so does not |
---|
2423 | n/a | itself provide protection against code that has already broken |
---|
2424 | n/a | security through some other means. Unlike Robertson et al, we |
---|
2425 | n/a | always dynamically check addresses of all offset chunks (previous, |
---|
2426 | n/a | next, etc). This turns out to be cheaper than relying on hashes. |
---|
2427 | n/a | */ |
---|
2428 | n/a | |
---|
2429 | n/a | #if !INSECURE |
---|
2430 | n/a | /* Check if address a is at least as high as the least address obtained from MORECORE or MMAP */ |
---|
2431 | n/a | #define ok_address(M, a) ((char*)(a) >= (M)->least_addr) |
---|
2432 | n/a | /* Check if address of next chunk n is higher than base chunk p */ |
---|
2433 | n/a | #define ok_next(p, n) ((char*)(p) < (char*)(n)) |
---|
2434 | n/a | /* Check if p has its cinuse bit on */ |
---|
2435 | n/a | #define ok_cinuse(p) cinuse(p) |
---|
2436 | n/a | /* Check if p has its pinuse bit on */ |
---|
2437 | n/a | #define ok_pinuse(p) pinuse(p) |
---|
2438 | n/a | |
---|
2439 | n/a | #else /* !INSECURE */ |
---|
2440 | n/a | #define ok_address(M, a) (1) |
---|
2441 | n/a | #define ok_next(b, n) (1) |
---|
2442 | n/a | #define ok_cinuse(p) (1) |
---|
2443 | n/a | #define ok_pinuse(p) (1) |
---|
2444 | n/a | #endif /* !INSECURE */ |
---|
2445 | n/a | |
---|
2446 | n/a | #if (FOOTERS && !INSECURE) |
---|
2447 | n/a | /* Check if (alleged) mstate m has expected magic field */ |
---|
2448 | n/a | #define ok_magic(M) ((M)->magic == mparams.magic) |
---|
2449 | n/a | #else /* (FOOTERS && !INSECURE) */ |
---|
2450 | n/a | #define ok_magic(M) (1) |
---|
2451 | n/a | #endif /* (FOOTERS && !INSECURE) */ |
---|
2452 | n/a | |
---|
2453 | n/a | |
---|
2454 | n/a | /* In gcc, use __builtin_expect to minimize impact of checks */ |
---|
2455 | n/a | #if !INSECURE |
---|
2456 | n/a | #if defined(__GNUC__) && __GNUC__ >= 3 |
---|
2457 | n/a | #define RTCHECK(e) __builtin_expect(e, 1) |
---|
2458 | n/a | #else /* GNUC */ |
---|
2459 | n/a | #define RTCHECK(e) (e) |
---|
2460 | n/a | #endif /* GNUC */ |
---|
2461 | n/a | #else /* !INSECURE */ |
---|
2462 | n/a | #define RTCHECK(e) (1) |
---|
2463 | n/a | #endif /* !INSECURE */ |
---|
2464 | n/a | |
---|
2465 | n/a | /* macros to set up inuse chunks with or without footers */ |
---|
2466 | n/a | |
---|
2467 | n/a | #if !FOOTERS |
---|
2468 | n/a | |
---|
2469 | n/a | #define mark_inuse_foot(M,p,s) |
---|
2470 | n/a | |
---|
2471 | n/a | /* Set cinuse bit and pinuse bit of next chunk */ |
---|
2472 | n/a | #define set_inuse(M,p,s)\ |
---|
2473 | n/a | ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ |
---|
2474 | n/a | ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) |
---|
2475 | n/a | |
---|
2476 | n/a | /* Set cinuse and pinuse of this chunk and pinuse of next chunk */ |
---|
2477 | n/a | #define set_inuse_and_pinuse(M,p,s)\ |
---|
2478 | n/a | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ |
---|
2479 | n/a | ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) |
---|
2480 | n/a | |
---|
2481 | n/a | /* Set size, cinuse and pinuse bit of this chunk */ |
---|
2482 | n/a | #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ |
---|
2483 | n/a | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT)) |
---|
2484 | n/a | |
---|
2485 | n/a | #else /* FOOTERS */ |
---|
2486 | n/a | |
---|
2487 | n/a | /* Set foot of inuse chunk to be xor of mstate and seed */ |
---|
2488 | n/a | #define mark_inuse_foot(M,p,s)\ |
---|
2489 | n/a | (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic)) |
---|
2490 | n/a | |
---|
2491 | n/a | #define get_mstate_for(p)\ |
---|
2492 | n/a | ((mstate)(((mchunkptr)((char*)(p) +\ |
---|
2493 | n/a | (chunksize(p))))->prev_foot ^ mparams.magic)) |
---|
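/*
  Round-trip sketch of the footer scheme described above (not part of this
  malloc): mark_inuse_foot stores ((size_t)m ^ mparams.magic) just past the
  chunk, and get_mstate_for xors the stored word with the same magic to
  recover m, which free()/realloc() then validate with ok_magic().
*/
#if 0 /* illustrative only */
static void example_footer_roundtrip(mstate m) {
  size_t stored = (size_t)m ^ mparams.magic;       /* what the footer holds */
  assert((mstate)(stored ^ mparams.magic) == m);   /* xor is its own inverse */
}
#endif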
2494 | n/a | |
---|
2495 | n/a | #define set_inuse(M,p,s)\ |
---|
2496 | n/a | ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ |
---|
2497 | n/a | (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \ |
---|
2498 | n/a | mark_inuse_foot(M,p,s)) |
---|
2499 | n/a | |
---|
2500 | n/a | #define set_inuse_and_pinuse(M,p,s)\ |
---|
2501 | n/a | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ |
---|
2502 | n/a | (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\ |
---|
2503 | n/a | mark_inuse_foot(M,p,s)) |
---|
2504 | n/a | |
---|
2505 | n/a | #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ |
---|
2506 | n/a | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ |
---|
2507 | n/a | mark_inuse_foot(M, p, s)) |
---|
2508 | n/a | |
---|
2509 | n/a | #endif /* !FOOTERS */ |
---|
2510 | n/a | |
---|
2511 | n/a | /* ---------------------------- setting mparams -------------------------- */ |
---|
2512 | n/a | |
---|
2513 | n/a | /* Initialize mparams */ |
---|
2514 | n/a | static int init_mparams(void) { |
---|
2515 | n/a | if (mparams.page_size == 0) { |
---|
2516 | n/a | size_t s; |
---|
2517 | n/a | |
---|
2518 | n/a | mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD; |
---|
2519 | n/a | mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD; |
---|
2520 | n/a | #if MORECORE_CONTIGUOUS |
---|
2521 | n/a | mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT; |
---|
2522 | n/a | #else /* MORECORE_CONTIGUOUS */ |
---|
2523 | n/a | mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT; |
---|
2524 | n/a | #endif /* MORECORE_CONTIGUOUS */ |
---|
2525 | n/a | |
---|
2526 | n/a | #if (FOOTERS && !INSECURE) |
---|
2527 | n/a | { |
---|
2528 | n/a | #if USE_DEV_RANDOM |
---|
2529 | n/a | int fd; |
---|
2530 | n/a | unsigned char buf[sizeof(size_t)]; |
---|
2531 | n/a | /* Try to use /dev/urandom, else fall back on using time */ |
---|
2532 | n/a | if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 && |
---|
2533 | n/a | read(fd, buf, sizeof(buf)) == sizeof(buf)) { |
---|
2534 | n/a | s = *((size_t *) buf); |
---|
2535 | n/a | close(fd); |
---|
2536 | n/a | } |
---|
2537 | n/a | else |
---|
2538 | n/a | #endif /* USE_DEV_RANDOM */ |
---|
2539 | n/a | s = (size_t)(time(0) ^ (size_t)0x55555555U); |
---|
2540 | n/a | |
---|
2541 | n/a | s |= (size_t)8U; /* ensure nonzero */ |
---|
2542 | n/a | s &= ~(size_t)7U; /* improve chances of fault for bad values */ |
---|
2543 | n/a | |
---|
2544 | n/a | } |
---|
2545 | n/a | #else /* (FOOTERS && !INSECURE) */ |
---|
2546 | n/a | s = (size_t)0x58585858U; |
---|
2547 | n/a | #endif /* (FOOTERS && !INSECURE) */ |
---|
2548 | n/a | ACQUIRE_MAGIC_INIT_LOCK(); |
---|
2549 | n/a | if (mparams.magic == 0) { |
---|
2550 | n/a | mparams.magic = s; |
---|
2551 | n/a | /* Set up lock for main malloc area */ |
---|
2552 | n/a | INITIAL_LOCK(&gm->mutex); |
---|
2553 | n/a | gm->mflags = mparams.default_mflags; |
---|
2554 | n/a | } |
---|
2555 | n/a | RELEASE_MAGIC_INIT_LOCK(); |
---|
2556 | n/a | |
---|
2557 | n/a | #if !defined(WIN32) && !defined(__OS2__) |
---|
2558 | n/a | mparams.page_size = malloc_getpagesize; |
---|
2559 | n/a | mparams.granularity = ((DEFAULT_GRANULARITY != 0)? |
---|
2560 | n/a | DEFAULT_GRANULARITY : mparams.page_size); |
---|
2561 | n/a | #elif defined (__OS2__) |
---|
2562 | n/a | /* if low memory is used, os2munmap() would break |
---|
2563 | n/a | if the granularity were anything other than 64k */ |
---|
2564 | n/a | mparams.page_size = 4096u; |
---|
2565 | n/a | mparams.granularity = 65536u; |
---|
2566 | n/a | #else /* WIN32 */ |
---|
2567 | n/a | { |
---|
2568 | n/a | SYSTEM_INFO system_info; |
---|
2569 | n/a | GetSystemInfo(&system_info); |
---|
2570 | n/a | mparams.page_size = system_info.dwPageSize; |
---|
2571 | n/a | mparams.granularity = system_info.dwAllocationGranularity; |
---|
2572 | n/a | } |
---|
2573 | n/a | #endif /* WIN32 */ |
---|
2574 | n/a | |
---|
2575 | n/a | /* Sanity-check configuration: |
---|
2576 | n/a | size_t must be unsigned and as wide as pointer type. |
---|
2577 | n/a | ints must be at least 4 bytes. |
---|
2578 | n/a | alignment must be at least 8. |
---|
2579 | n/a | Alignment, min chunk size, and page size must all be powers of 2. |
---|
2580 | n/a | */ |
---|
2581 | n/a | if ((sizeof(size_t) != sizeof(char*)) || |
---|
2582 | n/a | (MAX_SIZE_T < MIN_CHUNK_SIZE) || |
---|
2583 | n/a | (sizeof(int) < 4) || |
---|
2584 | n/a | (MALLOC_ALIGNMENT < (size_t)8U) || |
---|
2585 | n/a | ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) || |
---|
2586 | n/a | ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) || |
---|
2587 | n/a | ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) || |
---|
2588 | n/a | ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0)) |
---|
2589 | n/a | ABORT; |
---|
2590 | n/a | } |
---|
2591 | n/a | return 0; |
---|
2592 | n/a | } |
---|
2593 | n/a | |
---|
2594 | n/a | /* support for mallopt */ |
---|
2595 | n/a | static int change_mparam(int param_number, int value) { |
---|
2596 | n/a | size_t val = (size_t)value; |
---|
2597 | n/a | init_mparams(); |
---|
2598 | n/a | switch(param_number) { |
---|
2599 | n/a | case M_TRIM_THRESHOLD: |
---|
2600 | n/a | mparams.trim_threshold = val; |
---|
2601 | n/a | return 1; |
---|
2602 | n/a | case M_GRANULARITY: |
---|
2603 | n/a | if (val >= mparams.page_size && ((val & (val-1)) == 0)) { |
---|
2604 | n/a | mparams.granularity = val; |
---|
2605 | n/a | return 1; |
---|
2606 | n/a | } |
---|
2607 | n/a | else |
---|
2608 | n/a | return 0; |
---|
2609 | n/a | case M_MMAP_THRESHOLD: |
---|
2610 | n/a | mparams.mmap_threshold = val; |
---|
2611 | n/a | return 1; |
---|
2612 | n/a | default: |
---|
2613 | n/a | return 0; |
---|
2614 | n/a | } |
---|
2615 | n/a | } |
---|
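/*
  Usage sketch (not part of this malloc): assuming the standard configuration
  where the public mallopt (dlmallopt when the dl prefix is used) is exported,
  it forwards to change_mparam, so application-level tuning looks like the
  calls below.  The values are examples only; the granularity change succeeds
  only when the value is at least the page size and a power of two.
*/
#if 0 /* illustrative only */
static void example_mallopt_tuning(void) {
  mallopt(M_GRANULARITY,    64 * 1024);  /* must be >= page size and a power of two */
  mallopt(M_TRIM_THRESHOLD, -1);         /* stored as (size_t)-1: effectively never trim */
  mallopt(M_MMAP_THRESHOLD, 1 << 20);    /* hand requests of 1MB and up to mmap */
}
#endif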
2616 | n/a | |
---|
2617 | n/a | #if DEBUG |
---|
2618 | n/a | /* ------------------------- Debugging Support --------------------------- */ |
---|
2619 | n/a | |
---|
2620 | n/a | /* Check properties of any chunk, whether free, inuse, mmapped etc */ |
---|
2621 | n/a | static void do_check_any_chunk(mstate m, mchunkptr p) { |
---|
2622 | n/a | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
---|
2623 | n/a | assert(ok_address(m, p)); |
---|
2624 | n/a | } |
---|
2625 | n/a | |
---|
2626 | n/a | /* Check properties of top chunk */ |
---|
2627 | n/a | static void do_check_top_chunk(mstate m, mchunkptr p) { |
---|
2628 | n/a | msegmentptr sp = segment_holding(m, (char*)p); |
---|
2629 | n/a | size_t sz = chunksize(p); |
---|
2630 | n/a | assert(sp != 0); |
---|
2631 | n/a | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
---|
2632 | n/a | assert(ok_address(m, p)); |
---|
2633 | n/a | assert(sz == m->topsize); |
---|
2634 | n/a | assert(sz > 0); |
---|
2635 | n/a | assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE); |
---|
2636 | n/a | assert(pinuse(p)); |
---|
2637 | n/a | assert(!next_pinuse(p)); |
---|
2638 | n/a | } |
---|
2639 | n/a | |
---|
2640 | n/a | /* Check properties of (inuse) mmapped chunks */ |
---|
2641 | n/a | static void do_check_mmapped_chunk(mstate m, mchunkptr p) { |
---|
2642 | n/a | size_t sz = chunksize(p); |
---|
2643 | n/a | size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD); |
---|
2644 | n/a | assert(is_mmapped(p)); |
---|
2645 | n/a | assert(use_mmap(m)); |
---|
2646 | n/a | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
---|
2647 | n/a | assert(ok_address(m, p)); |
---|
2648 | n/a | assert(!is_small(sz)); |
---|
2649 | n/a | assert((len & (mparams.page_size-SIZE_T_ONE)) == 0); |
---|
2650 | n/a | assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD); |
---|
2651 | n/a | assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0); |
---|
2652 | n/a | } |
---|
2653 | n/a | |
---|
2654 | n/a | /* Check properties of inuse chunks */ |
---|
2655 | n/a | static void do_check_inuse_chunk(mstate m, mchunkptr p) { |
---|
2656 | n/a | do_check_any_chunk(m, p); |
---|
2657 | n/a | assert(cinuse(p)); |
---|
2658 | n/a | assert(next_pinuse(p)); |
---|
2659 | n/a | /* If not pinuse and not mmapped, previous chunk has OK offset */ |
---|
2660 | n/a | assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p); |
---|
2661 | n/a | if (is_mmapped(p)) |
---|
2662 | n/a | do_check_mmapped_chunk(m, p); |
---|
2663 | n/a | } |
---|
2664 | n/a | |
---|
2665 | n/a | /* Check properties of free chunks */ |
---|
2666 | n/a | static void do_check_free_chunk(mstate m, mchunkptr p) { |
---|
2667 | n/a | size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT); |
---|
2668 | n/a | mchunkptr next = chunk_plus_offset(p, sz); |
---|
2669 | n/a | do_check_any_chunk(m, p); |
---|
2670 | n/a | assert(!cinuse(p)); |
---|
2671 | n/a | assert(!next_pinuse(p)); |
---|
2672 | n/a | assert (!is_mmapped(p)); |
---|
2673 | n/a | if (p != m->dv && p != m->top) { |
---|
2674 | n/a | if (sz >= MIN_CHUNK_SIZE) { |
---|
2675 | n/a | assert((sz & CHUNK_ALIGN_MASK) == 0); |
---|
2676 | n/a | assert(is_aligned(chunk2mem(p))); |
---|
2677 | n/a | assert(next->prev_foot == sz); |
---|
2678 | n/a | assert(pinuse(p)); |
---|
2679 | n/a | assert (next == m->top || cinuse(next)); |
---|
2680 | n/a | assert(p->fd->bk == p); |
---|
2681 | n/a | assert(p->bk->fd == p); |
---|
2682 | n/a | } |
---|
2683 | n/a | else /* markers are always of size SIZE_T_SIZE */ |
---|
2684 | n/a | assert(sz == SIZE_T_SIZE); |
---|
2685 | n/a | } |
---|
2686 | n/a | } |
---|
2687 | n/a | |
---|
2688 | n/a | /* Check properties of malloced chunks at the point they are malloced */ |
---|
2689 | n/a | static void do_check_malloced_chunk(mstate m, void* mem, size_t s) { |
---|
2690 | n/a | if (mem != 0) { |
---|
2691 | n/a | mchunkptr p = mem2chunk(mem); |
---|
2692 | n/a | size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT); |
---|
2693 | n/a | do_check_inuse_chunk(m, p); |
---|
2694 | n/a | assert((sz & CHUNK_ALIGN_MASK) == 0); |
---|
2695 | n/a | assert(sz >= MIN_CHUNK_SIZE); |
---|
2696 | n/a | assert(sz >= s); |
---|
2697 | n/a | /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */ |
---|
2698 | n/a | assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE)); |
---|
2699 | n/a | } |
---|
2700 | n/a | } |
---|
2701 | n/a | |
---|
2702 | n/a | /* Check a tree and its subtrees. */ |
---|
2703 | n/a | static void do_check_tree(mstate m, tchunkptr t) { |
---|
2704 | n/a | tchunkptr head = 0; |
---|
2705 | n/a | tchunkptr u = t; |
---|
2706 | n/a | bindex_t tindex = t->index; |
---|
2707 | n/a | size_t tsize = chunksize(t); |
---|
2708 | n/a | bindex_t idx; |
---|
2709 | n/a | compute_tree_index(tsize, idx); |
---|
2710 | n/a | assert(tindex == idx); |
---|
2711 | n/a | assert(tsize >= MIN_LARGE_SIZE); |
---|
2712 | n/a | assert(tsize >= minsize_for_tree_index(idx)); |
---|
2713 | n/a | assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1)))); |
---|
2714 | n/a | |
---|
2715 | n/a | do { /* traverse through chain of same-sized nodes */ |
---|
2716 | n/a | do_check_any_chunk(m, ((mchunkptr)u)); |
---|
2717 | n/a | assert(u->index == tindex); |
---|
2718 | n/a | assert(chunksize(u) == tsize); |
---|
2719 | n/a | assert(!cinuse(u)); |
---|
2720 | n/a | assert(!next_pinuse(u)); |
---|
2721 | n/a | assert(u->fd->bk == u); |
---|
2722 | n/a | assert(u->bk->fd == u); |
---|
2723 | n/a | if (u->parent == 0) { |
---|
2724 | n/a | assert(u->child[0] == 0); |
---|
2725 | n/a | assert(u->child[1] == 0); |
---|
2726 | n/a | } |
---|
2727 | n/a | else { |
---|
2728 | n/a | assert(head == 0); /* only one node on chain has parent */ |
---|
2729 | n/a | head = u; |
---|
2730 | n/a | assert(u->parent != u); |
---|
2731 | n/a | assert (u->parent->child[0] == u || |
---|
2732 | n/a | u->parent->child[1] == u || |
---|
2733 | n/a | *((tbinptr*)(u->parent)) == u); |
---|
2734 | n/a | if (u->child[0] != 0) { |
---|
2735 | n/a | assert(u->child[0]->parent == u); |
---|
2736 | n/a | assert(u->child[0] != u); |
---|
2737 | n/a | do_check_tree(m, u->child[0]); |
---|
2738 | n/a | } |
---|
2739 | n/a | if (u->child[1] != 0) { |
---|
2740 | n/a | assert(u->child[1]->parent == u); |
---|
2741 | n/a | assert(u->child[1] != u); |
---|
2742 | n/a | do_check_tree(m, u->child[1]); |
---|
2743 | n/a | } |
---|
2744 | n/a | if (u->child[0] != 0 && u->child[1] != 0) { |
---|
2745 | n/a | assert(chunksize(u->child[0]) < chunksize(u->child[1])); |
---|
2746 | n/a | } |
---|
2747 | n/a | } |
---|
2748 | n/a | u = u->fd; |
---|
2749 | n/a | } while (u != t); |
---|
2750 | n/a | assert(head != 0); |
---|
2751 | n/a | } |
---|
2752 | n/a | |
---|
2753 | n/a | /* Check all the chunks in a treebin. */ |
---|
2754 | n/a | static void do_check_treebin(mstate m, bindex_t i) { |
---|
2755 | n/a | tbinptr* tb = treebin_at(m, i); |
---|
2756 | n/a | tchunkptr t = *tb; |
---|
2757 | n/a | int empty = (m->treemap & (1U << i)) == 0; |
---|
2758 | n/a | if (t == 0) |
---|
2759 | n/a | assert(empty); |
---|
2760 | n/a | if (!empty) |
---|
2761 | n/a | do_check_tree(m, t); |
---|
2762 | n/a | } |
---|
2763 | n/a | |
---|
2764 | n/a | /* Check all the chunks in a smallbin. */ |
---|
2765 | n/a | static void do_check_smallbin(mstate m, bindex_t i) { |
---|
2766 | n/a | sbinptr b = smallbin_at(m, i); |
---|
2767 | n/a | mchunkptr p = b->bk; |
---|
2768 | n/a | unsigned int empty = (m->smallmap & (1U << i)) == 0; |
---|
2769 | n/a | if (p == b) |
---|
2770 | n/a | assert(empty); |
---|
2771 | n/a | if (!empty) { |
---|
2772 | n/a | for (; p != b; p = p->bk) { |
---|
2773 | n/a | size_t size = chunksize(p); |
---|
2774 | n/a | mchunkptr q; |
---|
2775 | n/a | /* each chunk claims to be free */ |
---|
2776 | n/a | do_check_free_chunk(m, p); |
---|
2777 | n/a | /* chunk belongs in bin */ |
---|
2778 | n/a | assert(small_index(size) == i); |
---|
2779 | n/a | assert(p->bk == b || chunksize(p->bk) == chunksize(p)); |
---|
2780 | n/a | /* chunk is followed by an inuse chunk */ |
---|
2781 | n/a | q = next_chunk(p); |
---|
2782 | n/a | if (q->head != FENCEPOST_HEAD) |
---|
2783 | n/a | do_check_inuse_chunk(m, q); |
---|
2784 | n/a | } |
---|
2785 | n/a | } |
---|
2786 | n/a | } |
---|
2787 | n/a | |
---|
2788 | n/a | /* Find x in a bin. Used in other check functions. */ |
---|
2789 | n/a | static int bin_find(mstate m, mchunkptr x) { |
---|
2790 | n/a | size_t size = chunksize(x); |
---|
2791 | n/a | if (is_small(size)) { |
---|
2792 | n/a | bindex_t sidx = small_index(size); |
---|
2793 | n/a | sbinptr b = smallbin_at(m, sidx); |
---|
2794 | n/a | if (smallmap_is_marked(m, sidx)) { |
---|
2795 | n/a | mchunkptr p = b; |
---|
2796 | n/a | do { |
---|
2797 | n/a | if (p == x) |
---|
2798 | n/a | return 1; |
---|
2799 | n/a | } while ((p = p->fd) != b); |
---|
2800 | n/a | } |
---|
2801 | n/a | } |
---|
2802 | n/a | else { |
---|
2803 | n/a | bindex_t tidx; |
---|
2804 | n/a | compute_tree_index(size, tidx); |
---|
2805 | n/a | if (treemap_is_marked(m, tidx)) { |
---|
2806 | n/a | tchunkptr t = *treebin_at(m, tidx); |
---|
2807 | n/a | size_t sizebits = size << leftshift_for_tree_index(tidx); |
---|
2808 | n/a | while (t != 0 && chunksize(t) != size) { |
---|
2809 | n/a | t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; |
---|
2810 | n/a | sizebits <<= 1; |
---|
2811 | n/a | } |
---|
2812 | n/a | if (t != 0) { |
---|
2813 | n/a | tchunkptr u = t; |
---|
2814 | n/a | do { |
---|
2815 | n/a | if (u == (tchunkptr)x) |
---|
2816 | n/a | return 1; |
---|
2817 | n/a | } while ((u = u->fd) != t); |
---|
2818 | n/a | } |
---|
2819 | n/a | } |
---|
2820 | n/a | } |
---|
2821 | n/a | return 0; |
---|
2822 | n/a | } |
---|
2823 | n/a | |
---|
2824 | n/a | /* Traverse each chunk and check it; return total */ |
---|
2825 | n/a | static size_t traverse_and_check(mstate m) { |
---|
2826 | n/a | size_t sum = 0; |
---|
2827 | n/a | if (is_initialized(m)) { |
---|
2828 | n/a | msegmentptr s = &m->seg; |
---|
2829 | n/a | sum += m->topsize + TOP_FOOT_SIZE; |
---|
2830 | n/a | while (s != 0) { |
---|
2831 | n/a | mchunkptr q = align_as_chunk(s->base); |
---|
2832 | n/a | mchunkptr lastq = 0; |
---|
2833 | n/a | assert(pinuse(q)); |
---|
2834 | n/a | while (segment_holds(s, q) && |
---|
2835 | n/a | q != m->top && q->head != FENCEPOST_HEAD) { |
---|
2836 | n/a | sum += chunksize(q); |
---|
2837 | n/a | if (cinuse(q)) { |
---|
2838 | n/a | assert(!bin_find(m, q)); |
---|
2839 | n/a | do_check_inuse_chunk(m, q); |
---|
2840 | n/a | } |
---|
2841 | n/a | else { |
---|
2842 | n/a | assert(q == m->dv || bin_find(m, q)); |
---|
2843 | n/a | assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */ |
---|
2844 | n/a | do_check_free_chunk(m, q); |
---|
2845 | n/a | } |
---|
2846 | n/a | lastq = q; |
---|
2847 | n/a | q = next_chunk(q); |
---|
2848 | n/a | } |
---|
2849 | n/a | s = s->next; |
---|
2850 | n/a | } |
---|
2851 | n/a | } |
---|
2852 | n/a | return sum; |
---|
2853 | n/a | } |
---|
2854 | n/a | |
---|
2855 | n/a | /* Check all properties of malloc_state. */ |
---|
2856 | n/a | static void do_check_malloc_state(mstate m) { |
---|
2857 | n/a | bindex_t i; |
---|
2858 | n/a | size_t total; |
---|
2859 | n/a | /* check bins */ |
---|
2860 | n/a | for (i = 0; i < NSMALLBINS; ++i) |
---|
2861 | n/a | do_check_smallbin(m, i); |
---|
2862 | n/a | for (i = 0; i < NTREEBINS; ++i) |
---|
2863 | n/a | do_check_treebin(m, i); |
---|
2864 | n/a | |
---|
2865 | n/a | if (m->dvsize != 0) { /* check dv chunk */ |
---|
2866 | n/a | do_check_any_chunk(m, m->dv); |
---|
2867 | n/a | assert(m->dvsize == chunksize(m->dv)); |
---|
2868 | n/a | assert(m->dvsize >= MIN_CHUNK_SIZE); |
---|
2869 | n/a | assert(bin_find(m, m->dv) == 0); |
---|
2870 | n/a | } |
---|
2871 | n/a | |
---|
2872 | n/a | if (m->top != 0) { /* check top chunk */ |
---|
2873 | n/a | do_check_top_chunk(m, m->top); |
---|
2874 | n/a | assert(m->topsize == chunksize(m->top)); |
---|
2875 | n/a | assert(m->topsize > 0); |
---|
2876 | n/a | assert(bin_find(m, m->top) == 0); |
---|
2877 | n/a | } |
---|
2878 | n/a | |
---|
2879 | n/a | total = traverse_and_check(m); |
---|
2880 | n/a | assert(total <= m->footprint); |
---|
2881 | n/a | assert(m->footprint <= m->max_footprint); |
---|
2882 | n/a | } |
---|
2883 | n/a | #endif /* DEBUG */ |
---|
2884 | n/a | |
---|
2885 | n/a | /* ----------------------------- statistics ------------------------------ */ |
---|
2886 | n/a | |
---|
2887 | n/a | #if !NO_MALLINFO |
---|
2888 | n/a | static struct mallinfo internal_mallinfo(mstate m) { |
---|
2889 | n/a | struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; |
---|
2890 | n/a | if (!PREACTION(m)) { |
---|
2891 | n/a | check_malloc_state(m); |
---|
2892 | n/a | if (is_initialized(m)) { |
---|
2893 | n/a | size_t nfree = SIZE_T_ONE; /* top always free */ |
---|
2894 | n/a | size_t mfree = m->topsize + TOP_FOOT_SIZE; |
---|
2895 | n/a | size_t sum = mfree; |
---|
2896 | n/a | msegmentptr s = &m->seg; |
---|
2897 | n/a | while (s != 0) { |
---|
2898 | n/a | mchunkptr q = align_as_chunk(s->base); |
---|
2899 | n/a | while (segment_holds(s, q) && |
---|
2900 | n/a | q != m->top && q->head != FENCEPOST_HEAD) { |
---|
2901 | n/a | size_t sz = chunksize(q); |
---|
2902 | n/a | sum += sz; |
---|
2903 | n/a | if (!cinuse(q)) { |
---|
2904 | n/a | mfree += sz; |
---|
2905 | n/a | ++nfree; |
---|
2906 | n/a | } |
---|
2907 | n/a | q = next_chunk(q); |
---|
2908 | n/a | } |
---|
2909 | n/a | s = s->next; |
---|
2910 | n/a | } |
---|
2911 | n/a | |
---|
2912 | n/a | nm.arena = sum; |
---|
2913 | n/a | nm.ordblks = nfree; |
---|
2914 | n/a | nm.hblkhd = m->footprint - sum; |
---|
2915 | n/a | nm.usmblks = m->max_footprint; |
---|
2916 | n/a | nm.uordblks = m->footprint - mfree; |
---|
2917 | n/a | nm.fordblks = mfree; |
---|
2918 | n/a | nm.keepcost = m->topsize; |
---|
2919 | n/a | } |
---|
2920 | n/a | |
---|
2921 | n/a | POSTACTION(m); |
---|
2922 | n/a | } |
---|
2923 | n/a | return nm; |
---|
2924 | n/a | } |
---|
2925 | n/a | #endif /* !NO_MALLINFO */ |
---|
2926 | n/a | |
---|
2927 | n/a | static void internal_malloc_stats(mstate m) { |
---|
2928 | n/a | if (!PREACTION(m)) { |
---|
2929 | n/a | size_t maxfp = 0; |
---|
2930 | n/a | size_t fp = 0; |
---|
2931 | n/a | size_t used = 0; |
---|
2932 | n/a | check_malloc_state(m); |
---|
2933 | n/a | if (is_initialized(m)) { |
---|
2934 | n/a | msegmentptr s = &m->seg; |
---|
2935 | n/a | maxfp = m->max_footprint; |
---|
2936 | n/a | fp = m->footprint; |
---|
2937 | n/a | used = fp - (m->topsize + TOP_FOOT_SIZE); |
---|
2938 | n/a | |
---|
2939 | n/a | while (s != 0) { |
---|
2940 | n/a | mchunkptr q = align_as_chunk(s->base); |
---|
2941 | n/a | while (segment_holds(s, q) && |
---|
2942 | n/a | q != m->top && q->head != FENCEPOST_HEAD) { |
---|
2943 | n/a | if (!cinuse(q)) |
---|
2944 | n/a | used -= chunksize(q); |
---|
2945 | n/a | q = next_chunk(q); |
---|
2946 | n/a | } |
---|
2947 | n/a | s = s->next; |
---|
2948 | n/a | } |
---|
2949 | n/a | } |
---|
2950 | n/a | |
---|
2951 | n/a | fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp)); |
---|
2952 | n/a | fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp)); |
---|
2953 | n/a | fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used)); |
---|
2954 | n/a | |
---|
2955 | n/a | POSTACTION(m); |
---|
2956 | n/a | } |
---|
2957 | n/a | } |
---|
2958 | n/a | |
---|
2959 | n/a | /* ----------------------- Operations on smallbins ----------------------- */ |
---|
2960 | n/a | |
---|
2961 | n/a | /* |
---|
2962 | n/a | Various forms of linking and unlinking are defined as macros. Even |
---|
2963 | n/a | the ones for trees, which are very long but have very short typical |
---|
2964 | n/a | paths. This is ugly but reduces reliance on inlining support of |
---|
2965 | n/a | compilers. |
---|
2966 | n/a | */ |
---|
2967 | n/a | |
---|
2968 | n/a | /* Link a free chunk into a smallbin */ |
---|
2969 | n/a | #define insert_small_chunk(M, P, S) {\ |
---|
2970 | n/a | bindex_t I = small_index(S);\ |
---|
2971 | n/a | mchunkptr B = smallbin_at(M, I);\ |
---|
2972 | n/a | mchunkptr F = B;\ |
---|
2973 | n/a | assert(S >= MIN_CHUNK_SIZE);\ |
---|
2974 | n/a | if (!smallmap_is_marked(M, I))\ |
---|
2975 | n/a | mark_smallmap(M, I);\ |
---|
2976 | n/a | else if (RTCHECK(ok_address(M, B->fd)))\ |
---|
2977 | n/a | F = B->fd;\ |
---|
2978 | n/a | else {\ |
---|
2979 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
2980 | n/a | }\ |
---|
2981 | n/a | B->fd = P;\ |
---|
2982 | n/a | F->bk = P;\ |
---|
2983 | n/a | P->fd = F;\ |
---|
2984 | n/a | P->bk = B;\ |
---|
2985 | n/a | } |
---|
2986 | n/a | |
---|
2987 | n/a | /* Unlink a chunk from a smallbin */ |
---|
2988 | n/a | #define unlink_small_chunk(M, P, S) {\ |
---|
2989 | n/a | mchunkptr F = P->fd;\ |
---|
2990 | n/a | mchunkptr B = P->bk;\ |
---|
2991 | n/a | bindex_t I = small_index(S);\ |
---|
2992 | n/a | assert(P != B);\ |
---|
2993 | n/a | assert(P != F);\ |
---|
2994 | n/a | assert(chunksize(P) == small_index2size(I));\ |
---|
2995 | n/a | if (F == B)\ |
---|
2996 | n/a | clear_smallmap(M, I);\ |
---|
2997 | n/a | else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\ |
---|
2998 | n/a | (B == smallbin_at(M,I) || ok_address(M, B)))) {\ |
---|
2999 | n/a | F->bk = B;\ |
---|
3000 | n/a | B->fd = F;\ |
---|
3001 | n/a | }\ |
---|
3002 | n/a | else {\ |
---|
3003 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3004 | n/a | }\ |
---|
3005 | n/a | } |
---|
3006 | n/a | |
---|
3007 | n/a | /* Unlink the first chunk from a smallbin */ |
---|
3008 | n/a | #define unlink_first_small_chunk(M, B, P, I) {\ |
---|
3009 | n/a | mchunkptr F = P->fd;\ |
---|
3010 | n/a | assert(P != B);\ |
---|
3011 | n/a | assert(P != F);\ |
---|
3012 | n/a | assert(chunksize(P) == small_index2size(I));\ |
---|
3013 | n/a | if (B == F)\ |
---|
3014 | n/a | clear_smallmap(M, I);\ |
---|
3015 | n/a | else if (RTCHECK(ok_address(M, F))) {\ |
---|
3016 | n/a | B->fd = F;\ |
---|
3017 | n/a | F->bk = B;\ |
---|
3018 | n/a | }\ |
---|
3019 | n/a | else {\ |
---|
3020 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3021 | n/a | }\ |
---|
3022 | n/a | } |
---|
3023 | n/a | |
---|
3024 | n/a | /* Replace dv node, binning the old one */ |
---|
3025 | n/a | /* Used only when dvsize is known to be small */ |
---|
3026 | n/a | #define replace_dv(M, P, S) {\ |
---|
3027 | n/a | size_t DVS = M->dvsize;\ |
---|
3028 | n/a | if (DVS != 0) {\ |
---|
3029 | n/a | mchunkptr DV = M->dv;\ |
---|
3030 | n/a | assert(is_small(DVS));\ |
---|
3031 | n/a | insert_small_chunk(M, DV, DVS);\ |
---|
3032 | n/a | }\ |
---|
3033 | n/a | M->dvsize = S;\ |
---|
3034 | n/a | M->dv = P;\ |
---|
3035 | n/a | } |
---|
3036 | n/a | |
---|
3037 | n/a | /* ------------------------- Operations on trees ------------------------- */ |
---|
3038 | n/a | |
---|
3039 | n/a | /* Insert chunk into tree */ |
---|
3040 | n/a | #define insert_large_chunk(M, X, S) {\ |
---|
3041 | n/a | tbinptr* H;\ |
---|
3042 | n/a | bindex_t I;\ |
---|
3043 | n/a | compute_tree_index(S, I);\ |
---|
3044 | n/a | H = treebin_at(M, I);\ |
---|
3045 | n/a | X->index = I;\ |
---|
3046 | n/a | X->child[0] = X->child[1] = 0;\ |
---|
3047 | n/a | if (!treemap_is_marked(M, I)) {\ |
---|
3048 | n/a | mark_treemap(M, I);\ |
---|
3049 | n/a | *H = X;\ |
---|
3050 | n/a | X->parent = (tchunkptr)H;\ |
---|
3051 | n/a | X->fd = X->bk = X;\ |
---|
3052 | n/a | }\ |
---|
3053 | n/a | else {\ |
---|
3054 | n/a | tchunkptr T = *H;\ |
---|
3055 | n/a | size_t K = S << leftshift_for_tree_index(I);\ |
---|
3056 | n/a | for (;;) {\ |
---|
3057 | n/a | if (chunksize(T) != S) {\ |
---|
3058 | n/a | tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\ |
---|
3059 | n/a | K <<= 1;\ |
---|
3060 | n/a | if (*C != 0)\ |
---|
3061 | n/a | T = *C;\ |
---|
3062 | n/a | else if (RTCHECK(ok_address(M, C))) {\ |
---|
3063 | n/a | *C = X;\ |
---|
3064 | n/a | X->parent = T;\ |
---|
3065 | n/a | X->fd = X->bk = X;\ |
---|
3066 | n/a | break;\ |
---|
3067 | n/a | }\ |
---|
3068 | n/a | else {\ |
---|
3069 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3070 | n/a | break;\ |
---|
3071 | n/a | }\ |
---|
3072 | n/a | }\ |
---|
3073 | n/a | else {\ |
---|
3074 | n/a | tchunkptr F = T->fd;\ |
---|
3075 | n/a | if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\ |
---|
3076 | n/a | T->fd = F->bk = X;\ |
---|
3077 | n/a | X->fd = F;\ |
---|
3078 | n/a | X->bk = T;\ |
---|
3079 | n/a | X->parent = 0;\ |
---|
3080 | n/a | break;\ |
---|
3081 | n/a | }\ |
---|
3082 | n/a | else {\ |
---|
3083 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3084 | n/a | break;\ |
---|
3085 | n/a | }\ |
---|
3086 | n/a | }\ |
---|
3087 | n/a | }\ |
---|
3088 | n/a | }\ |
---|
3089 | n/a | } |
---|
3090 | n/a | |
---|
3091 | n/a | /* |
---|
3092 | n/a | Unlink steps: |
---|
3093 | n/a | |
---|
3094 | n/a | 1. If x is a chained node, unlink it from its same-sized fd/bk links |
---|
3095 | n/a | and choose its bk node as its replacement. |
---|
3096 | n/a | 2. If x was the last node of its size, but not a leaf node, it must |
---|
3097 | n/a | be replaced with a leaf node (not merely one with an open left or |
---|
3098 | n/a | right), to make sure that lefts and rights of descendants |
---|
3099 | n/a | correspond properly to bit masks. We use the rightmost descendant |
---|
3100 | n/a | of x. We could use any other leaf, but this is easy to locate and |
---|
3101 | n/a | tends to counteract removal of leftmosts elsewhere, and so keeps |
---|
3102 | n/a | paths shorter than minimally guaranteed. This doesn't loop much |
---|
3103 | n/a | because on average a node in a tree is near the bottom. |
---|
3104 | n/a | 3. If x is the base of a chain (i.e., has parent links) relink |
---|
3105 | n/a | x's parent and children to x's replacement (or null if none). |
---|
3106 | n/a | */ |
---|
3107 | n/a | |
---|
3108 | n/a | #define unlink_large_chunk(M, X) {\ |
---|
3109 | n/a | tchunkptr XP = X->parent;\ |
---|
3110 | n/a | tchunkptr R;\ |
---|
3111 | n/a | if (X->bk != X) {\ |
---|
3112 | n/a | tchunkptr F = X->fd;\ |
---|
3113 | n/a | R = X->bk;\ |
---|
3114 | n/a | if (RTCHECK(ok_address(M, F))) {\ |
---|
3115 | n/a | F->bk = R;\ |
---|
3116 | n/a | R->fd = F;\ |
---|
3117 | n/a | }\ |
---|
3118 | n/a | else {\ |
---|
3119 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3120 | n/a | }\ |
---|
3121 | n/a | }\ |
---|
3122 | n/a | else {\ |
---|
3123 | n/a | tchunkptr* RP;\ |
---|
3124 | n/a | if (((R = *(RP = &(X->child[1]))) != 0) ||\ |
---|
3125 | n/a | ((R = *(RP = &(X->child[0]))) != 0)) {\ |
---|
3126 | n/a | tchunkptr* CP;\ |
---|
3127 | n/a | while ((*(CP = &(R->child[1])) != 0) ||\ |
---|
3128 | n/a | (*(CP = &(R->child[0])) != 0)) {\ |
---|
3129 | n/a | R = *(RP = CP);\ |
---|
3130 | n/a | }\ |
---|
3131 | n/a | if (RTCHECK(ok_address(M, RP)))\ |
---|
3132 | n/a | *RP = 0;\ |
---|
3133 | n/a | else {\ |
---|
3134 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3135 | n/a | }\ |
---|
3136 | n/a | }\ |
---|
3137 | n/a | }\ |
---|
3138 | n/a | if (XP != 0) {\ |
---|
3139 | n/a | tbinptr* H = treebin_at(M, X->index);\ |
---|
3140 | n/a | if (X == *H) {\ |
---|
3141 | n/a | if ((*H = R) == 0) \ |
---|
3142 | n/a | clear_treemap(M, X->index);\ |
---|
3143 | n/a | }\ |
---|
3144 | n/a | else if (RTCHECK(ok_address(M, XP))) {\ |
---|
3145 | n/a | if (XP->child[0] == X) \ |
---|
3146 | n/a | XP->child[0] = R;\ |
---|
3147 | n/a | else \ |
---|
3148 | n/a | XP->child[1] = R;\ |
---|
3149 | n/a | }\ |
---|
3150 | n/a | else\ |
---|
3151 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3152 | n/a | if (R != 0) {\ |
---|
3153 | n/a | if (RTCHECK(ok_address(M, R))) {\ |
---|
3154 | n/a | tchunkptr C0, C1;\ |
---|
3155 | n/a | R->parent = XP;\ |
---|
3156 | n/a | if ((C0 = X->child[0]) != 0) {\ |
---|
3157 | n/a | if (RTCHECK(ok_address(M, C0))) {\ |
---|
3158 | n/a | R->child[0] = C0;\ |
---|
3159 | n/a | C0->parent = R;\ |
---|
3160 | n/a | }\ |
---|
3161 | n/a | else\ |
---|
3162 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3163 | n/a | }\ |
---|
3164 | n/a | if ((C1 = X->child[1]) != 0) {\ |
---|
3165 | n/a | if (RTCHECK(ok_address(M, C1))) {\ |
---|
3166 | n/a | R->child[1] = C1;\ |
---|
3167 | n/a | C1->parent = R;\ |
---|
3168 | n/a | }\ |
---|
3169 | n/a | else\ |
---|
3170 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3171 | n/a | }\ |
---|
3172 | n/a | }\ |
---|
3173 | n/a | else\ |
---|
3174 | n/a | CORRUPTION_ERROR_ACTION(M);\ |
---|
3175 | n/a | }\ |
---|
3176 | n/a | }\ |
---|
3177 | n/a | } |
---|
3178 | n/a | |
---|
3179 | n/a | /* Relays to large vs small bin operations */ |
---|
3180 | n/a | |
---|
3181 | n/a | #define insert_chunk(M, P, S)\ |
---|
3182 | n/a | if (is_small(S)) insert_small_chunk(M, P, S)\ |
---|
3183 | n/a | else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); } |
---|
3184 | n/a | |
---|
3185 | n/a | #define unlink_chunk(M, P, S)\ |
---|
3186 | n/a | if (is_small(S)) unlink_small_chunk(M, P, S)\ |
---|
3187 | n/a | else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); } |
---|
3188 | n/a | |
---|
3189 | n/a | |
---|
3190 | n/a | /* Relays to internal calls to malloc/free from realloc, memalign etc */ |
---|
3191 | n/a | |
---|
3192 | n/a | #if ONLY_MSPACES |
---|
3193 | n/a | #define internal_malloc(m, b) mspace_malloc(m, b) |
---|
3194 | n/a | #define internal_free(m, mem) mspace_free(m,mem); |
---|
3195 | n/a | #else /* ONLY_MSPACES */ |
---|
3196 | n/a | #if MSPACES |
---|
3197 | n/a | #define internal_malloc(m, b)\ |
---|
3198 | n/a | (m == gm)? dlmalloc(b) : mspace_malloc(m, b) |
---|
3199 | n/a | #define internal_free(m, mem)\ |
---|
3200 | n/a | if (m == gm) dlfree(mem); else mspace_free(m,mem); |
---|
3201 | n/a | #else /* MSPACES */ |
---|
3202 | n/a | #define internal_malloc(m, b) dlmalloc(b) |
---|
3203 | n/a | #define internal_free(m, mem) dlfree(mem) |
---|
3204 | n/a | #endif /* MSPACES */ |
---|
3205 | n/a | #endif /* ONLY_MSPACES */ |
---|
3206 | n/a | |
---|
3207 | n/a | /* ----------------------- Direct-mmapping chunks ----------------------- */ |
---|
3208 | n/a | |
---|
3209 | n/a | /* |
---|
3210 | n/a | Directly mmapped chunks are set up with an offset to the start of |
---|
3211 | n/a | the mmapped region stored in the prev_foot field of the chunk. This |
---|
3212 | n/a | allows reconstruction of the required argument to MUNMAP when freed, |
---|
3213 | n/a | and also allows adjustment of the returned chunk to meet alignment |
---|
3214 | n/a | requirements (especially in memalign). There is also enough space |
---|
3215 | n/a | allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain |
---|
3216 | n/a | the PINUSE bit so frees can be checked. |
---|
3217 | n/a | */ |
---|
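/*
  Illustrative sketch (not part of the original source): how the offset
  stored in prev_foot lets a later free reconstruct the MUNMAP arguments for
  a directly mmapped chunk.  The region starts offset bytes before the chunk,
  and its total length is the chunk size plus that offset plus MMAP_FOOT_PAD,
  matching the bookkeeping done in mmap_alloc below and undone in dlfree.
*/
#if 0   /* example only */
static void example_unmap(mchunkptr p) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
  size_t length = chunksize(p) + offset + MMAP_FOOT_PAD;
  CALL_MUNMAP((char*)p - offset, length);
}
#endif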
3218 | n/a | |
---|
3219 | n/a | /* Malloc using mmap */ |
---|
3220 | n/a | static void* mmap_alloc(mstate m, size_t nb) { |
---|
3221 | n/a | size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
---|
3222 | n/a | if (mmsize > nb) { /* Check for wrap around 0 */ |
---|
3223 | n/a | char* mm = (char*)(DIRECT_MMAP(mmsize)); |
---|
3224 | n/a | if (mm != CMFAIL) { |
---|
3225 | n/a | size_t offset = align_offset(chunk2mem(mm)); |
---|
3226 | n/a | size_t psize = mmsize - offset - MMAP_FOOT_PAD; |
---|
3227 | n/a | mchunkptr p = (mchunkptr)(mm + offset); |
---|
3228 | n/a | p->prev_foot = offset | IS_MMAPPED_BIT; |
---|
3229 | n/a | (p)->head = (psize|CINUSE_BIT); |
---|
3230 | n/a | mark_inuse_foot(m, p, psize); |
---|
3231 | n/a | chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD; |
---|
3232 | n/a | chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0; |
---|
3233 | n/a | |
---|
3234 | n/a | if (mm < m->least_addr) |
---|
3235 | n/a | m->least_addr = mm; |
---|
3236 | n/a | if ((m->footprint += mmsize) > m->max_footprint) |
---|
3237 | n/a | m->max_footprint = m->footprint; |
---|
3238 | n/a | assert(is_aligned(chunk2mem(p))); |
---|
3239 | n/a | check_mmapped_chunk(m, p); |
---|
3240 | n/a | return chunk2mem(p); |
---|
3241 | n/a | } |
---|
3242 | n/a | } |
---|
3243 | n/a | return 0; |
---|
3244 | n/a | } |
---|
3245 | n/a | |
---|
3246 | n/a | /* Realloc using mmap */ |
---|
3247 | n/a | static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) { |
---|
3248 | n/a | size_t oldsize = chunksize(oldp); |
---|
3249 | n/a | if (is_small(nb)) /* Can't shrink mmap regions below small size */ |
---|
3250 | n/a | return 0; |
---|
3251 | n/a | /* Keep old chunk if big enough but not too big */ |
---|
3252 | n/a | if (oldsize >= nb + SIZE_T_SIZE && |
---|
3253 | n/a | (oldsize - nb) <= (mparams.granularity << 1)) |
---|
3254 | n/a | return oldp; |
---|
3255 | n/a | else { |
---|
3256 | n/a | size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT; |
---|
3257 | n/a | size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD; |
---|
3258 | n/a | size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES + |
---|
3259 | n/a | CHUNK_ALIGN_MASK); |
---|
3260 | n/a | char* cp = (char*)CALL_MREMAP((char*)oldp - offset, |
---|
3261 | n/a | oldmmsize, newmmsize, 1); |
---|
3262 | n/a | if (cp != CMFAIL) { |
---|
3263 | n/a | mchunkptr newp = (mchunkptr)(cp + offset); |
---|
3264 | n/a | size_t psize = newmmsize - offset - MMAP_FOOT_PAD; |
---|
3265 | n/a | newp->head = (psize|CINUSE_BIT); |
---|
3266 | n/a | mark_inuse_foot(m, newp, psize); |
---|
3267 | n/a | chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD; |
---|
3268 | n/a | chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0; |
---|
3269 | n/a | |
---|
3270 | n/a | if (cp < m->least_addr) |
---|
3271 | n/a | m->least_addr = cp; |
---|
3272 | n/a | if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint) |
---|
3273 | n/a | m->max_footprint = m->footprint; |
---|
3274 | n/a | check_mmapped_chunk(m, newp); |
---|
3275 | n/a | return newp; |
---|
3276 | n/a | } |
---|
3277 | n/a | } |
---|
3278 | n/a | return 0; |
---|
3279 | n/a | } |
---|
3280 | n/a | |
---|
3281 | n/a | /* -------------------------- mspace management -------------------------- */ |
---|
3282 | n/a | |
---|
3283 | n/a | /* Initialize top chunk and its size */ |
---|
3284 | n/a | static void init_top(mstate m, mchunkptr p, size_t psize) { |
---|
3285 | n/a | /* Ensure alignment */ |
---|
3286 | n/a | size_t offset = align_offset(chunk2mem(p)); |
---|
3287 | n/a | p = (mchunkptr)((char*)p + offset); |
---|
3288 | n/a | psize -= offset; |
---|
3289 | n/a | |
---|
3290 | n/a | m->top = p; |
---|
3291 | n/a | m->topsize = psize; |
---|
3292 | n/a | p->head = psize | PINUSE_BIT; |
---|
3293 | n/a | /* set size of fake trailing chunk holding overhead space only once */ |
---|
3294 | n/a | chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE; |
---|
3295 | n/a | m->trim_check = mparams.trim_threshold; /* reset on each update */ |
---|
3296 | n/a | } |
---|
3297 | n/a | |
---|
3298 | n/a | /* Initialize bins for a new mstate that is otherwise zeroed out */ |
---|
3299 | n/a | static void init_bins(mstate m) { |
---|
3300 | n/a | /* Establish circular links for smallbins */ |
---|
3301 | n/a | bindex_t i; |
---|
3302 | n/a | for (i = 0; i < NSMALLBINS; ++i) { |
---|
3303 | n/a | sbinptr bin = smallbin_at(m,i); |
---|
3304 | n/a | bin->fd = bin->bk = bin; |
---|
3305 | n/a | } |
---|
3306 | n/a | } |
---|
3307 | n/a | |
---|
3308 | n/a | #if PROCEED_ON_ERROR |
---|
3309 | n/a | |
---|
3310 | n/a | /* default corruption action */ |
---|
3311 | n/a | static void reset_on_error(mstate m) { |
---|
3312 | n/a | int i; |
---|
3313 | n/a | ++malloc_corruption_error_count; |
---|
3314 | n/a | /* Reinitialize fields to forget about all memory */ |
---|
3315 | n/a | m->smallbins = m->treebins = 0; |
---|
3316 | n/a | m->dvsize = m->topsize = 0; |
---|
3317 | n/a | m->seg.base = 0; |
---|
3318 | n/a | m->seg.size = 0; |
---|
3319 | n/a | m->seg.next = 0; |
---|
3320 | n/a | m->top = m->dv = 0; |
---|
3321 | n/a | for (i = 0; i < NTREEBINS; ++i) |
---|
3322 | n/a | *treebin_at(m, i) = 0; |
---|
3323 | n/a | init_bins(m); |
---|
3324 | n/a | } |
---|
3325 | n/a | #endif /* PROCEED_ON_ERROR */ |
---|
3326 | n/a | |
---|
3327 | n/a | /* Allocate chunk and prepend remainder with chunk in successor base. */ |
---|
3328 | n/a | static void* prepend_alloc(mstate m, char* newbase, char* oldbase, |
---|
3329 | n/a | size_t nb) { |
---|
3330 | n/a | mchunkptr p = align_as_chunk(newbase); |
---|
3331 | n/a | mchunkptr oldfirst = align_as_chunk(oldbase); |
---|
3332 | n/a | size_t psize = (char*)oldfirst - (char*)p; |
---|
3333 | n/a | mchunkptr q = chunk_plus_offset(p, nb); |
---|
3334 | n/a | size_t qsize = psize - nb; |
---|
3335 | n/a | set_size_and_pinuse_of_inuse_chunk(m, p, nb); |
---|
3336 | n/a | |
---|
3337 | n/a | assert((char*)oldfirst > (char*)q); |
---|
3338 | n/a | assert(pinuse(oldfirst)); |
---|
3339 | n/a | assert(qsize >= MIN_CHUNK_SIZE); |
---|
3340 | n/a | |
---|
3341 | n/a | /* consolidate remainder with first chunk of old base */ |
---|
3342 | n/a | if (oldfirst == m->top) { |
---|
3343 | n/a | size_t tsize = m->topsize += qsize; |
---|
3344 | n/a | m->top = q; |
---|
3345 | n/a | q->head = tsize | PINUSE_BIT; |
---|
3346 | n/a | check_top_chunk(m, q); |
---|
3347 | n/a | } |
---|
3348 | n/a | else if (oldfirst == m->dv) { |
---|
3349 | n/a | size_t dsize = m->dvsize += qsize; |
---|
3350 | n/a | m->dv = q; |
---|
3351 | n/a | set_size_and_pinuse_of_free_chunk(q, dsize); |
---|
3352 | n/a | } |
---|
3353 | n/a | else { |
---|
3354 | n/a | if (!cinuse(oldfirst)) { |
---|
3355 | n/a | size_t nsize = chunksize(oldfirst); |
---|
3356 | n/a | unlink_chunk(m, oldfirst, nsize); |
---|
3357 | n/a | oldfirst = chunk_plus_offset(oldfirst, nsize); |
---|
3358 | n/a | qsize += nsize; |
---|
3359 | n/a | } |
---|
3360 | n/a | set_free_with_pinuse(q, qsize, oldfirst); |
---|
3361 | n/a | insert_chunk(m, q, qsize); |
---|
3362 | n/a | check_free_chunk(m, q); |
---|
3363 | n/a | } |
---|
3364 | n/a | |
---|
3365 | n/a | check_malloced_chunk(m, chunk2mem(p), nb); |
---|
3366 | n/a | return chunk2mem(p); |
---|
3367 | n/a | } |
---|
3368 | n/a | |
---|
3369 | n/a | |
---|
3370 | n/a | /* Add a segment to hold a new noncontiguous region */ |
---|
3371 | n/a | static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) { |
---|
3372 | n/a | /* Determine locations and sizes of segment, fenceposts, old top */ |
---|
3373 | n/a | char* old_top = (char*)m->top; |
---|
3374 | n/a | msegmentptr oldsp = segment_holding(m, old_top); |
---|
3375 | n/a | char* old_end = oldsp->base + oldsp->size; |
---|
3376 | n/a | size_t ssize = pad_request(sizeof(struct malloc_segment)); |
---|
3377 | n/a | char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
---|
3378 | n/a | size_t offset = align_offset(chunk2mem(rawsp)); |
---|
3379 | n/a | char* asp = rawsp + offset; |
---|
3380 | n/a | char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp; |
---|
3381 | n/a | mchunkptr sp = (mchunkptr)csp; |
---|
3382 | n/a | msegmentptr ss = (msegmentptr)(chunk2mem(sp)); |
---|
3383 | n/a | mchunkptr tnext = chunk_plus_offset(sp, ssize); |
---|
3384 | n/a | mchunkptr p = tnext; |
---|
3385 | n/a | int nfences = 0; |
---|
3386 | n/a | |
---|
3387 | n/a | /* reset top to new space */ |
---|
3388 | n/a | init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); |
---|
3389 | n/a | |
---|
3390 | n/a | /* Set up segment record */ |
---|
3391 | n/a | assert(is_aligned(ss)); |
---|
3392 | n/a | set_size_and_pinuse_of_inuse_chunk(m, sp, ssize); |
---|
3393 | n/a | *ss = m->seg; /* Push current record */ |
---|
3394 | n/a | m->seg.base = tbase; |
---|
3395 | n/a | m->seg.size = tsize; |
---|
3396 | n/a | (void)set_segment_flags(&m->seg, mmapped); |
---|
3397 | n/a | m->seg.next = ss; |
---|
3398 | n/a | |
---|
3399 | n/a | /* Insert trailing fenceposts */ |
---|
3400 | n/a | for (;;) { |
---|
3401 | n/a | mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE); |
---|
3402 | n/a | p->head = FENCEPOST_HEAD; |
---|
3403 | n/a | ++nfences; |
---|
3404 | n/a | if ((char*)(&(nextp->head)) < old_end) |
---|
3405 | n/a | p = nextp; |
---|
3406 | n/a | else |
---|
3407 | n/a | break; |
---|
3408 | n/a | } |
---|
3409 | n/a | assert(nfences >= 2); |
---|
3410 | n/a | |
---|
3411 | n/a | /* Insert the rest of old top into a bin as an ordinary free chunk */ |
---|
3412 | n/a | if (csp != old_top) { |
---|
3413 | n/a | mchunkptr q = (mchunkptr)old_top; |
---|
3414 | n/a | size_t psize = csp - old_top; |
---|
3415 | n/a | mchunkptr tn = chunk_plus_offset(q, psize); |
---|
3416 | n/a | set_free_with_pinuse(q, psize, tn); |
---|
3417 | n/a | insert_chunk(m, q, psize); |
---|
3418 | n/a | } |
---|
3419 | n/a | |
---|
3420 | n/a | check_top_chunk(m, m->top); |
---|
3421 | n/a | } |
---|
3422 | n/a | |
---|
3423 | n/a | /* -------------------------- System allocation -------------------------- */ |
---|
3424 | n/a | |
---|
3425 | n/a | /* Get memory from system using MORECORE or MMAP */ |
---|
3426 | n/a | static void* sys_alloc(mstate m, size_t nb) { |
---|
3427 | n/a | char* tbase = CMFAIL; |
---|
3428 | n/a | size_t tsize = 0; |
---|
3429 | n/a | flag_t mmap_flag = 0; |
---|
3430 | n/a | |
---|
3431 | n/a | init_mparams(); |
---|
3432 | n/a | |
---|
3433 | n/a | /* Directly map large chunks */ |
---|
3434 | n/a | if (use_mmap(m) && nb >= mparams.mmap_threshold) { |
---|
3435 | n/a | void* mem = mmap_alloc(m, nb); |
---|
3436 | n/a | if (mem != 0) |
---|
3437 | n/a | return mem; |
---|
3438 | n/a | } |
---|
3439 | n/a | |
---|
3440 | n/a | /* |
---|
3441 | n/a | Try getting memory in any of three ways (in most-preferred to |
---|
3442 | n/a | least-preferred order): |
---|
3443 | n/a | 1. A call to MORECORE that can normally contiguously extend memory. |
---|
3444 | n/a | (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or |
---|
3445 | n/a | main space is mmapped or a previous contiguous call failed) |
---|
3446 | n/a | 2. A call to MMAP new space (disabled if not HAVE_MMAP). |
---|
3447 | n/a | Note that under the default settings, if MORECORE is unable to |
---|
3448 | n/a | fulfill a request, and HAVE_MMAP is true, then mmap is |
---|
3449 | n/a | used as a noncontiguous system allocator. This is a useful backup |
---|
3450 | n/a | strategy for systems with holes in address spaces -- in this case |
---|
3451 | n/a | sbrk cannot contiguously expand the heap, but mmap may be able to |
---|
3452 | n/a | find space. |
---|
3453 | n/a | 3. A call to MORECORE that cannot usually contiguously extend memory. |
---|
3454 | n/a | (disabled if not HAVE_MORECORE) |
---|
3455 | n/a | */ |
---|
3456 | n/a | |
---|
3457 | n/a | if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) { |
---|
3458 | n/a | char* br = CMFAIL; |
---|
3459 | n/a | msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top); |
---|
3460 | n/a | size_t asize = 0; |
---|
3461 | n/a | ACQUIRE_MORECORE_LOCK(); |
---|
3462 | n/a | |
---|
3463 | n/a | if (ss == 0) { /* First time through or recovery */ |
---|
3464 | n/a | char* base = (char*)CALL_MORECORE(0); |
---|
3465 | n/a | if (base != CMFAIL) { |
---|
3466 | n/a | asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE); |
---|
3467 | n/a | /* Adjust to end on a page boundary */ |
---|
3468 | n/a | if (!is_page_aligned(base)) |
---|
3469 | n/a | asize += (page_align((size_t)base) - (size_t)base); |
---|
3470 | n/a | /* Can't call MORECORE if size is negative when treated as signed */ |
---|
3471 | n/a | if (asize < HALF_MAX_SIZE_T && |
---|
3472 | n/a | (br = (char*)(CALL_MORECORE(asize))) == base) { |
---|
3473 | n/a | tbase = base; |
---|
3474 | n/a | tsize = asize; |
---|
3475 | n/a | } |
---|
3476 | n/a | } |
---|
3477 | n/a | } |
---|
3478 | n/a | else { |
---|
3479 | n/a | /* Subtract out existing available top space from MORECORE request. */ |
---|
3480 | n/a | asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE); |
---|
3481 | n/a | /* Use mem here only if it did contiguously extend old space */ |
---|
3482 | n/a | if (asize < HALF_MAX_SIZE_T && |
---|
3483 | n/a | (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) { |
---|
3484 | n/a | tbase = br; |
---|
3485 | n/a | tsize = asize; |
---|
3486 | n/a | } |
---|
3487 | n/a | } |
---|
3488 | n/a | |
---|
3489 | n/a | if (tbase == CMFAIL) { /* Cope with partial failure */ |
---|
3490 | n/a | if (br != CMFAIL) { /* Try to use/extend the space we did get */ |
---|
3491 | n/a | if (asize < HALF_MAX_SIZE_T && |
---|
3492 | n/a | asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) { |
---|
3493 | n/a | size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize); |
---|
3494 | n/a | if (esize < HALF_MAX_SIZE_T) { |
---|
3495 | n/a | char* end = (char*)CALL_MORECORE(esize); |
---|
3496 | n/a | if (end != CMFAIL) |
---|
3497 | n/a | asize += esize; |
---|
3498 | n/a | else { /* Can't use; try to release */ |
---|
3499 | n/a | (void)CALL_MORECORE(-asize); |
---|
3500 | n/a | br = CMFAIL; |
---|
3501 | n/a | } |
---|
3502 | n/a | } |
---|
3503 | n/a | } |
---|
3504 | n/a | } |
---|
3505 | n/a | if (br != CMFAIL) { /* Use the space we did get */ |
---|
3506 | n/a | tbase = br; |
---|
3507 | n/a | tsize = asize; |
---|
3508 | n/a | } |
---|
3509 | n/a | else |
---|
3510 | n/a | disable_contiguous(m); /* Don't try contiguous path in the future */ |
---|
3511 | n/a | } |
---|
3512 | n/a | |
---|
3513 | n/a | RELEASE_MORECORE_LOCK(); |
---|
3514 | n/a | } |
---|
3515 | n/a | |
---|
3516 | n/a | if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */ |
---|
3517 | n/a | size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE; |
---|
3518 | n/a | size_t rsize = granularity_align(req); |
---|
3519 | n/a | if (rsize > nb) { /* Fail if wraps around zero */ |
---|
3520 | n/a | char* mp = (char*)(CALL_MMAP(rsize)); |
---|
3521 | n/a | if (mp != CMFAIL) { |
---|
3522 | n/a | tbase = mp; |
---|
3523 | n/a | tsize = rsize; |
---|
3524 | n/a | mmap_flag = IS_MMAPPED_BIT; |
---|
3525 | n/a | } |
---|
3526 | n/a | } |
---|
3527 | n/a | } |
---|
3528 | n/a | |
---|
3529 | n/a | if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */ |
---|
3530 | n/a | size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE); |
---|
3531 | n/a | if (asize < HALF_MAX_SIZE_T) { |
---|
3532 | n/a | char* br = CMFAIL; |
---|
3533 | n/a | char* end = CMFAIL; |
---|
3534 | n/a | ACQUIRE_MORECORE_LOCK(); |
---|
3535 | n/a | br = (char*)(CALL_MORECORE(asize)); |
---|
3536 | n/a | end = (char*)(CALL_MORECORE(0)); |
---|
3537 | n/a | RELEASE_MORECORE_LOCK(); |
---|
3538 | n/a | if (br != CMFAIL && end != CMFAIL && br < end) { |
---|
3539 | n/a | size_t ssize = end - br; |
---|
3540 | n/a | if (ssize > nb + TOP_FOOT_SIZE) { |
---|
3541 | n/a | tbase = br; |
---|
3542 | n/a | tsize = ssize; |
---|
3543 | n/a | } |
---|
3544 | n/a | } |
---|
3545 | n/a | } |
---|
3546 | n/a | } |
---|
3547 | n/a | |
---|
3548 | n/a | if (tbase != CMFAIL) { |
---|
3549 | n/a | |
---|
3550 | n/a | if ((m->footprint += tsize) > m->max_footprint) |
---|
3551 | n/a | m->max_footprint = m->footprint; |
---|
3552 | n/a | |
---|
3553 | n/a | if (!is_initialized(m)) { /* first-time initialization */ |
---|
3554 | n/a | m->seg.base = m->least_addr = tbase; |
---|
3555 | n/a | m->seg.size = tsize; |
---|
3556 | n/a | (void)set_segment_flags(&m->seg, mmap_flag); |
---|
3557 | n/a | m->magic = mparams.magic; |
---|
3558 | n/a | init_bins(m); |
---|
3559 | n/a | if (is_global(m)) |
---|
3560 | n/a | init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); |
---|
3561 | n/a | else { |
---|
3562 | n/a | /* Offset top by embedded malloc_state */ |
---|
3563 | n/a | mchunkptr mn = next_chunk(mem2chunk(m)); |
---|
3564 | n/a | init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE); |
---|
3565 | n/a | } |
---|
3566 | n/a | } |
---|
3567 | n/a | |
---|
3568 | n/a | else { |
---|
3569 | n/a | /* Try to merge with an existing segment */ |
---|
3570 | n/a | msegmentptr sp = &m->seg; |
---|
3571 | n/a | while (sp != 0 && tbase != sp->base + sp->size) |
---|
3572 | n/a | sp = sp->next; |
---|
3573 | n/a | if (sp != 0 && |
---|
3574 | n/a | !is_extern_segment(sp) && |
---|
3575 | n/a | check_segment_merge(sp, tbase, tsize) && |
---|
3576 | n/a | (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag && |
---|
3577 | n/a | segment_holds(sp, m->top)) { /* append */ |
---|
3578 | n/a | sp->size += tsize; |
---|
3579 | n/a | init_top(m, m->top, m->topsize + tsize); |
---|
3580 | n/a | } |
---|
3581 | n/a | else { |
---|
3582 | n/a | if (tbase < m->least_addr) |
---|
3583 | n/a | m->least_addr = tbase; |
---|
3584 | n/a | sp = &m->seg; |
---|
3585 | n/a | while (sp != 0 && sp->base != tbase + tsize) |
---|
3586 | n/a | sp = sp->next; |
---|
3587 | n/a | if (sp != 0 && |
---|
3588 | n/a | !is_extern_segment(sp) && |
---|
3589 | n/a | check_segment_merge(sp, tbase, tsize) && |
---|
3590 | n/a | (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) { |
---|
3591 | n/a | char* oldbase = sp->base; |
---|
3592 | n/a | sp->base = tbase; |
---|
3593 | n/a | sp->size += tsize; |
---|
3594 | n/a | return prepend_alloc(m, tbase, oldbase, nb); |
---|
3595 | n/a | } |
---|
3596 | n/a | else |
---|
3597 | n/a | add_segment(m, tbase, tsize, mmap_flag); |
---|
3598 | n/a | } |
---|
3599 | n/a | } |
---|
3600 | n/a | |
---|
3601 | n/a | if (nb < m->topsize) { /* Allocate from new or extended top space */ |
---|
3602 | n/a | size_t rsize = m->topsize -= nb; |
---|
3603 | n/a | mchunkptr p = m->top; |
---|
3604 | n/a | mchunkptr r = m->top = chunk_plus_offset(p, nb); |
---|
3605 | n/a | r->head = rsize | PINUSE_BIT; |
---|
3606 | n/a | set_size_and_pinuse_of_inuse_chunk(m, p, nb); |
---|
3607 | n/a | check_top_chunk(m, m->top); |
---|
3608 | n/a | check_malloced_chunk(m, chunk2mem(p), nb); |
---|
3609 | n/a | return chunk2mem(p); |
---|
3610 | n/a | } |
---|
3611 | n/a | } |
---|
3612 | n/a | |
---|
3613 | n/a | MALLOC_FAILURE_ACTION; |
---|
3614 | n/a | return 0; |
---|
3615 | n/a | } |
---|
3616 | n/a | |
---|
3617 | n/a | /* ----------------------- system deallocation -------------------------- */ |
---|
3618 | n/a | |
---|
3619 | n/a | /* Unmap and unlink any mmapped segments that don't contain used chunks */ |
---|
3620 | n/a | static size_t release_unused_segments(mstate m) { |
---|
3621 | n/a | size_t released = 0; |
---|
3622 | n/a | msegmentptr pred = &m->seg; |
---|
3623 | n/a | msegmentptr sp = pred->next; |
---|
3624 | n/a | while (sp != 0) { |
---|
3625 | n/a | char* base = sp->base; |
---|
3626 | n/a | size_t size = sp->size; |
---|
3627 | n/a | msegmentptr next = sp->next; |
---|
3628 | n/a | if (is_mmapped_segment(sp) && !is_extern_segment(sp)) { |
---|
3629 | n/a | mchunkptr p = align_as_chunk(base); |
---|
3630 | n/a | size_t psize = chunksize(p); |
---|
3631 | n/a | /* Can unmap if first chunk holds entire segment and not pinned */ |
---|
3632 | n/a | if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) { |
---|
3633 | n/a | tchunkptr tp = (tchunkptr)p; |
---|
3634 | n/a | assert(segment_holds(sp, (char*)sp)); |
---|
3635 | n/a | if (p == m->dv) { |
---|
3636 | n/a | m->dv = 0; |
---|
3637 | n/a | m->dvsize = 0; |
---|
3638 | n/a | } |
---|
3639 | n/a | else { |
---|
3640 | n/a | unlink_large_chunk(m, tp); |
---|
3641 | n/a | } |
---|
3642 | n/a | if (CALL_MUNMAP(base, size) == 0) { |
---|
3643 | n/a | released += size; |
---|
3644 | n/a | m->footprint -= size; |
---|
3645 | n/a | /* unlink obsoleted record */ |
---|
3646 | n/a | sp = pred; |
---|
3647 | n/a | sp->next = next; |
---|
3648 | n/a | } |
---|
3649 | n/a | else { /* back out if cannot unmap */ |
---|
3650 | n/a | insert_large_chunk(m, tp, psize); |
---|
3651 | n/a | } |
---|
3652 | n/a | } |
---|
3653 | n/a | } |
---|
3654 | n/a | pred = sp; |
---|
3655 | n/a | sp = next; |
---|
3656 | n/a | } |
---|
3657 | n/a | return released; |
---|
3658 | n/a | } |
---|
3659 | n/a | |
---|
3660 | n/a | static int sys_trim(mstate m, size_t pad) { |
---|
3661 | n/a | size_t released = 0; |
---|
3662 | n/a | if (pad < MAX_REQUEST && is_initialized(m)) { |
---|
3663 | n/a | pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */ |
---|
3664 | n/a | |
---|
3665 | n/a | if (m->topsize > pad) { |
---|
3666 | n/a | /* Shrink top space in granularity-size units, keeping at least one */ |
---|
3667 | n/a | size_t unit = mparams.granularity; |
---|
3668 | n/a | size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit - |
---|
3669 | n/a | SIZE_T_ONE) * unit; |
---|
3670 | n/a | msegmentptr sp = segment_holding(m, (char*)m->top); |
---|
3671 | n/a | |
---|
3672 | n/a | if (!is_extern_segment(sp)) { |
---|
3673 | n/a | if (is_mmapped_segment(sp)) { |
---|
3674 | n/a | if (HAVE_MMAP && |
---|
3675 | n/a | sp->size >= extra && |
---|
3676 | n/a | !has_segment_link(m, sp)) { /* can't shrink if pinned */ |
---|
3677 | n/a | size_t newsize = sp->size - extra; |
---|
3678 | n/a | /* Prefer mremap, fall back to munmap */ |
---|
3679 | n/a | if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) || |
---|
3680 | n/a | (CALL_MUNMAP(sp->base + newsize, extra) == 0)) { |
---|
3681 | n/a | released = extra; |
---|
3682 | n/a | } |
---|
3683 | n/a | } |
---|
3684 | n/a | } |
---|
3685 | n/a | else if (HAVE_MORECORE) { |
---|
3686 | n/a | if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */ |
---|
3687 | n/a | extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit; |
---|
3688 | n/a | ACQUIRE_MORECORE_LOCK(); |
---|
3689 | n/a | { |
---|
3690 | n/a | /* Make sure end of memory is where we last set it. */ |
---|
3691 | n/a | char* old_br = (char*)(CALL_MORECORE(0)); |
---|
3692 | n/a | if (old_br == sp->base + sp->size) { |
---|
3693 | n/a | char* rel_br = (char*)(CALL_MORECORE(-extra)); |
---|
3694 | n/a | char* new_br = (char*)(CALL_MORECORE(0)); |
---|
3695 | n/a | if (rel_br != CMFAIL && new_br < old_br) |
---|
3696 | n/a | released = old_br - new_br; |
---|
3697 | n/a | } |
---|
3698 | n/a | } |
---|
3699 | n/a | RELEASE_MORECORE_LOCK(); |
---|
3700 | n/a | } |
---|
3701 | n/a | } |
---|
3702 | n/a | |
---|
3703 | n/a | if (released != 0) { |
---|
3704 | n/a | sp->size -= released; |
---|
3705 | n/a | m->footprint -= released; |
---|
3706 | n/a | init_top(m, m->top, m->topsize - released); |
---|
3707 | n/a | check_top_chunk(m, m->top); |
---|
3708 | n/a | } |
---|
3709 | n/a | } |
---|
3710 | n/a | |
---|
3711 | n/a | /* Unmap any unused mmapped segments */ |
---|
3712 | n/a | if (HAVE_MMAP) |
---|
3713 | n/a | released += release_unused_segments(m); |
---|
3714 | n/a | |
---|
3715 | n/a | /* On failure, disable autotrim to avoid repeated failed future calls */ |
---|
3716 | n/a | if (released == 0) |
---|
3717 | n/a | m->trim_check = MAX_SIZE_T; |
---|
3718 | n/a | } |
---|
3719 | n/a | |
---|
3720 | n/a | return (released != 0)? 1 : 0; |
---|
3721 | n/a | } |
---|
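/*
  Worked example for the "extra" computation in sys_trim above (illustrative
  numbers, not from the original source): with m->topsize - pad == 307200
  bytes and a granularity of 65536, the expression
  ((307200 + 65535) / 65536 - 1) * 65536 == 4 * 65536 == 262144, so 262144
  bytes are released and 45056 bytes of top are kept -- i.e. the top chunk
  always retains at least part of one granularity unit.
*/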
3722 | n/a | |
---|
3723 | n/a | /* ---------------------------- malloc support --------------------------- */ |
---|
3724 | n/a | |
---|
3725 | n/a | /* allocate a large request from the best fitting chunk in a treebin */ |
---|
3726 | n/a | static void* tmalloc_large(mstate m, size_t nb) { |
---|
3727 | n/a | tchunkptr v = 0; |
---|
3728 | n/a | size_t rsize = -nb; /* Unsigned negation */ |
---|
3729 | n/a | tchunkptr t; |
---|
3730 | n/a | bindex_t idx; |
---|
3731 | n/a | compute_tree_index(nb, idx); |
---|
3732 | n/a | |
---|
3733 | n/a | if ((t = *treebin_at(m, idx)) != 0) { |
---|
3734 | n/a | /* Traverse tree for this bin looking for node with size == nb */ |
---|
3735 | n/a | size_t sizebits = nb << leftshift_for_tree_index(idx); |
---|
3736 | n/a | tchunkptr rst = 0; /* The deepest untaken right subtree */ |
---|
3737 | n/a | for (;;) { |
---|
3738 | n/a | tchunkptr rt; |
---|
3739 | n/a | size_t trem = chunksize(t) - nb; |
---|
3740 | n/a | if (trem < rsize) { |
---|
3741 | n/a | v = t; |
---|
3742 | n/a | if ((rsize = trem) == 0) |
---|
3743 | n/a | break; |
---|
3744 | n/a | } |
---|
3745 | n/a | rt = t->child[1]; |
---|
3746 | n/a | t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; |
---|
3747 | n/a | if (rt != 0 && rt != t) |
---|
3748 | n/a | rst = rt; |
---|
3749 | n/a | if (t == 0) { |
---|
3750 | n/a | t = rst; /* set t to least subtree holding sizes > nb */ |
---|
3751 | n/a | break; |
---|
3752 | n/a | } |
---|
3753 | n/a | sizebits <<= 1; |
---|
3754 | n/a | } |
---|
3755 | n/a | } |
---|
3756 | n/a | |
---|
3757 | n/a | if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */ |
---|
3758 | n/a | binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap; |
---|
3759 | n/a | if (leftbits != 0) { |
---|
3760 | n/a | bindex_t i; |
---|
3761 | n/a | binmap_t leastbit = least_bit(leftbits); |
---|
3762 | n/a | compute_bit2idx(leastbit, i); |
---|
3763 | n/a | t = *treebin_at(m, i); |
---|
3764 | n/a | } |
---|
3765 | n/a | } |
---|
3766 | n/a | |
---|
3767 | n/a | while (t != 0) { /* find smallest of tree or subtree */ |
---|
3768 | n/a | size_t trem = chunksize(t) - nb; |
---|
3769 | n/a | if (trem < rsize) { |
---|
3770 | n/a | rsize = trem; |
---|
3771 | n/a | v = t; |
---|
3772 | n/a | } |
---|
3773 | n/a | t = leftmost_child(t); |
---|
3774 | n/a | } |
---|
3775 | n/a | |
---|
3776 | n/a | /* If dv is a better fit, return 0 so malloc will use it */ |
---|
3777 | n/a | if (v != 0 && rsize < (size_t)(m->dvsize - nb)) { |
---|
3778 | n/a | if (RTCHECK(ok_address(m, v))) { /* split */ |
---|
3779 | n/a | mchunkptr r = chunk_plus_offset(v, nb); |
---|
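/*
  Illustrative numbers (not from the original source) for the keep-old test
  in mmap_resize above: with a 262144-byte old chunk, a new request of
  nb == 200000 keeps the chunk in place, because 262144 >= 200000 + SIZE_T_SIZE
  and the slack of 62144 bytes is no more than twice a 65536-byte granularity
  (131072).  Only when the chunk is too small, or wastes more than that, is
  MREMAP attempted.
*/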
3780 | n/a | assert(chunksize(v) == rsize + nb); |
---|
3781 | n/a | if (RTCHECK(ok_next(v, r))) { |
---|
3782 | n/a | unlink_large_chunk(m, v); |
---|
3783 | n/a | if (rsize < MIN_CHUNK_SIZE) |
---|
3784 | n/a | set_inuse_and_pinuse(m, v, (rsize + nb)); |
---|
3785 | n/a | else { |
---|
3786 | n/a | set_size_and_pinuse_of_inuse_chunk(m, v, nb); |
---|
3787 | n/a | set_size_and_pinuse_of_free_chunk(r, rsize); |
---|
3788 | n/a | insert_chunk(m, r, rsize); |
---|
3789 | n/a | } |
---|
3790 | n/a | return chunk2mem(v); |
---|
3791 | n/a | } |
---|
3792 | n/a | } |
---|
3793 | n/a | CORRUPTION_ERROR_ACTION(m); |
---|
3794 | n/a | } |
---|
3795 | n/a | return 0; |
---|
3796 | n/a | } |
---|
3797 | n/a | |
---|
3798 | n/a | /* allocate a small request from the best fitting chunk in a treebin */ |
---|
3799 | n/a | static void* tmalloc_small(mstate m, size_t nb) { |
---|
3800 | n/a | tchunkptr t, v; |
---|
3801 | n/a | size_t rsize; |
---|
3802 | n/a | bindex_t i; |
---|
3803 | n/a | binmap_t leastbit = least_bit(m->treemap); |
---|
3804 | n/a | compute_bit2idx(leastbit, i); |
---|
3805 | n/a | |
---|
3806 | n/a | v = t = *treebin_at(m, i); |
---|
3807 | n/a | rsize = chunksize(t) - nb; |
---|
3808 | n/a | |
---|
3809 | n/a | while ((t = leftmost_child(t)) != 0) { |
---|
3810 | n/a | size_t trem = chunksize(t) - nb; |
---|
3811 | n/a | if (trem < rsize) { |
---|
3812 | n/a | rsize = trem; |
---|
3813 | n/a | v = t; |
---|
3814 | n/a | } |
---|
3815 | n/a | } |
---|
3816 | n/a | |
---|
3817 | n/a | if (RTCHECK(ok_address(m, v))) { |
---|
3818 | n/a | mchunkptr r = chunk_plus_offset(v, nb); |
---|
3819 | n/a | assert(chunksize(v) == rsize + nb); |
---|
3820 | n/a | if (RTCHECK(ok_next(v, r))) { |
---|
3821 | n/a | unlink_large_chunk(m, v); |
---|
3822 | n/a | if (rsize < MIN_CHUNK_SIZE) |
---|
3823 | n/a | set_inuse_and_pinuse(m, v, (rsize + nb)); |
---|
3824 | n/a | else { |
---|
3825 | n/a | set_size_and_pinuse_of_inuse_chunk(m, v, nb); |
---|
3826 | n/a | set_size_and_pinuse_of_free_chunk(r, rsize); |
---|
3827 | n/a | replace_dv(m, r, rsize); |
---|
3828 | n/a | } |
---|
3829 | n/a | return chunk2mem(v); |
---|
3830 | n/a | } |
---|
3831 | n/a | } |
---|
3832 | n/a | |
---|
3833 | n/a | CORRUPTION_ERROR_ACTION(m); |
---|
3834 | n/a | return 0; |
---|
3835 | n/a | } |
---|
3836 | n/a | |
---|
3837 | n/a | /* --------------------------- realloc support --------------------------- */ |
---|
3838 | n/a | |
---|
3839 | n/a | static void* internal_realloc(mstate m, void* oldmem, size_t bytes) { |
---|
3840 | n/a | if (bytes >= MAX_REQUEST) { |
---|
3841 | n/a | MALLOC_FAILURE_ACTION; |
---|
3842 | n/a | return 0; |
---|
3843 | n/a | } |
---|
3844 | n/a | if (!PREACTION(m)) { |
---|
3845 | n/a | mchunkptr oldp = mem2chunk(oldmem); |
---|
3846 | n/a | size_t oldsize = chunksize(oldp); |
---|
3847 | n/a | mchunkptr next = chunk_plus_offset(oldp, oldsize); |
---|
3848 | n/a | mchunkptr newp = 0; |
---|
3849 | n/a | void* extra = 0; |
---|
3850 | n/a | |
---|
3851 | n/a | /* Try to either shrink or extend into top. Else malloc-copy-free */ |
---|
3852 | n/a | |
---|
3853 | n/a | if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) && |
---|
3854 | n/a | ok_next(oldp, next) && ok_pinuse(next))) { |
---|
3855 | n/a | size_t nb = request2size(bytes); |
---|
3856 | n/a | if (is_mmapped(oldp)) |
---|
3857 | n/a | newp = mmap_resize(m, oldp, nb); |
---|
3858 | n/a | else if (oldsize >= nb) { /* already big enough */ |
---|
3859 | n/a | size_t rsize = oldsize - nb; |
---|
3860 | n/a | newp = oldp; |
---|
3861 | n/a | if (rsize >= MIN_CHUNK_SIZE) { |
---|
3862 | n/a | mchunkptr remainder = chunk_plus_offset(newp, nb); |
---|
3863 | n/a | set_inuse(m, newp, nb); |
---|
3864 | n/a | set_inuse(m, remainder, rsize); |
---|
3865 | n/a | extra = chunk2mem(remainder); |
---|
3866 | n/a | } |
---|
3867 | n/a | } |
---|
3868 | n/a | else if (next == m->top && oldsize + m->topsize > nb) { |
---|
3869 | n/a | /* Expand into top */ |
---|
3870 | n/a | size_t newsize = oldsize + m->topsize; |
---|
3871 | n/a | size_t newtopsize = newsize - nb; |
---|
3872 | n/a | mchunkptr newtop = chunk_plus_offset(oldp, nb); |
---|
3873 | n/a | set_inuse(m, oldp, nb); |
---|
3874 | n/a | newtop->head = newtopsize |PINUSE_BIT; |
---|
3875 | n/a | m->top = newtop; |
---|
3876 | n/a | m->topsize = newtopsize; |
---|
3877 | n/a | newp = oldp; |
---|
3878 | n/a | } |
---|
3879 | n/a | } |
---|
3880 | n/a | else { |
---|
3881 | n/a | USAGE_ERROR_ACTION(m, oldmem); |
---|
3882 | n/a | POSTACTION(m); |
---|
3883 | n/a | return 0; |
---|
3884 | n/a | } |
---|
3885 | n/a | |
---|
3886 | n/a | POSTACTION(m); |
---|
3887 | n/a | |
---|
3888 | n/a | if (newp != 0) { |
---|
3889 | n/a | if (extra != 0) { |
---|
3890 | n/a | internal_free(m, extra); |
---|
3891 | n/a | } |
---|
3892 | n/a | check_inuse_chunk(m, newp); |
---|
3893 | n/a | return chunk2mem(newp); |
---|
3894 | n/a | } |
---|
3895 | n/a | else { |
---|
3896 | n/a | void* newmem = internal_malloc(m, bytes); |
---|
3897 | n/a | if (newmem != 0) { |
---|
3898 | n/a | size_t oc = oldsize - overhead_for(oldp); |
---|
3899 | n/a | memcpy(newmem, oldmem, (oc < bytes)? oc : bytes); |
---|
3900 | n/a | internal_free(m, oldmem); |
---|
3901 | n/a | } |
---|
3902 | n/a | return newmem; |
---|
3903 | n/a | } |
---|
3904 | n/a | } |
---|
3905 | n/a | return 0; |
---|
3906 | n/a | } |
---|
3907 | n/a | |
---|
3908 | n/a | /* --------------------------- memalign support -------------------------- */ |
---|
3909 | n/a | |
---|
3910 | n/a | static void* internal_memalign(mstate m, size_t alignment, size_t bytes) { |
---|
3911 | n/a | if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */ |
---|
3912 | n/a | return internal_malloc(m, bytes); |
---|
3913 | n/a | if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */ |
---|
3914 | n/a | alignment = MIN_CHUNK_SIZE; |
---|
3915 | n/a | if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */ |
---|
3916 | n/a | size_t a = MALLOC_ALIGNMENT << 1; |
---|
3917 | n/a | while (a < alignment) a <<= 1; |
---|
3918 | n/a | alignment = a; |
---|
3919 | n/a | } |
---|
3920 | n/a | |
---|
3921 | n/a | if (bytes >= MAX_REQUEST - alignment) { |
---|
3922 | n/a | if (m != 0) { /* Test isn't needed but avoids compiler warning */ |
---|
3923 | n/a | MALLOC_FAILURE_ACTION; |
---|
3924 | n/a | } |
---|
3925 | n/a | } |
---|
3926 | n/a | else { |
---|
3927 | n/a | size_t nb = request2size(bytes); |
---|
3928 | n/a | size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD; |
---|
3929 | n/a | char* mem = (char*)internal_malloc(m, req); |
---|
3930 | n/a | if (mem != 0) { |
---|
3931 | n/a | void* leader = 0; |
---|
3932 | n/a | void* trailer = 0; |
---|
3933 | n/a | mchunkptr p = mem2chunk(mem); |
---|
3934 | n/a | |
---|
3935 | n/a | if (PREACTION(m)) return 0; |
---|
3936 | n/a | if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */ |
---|
3937 | n/a | /* |
---|
3938 | n/a | Find an aligned spot inside chunk. Since we need to give |
---|
3939 | n/a | back leading space in a chunk of at least MIN_CHUNK_SIZE, if |
---|
3940 | n/a | the first calculation places us at a spot with less than |
---|
3941 | n/a | MIN_CHUNK_SIZE leader, we can move to the next aligned spot. |
---|
3942 | n/a | We've allocated enough total room so that this is always |
---|
3943 | n/a | possible. |
---|
3944 | n/a | */ |
---|
3945 | n/a | char* br = (char*)mem2chunk((size_t)(((size_t)(mem + |
---|
3946 | n/a | alignment - |
---|
3947 | n/a | SIZE_T_ONE)) & |
---|
3948 | n/a | -alignment)); |
---|
3949 | n/a | char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)? |
---|
3950 | n/a | br : br+alignment; |
---|
3951 | n/a | mchunkptr newp = (mchunkptr)pos; |
---|
3952 | n/a | size_t leadsize = pos - (char*)(p); |
---|
3953 | n/a | size_t newsize = chunksize(p) - leadsize; |
---|
3954 | n/a | |
---|
3955 | n/a | if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */ |
---|
3956 | n/a | newp->prev_foot = p->prev_foot + leadsize; |
---|
3957 | n/a | newp->head = (newsize|CINUSE_BIT); |
---|
3958 | n/a | } |
---|
3959 | n/a | else { /* Otherwise, give back leader, use the rest */ |
---|
3960 | n/a | set_inuse(m, newp, newsize); |
---|
3961 | n/a | set_inuse(m, p, leadsize); |
---|
3962 | n/a | leader = chunk2mem(p); |
---|
3963 | n/a | } |
---|
3964 | n/a | p = newp; |
---|
3965 | n/a | } |
---|
3966 | n/a | |
---|
3967 | n/a | /* Give back spare room at the end */ |
---|
3968 | n/a | if (!is_mmapped(p)) { |
---|
3969 | n/a | size_t size = chunksize(p); |
---|
3970 | n/a | if (size > nb + MIN_CHUNK_SIZE) { |
---|
3971 | n/a | size_t remainder_size = size - nb; |
---|
3972 | n/a | mchunkptr remainder = chunk_plus_offset(p, nb); |
---|
3973 | n/a | set_inuse(m, p, nb); |
---|
3974 | n/a | set_inuse(m, remainder, remainder_size); |
---|
3975 | n/a | trailer = chunk2mem(remainder); |
---|
3976 | n/a | } |
---|
3977 | n/a | } |
---|
3978 | n/a | |
---|
3979 | n/a | assert (chunksize(p) >= nb); |
---|
3980 | n/a | assert((((size_t)(chunk2mem(p))) % alignment) == 0); |
---|
3981 | n/a | check_inuse_chunk(m, p); |
---|
3982 | n/a | POSTACTION(m); |
---|
3983 | n/a | if (leader != 0) { |
---|
3984 | n/a | internal_free(m, leader); |
---|
3985 | n/a | } |
---|
3986 | n/a | if (trailer != 0) { |
---|
3987 | n/a | internal_free(m, trailer); |
---|
3988 | n/a | } |
---|
3989 | n/a | return chunk2mem(p); |
---|
3990 | n/a | } |
---|
3991 | n/a | } |
---|
3992 | n/a | return 0; |
---|
3993 | n/a | } |
---|
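/*
  Illustrative sketch (not part of the original source) of the alignment
  arithmetic used in internal_memalign above: rounding an address up to the
  next multiple of a power-of-two alignment.  The helper name is hypothetical.
*/
#if 0   /* example only */
static char* example_align_up(char* mem, size_t alignment) {
  /* e.g. mem == (char*)0x1009, alignment == 0x100  ->  returns (char*)0x1100 */
  return (char*)(((size_t)mem + alignment - SIZE_T_ONE) & -alignment);
}
#endif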
3994 | n/a | |
---|
3995 | n/a | /* ------------------------ comalloc/coalloc support --------------------- */ |
---|
3996 | n/a | |
---|
3997 | n/a | static void** ialloc(mstate m, |
---|
3998 | n/a | size_t n_elements, |
---|
3999 | n/a | size_t* sizes, |
---|
4000 | n/a | int opts, |
---|
4001 | n/a | void* chunks[]) { |
---|
4002 | n/a | /* |
---|
4003 | n/a | This provides common support for independent_X routines, handling |
---|
4004 | n/a | all of the combinations that can result. |
---|
4005 | n/a | |
---|
4006 | n/a | The opts arg has: |
---|
4007 | n/a | bit 0 set if all elements are same size (using sizes[0]) |
---|
4008 | n/a | bit 1 set if elements should be zeroed |
---|
4009 | n/a | */ |
---|
4010 | n/a | |
---|
4011 | n/a | size_t element_size; /* chunksize of each element, if all same */ |
---|
4012 | n/a | size_t contents_size; /* total size of elements */ |
---|
4013 | n/a | size_t array_size; /* request size of pointer array */ |
---|
4014 | n/a | void* mem; /* malloced aggregate space */ |
---|
4015 | n/a | mchunkptr p; /* corresponding chunk */ |
---|
4016 | n/a | size_t remainder_size; /* remaining bytes while splitting */ |
---|
4017 | n/a | void** marray; /* either "chunks" or malloced ptr array */ |
---|
4018 | n/a | mchunkptr array_chunk; /* chunk for malloced ptr array */ |
---|
4019 | n/a | flag_t was_enabled; /* to disable mmap */ |
---|
4020 | n/a | size_t size; |
---|
4021 | n/a | size_t i; |
---|
4022 | n/a | |
---|
4023 | n/a | /* compute array length, if needed */ |
---|
4024 | n/a | if (chunks != 0) { |
---|
4025 | n/a | if (n_elements == 0) |
---|
4026 | n/a | return chunks; /* nothing to do */ |
---|
4027 | n/a | marray = chunks; |
---|
4028 | n/a | array_size = 0; |
---|
4029 | n/a | } |
---|
4030 | n/a | else { |
---|
4031 | n/a | /* if empty req, must still return chunk representing empty array */ |
---|
4032 | n/a | if (n_elements == 0) |
---|
4033 | n/a | return (void**)internal_malloc(m, 0); |
---|
4034 | n/a | marray = 0; |
---|
4035 | n/a | array_size = request2size(n_elements * (sizeof(void*))); |
---|
4036 | n/a | } |
---|
4037 | n/a | |
---|
4038 | n/a | /* compute total element size */ |
---|
4039 | n/a | if (opts & 0x1) { /* all-same-size */ |
---|
4040 | n/a | element_size = request2size(*sizes); |
---|
4041 | n/a | contents_size = n_elements * element_size; |
---|
4042 | n/a | } |
---|
4043 | n/a | else { /* add up all the sizes */ |
---|
4044 | n/a | element_size = 0; |
---|
4045 | n/a | contents_size = 0; |
---|
4046 | n/a | for (i = 0; i != n_elements; ++i) |
---|
4047 | n/a | contents_size += request2size(sizes[i]); |
---|
4048 | n/a | } |
---|
4049 | n/a | |
---|
4050 | n/a | size = contents_size + array_size; |
---|
4051 | n/a | |
---|
4052 | n/a | /* |
---|
4053 | n/a | Allocate the aggregate chunk. First disable direct-mmapping so |
---|
4054 | n/a | malloc won't use it, since we would not be able to later |
---|
4055 | n/a | free/realloc space internal to a segregated mmap region. |
---|
4056 | n/a | */ |
---|
4057 | n/a | was_enabled = use_mmap(m); |
---|
4058 | n/a | disable_mmap(m); |
---|
4059 | n/a | mem = internal_malloc(m, size - CHUNK_OVERHEAD); |
---|
4060 | n/a | if (was_enabled) |
---|
4061 | n/a | enable_mmap(m); |
---|
4062 | n/a | if (mem == 0) |
---|
4063 | n/a | return 0; |
---|
4064 | n/a | |
---|
4065 | n/a | if (PREACTION(m)) return 0; |
---|
4066 | n/a | p = mem2chunk(mem); |
---|
4067 | n/a | remainder_size = chunksize(p); |
---|
4068 | n/a | |
---|
4069 | n/a | assert(!is_mmapped(p)); |
---|
4070 | n/a | |
---|
4071 | n/a | if (opts & 0x2) { /* optionally clear the elements */ |
---|
4072 | n/a | memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size); |
---|
4073 | n/a | } |
---|
4074 | n/a | |
---|
4075 | n/a | /* If not provided, allocate the pointer array as final part of chunk */ |
---|
4076 | n/a | if (marray == 0) { |
---|
4077 | n/a | size_t array_chunk_size; |
---|
4078 | n/a | array_chunk = chunk_plus_offset(p, contents_size); |
---|
4079 | n/a | array_chunk_size = remainder_size - contents_size; |
---|
4080 | n/a | marray = (void**) (chunk2mem(array_chunk)); |
---|
4081 | n/a | set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size); |
---|
4082 | n/a | remainder_size = contents_size; |
---|
4083 | n/a | } |
---|
4084 | n/a | |
---|
4085 | n/a | /* split out elements */ |
---|
4086 | n/a | for (i = 0; ; ++i) { |
---|
4087 | n/a | marray[i] = chunk2mem(p); |
---|
4088 | n/a | if (i != n_elements-1) { |
---|
4089 | n/a | if (element_size != 0) |
---|
4090 | n/a | size = element_size; |
---|
4091 | n/a | else |
---|
4092 | n/a | size = request2size(sizes[i]); |
---|
4093 | n/a | remainder_size -= size; |
---|
4094 | n/a | set_size_and_pinuse_of_inuse_chunk(m, p, size); |
---|
4095 | n/a | p = chunk_plus_offset(p, size); |
---|
4096 | n/a | } |
---|
4097 | n/a | else { /* the final element absorbs any overallocation slop */ |
---|
4098 | n/a | set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size); |
---|
4099 | n/a | break; |
---|
4100 | n/a | } |
---|
4101 | n/a | } |
---|
4102 | n/a | |
---|
4103 | n/a | #if DEBUG |
---|
4104 | n/a | if (marray != chunks) { |
---|
4105 | n/a | /* final element must have exactly exhausted chunk */ |
---|
4106 | n/a | if (element_size != 0) { |
---|
4107 | n/a | assert(remainder_size == element_size); |
---|
4108 | n/a | } |
---|
4109 | n/a | else { |
---|
4110 | n/a | assert(remainder_size == request2size(sizes[i])); |
---|
4111 | n/a | } |
---|
4112 | n/a | check_inuse_chunk(m, mem2chunk(marray)); |
---|
4113 | n/a | } |
---|
4114 | n/a | for (i = 0; i != n_elements; ++i) |
---|
4115 | n/a | check_inuse_chunk(m, mem2chunk(marray[i])); |
---|
4116 | n/a | |
---|
4117 | n/a | #endif /* DEBUG */ |
---|
4118 | n/a | |
---|
4119 | n/a | POSTACTION(m); |
---|
4120 | n/a | return marray; |
---|
4121 | n/a | } |
---|
4122 | n/a | |
---|
4123 | n/a | |
---|
4124 | n/a | /* -------------------------- public routines ---------------------------- */ |
---|
4125 | n/a | |
---|
4126 | n/a | #if !ONLY_MSPACES |
---|
4127 | n/a | |
---|
4128 | n/a | void* dlmalloc(size_t bytes) { |
---|
4129 | n/a | /* |
---|
4130 | n/a | Basic algorithm: |
---|
4131 | n/a | If a small request (< 256 bytes minus per-chunk overhead): |
---|
4132 | n/a | 1. If one exists, use a remainderless chunk in associated smallbin. |
---|
4133 | n/a | (Remainderless means that there are too few excess bytes to |
---|
4134 | n/a | represent as a chunk.) |
---|
4135 | n/a | 2. If it is big enough, use the dv chunk, which is normally the |
---|
4136 | n/a | chunk adjacent to the one used for the most recent small request. |
---|
4137 | n/a | 3. If one exists, split the smallest available chunk in a bin, |
---|
4138 | n/a | saving remainder in dv. |
---|
4139 | n/a | 4. If it is big enough, use the top chunk. |
---|
4140 | n/a | 5. If available, get memory from system and use it |
---|
4141 | n/a | Otherwise, for a large request: |
---|
4142 | n/a | 1. Find the smallest available binned chunk that fits, and use it |
---|
4143 | n/a | if it is better fitting than dv chunk, splitting if necessary. |
---|
4144 | n/a | 2. If better fitting than any binned chunk, use the dv chunk. |
---|
4145 | n/a | 3. If it is big enough, use the top chunk. |
---|
4146 | n/a | 4. If request size >= mmap threshold, try to directly mmap this chunk. |
---|
4147 | n/a | 5. If available, get memory from system and use it |
---|
4148 | n/a | |
---|
4149 | n/a | The ugly goto's here ensure that postaction occurs along all paths. |
---|
4150 | n/a | */ |
---|
4151 | n/a | |
---|
4152 | n/a | if (!PREACTION(gm)) { |
---|
4153 | n/a | void* mem; |
---|
4154 | n/a | size_t nb; |
---|
4155 | n/a | if (bytes <= MAX_SMALL_REQUEST) { |
---|
4156 | n/a | bindex_t idx; |
---|
4157 | n/a | binmap_t smallbits; |
---|
4158 | n/a | nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes); |
---|
4159 | n/a | idx = small_index(nb); |
---|
4160 | n/a | smallbits = gm->smallmap >> idx; |
---|
4161 | n/a | |
---|
4162 | n/a | if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */ |
---|
4163 | n/a | mchunkptr b, p; |
---|
4164 | n/a | idx += ~smallbits & 1; /* Uses next bin if idx empty */ |
---|
4165 | n/a | b = smallbin_at(gm, idx); |
---|
4166 | n/a | p = b->fd; |
---|
4167 | n/a | assert(chunksize(p) == small_index2size(idx)); |
---|
4168 | n/a | unlink_first_small_chunk(gm, b, p, idx); |
---|
4169 | n/a | set_inuse_and_pinuse(gm, p, small_index2size(idx)); |
---|
4170 | n/a | mem = chunk2mem(p); |
---|
4171 | n/a | check_malloced_chunk(gm, mem, nb); |
---|
4172 | n/a | goto postaction; |
---|
4173 | n/a | } |
---|
4174 | n/a | |
---|
4175 | n/a | else if (nb > gm->dvsize) { |
---|
4176 | n/a | if (smallbits != 0) { /* Use chunk in next nonempty smallbin */ |
---|
4177 | n/a | mchunkptr b, p, r; |
---|
4178 | n/a | size_t rsize; |
---|
4179 | n/a | bindex_t i; |
---|
4180 | n/a | binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx)); |
---|
4181 | n/a | binmap_t leastbit = least_bit(leftbits); |
---|
4182 | n/a | compute_bit2idx(leastbit, i); |
---|
4183 | n/a | b = smallbin_at(gm, i); |
---|
4184 | n/a | p = b->fd; |
---|
4185 | n/a | assert(chunksize(p) == small_index2size(i)); |
---|
4186 | n/a | unlink_first_small_chunk(gm, b, p, i); |
---|
4187 | n/a | rsize = small_index2size(i) - nb; |
---|
4188 | n/a | /* Fit here cannot be remainderless if 4byte sizes */ |
---|
4189 | n/a | if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) |
---|
4190 | n/a | set_inuse_and_pinuse(gm, p, small_index2size(i)); |
---|
4191 | n/a | else { |
---|
4192 | n/a | set_size_and_pinuse_of_inuse_chunk(gm, p, nb); |
---|
4193 | n/a | r = chunk_plus_offset(p, nb); |
---|
4194 | n/a | set_size_and_pinuse_of_free_chunk(r, rsize); |
---|
4195 | n/a | replace_dv(gm, r, rsize); |
---|
4196 | n/a | } |
---|
4197 | n/a | mem = chunk2mem(p); |
---|
4198 | n/a | check_malloced_chunk(gm, mem, nb); |
---|
4199 | n/a | goto postaction; |
---|
4200 | n/a | } |
---|
4201 | n/a | |
---|
4202 | n/a | else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) { |
---|
4203 | n/a | check_malloced_chunk(gm, mem, nb); |
---|
4204 | n/a | goto postaction; |
---|
4205 | n/a | } |
---|
4206 | n/a | } |
---|
4207 | n/a | } |
---|
4208 | n/a | else if (bytes >= MAX_REQUEST) |
---|
4209 | n/a | nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */ |
---|
4210 | n/a | else { |
---|
4211 | n/a | nb = pad_request(bytes); |
---|
4212 | n/a | if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) { |
---|
4213 | n/a | check_malloced_chunk(gm, mem, nb); |
---|
4214 | n/a | goto postaction; |
---|
4215 | n/a | } |
---|
4216 | n/a | } |
---|
4217 | n/a | |
---|
4218 | n/a | if (nb <= gm->dvsize) { |
---|
4219 | n/a | size_t rsize = gm->dvsize - nb; |
---|
4220 | n/a | mchunkptr p = gm->dv; |
---|
4221 | n/a | if (rsize >= MIN_CHUNK_SIZE) { /* split dv */ |
---|
4222 | n/a | mchunkptr r = gm->dv = chunk_plus_offset(p, nb); |
---|
4223 | n/a | gm->dvsize = rsize; |
---|
4224 | n/a | set_size_and_pinuse_of_free_chunk(r, rsize); |
---|
4225 | n/a | set_size_and_pinuse_of_inuse_chunk(gm, p, nb); |
---|
4226 | n/a | } |
---|
4227 | n/a | else { /* exhaust dv */ |
---|
4228 | n/a | size_t dvs = gm->dvsize; |
---|
4229 | n/a | gm->dvsize = 0; |
---|
4230 | n/a | gm->dv = 0; |
---|
4231 | n/a | set_inuse_and_pinuse(gm, p, dvs); |
---|
4232 | n/a | } |
---|
4233 | n/a | mem = chunk2mem(p); |
---|
4234 | n/a | check_malloced_chunk(gm, mem, nb); |
---|
4235 | n/a | goto postaction; |
---|
4236 | n/a | } |
---|
4237 | n/a | |
---|
4238 | n/a | else if (nb < gm->topsize) { /* Split top */ |
---|
4239 | n/a | size_t rsize = gm->topsize -= nb; |
---|
4240 | n/a | mchunkptr p = gm->top; |
---|
4241 | n/a | mchunkptr r = gm->top = chunk_plus_offset(p, nb); |
---|
4242 | n/a | r->head = rsize | PINUSE_BIT; |
---|
4243 | n/a | set_size_and_pinuse_of_inuse_chunk(gm, p, nb); |
---|
4244 | n/a | mem = chunk2mem(p); |
---|
4245 | n/a | check_top_chunk(gm, gm->top); |
---|
4246 | n/a | check_malloced_chunk(gm, mem, nb); |
---|
4247 | n/a | goto postaction; |
---|
4248 | n/a | } |
---|
4249 | n/a | |
---|
4250 | n/a | mem = sys_alloc(gm, nb); |
---|
4251 | n/a | |
---|
4252 | n/a | postaction: |
---|
4253 | n/a | POSTACTION(gm); |
---|
4254 | n/a | return mem; |
---|
4255 | n/a | } |
---|
4256 | n/a | |
---|
4257 | n/a | return 0; |
---|
4258 | n/a | } |
---|
4259 | n/a | |
---|
4260 | n/a | void dlfree(void* mem) { |
---|
4261 | n/a | /* |
---|
4262 | n/a | Consolidate freed chunks with preceding or succeeding bordering |
---|
4263 | n/a | free chunks, if they exist, and then place in a bin. Intermixed |
---|
4264 | n/a | with special cases for top, dv, mmapped chunks, and usage errors. |
---|
4265 | n/a | */ |
---|
4266 | n/a | |
---|
4267 | n/a | if (mem != 0) { |
---|
4268 | n/a | mchunkptr p = mem2chunk(mem); |
---|
4269 | n/a | #if FOOTERS |
---|
4270 | n/a | mstate fm = get_mstate_for(p); |
---|
4271 | n/a | if (!ok_magic(fm)) { |
---|
4272 | n/a | USAGE_ERROR_ACTION(fm, p); |
---|
4273 | n/a | return; |
---|
4274 | n/a | } |
---|
4275 | n/a | #else /* FOOTERS */ |
---|
4276 | n/a | #define fm gm |
---|
4277 | n/a | #endif /* FOOTERS */ |
---|
4278 | n/a | if (!PREACTION(fm)) { |
---|
4279 | n/a | check_inuse_chunk(fm, p); |
---|
4280 | n/a | if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) { |
---|
4281 | n/a | size_t psize = chunksize(p); |
---|
4282 | n/a | mchunkptr next = chunk_plus_offset(p, psize); |
---|
4283 | n/a | if (!pinuse(p)) { |
---|
4284 | n/a | size_t prevsize = p->prev_foot; |
---|
4285 | n/a | if ((prevsize & IS_MMAPPED_BIT) != 0) { |
---|
4286 | n/a | prevsize &= ~IS_MMAPPED_BIT; |
---|
4287 | n/a | psize += prevsize + MMAP_FOOT_PAD; |
---|
4288 | n/a | if (CALL_MUNMAP((char*)p - prevsize, psize) == 0) |
---|
4289 | n/a | fm->footprint -= psize; |
---|
4290 | n/a | goto postaction; |
---|
4291 | n/a | } |
---|
4292 | n/a | else { |
---|
4293 | n/a | mchunkptr prev = chunk_minus_offset(p, prevsize); |
---|
4294 | n/a | psize += prevsize; |
---|
4295 | n/a | p = prev; |
---|
4296 | n/a | if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */ |
---|
4297 | n/a | if (p != fm->dv) { |
---|
4298 | n/a | unlink_chunk(fm, p, prevsize); |
---|
4299 | n/a | } |
---|
4300 | n/a | else if ((next->head & INUSE_BITS) == INUSE_BITS) { |
---|
4301 | n/a | fm->dvsize = psize; |
---|
4302 | n/a | set_free_with_pinuse(p, psize, next); |
---|
4303 | n/a | goto postaction; |
---|
4304 | n/a | } |
---|
4305 | n/a | } |
---|
4306 | n/a | else |
---|
4307 | n/a | goto erroraction; |
---|
4308 | n/a | } |
---|
4309 | n/a | } |
---|
4310 | n/a | |
---|
4311 | n/a | if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) { |
---|
4312 | n/a | if (!cinuse(next)) { /* consolidate forward */ |
---|
4313 | n/a | if (next == fm->top) { |
---|
4314 | n/a | size_t tsize = fm->topsize += psize; |
---|
4315 | n/a | fm->top = p; |
---|
4316 | n/a | p->head = tsize | PINUSE_BIT; |
---|
4317 | n/a | if (p == fm->dv) { |
---|
4318 | n/a | fm->dv = 0; |
---|
4319 | n/a | fm->dvsize = 0; |
---|
4320 | n/a | } |
---|
4321 | n/a | if (should_trim(fm, tsize)) |
---|
4322 | n/a | sys_trim(fm, 0); |
---|
4323 | n/a | goto postaction; |
---|
4324 | n/a | } |
---|
4325 | n/a | else if (next == fm->dv) { |
---|
4326 | n/a | size_t dsize = fm->dvsize += psize; |
---|
4327 | n/a | fm->dv = p; |
---|
4328 | n/a | set_size_and_pinuse_of_free_chunk(p, dsize); |
---|
4329 | n/a | goto postaction; |
---|
4330 | n/a | } |
---|
4331 | n/a | else { |
---|
4332 | n/a | size_t nsize = chunksize(next); |
---|
4333 | n/a | psize += nsize; |
---|
4334 | n/a | unlink_chunk(fm, next, nsize); |
---|
4335 | n/a | set_size_and_pinuse_of_free_chunk(p, psize); |
---|
4336 | n/a | if (p == fm->dv) { |
---|
4337 | n/a | fm->dvsize = psize; |
---|
4338 | n/a | goto postaction; |
---|
4339 | n/a | } |
---|
4340 | n/a | } |
---|
4341 | n/a | } |
---|
4342 | n/a | else |
---|
4343 | n/a | set_free_with_pinuse(p, psize, next); |
---|
4344 | n/a | insert_chunk(fm, p, psize); |
---|
4345 | n/a | check_free_chunk(fm, p); |
---|
4346 | n/a | goto postaction; |
---|
4347 | n/a | } |
---|
4348 | n/a | } |
---|
4349 | n/a | erroraction: |
---|
4350 | n/a | USAGE_ERROR_ACTION(fm, p); |
---|
4351 | n/a | postaction: |
---|
4352 | n/a | POSTACTION(fm); |
---|
4353 | n/a | } |
---|
4354 | n/a | } |
---|
4355 | n/a | #if !FOOTERS |
---|
4356 | n/a | #undef fm |
---|
4357 | n/a | #endif /* FOOTERS */ |
---|
4358 | n/a | } |
---|
4359 | n/a | |
---|
4360 | n/a | void* dlcalloc(size_t n_elements, size_t elem_size) { |
---|
4361 | n/a | void* mem; |
---|
4362 | n/a | size_t req = 0; |
---|
4363 | n/a | if (n_elements != 0) { |
---|
4364 | n/a | req = n_elements * elem_size; |
---|
4365 | n/a | if (((n_elements | elem_size) & ~(size_t)0xffff) && |
---|
4366 | n/a | (req / n_elements != elem_size)) |
---|
4367 | n/a | req = MAX_SIZE_T; /* force downstream failure on overflow */ |
---|
4368 | n/a | } |
---|
4369 | n/a | mem = dlmalloc(req); |
---|
4370 | n/a | if (mem != 0 && calloc_must_clear(mem2chunk(mem))) |
---|
4371 | n/a | memset(mem, 0, req); |
---|
4372 | n/a | return mem; |
---|
4373 | n/a | } |
---|
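/*
  Worked example for the overflow guard in dlcalloc above (illustrative,
  32-bit size_t): n_elements == 0x10000 and elem_size == 0x10001 multiply to
  0x100010000, which wraps to 0x10000.  Because n_elements | elem_size has
  bits above 0xffff set, the division check runs, and since
  0x10000 / 0x10000 != 0x10001, req is forced to MAX_SIZE_T so the allocation
  fails instead of silently returning a too-small block.
*/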
4374 | n/a | |
---|
4375 | n/a | void* dlrealloc(void* oldmem, size_t bytes) { |
---|
4376 | n/a | if (oldmem == 0) |
---|
4377 | n/a | return dlmalloc(bytes); |
---|
4378 | n/a | #ifdef REALLOC_ZERO_BYTES_FREES |
---|
4379 | n/a | if (bytes == 0) { |
---|
4380 | n/a | dlfree(oldmem); |
---|
4381 | n/a | return 0; |
---|
4382 | n/a | } |
---|
4383 | n/a | #endif /* REALLOC_ZERO_BYTES_FREES */ |
---|
4384 | n/a | else { |
---|
4385 | n/a | #if ! FOOTERS |
---|
4386 | n/a | mstate m = gm; |
---|
4387 | n/a | #else /* FOOTERS */ |
---|
4388 | n/a | mstate m = get_mstate_for(mem2chunk(oldmem)); |
---|
4389 | n/a | if (!ok_magic(m)) { |
---|
4390 | n/a | USAGE_ERROR_ACTION(m, oldmem); |
---|
4391 | n/a | return 0; |
---|
4392 | n/a | } |
---|
4393 | n/a | #endif /* FOOTERS */ |
---|
4394 | n/a | return internal_realloc(m, oldmem, bytes); |
---|
4395 | n/a | } |
---|
4396 | n/a | } |
---|
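/*
  Minimal usage sketch (illustrative only): dlrealloc either resizes in place
  (shrink, mmap resize, or growth into top) or falls back to
  malloc-copy-free, as implemented in internal_realloc above.
*/
#if 0   /* example only */
void example_grow(void) {
  void* buf = dlmalloc(64);
  if (buf != 0) {
    void* bigger = dlrealloc(buf, 4096);  /* may move; old contents preserved */
    if (bigger != 0)
      buf = bigger;
    dlfree(buf);
  }
}
#endif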
4397 | n/a | |
---|
4398 | n/a | void* dlmemalign(size_t alignment, size_t bytes) { |
---|
4399 | n/a | return internal_memalign(gm, alignment, bytes); |
---|
4400 | n/a | } |
---|
4401 | n/a | |
---|
4402 | n/a | void** dlindependent_calloc(size_t n_elements, size_t elem_size, |
---|
4403 | n/a | void* chunks[]) { |
---|
4404 | n/a | size_t sz = elem_size; /* serves as 1-element array */ |
---|
4405 | n/a | return ialloc(gm, n_elements, &sz, 3, chunks); |
---|
4406 | n/a | } |
---|
4407 | n/a | |
---|
4408 | n/a | void** dlindependent_comalloc(size_t n_elements, size_t sizes[], |
---|
4409 | n/a | void* chunks[]) { |
---|
4410 | n/a | return ialloc(gm, n_elements, sizes, 0, chunks); |
---|
4411 | n/a | } |
---|
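/*
  Minimal usage sketch (illustrative only) for dlindependent_comalloc: carve
  a small header and a data buffer out of one contiguous allocation while
  keeping each piece individually freeable.
*/
#if 0   /* example only */
void example_comalloc(void) {
  size_t sizes[2] = { sizeof(int) * 4, 1024 };
  void*  parts[2];
  if (dlindependent_comalloc(2, sizes, parts) != 0) {
    int*  header = (int*)  parts[0];
    char* data   = (char*) parts[1];
    header[0] = 0;
    data[0]   = 0;
    dlfree(parts[0]);   /* each element may be freed on its own */
    dlfree(parts[1]);
  }
}
#endif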
4412 | n/a | |
---|
void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      (void)set_segment_flags(&m->seg, IS_MMAPPED_BIT);
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    (void)set_segment_flags(&m->seg, EXTERN_BIT);
    set_lock(m, locked);
  }
  return (mspace)m;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = get_segment_flags(sp);
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}

/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/

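/*
  A minimal sketch of typical mspace usage, assuming this file is compiled
  with MSPACES defined.  It only shows how the calls above fit together;
  the function name example_mspace_usage is a placeholder, not part of
  this allocator.

  void example_mspace_usage(void) {
    mspace msp = create_mspace(0, 0);            // default capacity, no locking
    if (msp != 0) {
      void* a = mspace_malloc(msp, 128);          // served only from this mspace
      void* b = mspace_calloc(msp, 16, sizeof(double));
      mspace_free(msp, a);
      mspace_free(msp, b);
      destroy_mspace(msp);                        // frees all memory still owned
    }                                             // by the mspace
  }
*/
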
void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
    but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
    instead return one past the end address of memory from previous
    nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
    arguments should return increasing addresses, indicating that
    space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
    addresses, it must be OK for malloc'ed chunks to span multiple
    regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
    just return MFAIL when given negative arguments.
    Negative arguments are always multiples of pagesize. MORECORE
    must not misinterpret negative args as large positive unsigned
    args. You can suppress all such calls from even occurring by defining
    MORECORE_CANNOT_TRIM.

  (A second, simpler sketch of a custom MORECORE appears in a comment
  after this block.)

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out). You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/

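/*
  A second, much smaller sketch of a custom MORECORE that follows the
  guidelines above.  This is only an illustration, assuming one fixed
  static arena is acceptable and the program never needs more than
  ARENA_SIZE bytes; the names arenaMoreCore, arena, and arena_brk are
  placeholders.  The defines would go in the configuration section near
  the top of this file:

  #define MORECORE arenaMoreCore
  #define MORECORE_CANNOT_TRIM

  #define ARENA_SIZE (1024 * 1024)
  static char arena[ARENA_SIZE];
  static size_t arena_brk = 0;

  void* arenaMoreCore(int size)
  {
    char* p;
    if (size > 0) {
      if ((size_t)size > ARENA_SIZE - arena_brk)
        return (void*) MFAIL;          // arena exhausted; malloc will fail
      p = arena + arena_brk;
      arena_brk += (size_t)size;
      return p;                        // contiguous, increasing addresses
    }
    else if (size < 0) {
      return (void*) MFAIL;            // trimming not supported
    }
    else {
      return arena + arena_brk;        // one past the end of previous memory
    }
  }
*/
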

/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)

    V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/