$PostgreSQL: pgsql/src/backend/access/hash/README,v 1.4 2003-11-29 19:51:40 pgsql Exp $

This directory contains an implementation of hash indexing for Postgres.

A hash index consists of two or more "buckets", into which tuples are
placed whenever their hash key maps to the bucket number. The
key-to-bucket-number mapping is chosen so that the index can be
incrementally expanded. When a new bucket is to be added to the index,
exactly one existing bucket will need to be "split", with some of its
tuples being transferred to the new bucket according to the updated
key-to-bucket-number mapping. This is essentially the same hash table
management technique embodied in src/backend/utils/hash/dynahash.c for
in-memory hash tables.

Each bucket in the hash index comprises one or more index pages. The
bucket's first page is permanently assigned to it when the bucket is
created. Additional pages, called "overflow pages", are added if the
bucket receives too many tuples to fit in the primary bucket page.
The pages of a bucket are chained together in a doubly-linked list
using fields in the index page special space.
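
For illustration, here is a hedged sketch of the kind of fields kept in
each page's special space to form that doubly-linked list. The names are
modeled on HashPageOpaqueData in the real headers but simplified here:

    #include <stdint.h>

    typedef uint32_t BlockNumber;   /* stand-in for PostgreSQL's BlockNumber */

    /* Illustrative per-page special space for hash index pages. */
    typedef struct HashPageSpecialSketch
    {
        BlockNumber prevblkno;  /* previous page in this bucket's chain */
        BlockNumber nextblkno;  /* next (overflow) page in the chain */
        uint32_t    bucket;     /* bucket number this page belongs to */
        uint16_t    flags;      /* page kind: bucket, overflow, bitmap, meta */
    } HashPageSpecialSketch;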

There is currently no provision to shrink a hash index, other than by
rebuilding it with REINDEX. Overflow pages can be recycled for reuse
in other buckets, but we never give them back to the operating system.
There is no provision for reducing the number of buckets, either.

Page addressing
---------------

There are four kinds of pages in a hash index: the meta page (page zero),
which contains statically allocated control information; primary bucket
pages; overflow pages; and bitmap pages, which keep track of overflow
pages that have been freed and are available for re-use. For addressing
purposes, bitmap pages are regarded as a subset of the overflow pages.

Primary bucket pages and overflow pages are allocated independently (since
any given index might need more or fewer overflow pages relative to its
number of buckets). The hash code uses an interesting set of addressing
rules to support a variable number of overflow pages while not having to
move primary bucket pages around after they are created.

Primary bucket pages (henceforth just "bucket pages") are allocated in
power-of-2 groups, called "split points" in the code. Buckets 0 and 1
are created when the index is initialized. At the first split, buckets 2
and 3 are allocated; when bucket 4 is needed, buckets 4-7 are allocated;
when bucket 8 is needed, buckets 8-15 are allocated; etc. All the bucket
pages of a power-of-2 group appear consecutively in the index. This
addressing scheme allows the physical location of a bucket page to be
computed from the bucket number relatively easily, using only a small
amount of control information. We take the log2() of the bucket number
to determine which split point S the bucket belongs to, and then simply
add "hashm_spares[S] + 1" to the bucket number (where hashm_spares[] is
an array stored in the metapage) to compute the physical address.
hashm_spares[S] can be interpreted as the total number of overflow pages
that have been allocated before the bucket pages of splitpoint S.
hashm_spares[0] is always 0, so that buckets 0 and 1 (which belong to
splitpoint 0) always appear at block numbers 1 and 2, just after the meta
page. We always have hashm_spares[N] <= hashm_spares[N+1], since the
latter count includes the former. The difference between the two
represents the number of overflow pages appearing between the bucket page
groups of splitpoints N and N+1.
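
As a concrete illustration, here is a minimal C sketch of that address
computation; the "spares" argument stands in for hashm_spares[], and the
real code implements the same rule in its BUCKET_TO_BLKNO macro:

    #include <stdint.h>

    /* Sketch: physical block number of a primary bucket page. */
    static uint32_t
    bucket_to_blkno(const uint32_t *spares, uint32_t bucket)
    {
        uint32_t splitpoint = 0;

        /* buckets 0 and 1 are splitpoint 0; otherwise S = floor(log2(B)) */
        for (uint32_t b = bucket; b > 1; b >>= 1)
            splitpoint++;

        /* skip the metapage (block 0), plus all earlier bucket and
         * overflow pages */
        return bucket + spares[splitpoint] + 1;
    }

With hashm_spares[0] = 0, buckets 0 and 1 indeed land at blocks 1 and 2.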

When S splitpoints exist altogether, the array entries hashm_spares[0]
through hashm_spares[S] are valid; hashm_spares[S] records the current
total number of overflow pages. New overflow pages are created as needed
at the end of the index, and recorded by incrementing hashm_spares[S].
When it is time to create a new splitpoint's worth of bucket pages, we
copy hashm_spares[S] into hashm_spares[S+1] and increment S (which is
stored in the hashm_ovflpoint field of the meta page). This has the
effect of reserving the correct number of bucket pages at the end of the
index, and preparing to allocate additional overflow pages after those
bucket pages. hashm_spares[] entries before S cannot change anymore,
since that would require moving already-created bucket pages.
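
The bookkeeping for starting a new splitpoint is correspondingly small.
A hedged sketch, using illustrative stand-ins for the metapage fields
named above (hashm_spares[], hashm_ovflpoint):

    #include <stdint.h>

    struct meta_sketch
    {
        uint32_t spares[32];    /* stand-in for hashm_spares[] */
        uint32_t ovflpoint;     /* stand-in for hashm_ovflpoint (current S) */
    };

    /* Reserve a new splitpoint's worth of bucket pages at the index's end. */
    static void
    start_new_splitpoint(struct meta_sketch *meta)
    {
        uint32_t s = meta->ovflpoint;

        meta->spares[s + 1] = meta->spares[s];  /* no overflow pages beyond S yet */
        meta->ovflpoint = s + 1;                /* future overflow pages count here */
    }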

Since overflow pages may be recycled if enough tuples are deleted from
their bucket, we need a way to keep track of currently-free overflow
pages. The state of each overflow page (0 = available, 1 = not available)
is recorded in "bitmap" pages dedicated to this purpose. The entries in
the bitmap are indexed by "bit number", a zero-based count in which every
overflow page has a unique entry. We can convert between an overflow
page's physical block number and its bit number using the information in
hashm_spares[] (see hashovfl.c for details). The bit number sequence
includes the bitmap pages, which is the reason for saying that bitmap
pages are a subset of the overflow pages. It turns out in fact that each
bitmap page's first bit represents itself --- this is not an essential
property, but falls out of the fact that we only allocate another bitmap
page when we really need one. Bit number zero always corresponds to block
number 3, which is the first bitmap page and is allocated during index
creation.
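
To make the bit-number addressing concrete, a minimal sketch follows.
The capacity constant is illustrative (the real value is derived from the
page size), and the word-level layout is a simplification:

    #include <stdint.h>

    #define BITS_PER_BITMAP_PAGE 32768  /* illustrative only */

    /* Which bitmap page (0th, 1st, ...) holds this overflow page's bit? */
    static uint32_t
    bitno_to_bitmap_page(uint32_t bitno)
    {
        return bitno / BITS_PER_BITMAP_PAGE;
    }

    /* Test one bit within a bitmap page's word array (1 = page in use). */
    static int
    bit_is_set(const uint32_t *words, uint32_t bit_in_page)
    {
        return (words[bit_in_page / 32] >> (bit_in_page % 32)) & 1;
    }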

Lock definitions
----------------

We use both lmgr locks ("heavyweight" locks) and buffer context locks
(LWLocks) to control access to a hash index. lmgr locks are needed for
long-term locking since there is a (small) risk of deadlock, which we must
be able to detect. Buffer context locks are used for short-term access
control to individual pages of the index.

We define the following lmgr locks for a hash index:

LockPage(rel, 0) represents the right to modify the hash-code-to-bucket
mapping. A process attempting to enlarge the hash table by splitting a
bucket must exclusive-lock this lock before modifying the metapage data
representing the mapping. Processes intending to access a particular
bucket must share-lock this lock until they have acquired lock on the
correct target bucket.

LockPage(rel, page), where page is the page number of a hash bucket page,
represents the right to split or compact an individual bucket. A process
splitting a bucket must exclusive-lock both old and new halves of the
bucket until it is done. A process doing VACUUM must exclusive-lock the
bucket it is currently purging tuples from. Processes doing scans or
insertions must share-lock the bucket they are scanning or inserting into.
(It is okay to allow concurrent scans and insertions.)

The lmgr lock IDs corresponding to overflow pages are currently unused.
These are available for possible future refinements.

Note that these lock definitions are conceptually distinct from any sort
of lock on the pages whose numbers they share. A process must also obtain
read or write buffer lock on the metapage or bucket page before accessing
the same.

Processes performing hash index scans must hold share lock on the bucket
they are scanning throughout the scan. This seems to be essential, since
there is no reasonable way for a scan to cope with its bucket being split
underneath it. This creates a possibility of deadlock external to the
hash index code, since a process holding one of these locks could block
waiting for an unrelated lock held by another process. If that process
then does something that requires exclusive lock on the bucket, we have
deadlock. Therefore the bucket locks must be lmgr locks so that deadlock
can be detected and recovered from. This also forces the page-zero lock
to be an lmgr lock, because as we'll see below it is held while attempting
to acquire a bucket lock, and so it could also participate in a deadlock.

Processes must obtain read (share) buffer context lock on any hash index
page while reading it, and write (exclusive) lock while modifying it.
To prevent deadlock we enforce these coding rules: no buffer lock may be
held long term (across index AM calls), nor may any buffer lock be held
while waiting for an lmgr lock, nor may more than one buffer lock
be held at a time by any one process. (The third restriction is probably
stronger than necessary, but it makes the proof of no deadlock obvious.)
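
As a hedged illustration of that discipline, a minimal sketch of reading
one hash page with the standard bufmgr calls; the helper function itself
is hypothetical:

    #include "postgres.h"
    #include "storage/bufmgr.h"

    /* Hypothetical helper showing the short-term buffer-lock rule. */
    static void
    read_one_hash_page(Relation rel, BlockNumber blkno)
    {
        Buffer      buf = ReadBuffer(rel, blkno);

        LockBuffer(buf, BUFFER_LOCK_SHARE);     /* short-term content lock */
        /* ... inspect the page, copying out anything we need ... */
        LockBuffer(buf, BUFFER_LOCK_UNLOCK);    /* drop before any lmgr wait */
        ReleaseBuffer(buf);                     /* unpin */
    }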

Pseudocode algorithms
---------------------

The operations we need to support are: readers scanning the index for
entries of a particular hash code (which by definition are all in the same
bucket); insertion of a new tuple into the correct bucket; enlarging the
hash table by splitting an existing bucket; and garbage collection
(deletion of dead tuples and compaction of buckets). Bucket splitting is
done at the conclusion of any insertion that leaves the hash table more
full than the target load factor, but it is convenient to consider it as
an independent operation. Note that we do not have a bucket-merge
operation --- the number of buckets never shrinks. Insertion, splitting,
and garbage collection may all need access to freelist management, which
keeps track of available overflow pages.

The reader algorithm is:

    share-lock page 0 (to prevent active split)
    read/sharelock meta page
    compute bucket number for target hash key
    release meta page
    share-lock bucket page (to prevent split/compact of this bucket)
    release page 0 share-lock
    -- then, per read request:
    read/sharelock current page of bucket
    step to next page if necessary (no chaining of locks)
    get tuple
    release current page
    -- at scan shutdown:
    release bucket share-lock

By holding the page-zero lock until lock on the target bucket is obtained,
the reader ensures that the target bucket calculation is valid (otherwise
the bucket might be split before the reader arrives at it, and the target
entries might go into the new bucket). Holding the bucket sharelock for
the remainder of the scan prevents the reader's current-tuple pointer from
being invalidated by other processes. Notice though that the reader need
not prevent other buckets from being split or compacted.

The insertion algorithm is rather similar:

    share-lock page 0 (to prevent active split)
    read/sharelock meta page
    compute bucket number for target hash key
    release meta page
    share-lock bucket page (to prevent split/compact of this bucket)
    release page 0 share-lock
    -- (so far same as reader)
    read/exclusive-lock current page of bucket
    if full, release, read/exclusive-lock next page; repeat as needed
    >> see below if no space in any page of bucket
    insert tuple
    write/release current page
    release bucket share-lock
    read/exclusive-lock meta page
    increment tuple count, decide if split needed
    write/release meta page
    done if no split needed, else enter Split algorithm below
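
The "decide if split needed" step is just a load-factor comparison. A
hedged sketch, with parameters modeled on the real metapage fields
(hashm_ntuples, hashm_ffactor, hashm_maxbucket):

    #include <stdbool.h>
    #include <stdint.h>

    /* The table is overfull when tuples-per-bucket exceeds the target
     * fill factor. */
    static bool
    split_needed(double ntuples, uint16_t ffactor, uint32_t maxbucket)
    {
        /* maxbucket is the highest bucket number, so bucket count is +1 */
        return ntuples > (double) ffactor * (maxbucket + 1);
    }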

It is okay for an insertion to take place in a bucket that is being
actively scanned, because it does not change the position of any existing
item in the bucket, so scan states are not invalidated. We only need the
short-term buffer locks to ensure that readers do not see a
partially-updated page.

It is clearly impossible for readers and inserters to deadlock, and in
fact this algorithm allows them a very high degree of concurrency.
(The exclusive metapage lock taken to update the tuple count is stronger
than necessary, since readers do not care about the tuple count, but the
lock is held for such a short time that this is probably not an issue.)

When an inserter cannot find space in any existing page of a bucket, it
must obtain an overflow page and add that page to the bucket's chain.
Details of that part of the algorithm appear later.

The page split algorithm is entered whenever an inserter observes that the
index is overfull (has a higher-than-wanted ratio of tuples to buckets).
The algorithm attempts, but does not necessarily succeed, to split one
existing bucket in two, thereby lowering the fill ratio:

    exclusive-lock page 0 (assert the right to begin a split)
    read/exclusive-lock meta page
    check split still needed
    if split not needed anymore, drop locks and exit
    decide which bucket to split
    Attempt to X-lock old bucket number (definitely could fail)
    Attempt to X-lock new bucket number (shouldn't fail, but...)
    if above fail, drop locks and exit
    update meta page to reflect new number of buckets
    write/release meta page
    release X-lock on page 0
    -- now, accesses to all other buckets can proceed.
    Perform actual split of bucket, moving tuples as needed
    >> see below about acquiring needed extra space
    Release X-locks of old and new buckets

Note the page zero and metapage locks are not held while the actual tuple
rearrangement is performed, so accesses to other buckets can proceed in
parallel; in fact, it's possible for multiple bucket splits to proceed
in parallel.

Split's attempt to X-lock the old bucket number could fail if another
process holds S-lock on it. We do not want to wait if that happens, first
because we don't want to wait while holding the metapage exclusive-lock,
and second because it could very easily result in deadlock. (The other
process might be out of the hash AM altogether, and could do something
that blocks on another lock this process holds; so even if the hash
algorithm itself is deadlock-free, a user-induced deadlock could occur.)
So, this is a conditional LockAcquire operation, and if it fails we just
abandon the attempt to split. This is all right since the index is
overfull but perfectly functional. Every subsequent inserter will try to
split, and eventually one will succeed. If multiple inserters failed to
split, the index might still be overfull, but eventually, the index will
not be overfull and split attempts will stop. (We could make a successful
splitter loop to see if the index is still overfull, but it seems better to
distribute the split overhead across successive insertions.)
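
In terms of actual calls, the conditional acquisition can be sketched
with lmgr's ConditionalLockPage; the wrapper function itself is
hypothetical:

    #include "postgres.h"
    #include "storage/lmgr.h"

    /* Try to X-lock the bucket being split, but never wait for it. */
    static bool
    try_lock_bucket_for_split(Relation rel, BlockNumber bucket_blkno)
    {
        /* returns false instead of blocking if anyone holds the lock */
        if (!ConditionalLockPage(rel, bucket_blkno, ExclusiveLock))
            return false;   /* abandon the split; index is merely overfull */
        return true;
    }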

A problem is that if a split fails partway through (e.g. due to
insufficient disk space) the index is left corrupt. The probability of
that could be made quite low if we grab a free page or two before we
update the meta page, but the only real solution is to treat a split as a
WAL-loggable, must-complete action. I'm not planning to teach hash about
WAL in this go-round.
The fourth operation is garbage collection (bulk deletion):
274
read/sharelock meta page
275
fetch current max bucket number
277
while next bucket <= max bucket do
278
Acquire X lock on target bucket
279
Scan and remove tuples, compact free space as needed
283
exclusive-lock meta page
284
check if number of buckets changed
285
if so, release lock and return to for-each-bucket loop
286
else update metapage tuple count
287
write/release meta page

Note that this is designed to allow concurrent splits. If a split occurs,
tuples relocated into the new bucket will be visited twice by the scan,
but that does no harm. (We must however be careful about the statistics
reported by the VACUUM operation. What we can do is count the number of
tuples scanned, and believe this in preference to the stored tuple count
if the stored tuple count and number of buckets did *not* change at any
time during the scan. This provides a way of correcting the stored tuple
count if it gets out of sync for some reason. But if a split or insertion
does occur concurrently, the scan count is untrustworthy; instead,
subtract the number of tuples deleted from the stored tuple count and
use that.)

The exclusive lock request could deadlock in some strange scenarios, but
we can just error out without any great harm being done.

Free space management
---------------------

Free space management consists of two sub-algorithms, one for reserving
an overflow page to add to a bucket chain, and one for returning an empty
overflow page to the free pool.

Obtaining an overflow page:

    read/exclusive-lock meta page
    determine next bitmap page number; if none, exit loop
    release meta page lock
    read/exclusive-lock bitmap page
    search for a free page (zero bit in bitmap)
    if found:
        set bit in bitmap
        write/release bitmap page
        read/exclusive-lock meta page
        if first-free-bit value did not change,
            update it and write meta page
        release meta page
        return page number
    else (not found):
    release bitmap page
    loop back to try next bitmap page, if any
    -- here when we have checked all bitmap pages; we hold meta excl. lock
    extend index to add another overflow page; update meta information
    write/release meta page
    return page number
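
The "search for a free page" step is just a scan for a zero bit. A
minimal, illustrative sketch over a bitmap page's word array:

    #include <stdint.h>

    /* Return the index of the first zero bit, or nbits if none is free. */
    static uint32_t
    first_zero_bit(const uint32_t *words, uint32_t nbits)
    {
        for (uint32_t bit = 0; bit < nbits; bit++)
        {
            if (((words[bit / 32] >> (bit % 32)) & 1) == 0)
                return bit;     /* this overflow page is free for reuse */
        }
        return nbits;           /* every page in this bitmap page is in use */
    }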

It is slightly annoying to release and reacquire the metapage lock
multiple times, but it seems best to do it that way to minimize loss of
concurrency against processes just entering the index. We don't want
to hold the metapage exclusive lock while reading in a bitmap page.
(We can at least avoid repeated buffer pin/unpin here.)

The normal path for extending the index does not require doing I/O while
holding the metapage lock. We do have to do I/O when the extension
requires adding a new bitmap page as well as the required overflow page
... but that is an infrequent case, so the loss of concurrency seems
acceptable.

The portion of tuple insertion that calls the above subroutine looks
like this:

    -- having determined that no space is free in the target bucket:
    remember last page of bucket, drop write lock on it
    call free-page-acquire routine
    re-write-lock last page of bucket
    if it is not last anymore, step to the last page
    update (former) last page to point to new page
    write-lock and initialize new page, with back link to former last page
    write and release former last page
    insert tuple into new page

Notice this handles the case where two concurrent inserters try to extend
the same bucket. They will end up with a valid, though perhaps
space-inefficient, configuration: two overflow pages will be added to the
bucket, each containing one tuple.

The last part of this violates the rule about holding write lock on two
pages concurrently, but it should be okay to write-lock the previously
free page; there can be no other process holding lock on it.

Bucket splitting uses a similar algorithm if it has to extend the new
bucket, but it need not worry about concurrent extension since it has
exclusive lock on the new bucket.

Freeing an overflow page is done by garbage collection and by bucket
splitting (the old bucket may contain no-longer-needed overflow pages).
In both cases, the process holds exclusive lock on the containing bucket,
so need not worry about other accessors of pages in the bucket. The
algorithm is:

    delink overflow page from bucket chain
    (this requires read/update/write/release of fore and aft siblings)
    read/share-lock meta page
    determine which bitmap page contains the free space bit for page
    release meta page
    read/exclusive-lock bitmap page
    update bitmap bit
    write/release bitmap page
    if page number is less than what we saw as first-free-bit in meta:
    read/exclusive-lock meta page
    if page number is still less than first-free-bit,
        update first-free-bit field and write meta page
    release meta page

We have to do it this way because we must clear the bitmap bit before
changing the first-free-bit field (hashm_firstfree). It is possible that
we set first-free-bit too small (because someone has already reused the
page we just freed), but that is okay; the only cost is that the next
overflow page acquirer will scan more bitmap bits than it needs to. What
must be avoided is having first-free-bit greater than the actual first
free bit, because then that free page would never be found by searchers.
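
In code terms the rule is a one-directional clamp. A hedged sketch, with
"firstfree" standing in for hashm_firstfree:

    #include <stdint.h>

    /* Note a newly freed bit; only ever move the hint downward. */
    static void
    note_freed_bit(uint32_t *firstfree, uint32_t bitno)
    {
        /* Too small is harmless (extra scanning); too large would make
         * the freed page unfindable, so we never move the hint up here. */
        if (bitno < *firstfree)
            *firstfree = bitno;
    }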

All the freespace operations should be called while holding no buffer
locks. Since they need no lmgr locks, deadlock is not possible.

Other notes
-----------

All the shenanigans with locking prevent a split occurring while *another*
process is stopped in a given bucket. They do not ensure that one of
our *own* backend's scans is not stopped in the bucket, because lmgr
doesn't consider a process's own locks to conflict. So the Split
algorithm must check for that case separately before deciding it can go
ahead with the split. VACUUM does not have this problem since nothing
else can be happening within the vacuuming backend.

Should we instead try to fix the state of any conflicting local scan?
Seems mighty ugly --- got to move the held bucket S-lock as well as lots
of other messiness. For now, just punt and don't split.