* $PostgreSQL: pgsql/src/backend/access/heap/pruneheap.c,v 1.18 2009/06/11 14:48:53 momjian Exp $
*
*-------------------------------------------------------------------------
TransactionId new_prune_xid; /* new prune hint value for page */
int nredirected; /* numbers of entries in arrays below */
/* arrays that accumulate indexes of items to be changed */
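The working state sketched by the fields above can be illustrated with a minimal standalone declaration. This is an assumption-laden sketch, not PostgreSQL's actual `PruneState`: the typedefs are stand-ins for the real ones, and the array bound replaces `MaxHeapTuplesPerPage` with an assumed constant.

```c
#include <assert.h>

typedef unsigned int TransactionId;  /* stand-in for PostgreSQL's typedef */
typedef unsigned short OffsetNumber; /* stand-in; the real one lives in off.h */

#define MAX_TUPLES_PER_PAGE 291      /* assumed bound; PG uses MaxHeapTuplesPerPage */

/* Simplified mirror of the pruning working state described above. */
typedef struct PruneState
{
    TransactionId new_prune_xid;     /* new prune hint value for page */
    int           nredirected;       /* numbers of entries in arrays below */
    int           ndead;
    int           nunused;
    /* arrays that accumulate indexes of items to be changed */
    OffsetNumber  redirected[MAX_TUPLES_PER_PAGE * 2]; /* two entries per redirection */
    OffsetNumber  nowdead[MAX_TUPLES_PER_PAGE];
    OffsetNumber  nowunused[MAX_TUPLES_PER_PAGE];
} PruneState;
```

The counters travel together with the arrays so that a single struct can be zeroed at the start of each page scan and handed to the recording helpers.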
* Our strategy is to scan the page and make lists of items to change,
* then apply the changes within a critical section.  This keeps as much
* logic as possible out of the critical section, and also ensures that
* WAL replay will work the same as the normal case.
* First, inform inval.c that upcoming CacheInvalidateHeapTuple calls are
* nontransactional.
if (redirect_move)
	BeginNonTransactionalInvalidation();
* Initialize the new pd_prune_xid value to zero (indicating no prunable
* tuples).  If we find any tuples which may soon become prunable, we will
* save the lowest relevant XID in new_prune_xid.  Also initialize the rest
* of our working state.
prstate.new_prune_xid = InvalidTransactionId;
prstate.nredirected = prstate.ndead = prstate.nunused = 0;
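The scan-then-apply strategy described above can be sketched in isolation: one pass records planned changes into arrays without touching the data, and a second short pass applies them all at once. The helper names and the "negative means dead" rule here are purely illustrative; PostgreSQL's real second phase runs between START_CRIT_SECTION and END_CRIT_SECTION around a WAL-logged page update.

```c
#include <assert.h>

#define NITEMS 8

/* Phase 1: scan and record which slots to clear, changing nothing yet. */
static int plan_changes(const int items[NITEMS], int to_clear[NITEMS])
{
    int n = 0;
    for (int i = 0; i < NITEMS; i++)
        if (items[i] < 0)          /* pretend negative values are "dead" */
            to_clear[n++] = i;
    return n;
}

/* Phase 2: apply every planned change in one short, failure-free pass. */
static void apply_changes(int items[NITEMS], const int to_clear[], int n)
{
    for (int i = 0; i < n; i++)
        items[to_clear[i]] = 0;
}
```

Keeping phase 2 to straight-line array stores is what makes it safe to run inside a critical section, and it is also why WAL replay can reproduce the same result from the recorded lists alone.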
* Send invalidation messages for any tuples we are about to move.  It is
* safe to do this now, even though we could theoretically still fail
* before making the actual page update, because a useless cache
* invalidation doesn't hurt anything.  Also, no one else can reload the
* tuples while we have exclusive buffer lock, so it's not too early to
* send the invals.  This avoids sending the invals while inside the
if (prstate.nredirected > 0 || prstate.ndead > 0 || prstate.nunused > 0)
* Apply the planned item changes, then repair page fragmentation, and
* update the page's hint bit about whether it has free line pointers.
229
228
heap_page_prune_execute(buffer,
230
229
prstate.redirected, prstate.nredirected,
* If we didn't prune anything, but have found a new value for the
* pd_prune_xid field, update it and mark the buffer dirty.  This is
* treated as a non-WAL-logged hint.
*
* Also clear the "page is full" flag if it is set, since there's no
* point in repeating the prune/defrag process until something else
* OldestXmin is the cutoff XID used to identify dead tuples.
* We don't actually change the page here, except perhaps for hint-bit updates
* caused by HeapTupleSatisfiesVacuum.  We just add entries to the arrays in
* prstate showing the changes to be made.  Items to be redirected are added
* to the redirected[] array (two entries per redirection); items to be set to
* LP_DEAD state are added to nowdead[]; and items to be set to LP_UNUSED
* state are added to nowunused[].
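The bookkeeping just described (two array entries per redirection, one per dead or unused item) can be mimicked with a small sketch. The record helpers below are simplified stand-ins for `heap_prune_record_redirect` and its siblings, and the struct is a toy, not the real `PruneState`.

```c
#include <assert.h>

typedef unsigned short OffsetNumber; /* stand-in for PostgreSQL's typedef */

#define MAX_ITEMS 16

typedef struct PruneState
{
    int          nredirected;
    int          ndead;
    int          nunused;
    OffsetNumber redirected[MAX_ITEMS * 2]; /* pairs: (from, to) */
    OffsetNumber nowdead[MAX_ITEMS];
    OffsetNumber nowunused[MAX_ITEMS];
} PruneState;

/* Record that item 'offnum' should redirect to 'rdoffnum': two array slots. */
static void record_redirect(PruneState *p, OffsetNumber offnum, OffsetNumber rdoffnum)
{
    p->redirected[p->nredirected * 2] = offnum;
    p->redirected[p->nredirected * 2 + 1] = rdoffnum;
    p->nredirected++;
}

/* Record that item 'offnum' should become LP_DEAD. */
static void record_dead(PruneState *p, OffsetNumber offnum)
{
    p->nowdead[p->ndead++] = offnum;
}

/* Record that item 'offnum' should become LP_UNUSED. */
static void record_unused(PruneState *p, OffsetNumber offnum)
{
    p->nowunused[p->nunused++] = offnum;
}
```

Storing redirections as flat (from, to) pairs keeps the array a plain `OffsetNumber` buffer, which is convenient both for applying the changes and for copying them verbatim into the WAL record.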
else if (redirect_move && ItemIdIsRedirected(rootlp))
* If we desire to eliminate LP_REDIRECT items by moving tuples, make
* a redirection entry for each redirected root item; this will cause
* heap_page_prune_execute to actually do the move.  (We get here only
* when there are no DEAD tuples in the chain; otherwise the
* redirection entry was made above.)
heap_prune_record_redirect(prstate, rootoffnum, chainitems[1]);
redirect_target = chainitems[1];
* If we are going to implement a redirect by moving tuples, we have to
* issue a cache invalidation against the redirection target tuple,
* because its CTID will be effectively changed by the move.  Note that
* CacheInvalidateHeapTuple only queues the request, it doesn't send it;
* if we fail before reaching EndNonTransactionalInvalidation, nothing
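The queue-then-send behavior noted above (requests only accumulate until the end call delivers them, so abandoning the queue sends nothing) can be illustrated with a toy invalidation queue. These names and globals are illustrative only; they are not inval.c's API.

```c
#include <assert.h>

#define QUEUE_MAX 32

static int queued[QUEUE_MAX]; /* pending request ids */
static int nqueued = 0;
static int nsent = 0;

/* Analogous to CacheInvalidateHeapTuple: only queues the request. */
static void queue_invalidation(int tuple_id)
{
    if (nqueued < QUEUE_MAX)
        queued[nqueued++] = tuple_id;
}

/* Analogous to EndNonTransactionalInvalidation: actually delivers them. */
static void send_queued_invalidations(void)
{
    nsent += nqueued;
    nqueued = 0;
}
```

Because nothing leaves the queue until the end call runs, a failure before that point leaves other backends' caches untouched, which is exactly why queuing the inval early (under the buffer lock) is harmless.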
* buffer, and is inside a critical section.
*
* This is split out because it is also used by heap_xlog_clean()
* to replay the WAL record when needed after a crash.  Note that the
* arguments are identical to those of log_heap_clean().