@@ -2036,24 +2036,29 @@
 	heaptup = heap_prepare_insert(relation, tup, xid, cid, options);
 
-	/*
-	 * We're about to do the actual insert -- but check for conflict first, to
-	 * avoid possibly having to roll back work we've just done.
-	 *
-	 * For a heap insert, we only need to check for table-level SSI locks. Our
-	 * new tuple can't possibly conflict with existing tuple locks, and heap
-	 * page locks are only consolidated versions of tuple locks; they do not
-	 * lock "gaps" as index page locks do.  So we don't need to identify a
-	 * buffer before making the call.
-	 */
-	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
-
 	/*
 	 * Find buffer to insert this tuple into.  If the page is all visible,
 	 * this will also pin the requisite visibility map page.
 	 */
 	buffer = RelationGetBufferForTuple(relation, heaptup->t_len,
 									   InvalidBuffer, options, bistate,
 									   &vmbuffer, NULL);
 
+	/*
+	 * We're about to do the actual insert -- but check for conflict first, to
+	 * avoid possibly having to roll back work we've just done.
+	 *
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the page between this check and the insert
+	 * being visible to the scan (i.e., an exclusive buffer content lock is
+	 * continuously held from this point until the tuple insert is visible).
+	 *
+	 * For a heap insert, we only need to check for table-level SSI locks. Our
+	 * new tuple can't possibly conflict with existing tuple locks, and heap
+	 * page locks are only consolidated versions of tuple locks; they do not
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call, which makes for a faster check.
+	 */
+	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
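For reference, the resulting order of operations in heap_insert condenses to the sketch below (names are taken from the hunk above; the real function interleaves more work between these calls):

	heaptup = heap_prepare_insert(relation, tup, xid, cid, options);

	/* Pins and exclusive-locks the target page (and any needed vm page). */
	buffer = RelationGetBufferForTuple(relation, heaptup->t_len,
									   InvalidBuffer, options, bistate,
									   &vmbuffer, NULL);

	/*
	 * The SSI check now runs while the buffer content lock is held, and that
	 * lock is held continuously until the insert is visible, so no recheck
	 * is needed after the tuple is placed.
	 */
	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);

	START_CRIT_SECTION();		/* tuple placement and WAL logging follow */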
@@ -2280,10 +2285,23 @@
 	 * We're about to do the actual inserts -- but check for conflict first,
-	 * to avoid possibly having to roll back work we've just done.
-	 *
-	 * For a heap insert, we only need to check for table-level SSI locks. Our
-	 * new tuple can't possibly conflict with existing tuple locks, and heap
+	 * to minimize the possibility of having to roll back work we've just
+	 * done.
+	 *
+	 * A check here does not definitively prevent a serialization anomaly;
+	 * that check MUST be done at least past the point of acquiring an
+	 * exclusive buffer content lock on every buffer that will be affected,
+	 * and MAY be done after all inserts are reflected in the buffers and
+	 * those locks are released; otherwise there is a race condition.  Since
+	 * multiple buffers can be locked and unlocked in the loop below, and it
+	 * would not be feasible to identify and lock all of those buffers before
+	 * the loop, we must do a final check at the end.
+	 *
+	 * The check here could be omitted with no loss of correctness; it is
+	 * present strictly as an optimization.
+	 *
+	 * For heap inserts, we only need to check for table-level SSI locks. Our
+	 * new tuples can't possibly conflict with existing tuple locks, and heap
 	 * page locks are only consolidated versions of tuple locks; they do not
-	 * lock "gaps" as index page locks do.  So we don't need to identify a
-	 * buffer before making the call.
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call, which makes for a faster check.
 	 */
 	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
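The loop the new comment refers to has roughly this shape (a heavily condensed sketch based on the names visible in these hunks; the loop body and its WAL logging are elided). The final check shown at the end is the one added by the next hunk:

	/* Optimization only: a conflict caught here saves doing the inserts at all. */
	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);

	while (ndone < ntuples)
	{
		/* Locks one target page; pages from earlier iterations are unlocked again. */
		buffer = RelationGetBufferForTuple(relation, heaptuples[ndone]->t_len,
										   InvalidBuffer, options, bistate,
										   &vmbuffer, NULL);
		/* ... place as many tuples as fit on this page, WAL-log them ... */
		UnlockReleaseBuffer(buffer);
	}

	/* Definitive check: every affected buffer has been locked by now. */
	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);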
@@ -2446,5 +2464,21 @@
+
+	/*
+	 * We're done with the actual inserts.  Check for conflicts again, to
+	 * ensure that all rw-conflicts in to these inserts are detected.  Without
+	 * this final check, a sequential scan of the heap may have locked the
+	 * table after the "before" check, missing one opportunity to detect the
+	 * conflict, and then scanned the table before the new tuples were there,
+	 * missing the other chance to detect the conflict.
+	 *
+	 * For heap inserts, we only need to check for table-level SSI locks. Our
+	 * new tuples can't possibly conflict with existing tuple locks, and heap
+	 * page locks are only consolidated versions of tuple locks; they do not
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call.
+	 */
+	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
 
 	/*
 	 * If tuples are cachable, mark them for invalidation from the caches in
 	 * case we abort.  Note it is OK to do this after releasing the buffer,
 	 * because the heaptuples data structure is all in local memory, not in
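To make the two missed-detection windows concrete, here is a hypothetical interleaving of the inserting backend (T1) against a concurrent serializable sequential scan (T2), written out as a comment:

	/*
	 *   T1: CheckForSerializableConflictIn()	-- "before" check: T2 holds no lock yet
	 *   T2: acquires table-level predicate lock
	 *   T2: scans the heap						-- T1's tuples not inserted yet, sees nothing
	 *   T1: inserts tuples, releases buffer locks
	 *   T1: CheckForSerializableConflictIn()	-- final check: sees T2's lock, conflict detected
	 *
	 * Without the final check, neither side would ever observe the other:
	 * T1 checked before T2's lock existed, and T2 scanned before T1's tuples
	 * existed -- a silently missed rw-conflict.
	 */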
@@ -2731,4 +2765,9 @@
 	 * We're about to do the actual delete -- check for conflict first, to
 	 * avoid possibly having to roll back work we've just done.
+	 *
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the page between this check and the delete
+	 * being visible to the scan (i.e., an exclusive buffer content lock is
+	 * continuously held from this point until the tuple delete is visible).
 	 */
 	CheckForSerializableConflictIn(relation, &tp, buffer);
@@ -3302,7 +3341,1 @@
-	/*
-	 * We're about to do the actual update -- check for conflict first, to
-	 * avoid possibly having to roll back work we've just done.
-	 */
-	CheckForSerializableConflictIn(relation, &oldtup, buffer);
-
 	/* Fill in transaction status data */
@@ -3495,12 +3528,18 @@
 	/*
-	 * We're about to create the new tuple -- check for conflict first, to
+	 * We're about to do the actual update -- check for conflict first, to
 	 * avoid possibly having to roll back work we've just done.
 	 *
-	 * NOTE: For a tuple insert, we only need to check for table locks, since
-	 * predicate locking at the index level will cover ranges for anything
-	 * except a table scan.  Therefore, only provide the relation.
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the pages between this check and the update
+	 * being visible to the scan (i.e., exclusive buffer content lock(s) are
+	 * continuously held from this point until the tuple update is visible).
+	 *
+	 * For the new tuple the only check needed is at the relation level, but
+	 * since both tuples are in the same relation and the check for oldtup
+	 * will include checking the relation level, there is no benefit to a
+	 * separate check for the new tuple.
 	 */
-	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
+	CheckForSerializableConflictIn(relation, &oldtup, buffer);
 
 	/*
 	 * At this point newbuf and buffer are both pinned and locked, and newbuf
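Together with the removal hunk above, the net effect on heap_update is that the old code's two checks collapse into one; a condensed before/after sketch (surrounding code elided):

	/* Before: one check when modifying the old tuple, plus a second,
	 * relation-level check before creating the new tuple. */
	CheckForSerializableConflictIn(relation, &oldtup, buffer);
	/* ... later ... */
	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);

	/* After: a single check under the buffer lock(s).  The &oldtup form
	 * already includes the relation-level check, and the new tuple lives in
	 * the same relation, so nothing is lost. */
	CheckForSerializableConflictIn(relation, &oldtup, buffer);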
@@ -5116,6 +5155,6 @@
 	 * The initial tuple is assumed to be already locked.
 	 *
-	 * This function doesn't check visibility, it just inconditionally marks the
+	 * This function doesn't check visibility, it just unconditionally marks the
 	 * tuple(s) as locked.  If any tuple in the updated chain is being deleted
 	 * concurrently (or updated with the key being modified), sleep until the
 	 * transaction doing it is finished.
@@ -5609,5 +5648,5 @@
 	 * NB -- some of these transformations are only valid because
 	 * we know the return Xid is a tuple updater (i.e. not merely a
-	 * locker.) Also note that the only reason we don't explicitely
+	 * locker.) Also note that the only reason we don't explicitly
 	 * worry about HEAP_KEYS_UPDATED is because it lives in t_infomask2
 	 * rather than t_infomask.
@@ -7138,2 +7177,12 @@
 	MarkBufferDirty(buffer);
+
+	/*
+	 * At the end of crash recovery the init forks of unlogged relations are
+	 * copied, without going through shared buffers. So we need to force the
+	 * on-disk state of init forks to always be in sync with the state in
+	 * shared buffers.
+	 */
+	if (xlrec->forknum == INIT_FORKNUM)
+		FlushOneBuffer(buffer);
+
 	UnlockReleaseBuffer(buffer);
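Note the placement within the redo routine: the flush happens before UnlockReleaseBuffer, i.e., while the buffer is still pinned and locked, so the page written to disk is exactly the one redo just produced. Annotated sketch of the resulting code:

	MarkBufferDirty(buffer);			/* page was modified by redo */
	if (xlrec->forknum == INIT_FORKNUM)
		FlushOneBuffer(buffer);			/* write it out immediately: the
										 * end-of-recovery copy of init forks
										 * reads the files directly, not
										 * shared buffers */
	UnlockReleaseBuffer(buffer);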