 * The bgwriter is started by the postmaster as soon as the startup subprocess
 * finishes, or as soon as recovery begins if we are doing archive recovery.
 * It remains alive until the postmaster commands it to terminate.
 * Normal termination is by SIGUSR2, which instructs the bgwriter to execute
 * a shutdown checkpoint and then exit(0).  (All backends must be stopped
 * before SIGUSR2 is issued!)  Emergency termination is by SIGQUIT; like any
 *
 * $PostgreSQL: pgsql/src/backend/postmaster/bgwriter.c,v 1.62 2009/06/26 20:29:04 tgl Exp $
 *
 *-------------------------------------------------------------------------
		if (do_checkpoint)
		{
			bool		ckpt_performed = false;
			bool		do_restartpoint;

			/* use volatile pointer to prevent code rearrangement */
			volatile BgWriterShmemStruct *bgs = BgWriterShmem;

			/*
			 * Check if we should perform a checkpoint or a restartpoint. As a
			 * side-effect, RecoveryInProgress() initializes TimeLineID if
			 * it's not set yet.
			 */
			do_restartpoint = RecoveryInProgress();
			SpinLockRelease(&bgs->ckpt_lck);

			/*
			 * The end-of-recovery checkpoint is a real checkpoint that's
			 * performed while we're still in recovery.
			 */
			if (flags & CHECKPOINT_END_OF_RECOVERY)
				do_restartpoint = false;
			/*
			 * We will warn if (a) too soon since last checkpoint (whatever
			 * caused it) and (b) somebody set the CHECKPOINT_CAUSE_XLOG flag
			 * since the last checkpoint start.  Note in particular that this
			 */
				(flags & CHECKPOINT_CAUSE_XLOG) &&
				elapsed_secs < CheckPointWarning)
				ereport(LOG,
						(errmsg_plural("checkpoints are occurring too frequently (%d second apart)",
									   "checkpoints are occurring too frequently (%d seconds apart)",
									   elapsed_secs,
									   elapsed_secs),
						 errhint("Consider increasing the configuration parameter \"checkpoint_segments\".")));
	PG_SETMASK(&BlockSig);

	/*
	 * We DO NOT want to run proc_exit() callbacks -- we're here because
	 * shared memory may be corrupted, so we don't want to try to clean up our
	 * transaction.  Just nail the windows shut and get out of town.  Now that
	 * there's an atexit callback to prevent third-party code from breaking
	 * things by calling exit() directly, we have to reset the callbacks
	 * explicitly to make this work as intended.
	 */

	/*
	 * Note we do exit(2) not exit(0).  This is to force the postmaster into a
	 * system reset cycle if some idiot DBA sends a manual SIGQUIT to a random
	 * backend.  This is necessary precisely because we don't clean up our
	 * shared memory state.  (The "dead man switch" mechanism in pmsignal.c
	 * should ensure the postmaster sees this as a crash, too, but no harm in
	 * being doubly sure.)
	 */
 * flags is a bitwise OR of the following:
 *	CHECKPOINT_IS_SHUTDOWN: checkpoint is for database shutdown.
 *	CHECKPOINT_END_OF_RECOVERY: checkpoint is for end of WAL recovery.
 *	CHECKPOINT_IMMEDIATE: finish the checkpoint ASAP,
 *		ignoring checkpoint_completion_target parameter.
 *	CHECKPOINT_FORCE: force a checkpoint even if no XLOG activity has occurred
 *		since the last one (implied by CHECKPOINT_IS_SHUTDOWN or
 *		CHECKPOINT_END_OF_RECOVERY).
 *	CHECKPOINT_WAIT: wait for completion before returning (otherwise,
 *		just signal bgwriter to do it, and return).
 *	CHECKPOINT_CAUSE_XLOG: checkpoint is requested due to xlog filling.
	/*
	 * Send signal to request checkpoint.  It's possible that the bgwriter
	 * hasn't started yet, or is in process of restarting, so we will retry a
	 * few times if needed.  Also, if not told to wait for the checkpoint to
	 * occur, we consider failure to send the signal to be nonfatal and merely
	 * LOG it.
	 */
	for (ntries = 0;; ntries++)
	{
		if (BgWriterShmem->bgwriter_pid == 0)
		{
			if (ntries >= 20)	/* max wait 2.0 sec */
			{
				elog((flags & CHECKPOINT_WAIT) ? ERROR : LOG,
					 "could not request checkpoint because bgwriter not running");
				break;
			}
		}
		else if (kill(BgWriterShmem->bgwriter_pid, SIGINT) != 0)
		{
			if (ntries >= 20)	/* max wait 2.0 sec */
			{
				elog((flags & CHECKPOINT_WAIT) ? ERROR : LOG,
					 "could not signal for checkpoint: %m");
				break;
			}
		}