Developer's Frequently Asked Questions (FAQ) for PostgreSQL

Last updated: Fri May 6 13:47:54 EDT 2005

Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)

The most recent version of this document can be viewed at
http://www.postgresql.org/files/documentation/faqs/FAQ_DEV.html.
_________________________________________________________________
1.1) How do I get involved in PostgreSQL development?
1.2) What development environment is required to develop code?
1.3) What areas need work?
1.4) What do I do after choosing an item to work on?
1.5) Where can I learn more about the code?
1.6) I've developed a patch, what next?
1.7) How do I download/update the current source tree?
1.8) How do I test my changes?
1.9) What tools are available for developers?
1.10) What books are good for developers?
1.11) What is configure all about?
1.12) How do I add a new port?
1.13) Why don't you use threads, raw devices, async-I/O, <insert your
favorite wizz-bang feature here>?
1.14) How are RPMs packaged?
1.15) How are CVS branches handled?
1.16) Where can I get a copy of the SQL standards?
1.17) Where can I get technical assistance?
1.18) How do I get involved in PostgreSQL web site development?

2.1) How do I efficiently access information in tables from the
backend code?
2.2) Why are table, column, type, function, view names sometimes
referenced as Name or NameData, and sometimes as char *?
2.3) Why do we use Node and List to make data structures?
2.4) I just added a field to a structure. What else should I do?
2.5) Why do we use palloc() and pfree() to allocate memory?
2.6) What is ereport()?
2.7) What is CommandCounterIncrement()?
2.8) What debugging features are available?
_________________________________________________________________
1.1) How do I get involved in PostgreSQL development?

Download the code and have a look around. See 1.7.

Subscribe to and read the pgsql-hackers mailing list (often termed
'hackers'). This is where the major contributors and core members of
the project discuss development.
1.2) What development environment is required to develop code?

PostgreSQL is developed mostly in the C programming language. It also
makes use of Yacc and Lex.

The source code is targeted at most of the popular Unix platforms and
the Windows environment (XP, Windows 2000, and up).

Most developers make use of the open source development tool chain. If
you have contributed to open source software before, you will probably
be familiar with these tools. They include: GCC (http://gcc.gnu.org),
GDB (www.gnu.org/software/gdb/gdb.html), autoconf
(www.gnu.org/software/autoconf/) and GNU make
(www.gnu.org/software/make/make.html).

Developers using this tool chain on Windows make use of MingW (see
http://www.mingw.org/).

Some developers use compilers from other software vendors, with mixed
results.

Developers who are regularly rebuilding the source often pass the
--enable-depend flag to configure. The result is that when you make a
modification to a C header file, all files that depend upon that file
are also rebuilt.
1.3) What areas need work?

Outstanding features are detailed in the TODO list. This is located in
doc/TODO in the source distribution or at
http://developer.postgresql.org/todo.php.

You can learn more about these features by consulting the archives,
the SQL standards and the recommended texts (see 1.10).
1.4) What do I do after choosing an item to work on?

Send an email to pgsql-hackers with a proposal for what you want to do
(assuming your contribution is not trivial). Working in isolation is
not advisable: others may be working on the same TODO item; you may
have misunderstood the TODO item; your approach may benefit from the
review of others.

A web site is maintained for patches that are ready to be applied,
http://momjian.postgresql.org/cgi-bin/pgpatches, and those that are
being kept for the next release,
http://momjian.postgresql.org/cgi-bin/pgpatches2.
1.5) Where can I learn more about the code?

Other than documentation in the source tree itself, you can find some
papers/presentations discussing the code at
http://www.postgresql.org/developer.
1.6) I've developed a patch, what next?

Generate the patch in contextual diff format. If you are unfamiliar
with this, you may find the script src/tools/makediff/difforig useful.

Ensure that your patch is generated against the most recent version of
the code. If it is a patch adding new functionality, the most recent
version is cvs HEAD; if it is a bug fix, this will be the most
recent version of the branch which suffers from the bug (for more on
branches in PostgreSQL, see 1.15).

Finally, submit the patch to pgsql-patches@postgresql.org. It will be
reviewed by other contributors to the project and may be either
accepted or sent back for further work.
1.7) How do I download/update the current source tree?

There are several ways to obtain the source tree. Occasional
developers can just get the most recent source tree snapshot from
ftp://ftp.postgresql.org.

Regular developers may want to take advantage of anonymous access to
our source code management system. The source tree is currently hosted
in CVS. For details of how to obtain the source from CVS see
http://developer.postgresql.org/docs/postgres/cvs.html.
1.8) How do I test my changes?

Basic system testing

The easiest way to test your code is to ensure that it builds against
the latest version of the code and that it does not generate compiler
warnings.

It is also advisable to pass --enable-cassert to configure. This
will turn on assertions within the source, which often expose bugs
that would otherwise cause data corruption or segmentation violations.
This generally makes debugging much easier.

Then, perform run time testing via psql.

Regression test suite

The next step is to test your changes against the existing regression
test suite. To do this, issue "make check" in the root directory of
the source tree. If any tests fail, investigate.

If you've deliberately changed existing behavior, this change may
cause a regression test failure but not any actual regression. If so,
you should also patch the regression test suite.

Other run time testing

Some developers make use of tools such as valgrind
(http://valgrind.kde.org) for memory testing, gprof (which comes with
the GNU binutils suite) and oprofile
(http://oprofile.sourceforge.net/) for profiling, and other related
tools.

What about unit testing, static analysis, model checking...?

There have been a number of discussions about other testing frameworks
and some developers are exploring these ideas.

Keep in mind the Makefiles do not have the proper dependencies for
include files. You have to do a make clean and then another make. If
you are using GCC you can use the --enable-depend option of configure
to have the compiler compute the dependencies automatically.
1.9) What tools are available for developers?

First, all the files in the src/tools directory are designed for
developers:

    RELEASE_CHANGES changes we have to make for each release
    backend         description/flowchart of the backend directories
    ccsym           find standard defines made by your compiler
    copyright       fixes copyright notices
    entab           converts tabs to spaces, used by pgindent
    find_static     finds functions that could be made static
    find_typedef    finds typedefs in the source code
    find_badmacros  finds macros that use braces incorrectly
    fsync           a script to provide information about the cost of
                    cache syncs
    make_ctags      make vi 'tags' file in each directory
    make_diff       make *.orig and diffs of source
    make_etags      make emacs 'etags' files
    make_keywords   make comparison of our keywords and SQL'92
    make_mkid       make mkid ID files
    pgcvslog        used to generate a list of changes for each release
    pginclude       scripts for adding/removing include files
    pgindent        indents source files
    pgtest          a semi-automated build system
    thread          a thread testing script

In src/include/catalog:

    unused_oids     a script which generates unused OIDs for use in
                    system catalogs
    duplicate_oids  finds duplicate OIDs in system catalog definitions
If you point your browser at the tools/backend/index.html file, you
will see a few paragraphs describing the data flow, the backend
components in a flow chart, and a description of the shared memory
area. You can click on any flowchart box to see a description. If you
then click on the directory name, you will be taken to the source
directory, to browse the actual source code behind it. We also have
several README files in some source directories to describe the
function of the module. The browser will display these when you enter
the directory also. The tools/backend directory is also contained on
our web page under the title How PostgreSQL Processes a Query.

Second, you really should have an editor that can handle tags, so you
can tag a function call to see the function definition, and then tag
inside that function to see an even lower-level function, and then
back out twice to return to the original function. Most editors
support this via tags or etags files.

Third, you need to get id-utils from ftp://ftp.gnu.org/gnu/id-utils/.
By running tools/make_mkid, an archive of source symbols can be
created that can be rapidly queried.

Some developers make use of cscope, which can be found at
http://cscope.sf.net/. Others use glimpse, which can be found at
http://webglimpse.net/.
tools/make_diff has tools to create patch diff files that can be
applied to the distribution. This produces context diffs, which is our
preferred format.

Our standard format is to indent each code level with one tab, where
each tab is four spaces. You will need to set your editor to display
tabs as four spaces. In emacs, you can set the tab width with:

    M-x set-variable tab-width

and define a PostgreSQL C style along these lines:

    (c-add-style "pgsql"
        '("bsd"
            (indent-tabs-mode . t)
            (tab-width . 4)
            (c-basic-offset . 4))
        nil ) ; t = set this style, nil = don't

    (defun pgsql-c-mode ()
      (c-mode)
      (c-set-style "pgsql"))

and add this to your autoload list (modify file path in macro):

    (setq auto-mode-alist
      (cons '("\\`/home/andrew/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
            auto-mode-alist))
pgindent will format the code by specifying flags to your operating
system's utility indent. This article describes the value of a
consistent coding style.

pgindent is run on all source files just before each beta test period.
It auto-formats all source files to make them consistent. Comment
blocks that need specific line breaks should be formatted as block
comments, where the comment starts as /*------. These comments will
not be reformatted in any way.

pginclude contains scripts used to add needed #include's to include
files, and remove unneeded #include's.

When adding system types, you will need to assign OIDs to them. There
is also a script called unused_oids in pgsql/src/include/catalog that
shows the unused OIDs.
1.10) What books are good for developers?

I have four good books: An Introduction to Database Systems, by C.J.
Date, Addison-Wesley; A Guide to the SQL Standard, by C.J. Date, et
al., Addison-Wesley; Fundamentals of Database Systems, by Elmasri and
Navathe; and Transaction Processing, by Jim Gray, Morgan Kaufmann.

There is also a database performance site, with a handbook on-line
written by Jim Gray, at http://www.benchmarkresources.com.
1.11) What is configure all about?

The files configure and configure.in are part of the GNU autoconf
package. Configure allows us to test for various capabilities of the
OS, and to set variables that can then be tested in C programs and
Makefiles. Autoconf is installed on the PostgreSQL main server. To add
options to configure, edit configure.in, and then run autoconf to
regenerate configure.

When configure is run by the user, it tests various OS capabilities,
stores those in config.status and config.cache, and modifies a list of
*.in files. For example, if there exists a Makefile.in, configure
generates a Makefile that contains substitutions for all @var@
parameters found by configure.

When you need to edit files, make sure you don't waste time modifying
files generated by configure. Edit the *.in file, and re-run configure
to recreate the needed file. If you run make distclean from the
top-level source directory, all files derived by configure are
removed, so you see only the files contained in the source
distribution.
1.12) How do I add a new port?

There are a variety of places that need to be modified to add a new
port. First, start in the src/template directory. Add an appropriate
entry for your OS. Also, use src/config.guess to add your OS to
src/template/.similar. You shouldn't match the OS version exactly. The
configure test will look for an exact OS version number, and if not
found, find a match without version number. Edit src/configure.in to
add your new OS. (See configure item above.) You will need to run
autoconf, or patch src/configure too.

Then, check src/include/port and add your new OS file, with
appropriate values. Hopefully, there is already locking code in
src/include/storage/s_lock.h for your CPU. There is also a
src/makefiles directory for port-specific Makefile handling. There is
a backend/port directory if you need special files for your OS.
1.13) Why don't you use threads, raw devices, async-I/O, <insert your
favorite wizz-bang feature here>?

There is always a temptation to use the newest operating system
features as soon as they arrive. We resist that temptation.

First, we support 15+ operating systems, so any new feature has to be
well established before we will consider it. Second, most new
wizz-bang features don't provide dramatic improvements. Third, they
usually have some downside, such as decreased reliability or
additional code required. Therefore, we don't rush to use new features
but rather wait for the feature to be established, then ask for
testing to show that a measurable improvement is possible.

As an example, threads are not currently used in the backend code
because:

  * Historically, threads were unsupported and buggy.
  * An error in one backend can corrupt other backends.
  * Speed improvements using threads are small compared to the
    remaining backend startup time.
  * The backend code would be more complex.

So, we are not ignorant of new features. It is just that we are
cautious about their adoption. The TODO list often contains links to
discussions showing our reasoning in these areas.
1.14) How are RPMs packaged?

This was written by Lamar Owen:

As to how the RPMs are built -- to answer that question sanely
requires me to know how much experience you have with the whole RPM
paradigm. 'How is the RPM built?' is a multifaceted question. The
obvious simple answer is that I maintain:
 1. A set of patches to make certain portions of the source tree
    'behave' in the different environment of the RPMset;
 2. The initscript;
 3. Any other ancillary scripts and files;
 4. A README.rpm-dist document that tries to adequately document both
    the differences between the RPM build and the WHY of the
    differences, as well as useful RPM environment operations (like,
    using syslog, upgrading, getting postmaster to start at OS boot,
    etc.);
 5. The spec file that throws it all together. This is not a trivial
    undertaking in a package of this size.
I then download and build on as many different canonical distributions
as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1
on my personal hardware. Occasionally I receive opportunity from
certain commercial enterprises such as Great Bridge and PostgreSQL,
Inc. to build on other distributions.

I test the build by installing the resulting packages and running the
regression tests. Once the build passes these tests, I upload to the
postgresql.org ftp server and make a release announcement. I am also
responsible for maintaining the RPM download area on the ftp site.
You'll notice I said 'canonical' distributions above. That simply
means that the machine is as stock 'out of the box' as practical --
that is, everything (except a select few programs) on these boxen is
installed by RPM; only official Red Hat released RPMs are used (except
in unusual circumstances involving software that will not alter the
build -- for example, installing a newer non-RedHat version of the Dia
diagramming package is OK -- installing Python 2.1 on the box that has
Python 1.5.2 installed is not, as that alters the PostgreSQL build).
The RPM as uploaded is built to as close to out-of-the-box pristine as
is possible. Only the standard released 'official to that release'
compiler is used -- and only the standard official kernel is used as
well.
For a time I built on Mandrake for RedHat consumption -- no more.
Nonstandard RPM building systems are worse than useless. Which is not
to say that Mandrake is useless! By no means is Mandrake useless --
unless you are building Red Hat RPMs -- and Red Hat is useless if
you're trying to build Mandrake or SuSE RPMs, for that matter. But I
would be foolish to use 'Lamar Owen's Super Special RPM Blend Distro
0.1.2' to build for public consumption! :-)
I _do_ attempt to make the _source_ RPM compatible with as many
distributions as possible -- however, since I have limited resources
(as a volunteer RPM maintainer) I am limited as to the amount of
testing said build will get on other distributions, architectures, or
systems.

And, while I understand people's desire to immediately upgrade to the
newest version, realize that I do this as a side interest -- I have a
regular, full-time job as a broadcast
engineer/webmaster/sysadmin/Technical Director which occasionally
prevents me from making timely RPM releases. This happened during the
early part of the 7.1 beta cycle -- but I believe I was pretty much on
the ball for the Release Candidates and the final release.
I am working towards a more open RPM distribution -- I would dearly
love to more fully document the process and put everything into CVS --
once I figure out how I want to represent things such as the spec file
in a CVS form. It makes no sense to maintain a changelog, for
instance, in the spec file in CVS when CVS does a better job of
changelogs -- I will need to write a tool to generate a real spec file
from a CVS spec-source file that would add version numbers, changelog
entries, etc. to the result before building the RPM. IOW, I need to
rethink the process -- and then go through the motions of putting my
long RPM history into CVS one version at a time so that version
history information isn't lost.

As to why all these files aren't part of the source tree, well, unless
there was a large cry for it to happen, I don't believe it should.
PostgreSQL is very platform-agnostic -- and I like that. Including the
RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that
agnostic stance in a negative way. But maybe I'm too sensitive to
that. I'm not opposed to doing that if that is the consensus of the
core group -- and that would be a sneaky way to get the stuff into CVS
:-). But if the core group isn't thrilled with the idea (and my
instinct says they're not likely to be), I am opposed to the idea --
not to keep the stuff to myself, but to not hinder the
platform-neutral stance. IMHO, of course.

Of course, there are many projects that DO include all the files
necessary to build RPMs from their Official Tarball (TM).
1.15) How are CVS branches managed?

This was written by Tom Lane:

If you just do basic "cvs checkout", "cvs update", "cvs commit", then
you'll always be dealing with the HEAD version of the files in CVS.
That's what you want for development, but if you need to patch past
stable releases then you have to be able to access and update the
"branch" portions of our CVS repository. We normally fork off a branch
for a stable release just before starting the development cycle for
the next release.
The first thing you have to know is the branch name for the branch you
are interested in getting at. To do this, look at some long-lived
file, say the top-level HISTORY file, with "cvs status -v" to see what
the branch names are. (Thanks to Ian Lance Taylor for pointing out
that this is the easiest way to do it.) Typical branch names are:

    REL7_1_STABLE
    REL7_0_PATCHES
    REL6_5_PATCHES
OK, so how do you do work on a branch? By far the best way is to
create a separate checkout tree for the branch and do your work in
that. Not only is that the easiest way to deal with CVS, but you
really need to have the whole past tree available anyway to test your
work. (And you *better* test your work. Never forget that dot-releases
tend to go out with very little beta testing --- so whenever you
commit an update to a stable branch, you'd better be doubly sure that
your fix is good.)

Normally, to checkout the head branch, you just cd to the place you
want to contain the toplevel "pgsql" directory and say

    cvs ... checkout pgsql

To get a past branch, you cd to wherever you want it and say

    cvs ... checkout -r BRANCHNAME pgsql

For example, just a couple days ago I did

    mkdir ~postgres/REL7_1
    cd ~postgres/REL7_1
    cvs ... checkout -r REL7_1_STABLE pgsql

and now I have a maintenance copy of 7.1.*.

When you've done a checkout in this way, the branch name is "sticky":
CVS automatically knows that this directory tree is for the branch,
and whenever you do "cvs update" or "cvs commit" in this tree, you'll
fetch or store the latest version in the branch, not the head version.

So, if you have a patch that needs to apply to both the head and a
recent stable branch, you have to make the edits and do the commit
twice, once in your development tree and once in your stable branch
tree. This is kind of a pain, which is why we don't normally fork the
tree right away after a major release --- we wait for a dot-release or
two, so that we won't have to double-patch the first wave of fixes.
1.16) Where can I get a copy of the SQL standards?

There are three versions of the SQL standard: SQL-92, SQL:1999, and
SQL:2003. They are endorsed by ANSI and ISO. Draft versions can be
obtained from:

  * SQL-92 http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt
  * SQL:1999
    http://www.cse.iitb.ac.in/dbms/Data/Papers-Other/SQL1999/ansi-iso-
  * SQL:2003 http://www.wiscorp.com/sql/sql_2003_standard.zip

Some SQL standards web pages are:
  * http://troels.arvin.dk/db/rdbms/links/#standards
  * http://www.wiscorp.com/SQLStandards.html
  * http://www.contrib.andrew.cmu.edu/~shadow/sql.html#syntax (SQL-92)
  * http://dbs.uni-leipzig.de/en/lokal/standards.pdf (paper)
1.17) Where can I get technical assistance?

Many technical questions held by those new to the code have been
answered on the pgsql-hackers mailing list - the archives of which can
be found at http://archives.postgresql.org/pgsql-hackers/.

If you cannot find discussion of your particular question, feel free
to put it to the list.

Major contributors also answer technical questions, including
questions about development of new features, on IRC at
irc.freenode.net in the #postgresql channel.
1.18) How do I get involved in PostgreSQL web site development?

PostgreSQL website development is discussed on the
pgsql-www@postgresql.org mailing list. There is a project page where
the source code is available at
http://gborg.postgresql.org/project/pgweb/projdisplay.php; the code
for the next version of the website is under the "portal" module. You
will also find code for the "techdocs" website if you would like to
contribute to that. A temporary todo list for current website
development issues is available at http://xzilla.postgresql.org/todo.
2.1) How do I efficiently access information in tables from the
backend code?

You first need to find the tuples (rows) you are interested in. There
are two ways. First, SearchSysCache() and related functions allow you
to query the system catalogs. This is the preferred way to access
system tables, because the first call to the cache loads the needed
rows, and future requests can return the results without accessing the
base table. The caches use system table indexes to look up tuples. A
list of available caches is located in
src/backend/utils/cache/syscache.c.
src/backend/utils/cache/lsyscache.c contains many column-specific
cache lookup functions.

The rows returned are cache-owned versions of the heap rows.
Therefore, you must not modify or delete the tuple returned by
SearchSysCache(). What you should do is release it with
ReleaseSysCache() when you are done using it; this informs the cache
that it can discard that tuple if necessary. If you neglect to call
ReleaseSysCache(), then the cache entry will remain locked in the
cache until end of transaction, which is tolerable but not very
desirable.

If you can't use the system cache, you will need to retrieve the data
directly from the heap table, using the buffer cache that is shared by
all backends. The backend automatically takes care of loading the rows
into the buffer cache.

Open the table with heap_open(). You can then start a table scan with
heap_beginscan(), then use heap_getnext() and continue as long as
HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be
assigned to the scan. No indexes are used, so all rows are going to be
compared to the keys, and only the valid rows returned.
You can also use heap_fetch() to fetch rows by block number/offset.
While scans automatically lock/unlock rows from the buffer cache, with
heap_fetch() you must pass a Buffer pointer, and ReleaseBuffer() it
when completed.

Once you have the row, you can get data that is common to all tuples,
like t_self and t_oid, by merely accessing the HeapTuple structure
entries. If you need a table-specific column, you should take the
HeapTuple pointer, and use the GETSTRUCT() macro to access the
table-specific start of the tuple. You then cast the pointer as a
Form_pg_proc pointer if you are accessing the pg_proc table, or
Form_pg_type if you are accessing pg_type. You can then access the
columns by using a structure pointer:

    ((Form_pg_class) GETSTRUCT(tuple))->relnatts

You must not directly change live tuples in this way. The best way is
to use heap_modifytuple() and pass it your original tuple, and the
values you want changed. It returns a palloc'ed tuple, which you pass
to heap_replace(). You can delete tuples by passing the tuple's t_self
to heap_destroy(). You use t_self for heap_update() too. Remember,
tuples can be either system cache copies, which may go away after you
call ReleaseSysCache(), or read directly from disk buffers, which go
away when you heap_getnext(), heap_endscan(), or ReleaseBuffer(), in
the heap_fetch() case. Or it may be a palloc'ed tuple, that you must
pfree() when finished.
2.2) Why are table, column, type, function, view names sometimes
referenced as Name or NameData, and sometimes as char *?

Table, column, type, function, and view names are stored in system
tables in columns of type Name. Name is a fixed-length,
null-terminated type of NAMEDATALEN bytes. (The default value for
NAMEDATALEN is 64 bytes.)

    typedef struct nameData
    {
        char data[NAMEDATALEN];
    } NameData;
    typedef NameData *Name;

Table, column, type, function, and view names that come into the
backend via user queries are stored as variable-length,
null-terminated character strings.

Many functions are called with both types of names, ie. heap_open().
Because the Name type is null-terminated, it is safe to pass it to a
function expecting a char *. Because there are many cases where
on-disk names (Name) are compared to user-supplied names (char *),
there are many cases where Name and char * are used interchangeably.
2.3) Why do we use Node and List to make data structures?

We do this because it allows a consistent way to pass data inside
the backend in a flexible way. Every node has a NodeTag which
specifies what type of data is inside the Node. Lists are groups of
Nodes chained together as a forward-linked list.

Here are some of the List manipulation commands:

    lfirst(i), lfirst_int(i), lfirst_oid(i)
        return the data (a pointer, integer and OID respectively) at
        list element i.

    lnext(i)
        return the next list element after i.

    foreach(i, list)
        loop through list, assigning each list element to i. It is
        important to note that i is a List *, not the data in the List
        element. You need to use lfirst(i) to get at the data. Here is
        a typical code snippet that loops through a List containing Var
        *'s and processes each one:

            foreach(i, list)
            {
                Var *var = lfirst(i);

                /* process var here */
            }

    lcons(node, list)
        add node to the front of list, or create a new list with node.

    lappend(list, node)
        add node to the end of list. This is more expensive than lcons.

    nconc(list1, list2)
        concat list2 on to the end of list1.

    length(list)
        return the length of the list.

    nth(i, list)
        return the i'th element in list.

    There are integer versions of these: lconsi, lappendi, etc.
    Also versions for OID lists: lconso, lappendo, etc.
You can print nodes easily inside gdb. First, to disable output
truncation when you use the gdb print command:

    (gdb) set print elements 0

Instead of printing values in gdb format, you can use the next two
commands to print out List, Node, and structure contents in a verbose
format that is easier to understand. Lists are unrolled into nodes,
and nodes are printed in detail. The first prints in a short format,
and the second in a long format:

    (gdb) call print(any_pointer)
    (gdb) call pprint(any_pointer)

The output appears in the postmaster log file, or on your screen if
you are running a backend directly without a postmaster.
2.4) I just added a field to a structure. What else should I do?

The structures passed around from the parser, rewrite, optimizer, and
executor require quite a bit of support. Most structures have support
routines in src/backend/nodes used to create, copy, read, and output
those structures (in particular, the files copyfuncs.c and
equalfuncs.c). Make sure you add support for your new field to these
files. Find any other places the structure may need code for your new
field. mkid is helpful with this (see 1.9).
2.5) Why do we use palloc() and pfree() to allocate memory?

palloc() and pfree() are used in place of malloc() and free() because
we find it easier to automatically free all memory allocated when a
query completes. This assures us that all memory that was allocated
gets freed even if we have lost track of where we allocated it. There
are special non-query contexts that memory can be allocated in. These
affect when the allocated memory is freed by the backend.
2.6) What is ereport()?

ereport() is used to send messages to the front-end, and optionally
terminate the current query being processed. The first parameter is an
ereport level of DEBUG (levels 1-5), LOG, INFO, NOTICE, ERROR, FATAL,
or PANIC. NOTICE prints on the user's terminal and the postmaster
logs. INFO prints only to the user's terminal and LOG prints only to
the server logs. (These can be changed from postgresql.conf.) ERROR
prints in both places, and terminates the current query, never
returning from the call. FATAL terminates the backend process. The
remaining parameters of ereport are a printf-style set of parameters.

ereport(ERROR) frees most memory and open file descriptors so you
don't need to clean these up before the call.
2.7) What is CommandCounterIncrement()?

Normally, transactions cannot see the rows they modify. This allows
UPDATE foo SET x = x + 1 to work correctly.

However, there are cases where a transaction needs to see rows
affected in previous parts of the transaction. This is accomplished
using a Command Counter. Incrementing the counter allows transactions
to be broken into pieces so each piece can see rows modified by
previous pieces. CommandCounterIncrement() increments the Command
Counter, creating a new part of the transaction.
2.8) What debugging features are available?

First, try running configure with the --enable-cassert option; many
assert()s monitor the progress of the backend and halt the program
when something unexpected occurs.

The postmaster has a -d option that allows even more detailed
information to be reported. The -d option takes a number that
specifies the debug level. Be warned that high debug level values
generate large log files.

If the postmaster is not running, you can actually run the postgres
backend from the command line, and type your SQL statement directly.
This is recommended only for debugging purposes. If you have compiled
with debugging symbols, you can use a debugger to see what is
happening. Because the backend was not started from postmaster, it is
not running in an identical environment and locking/backend
interaction problems may not be duplicated.

If the postmaster is running, start psql in one window, then find the
PID of the postgres process used by psql using SELECT
pg_backend_pid(). Use a debugger to attach to the postgres PID. You
can set breakpoints in the debugger and issue queries from psql. If
you are debugging postgres startup, you can set PGOPTIONS="-W n", then
start psql. This will cause startup to delay for n seconds so you can
attach to the process with the debugger, set any breakpoints, and
continue through the startup sequence.

You can also compile with profiling to see what functions are taking
execution time. The backend profile files will be deposited in the
pgsql/data/base/dbname directory. The client profile file will be put
in the client's current directory. Linux requires a compile with
-DLINUX_PROFILE for proper profiling.