$PostgreSQL: pgsql/contrib/pg_dumplo/README.pg_dumplo,v 1.3 2003-11-29 19:51:35 pgsql Exp $

pg_dumplo - PostgreSQL large object dumper
==========================================

By Karel Zak <zakkr@zf.jcu.cz>
* run master ./configure in the PG source top directory
Thanks:

  <??? I lost his e-mail ???>
    * option '--all' and pg_class usage

  Pavel Janík ml. <Pavel.Janik@linux.cz>
    * HOWTO (the rest of this file)

(c) 2000, Pavel Janík ml. <Pavel.Janik@linux.cz>
Q: How do you use pg_dumplo?
============================

A: This is a small demo of backing up a database table with Large Objects:
We will create a demo database and a small and useless table `lo' inside
it:

SnowWhite:$ createdb test

OK, our database named 'test' is created. Now we create a demo table
containing only one column, 'id', which will hold the OID number of a
Large Object:
SnowWhite:$ psql test
Welcome to psql, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
test=# CREATE TABLE lo (id oid);
test=# \lo_import /etc/aliases
test=# INSERT INTO lo VALUES (19338);
test=# SELECT * FROM lo;
In the above example you can see that we have also imported one "Large
Object" - the file /etc/aliases. It was assigned OID 19338, so we inserted
that OID into the id column of table lo. The final SELECT shows that we
have one record in the table.
Now we can demonstrate the work of pg_dumplo. We will create a dump
directory which will contain the whole dump of large objects (/tmp/dump):

SnowWhite:$ mkdir -p /tmp/dump
Now we can dump all large objects from the database `test' which have OIDs
stored in the column `id' in the table `lo':
SnowWhite:$ pg_dumplo -s /tmp/dump -d test -l lo.id
pg_dumplo: dump lo.id (1 large obj)
Voila, we have the dump of all Large Objects in our directory:

SnowWhite:$ tree /tmp/dump/
/tmp/dump/
`-- test
    |-- lo
    |   `-- id
    |       `-- 19338
    `-- lo_dump.index

3 directories, 2 files
In practice, we'd probably use

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -e

to export all large objects that are referenced by any OID-type column
in the database. Calling out specific column(s) with -l is only needed
for a selective dump.
For routine backup purposes, the dump directory could now be converted into
an archive file with tar and stored on tape. Notice that a single dump
directory can hold the dump of multiple databases.
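The tar step can be sketched as follows. This is a hedged example, not
part of the original HOWTO: the paths match the demo above, and the
mkdir/echo/touch lines merely recreate the example dump layout so the
snippet is self-contained (with a real pg_dumplo dump they are
unnecessary).

```shell
# Stand-in for the dump produced above (only here to make the snippet
# self-contained; a real pg_dumplo run creates this layout itself):
mkdir -p /tmp/dump/test/lo/id
echo "demo large object" > /tmp/dump/test/lo/id/19338
touch /tmp/dump/test/lo_dump.index

# Pack the whole dump directory into one tape-ready archive:
tar -czf /tmp/lo-backup.tar.gz -C /tmp dump

# Later, unpack it again before handing the directory to pg_dumplo -i:
tar -xzf /tmp/lo-backup.tar.gz -C /tmp
```

Because the archive is rooted at the dump directory, dumps of several
databases stored side by side in /tmp/dump travel in the same file.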
Now, how can we recreate the contents of the table lo and the Large
Objects when something goes wrong? To do this, we expect that pg_dump is
also used to store the definition and contents of the regular tables in
the database:

SnowWhite:$ pg_dump test >test.backup
Now, if we lose the database:

SnowWhite:$ dropdb test

we can recreate it and reload the regular tables from the dump file:

SnowWhite:$ createdb test
SnowWhite:$ psql test <test.backup
But at this point our database has no large objects in it. What's more,
the large-object-referencing columns contain the OIDs of the old large
objects, which will not be the OIDs they'll have when reloaded. Never
fear: pg_dumplo will fix the large object references at the same time
it reloads the large objects. We reload the LO data from the dump
directory:

SnowWhite:$ pg_dumplo -s /tmp/dump -d test -i
19338 lo id test/lo/id/19338
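Each line that -i prints appears to have the shape <OID> <table>
<attribute> <file> - inferred from the example output above, not from a
format specification. A small sketch that splits one such line into its
fields:

```shell
# Split one reload-report line into its four fields; the sample line is
# copied verbatim from the example output above.
line="19338 lo id test/lo/id/19338"
set -- $line
echo "oid=$1 table=$2 column=$3 file=$4"
# prints: oid=19338 table=lo column=id file=test/lo/id/19338
```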
And that's all. The contents of table lo are automatically updated to
refer to the new large object OIDs.
Summary: In this small example we have shown that pg_dumplo can be used to
completely dump a database's Large Objects very easily.

For more information see the help ( pg_dumplo -h ).