\section*{Catalog Services}
\label{_ChapterStart30}
\index[general]{Services!Catalog }
\index[general]{Catalog Services }
\addcontentsline{toc}{section}{Catalog Services}

\subsection*{General}
\index[general]{General }
\addcontentsline{toc}{subsection}{General}

This chapter is a technical discussion of the Catalog services and as such is
not targeted at end users but rather at developers and system administrators
who want or need to know more about the working details of {\bf Bacula}.

The {\bf Bacula Catalog} services consist of the programs that provide the SQL
database engine for storage and retrieval of all information concerning files
that were backed up and their locations on the storage media.
We have investigated the possibility of using the following SQL engines for
Bacula: Beagle, mSQL, GNU SQL, PostgreSQL, SQLite, Oracle, and MySQL. Each
presents certain problems with either licensing or maturity. At present, we
have chosen for development purposes to use MySQL, PostgreSQL, and SQLite.
MySQL was chosen because it is fast, proven to be reliable, widely used, and
actively developed. MySQL is released under the GNU GPL license. PostgreSQL
was chosen because it is a full-featured, very mature database, and because
Dan Langille wrote the Bacula driver for it. PostgreSQL is distributed under
the BSD license. SQLite was chosen because it is small and efficient and can
be directly embedded in {\bf Bacula}, thus requiring much less effort from
the system administrator or person building {\bf Bacula}. In our testing,
SQLite has performed very well, and for the functions that we use, we have
never encountered any errors, except that it does not appear to handle
databases larger than 2 GBytes. That said, we would not recommend it for
serious production use.
The Bacula SQL code has been written in a manner that will allow it to be
easily modified to support any of the current SQL database systems on the
market (for example: mSQL, iODBC, unixODBC, Solid, OpenLink ODBC, EasySoft
ODBC, InterBase, Oracle8, Oracle7, and DB2).
If you do not specify {\bf \verb{--{with-mysql}, {\bf \verb{--{with-postgresql},
or {\bf \verb{--{with-sqlite} on the ./configure line, Bacula will use its
minimalist internal database. This database is kept for build reasons but is
no longer supported. Bacula {\bf requires} one of the three databases (MySQL,
PostgreSQL, or SQLite) to run.
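As a concrete illustration, the backend is selected at build time with the
flags just named. The invocation below is a minimal sketch only; real builds
typically pass additional options (installation prefix, working directories,
and so on).

```shell
# Minimal sketch: select exactly one catalog backend at configure time.
# Real builds usually add further options (prefix, directories, etc.).
./configure --with-mysql
# or: ./configure --with-postgresql
# or: ./configure --with-sqlite
```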
\subsubsection*{Filenames and Maximum Filename Length}
\index[general]{Filenames and Maximum Filename Length }
\index[general]{Length!Filenames and Maximum Filename }
\addcontentsline{toc}{subsubsection}{Filenames and Maximum Filename Length}

In general, MySQL, PostgreSQL, and SQLite all permit storing arbitrarily long
path names and file names in the catalog database. In practice, there still
may be one or two places in the Catalog interface code that restrict the
maximum path length to 512 characters and the maximum file name length to 512
characters. These restrictions are believed to have been removed. Please note
that these restrictions apply only to the Catalog database and thus to your
ability to list online the files saved during any job. All information
received and stored by the Storage daemon (normally on tape) allows and
handles arbitrarily long path and file names.
\subsubsection*{Installing and Configuring MySQL}
\index[general]{MySQL!Installing and Configuring }
\index[general]{Installing and Configuring MySQL }
\addcontentsline{toc}{subsubsection}{Installing and Configuring MySQL}

For the details of installing and configuring MySQL, please see the
\ilink{Installing and Configuring MySQL}{_ChapterStart} chapter of this
manual.
\subsubsection*{Installing and Configuring PostgreSQL}
\index[general]{PostgreSQL!Installing and Configuring }
\index[general]{Installing and Configuring PostgreSQL }
\addcontentsline{toc}{subsubsection}{Installing and Configuring PostgreSQL}

For the details of installing and configuring PostgreSQL, please see the
\ilink{Installing and Configuring PostgreSQL}{_ChapterStart10} chapter of
this manual.
\subsubsection*{Installing and Configuring SQLite}
\index[general]{Installing and Configuring SQLite }
\index[general]{SQLite!Installing and Configuring }
\addcontentsline{toc}{subsubsection}{Installing and Configuring SQLite}

For the details of installing and configuring SQLite, please see the
\ilink{Installing and Configuring SQLite}{_ChapterStart33} chapter of this
manual.
\subsubsection*{Internal Bacula Catalog}
\index[general]{Catalog!Internal Bacula }
\index[general]{Internal Bacula Catalog }
\addcontentsline{toc}{subsubsection}{Internal Bacula Catalog}

Please see the \ilink{Internal Bacula Database}{_ChapterStart42} chapter of
this manual for more details.
\subsubsection*{Database Table Design}
\index[general]{Design!Database Table }
\index[general]{Database Table Design }
\addcontentsline{toc}{subsubsection}{Database Table Design}

All discussions that follow pertain to the MySQL database. The details for
the PostgreSQL and SQLite databases are essentially identical, except that
all fields in the SQLite database are stored as ASCII text and some of the
database creation statements are a bit different. The details of the internal
Bacula catalog are not discussed here.
Because the Catalog database may contain very large amounts of data for large
sites, we have made a modest attempt to normalize the data tables to reduce
redundant information. While this significantly reduces the size of the
database, it unfortunately adds some complexity to the structures.

In simple terms, the Catalog database must contain a record of all Jobs run
by Bacula, and for each Job, it must maintain a list of all files saved, with
their File Attributes (permissions, create date, ...), and the location and
Media on which the file is stored. This is seemingly a simple task, but it
represents a huge amount of interlinked data. Note: the list of files and
their attributes is not maintained when using the internal Bacula database.
The data stored in the File records, which allows the user or administrator
to obtain a list of all files backed up during a job, is by far the largest
volume of information put into the Catalog database.
Although the Catalog database has been designed to handle backup data for
multiple clients, some users may want to maintain multiple databases, one for
each machine to be backed up. This reduces the risk of accidentally restoring
a file to the wrong machine, as well as reducing the amount of data in a
single database, thus increasing efficiency and reducing the impact of a lost
or damaged database.
\subsection*{Sequence of Creation of Records for a Save Job}
\index[general]{Sequence of Creation of Records for a Save Job }
\index[general]{Job!Sequence of Creation of Records for a Save }
\addcontentsline{toc}{subsection}{Sequence of Creation of Records for a Save
Job}

Start with StartDate, ClientName, Filename, Path, Attributes, MediaName,
MediaCoordinates (PartNumber, NumParts). In the steps below, ``Create new''
means to create a new record whether or not it is unique. ``Create unique''
means each record in the database should be unique. Thus, one must first
search to see if the record exists, and only if not should a new one be
created; otherwise, the existing RecordId should be used.
\begin{enumerate}
\item Create new Job record with StartDate; save JobId
\item Create unique Media record; save MediaId
\item Create unique Client record; save ClientId
\item Create unique Filename record; save FilenameId
\item Create unique Path record; save PathId
\item Create unique Attribute record; save AttributeId
   store ClientId, FilenameId, PathId, and Attributes
\item Create new File record
   store JobId, AttributeId, MediaCoordinates, etc.
\item Repeat steps 4 through 8 for each file
\item Create a JobMedia record; save MediaId
\item Update Job record filling in EndDate and other Job statistics
\end{enumerate}
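The distinction between ``create unique'' and ``create new'' above can be
sketched in code. The following is a minimal illustration using Python's
sqlite3 module with a drastically simplified schema; the table layouts and
function names are ours, not Bacula's actual code.

```python
import sqlite3

# Minimal sketch of Bacula's "create unique" vs. "create new" logic,
# using a pared-down schema; not the real Bacula catalog or code.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Filename (FilenameId INTEGER PRIMARY KEY, Name TEXT)")
con.execute("CREATE TABLE File (FileId INTEGER PRIMARY KEY, JobId INTEGER,"
            " FilenameId INTEGER)")

def create_unique_filename(con, name):
    """'Create unique': return the existing FilenameId if found, else insert."""
    row = con.execute("SELECT FilenameId FROM Filename WHERE Name = ?",
                      (name,)).fetchone()
    if row:
        return row[0]
    return con.execute("INSERT INTO Filename (Name) VALUES (?)",
                       (name,)).lastrowid

def create_new_file(con, job_id, filename_id):
    """'Create new': always insert a fresh File record."""
    return con.execute("INSERT INTO File (JobId, FilenameId) VALUES (?, ?)",
                       (job_id, filename_id)).lastrowid

# Saving "passwd" in two jobs yields one Filename record but two File records.
fid1 = create_unique_filename(con, "passwd")
fid2 = create_unique_filename(con, "passwd")
create_new_file(con, 1, fid1)
create_new_file(con, 2, fid2)
```

The select-before-insert shown here is the essential idea; a production
implementation would wrap it in a transaction or use a unique index to avoid
races.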
\subsection*{Database Tables}
\index[general]{Database Tables }
\index[general]{Tables!Database }
\addcontentsline{toc}{subsection}{Database Tables}

\addcontentsline{lot}{table}{Filename Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Filename } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{l| }{\bf Data Type }
& \multicolumn{1}{l| }{\bf Remark } \\
\hline
{FilenameId } & {integer } & {Primary Key } \\
{Name } & {Blob } & {Filename } \\
\hline
\end{longtable}

The {\bf Filename} table shown above contains the name of each file backed up
with the path removed. If different directories or machines contain the same
filename, only one copy will be saved in this table.
\addcontentsline{lot}{table}{Path Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Path } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{PathId } & {integer } & {Primary Key } \\
{Path } & {Blob } & {Full Path } \\
\hline
\end{longtable}

The {\bf Path} table shown above contains the path or directory names of all
directories on the system or systems. The filename and any MSDOS disk name
are stripped off. As with the filename, only one copy of each directory name
is kept regardless of how many machines or drives have the same directory.
These path names should be stored in Unix path name format.
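The split described above can be illustrated with a small helper. This is a
hypothetical sketch, not Bacula's actual code; it assumes the stored path
keeps its trailing slash and the filename carries no directory component.

```python
# Illustrative sketch only (not Bacula code): split a full Unix-format file
# name into the Path and Filename values stored in the catalog. The path
# keeps its trailing slash; the filename has no directory component.
def split_path_and_filename(fullname: str):
    i = fullname.rfind("/") + 1        # position just past the last slash
    return fullname[:i], fullname[i:]
```

For example, "/etc/bacula/bacula-dir.conf" splits into "/etc/bacula/" and
"bacula-dir.conf", so a second file in the same directory reuses the single
Path record.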
Some simple testing on a Linux file system indicates that separating the
filename and the path may be more complicated than is warranted by the space
savings. For example, this system has a total of 89,097 files, 60,467 of
which have unique filenames, and there are 4,374 unique paths.

Finding all those files and doing two stats() per file takes an average wall
clock time of 1 min 35 seconds on a 400MHz machine running RedHat 6.1 Linux.

Finding all those files and putting them directly into a MySQL database with
the path and filename defined as TEXT, which is variable length up to 65,535
characters, takes 19 mins 31 seconds and creates a 27.6 MByte database.

Doing the same thing, but inserting them into Blob fields with the filename
indexed on the first 30 characters and the path name indexed on the 255 (max)
characters, takes 5 mins 18 seconds and creates a 5.24 MB database. Rerunning
the job (with the database already created) takes about 2 mins 50 seconds.

Running the same as the last one (Path and Filename Blob), but with the
Filename indexed on the first 30 characters and the Path on the first 50
characters (a linear search is done thereafter), takes 5 mins on average and
creates a 3.4 MB database. Rerunning with the data already in the DB takes
3 mins 35 seconds.

Finally, saving only the full path name rather than splitting the path and
the file, and indexing it on the first 50 characters, takes 6 mins 43 seconds
and creates a 7.35 MB database.
\addcontentsline{lot}{table}{File Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf File } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{FileId } & {integer } & {Primary Key } \\
{FileIndex } & {integer } & {The sequential file number in the Job } \\
{JobId } & {integer } & {Link to Job Record } \\
{PathId } & {integer } & {Link to Path Record } \\
{FilenameId } & {integer } & {Link to Filename Record } \\
{MarkId } & {integer } & {Used to mark files during Verify Jobs } \\
{LStat } & {tinyblob } & {File attributes in base64 encoding } \\
{MD5 } & {tinyblob } & {MD5 signature in base64 encoding } \\
\hline
\end{longtable}

The {\bf File} table shown above contains one entry for each file backed up
by Bacula. Thus a file that is backed up multiple times (as is normal) will
have multiple entries in the File table. This will probably be the table with
the largest number of records. Consequently, it is essential to keep the size
of this record to an absolute minimum. At the same time, this table must
contain all the information (or pointers to the information) about the file
and where it is backed up. Since a file may be backed up many times without
having changed, the path and filename are stored in separate tables.

This table contains by far the largest amount of information in the Catalog
database, both from the standpoint of number of records and the standpoint of
total database size. As a consequence, the user must take care to
periodically reduce the number of File records using the {\bf retention}
command in the Console program.
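Because the path and filename are normalized into separate tables, producing
a file listing for a job requires a three-way join. The following sketch uses
Python's sqlite3 with a pared-down schema; the sample data and query are
illustrative only, not the full Bacula catalog.

```python
import sqlite3

# Sketch of the three-way join needed to list a job's files. The schema is
# pared down for illustration and is not the full Bacula catalog.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Path     (PathId INTEGER PRIMARY KEY, Path TEXT);
CREATE TABLE Filename (FilenameId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE File     (FileId INTEGER PRIMARY KEY, JobId INTEGER,
                       PathId INTEGER, FilenameId INTEGER);
INSERT INTO Path VALUES (1, '/etc/');
INSERT INTO Filename VALUES (1, 'passwd'), (2, 'hosts');
INSERT INTO File VALUES (1, 42, 1, 1), (2, 42, 1, 2);
""")
rows = con.execute("""
    SELECT Path.Path || Filename.Name
      FROM File
      JOIN Path     ON Path.PathId = File.PathId
      JOIN Filename ON Filename.FilenameId = File.FilenameId
     WHERE File.JobId = ?
     ORDER BY File.FileId
""", (42,)).fetchall()
# rows -> [('/etc/passwd',), ('/etc/hosts',)]
```

The join cost is why the File record is kept so small: it carries only the
two integer links (PathId, FilenameId) rather than the names themselves.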
\addcontentsline{lot}{table}{Job Table Layout}
\begin{longtable}{|l|l|p{2.5in}|}
\hline
\multicolumn{3}{|l| }{\bf Job } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{JobId } & {integer } & {Primary Key } \\
{Job } & {tinyblob } & {Unique Job Name } \\
{Name } & {tinyblob } & {Job Name } \\
{PurgedFiles } & {tinyint } & {Used by Bacula for purging/retention periods } \\
{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration } \\
{Level } & {binary(1) } & {Job Level } \\
{ClientId } & {integer } & {Client index } \\
{JobStatus } & {binary(1) } & {Job Termination Status } \\
{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
{StartTime } & {datetime } & {Time/date when Job started } \\
{EndTime } & {datetime } & {Time/date when Job ended } \\
{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
Retention period } \\
{VolSessionId } & {integer } & {Unique Volume Session ID } \\
{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
{JobFiles } & {integer } & {Number of files saved in Job } \\
{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
{JobErrors } & {integer } & {Number of errors during Job } \\
{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) } \\
{PoolId } & {integer } & {Link to Pool Record } \\
{FileSetId } & {integer } & {Link to FileSet Record } \\
{PurgedFiles } & {tiny integer } & {Set when all File records purged } \\
{HasBase } & {tiny integer } & {Set when Base Job run } \\
\hline
\end{longtable}

The {\bf Job} table contains one record for each Job run by Bacula. Thus
normally, there will be one per day per machine added to the database. Note,
the JobId is used to index Job records in the database, and it often is shown
to the user in the Console program. However, care must be taken with its use
as it is not unique from database to database. For example, the user may have
a database for Client data saved on machine Rufus and another database for
Client data saved on machine Roxie. In this case, the two databases will each
have JobIds that match those in the other database. For a unique reference to
a Job, the Job field described below must be used.
The Name field of the Job record corresponds to the Name resource record
given in the Director's configuration file. Thus it is a generic name, and it
will be normal to find many Jobs (or even all Jobs) with the same Name.

The Job field contains a combination of the Name and the schedule time of the
Job set by the Director. Thus for a given Director, even with multiple
Catalog databases, the Job field will contain a unique name that represents
the Job.

For a given Storage daemon, the VolSessionId and VolSessionTime form a unique
identification of the Job. This will be the case even if multiple Directors
are using the same Storage daemon.

The Job Type (or simply Type) can have one of the following values:
\addcontentsline{lot}{table}{Job Types}
\begin{longtable}{|l|l|}
\hline
\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
\hline
{B } & {Backup Job } \\
{V } & {Verify Job } \\
{R } & {Restore Job } \\
{C } & {Console program (not in database) } \\
{D } & {Admin Job } \\
{A } & {Archive Job (not implemented) } \\
\hline
\end{longtable}
The JobStatus field specifies how the job terminated, and can be one of the
following values:

\addcontentsline{lot}{table}{Job Statuses}
\begin{longtable}{|l|l|}
\hline
\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
\hline
{C } & {Created but not yet running } \\
{T } & {Terminated normally } \\
{E } & {Terminated in Error } \\
{e } & {Non-fatal error } \\
{f } & {Fatal error } \\
{D } & {Verify Differences } \\
{A } & {Canceled by the user } \\
{F } & {Waiting on the File daemon } \\
{S } & {Waiting on the Storage daemon } \\
{m } & {Waiting for a new Volume to be mounted } \\
{M } & {Waiting for a Mount } \\
{s } & {Waiting for Storage resource } \\
{j } & {Waiting for Job resource } \\
{c } & {Waiting for Client resource } \\
{d } & {Waiting for Maximum jobs } \\
{t } & {Waiting for Start Time } \\
{p } & {Waiting for higher priority job to finish } \\
\hline
\end{longtable}
\addcontentsline{lot}{table}{File Sets Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf FileSet } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{FileSetId } & {integer } & {Primary Key } \\
{FileSet } & {tinyblob } & {FileSet name } \\
{MD5 } & {tinyblob } & {MD5 checksum of FileSet } \\
{CreateTime } & {datetime } & {Time and date Fileset created } \\
\hline
\end{longtable}

The {\bf FileSet} table contains one entry for each FileSet that is used. The
MD5 signature is kept to ensure that if the user changes anything inside the
FileSet, it will be detected and the new FileSet will be used. This is
particularly important when doing an incremental update. If the user deletes
a file or adds a file, we need to ensure that a Full backup is done prior to
the next Incremental backup.
\addcontentsline{lot}{table}{JobMedia Table Layout}
\begin{longtable}{|l|l|p{2.5in}|}
\hline
\multicolumn{3}{|l| }{\bf JobMedia } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{JobMediaId } & {integer } & {Primary Key } \\
{JobId } & {integer } & {Link to Job Record } \\
{MediaId } & {integer } & {Link to Media Record } \\
{FirstIndex } & {integer } & {The index (sequence number) of the first file
written for this Job to the Media } \\
{LastIndex } & {integer } & {The index of the last file written for this Job
to the Media } \\
{StartFile } & {integer } & {The physical media (tape) file number of the
first block written for this Job } \\
{EndFile } & {integer } & {The physical media (tape) file number of the last
block written for this Job } \\
{StartBlock } & {integer } & {The number of the first block written for this
Job } \\
{EndBlock } & {integer } & {The number of the last block written for this
Job } \\
{VolIndex } & {integer } & {The Volume use sequence number within the Job } \\
\hline
\end{longtable}

The {\bf JobMedia} table contains one entry for each of the following: start
of the job, start of each new tape file, start of each new tape, and end of
the job. Since by default a new tape file is written every 2GB, in general,
you will have more than 2 JobMedia records per Job. The number can be varied
by changing the ``Maximum File Size'' specified in the Device resource. This
record allows Bacula to efficiently position close to (within 2GB) any given
file in a backup. For restoring a full Job, these records are not very
important, but if you want to retrieve a single file that was written near
the end of a 100GB backup, the JobMedia records can speed it up by orders of
magnitude by permitting forward spacing of files and blocks rather than
reading the whole 100GB backup.
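The lookup that makes this positioning fast can be sketched as follows. The
column names follow the JobMedia layout above, but the schema is reduced and
the query and helper function are illustrative only, not Bacula's actual
code.

```python
import sqlite3

# Sketch: find the JobMedia record whose FirstIndex..LastIndex range covers
# a wanted FileIndex, yielding the tape file/block to space forward to.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE JobMedia (
    JobMediaId INTEGER PRIMARY KEY, JobId INTEGER,
    FirstIndex INTEGER, LastIndex INTEGER,
    StartFile INTEGER, StartBlock INTEGER)""")
# Three media sections for JobId 7, e.g. one per ~2GB tape file.
con.executemany("INSERT INTO JobMedia VALUES (?,?,?,?,?,?)",
                [(1, 7, 1,    1000, 0, 0),
                 (2, 7, 1001, 2000, 1, 0),
                 (3, 7, 2001, 2500, 2, 0)])

def locate(con, job_id, file_index):
    """Return (StartFile, StartBlock) of the section holding file_index."""
    return con.execute("""
        SELECT StartFile, StartBlock FROM JobMedia
         WHERE JobId = ? AND ? BETWEEN FirstIndex AND LastIndex""",
        (job_id, file_index)).fetchone()

# File number 1500 lies in the second section, so the drive can space
# directly to tape file 1 instead of reading from the beginning.
```

A restore of a single late file thus touches only the section that contains
it, which is the orders-of-magnitude speedup described above.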
\addcontentsline{lot}{table}{Media Table Layout}
\begin{longtable}{|l|l|p{2.4in}|}
\hline
\multicolumn{3}{|l| }{\bf Media } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{MediaId } & {integer } & {Primary Key } \\
{VolumeName } & {tinyblob } & {Volume name } \\
{Slot } & {integer } & {Autochanger Slot number or zero } \\
{PoolId } & {integer } & {Link to Pool Record } \\
{MediaType } & {tinyblob } & {The MediaType supplied by the user } \\
{FirstWritten } & {datetime } & {Time/date when first written } \\
{LastWritten } & {datetime } & {Time/date when last written } \\
{LabelDate } & {datetime } & {Time/date when tape labeled } \\
{VolJobs } & {integer } & {Number of jobs written to this media } \\
{VolFiles } & {integer } & {Number of files written to this media } \\
{VolBlocks } & {integer } & {Number of blocks written to this media } \\
{VolMounts } & {integer } & {Number of times media mounted } \\
{VolBytes } & {bigint } & {Number of bytes written to this media } \\
{VolErrors } & {integer } & {Number of errors writing this media } \\
{VolWrites } & {integer } & {Number of writes to media } \\
{MaxVolBytes } & {bigint } & {Maximum bytes to put on this media } \\
{VolCapacityBytes } & {bigint } & {Capacity estimate for this volume } \\
{VolStatus } & {enum } & {Status of media: Full, Archive, Append, Recycle,
Read-Only, Disabled, Error, Busy } \\
{Recycle } & {tinyint } & {Whether or not Bacula can recycle the Volume } \\
{VolRetention } & {bigint } & {64 bit seconds until expiration } \\
{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
{MaxVolJobs } & {integer } & {Maximum jobs to put on Volume } \\
{MaxVolFiles } & {integer } & {Maximum EOF marks to put on Volume } \\
\hline
\end{longtable}

The {\bf Volume} table (internally referred to as the Media table) contains
one entry for each volume, that is, each tape, cassette (8mm, DLT, DAT, ...),
or file on which information is or was backed up. There is one Volume record
created for each of the NumVols specified in the Pool resource record.
\addcontentsline{lot}{table}{Pool Table Layout}
\begin{longtable}{|l|l|p{2.4in}|}
\hline
\multicolumn{3}{|l| }{\bf Pool } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{PoolId } & {integer } & {Primary Key } \\
{Name } & {Tinyblob } & {Pool Name } \\
{NumVols } & {Integer } & {Number of Volumes in the Pool } \\
{MaxVols } & {Integer } & {Maximum Volumes in the Pool } \\
{UseOnce } & {tinyint } & {Use volume once } \\
{UseCatalog } & {tinyint } & {Set to use catalog } \\
{AcceptAnyVolume } & {tinyint } & {Accept any volume from Pool } \\
{VolRetention } & {bigint } & {64 bit seconds to retain volume } \\
{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
{MaxVolJobs } & {integer } & {Max jobs on volume } \\
{MaxVolFiles } & {integer } & {Max EOF marks to put on Volume } \\
{MaxVolBytes } & {bigint } & {Max bytes to write on Volume } \\
{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
{Recycle } & {tinyint } & {yes|no for allowing auto recycling of Volume } \\
{PoolType } & {enum } & {Backup, Copy, Cloned, Archive, Migration } \\
{LabelFormat } & {Tinyblob } & {Label format } \\
\hline
\end{longtable}

The {\bf Pool} table contains one entry for each media pool controlled by
Bacula in this database. One Media record exists for each of the NumVols
contained in the Pool. The PoolType is a Bacula defined keyword. The
MediaType is defined by the administrator, and corresponds to the MediaType
specified in the Director's Storage definition record. The CurrentVol is the
sequence number of the Media record for the current volume.
\addcontentsline{lot}{table}{Client Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Client } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{ClientId } & {integer } & {Primary Key } \\
{Name } & {TinyBlob } & {File Services Name } \\
{UName } & {TinyBlob } & {uname -a from Client (not yet used) } \\
{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
{FileRetention } & {bigint } & {64 bit seconds to retain Files } \\
{JobRetention } & {bigint } & {64 bit seconds to retain Job } \\
\hline
\end{longtable}

The {\bf Client} table contains one entry for each machine backed up by
Bacula in this database. Normally the Name is a fully qualified domain name.
\addcontentsline{lot}{table}{Unsaved Files Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf UnsavedFiles } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{UnsavedId } & {integer } & {Primary Key } \\
{JobId } & {integer } & {JobId corresponding to this record } \\
{PathId } & {integer } & {Id of path } \\
{FilenameId } & {integer } & {Id of filename } \\
\hline
\end{longtable}

The {\bf UnsavedFiles} table contains one entry for each file that was not
saved. Note: this table is not yet implemented.
\addcontentsline{lot}{table}{Counter Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Counter } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{Counter } & {tinyblob } & {Counter name } \\
{MinValue } & {integer } & {Start/Min value for counter } \\
{MaxValue } & {integer } & {Max value for counter } \\
{CurrentValue } & {integer } & {Current counter value } \\
{WrapCounter } & {tinyblob } & {Name of another counter } \\
\hline
\end{longtable}

The {\bf Counter} table contains one entry for each permanent counter defined
by the user.
\addcontentsline{lot}{table}{Version Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Version } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{VersionId } & {integer } & {Primary Key } \\
\hline
\end{longtable}

The {\bf Version} table defines the Bacula database version number. Bacula
checks this number before reading the database to ensure that it is
compatible with the Bacula binary file.
\addcontentsline{lot}{table}{Base Files Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf BaseFiles } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type }
& \multicolumn{1}{c| }{\bf Remark } \\
\hline
{BaseId } & {integer } & {Primary Key } \\
{BaseJobId } & {integer } & {JobId of Base Job } \\
{JobId } & {integer } & {Reference to Job } \\
{FileId } & {integer } & {Reference to File } \\
{FileIndex } & {integer } & {File Index number } \\
\hline
\end{longtable}

The {\bf BaseFiles} table contains all the File references for a particular
JobId that point to a Base file -- i.e., they were previously saved and hence
were not saved in the current JobId but in BaseJobId under FileId. FileIndex
is the index of the file, and is used for optimization of Restore jobs to
prevent the need to read the FileId record when creating the in-memory tree.
This table is not yet implemented.
\subsubsection*{MySQL Table Definition}
\index[general]{MySQL Table Definition }
\index[general]{Definition!MySQL Table }
\addcontentsline{toc}{subsubsection}{MySQL Table Definition}

The commands used to create the MySQL tables are as follows:
\footnotesize
\begin{verbatim}
CREATE TABLE Filename (
  FilenameId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  Name BLOB NOT NULL,
  PRIMARY KEY(FilenameId)
  );
CREATE TABLE Path (
  PathId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  Path BLOB NOT NULL,
  PRIMARY KEY(PathId)
  );
CREATE TABLE File (
  FileId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  FileIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
  JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
  PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
  FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
  MarkId INTEGER UNSIGNED NOT NULL DEFAULT 0,
  LStat TINYBLOB NOT NULL,
  MD5 TINYBLOB NOT NULL,
  PRIMARY KEY(FileId)
  );
CREATE TABLE Job (
  JobId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  Job TINYBLOB NOT NULL,
  Name TINYBLOB NOT NULL,
  Type BINARY(1) NOT NULL,
  Level BINARY(1) NOT NULL,
  ClientId INTEGER NOT NULL REFERENCES Client,
  JobStatus BINARY(1) NOT NULL,
  SchedTime DATETIME NOT NULL,
  StartTime DATETIME NOT NULL,
  EndTime DATETIME NOT NULL,
  JobTDate BIGINT UNSIGNED NOT NULL,
  VolSessionId INTEGER UNSIGNED NOT NULL DEFAULT 0,
  VolSessionTime INTEGER UNSIGNED NOT NULL DEFAULT 0,
  JobFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
  JobBytes BIGINT UNSIGNED NOT NULL,
  JobErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
  JobMissingFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
  PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
  FileSetId INTEGER UNSIGNED NOT NULL REFERENCES FileSet,
  PurgedFiles TINYINT NOT NULL DEFAULT 0,
  HasBase TINYINT NOT NULL DEFAULT 0,
  PRIMARY KEY(JobId)
  );
CREATE TABLE FileSet (
  FileSetId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  FileSet TINYBLOB NOT NULL,
  MD5 TINYBLOB NOT NULL,
  CreateTime DATETIME NOT NULL,
  PRIMARY KEY(FileSetId)
  );
CREATE TABLE JobMedia (
  JobMediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
  MediaId INTEGER UNSIGNED NOT NULL REFERENCES Media,
  FirstIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
  LastIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
  StartFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
  EndFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
  StartBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
  EndBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
  VolIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
  PRIMARY KEY(JobMediaId),
  INDEX (JobId, MediaId)
  );
CREATE TABLE Media (
  MediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  VolumeName TINYBLOB NOT NULL,
  Slot INTEGER NOT NULL DEFAULT 0,
  PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
  MediaType TINYBLOB NOT NULL,
  FirstWritten DATETIME NOT NULL,
  LastWritten DATETIME NOT NULL,
  LabelDate DATETIME NOT NULL,
  VolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
  VolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
  VolBlocks INTEGER UNSIGNED NOT NULL DEFAULT 0,
  VolMounts INTEGER UNSIGNED NOT NULL DEFAULT 0,
  VolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
  VolErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
  VolWrites INTEGER UNSIGNED NOT NULL DEFAULT 0,
  VolCapacityBytes BIGINT UNSIGNED NOT NULL,
  VolStatus ENUM('Full', 'Archive', 'Append', 'Recycle', 'Purged',
    'Read-Only', 'Disabled', 'Error', 'Busy', 'Used', 'Cleaning') NOT NULL,
  Recycle TINYINT NOT NULL DEFAULT 0,
  VolRetention BIGINT UNSIGNED NOT NULL DEFAULT 0,
  VolUseDuration BIGINT UNSIGNED NOT NULL DEFAULT 0,
  MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
  MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
  MaxVolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
  InChanger TINYINT NOT NULL DEFAULT 0,
  MediaAddressing TINYINT NOT NULL DEFAULT 0,
  VolReadTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
  VolWriteTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
  PRIMARY KEY(MediaId)
  );
CREATE TABLE Pool (
  PoolId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  Name TINYBLOB NOT NULL,
  NumVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
  MaxVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
  UseOnce TINYINT NOT NULL,
  UseCatalog TINYINT NOT NULL,
  AcceptAnyVolume TINYINT DEFAULT 0,
  VolRetention BIGINT UNSIGNED NOT NULL,
  VolUseDuration BIGINT UNSIGNED NOT NULL,
  MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
  MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
  MaxVolBytes BIGINT UNSIGNED NOT NULL,
  AutoPrune TINYINT DEFAULT 0,
  Recycle TINYINT DEFAULT 0,
  PoolType ENUM('Backup', 'Copy', 'Cloned', 'Archive', 'Migration',
    'Scratch') NOT NULL,
  LabelFormat TINYBLOB,
  Enabled TINYINT DEFAULT 1,
  ScratchPoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
  RecyclePoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
  PRIMARY KEY (PoolId)
  );
CREATE TABLE Client (
  ClientId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  Name TINYBLOB NOT NULL,
  Uname TINYBLOB NOT NULL,   /* full uname -a of client */
  AutoPrune TINYINT DEFAULT 0,
  FileRetention BIGINT UNSIGNED NOT NULL,
  JobRetention BIGINT UNSIGNED NOT NULL,
  PRIMARY KEY(ClientId)
  );
CREATE TABLE BaseFiles (
  BaseId INTEGER UNSIGNED AUTO_INCREMENT,
  BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
  JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
  FileId INTEGER UNSIGNED NOT NULL REFERENCES File,
  FileIndex INTEGER UNSIGNED,
  PRIMARY KEY(BaseId)
  );
CREATE TABLE UnsavedFiles (
  UnsavedId INTEGER UNSIGNED AUTO_INCREMENT,
  JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
  PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
  FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
  PRIMARY KEY (UnsavedId)
  );
CREATE TABLE Version (
  VersionId INTEGER UNSIGNED NOT NULL
  );
-- Initialize Version
INSERT INTO Version (VersionId) VALUES (7);
CREATE TABLE Counters (
  Counter TINYBLOB NOT NULL,
  MinValue INTEGER,
  MaxValue INTEGER,
  CurrentValue INTEGER,
  WrapCounter TINYBLOB NOT NULL,
  PRIMARY KEY (Counter(128))
  );
\end{verbatim}
\normalsize