                    Kern's ToDo List
                       16 July 2007


Document:
- !!! Cannot restore two jobs at the same time that were
  written simultaneously unless they were totally spooled.
- Document cleaning up the spool files:
  db, pid, state, bsr, mail, conmsg, spool
- Document the multiple-drive-changer.txt script.
- Pruning with Admin job.
- Does WildFile match against the full name?  Document it.
- %d and %v only valid on Director, not for ClientRunBefore/After.
- During tests with the 260 char fix code, I found one problem:
  if the system "sees" a long path once, it seems to forget its
  working drive (e.g.  c:\), which will lead to a problem during
  the next job (creating the bootstrap file will fail).  Here is the
  workaround: specify absolute working and pid directories in
  bacula-fd.conf (e.g.  c:\bacula\working instead of
  \bacula\working).
- Document techniques for restoring large numbers of files.
- Document setting my.cnf for big file usage.
- Add an example of proper index output to the doc: show index from File;
- Correct the Include syntax in the m4.xxx files in examples/conf
- Document JobStatus and Termination codes.
- Fix the error with the "DVI file can't be opened" while
  building the French PDF.
- Document more DVD stuff
- Doc
   { "JobErrors",  "i"},
   { "JobFiles",   "i"},
   { "SDJobFiles", "i"},
   { "SDErrors",   "i"},
   { "FDJobStatus","s"},
   { "SDJobStatus","s"},
- Document all the little details of setting up certificates for
  the Bacula data encryption code.
- Document more precisely how to use master keys -- especially
  for disaster recovery.
 
Professional Needs:
- Migration from other vendors
  - Date change
  - Path change
- Filesystem types
- Backup conf/exe (all daemons) 
- Backup up system state
- Detect state change of system (verify)
- Synthetic Full, Diff, Inc (Virtual, Reconstructed)
- SD to SD
- Modules for Databases, Exchange, ...
- Novell NSS backup http://www.novell.com/coolsolutions/tools/18952.html
- Compliance norms that compare hash codes of restored files.
- When glibc crashes, get the address with
    info symbol 0x809780c
- How to sync remote offices.
- Exchange backup:
  http://www.microsoft.com/technet/itshowcase/content/exchbkup.mspx
- David's priorities
   Copypools
   Extract capability (#25)
   Continued enhancement of bweb
   Threshold triggered migration jobs (not currently in list, but will be
    needed ASAP)
   Client triggered backups
   Complete rework of the scheduling system (not in list)
   Performance and usage instrumentation (not in list)
   See email of 21Aug2007 for details.

Priority:
- Abort if min_block_size > max_block_size
- KIWI
- Implement wait on multiple objects
   - Multiple max times
   - pthread signal
   - socket input ready
- Implement SDErrors (must return from SD)
- Implement USB keyboard support in rescue CD.
- Implement continue spooling while despooling.
- Remove all install temp files in Win32 PLUGINSDIR.
- Audit retention periods to make sure everything is 64 bit.
- Use E'xxx' to escape PostgreSQL strings.
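  For example (standard PostgreSQL syntax; the volume name is made up):
    -- inside E'...' a backslash escapes the next character, so an
    -- embedded single quote can be written as \' (or doubled: '')
    SELECT MediaId FROM Media WHERE VolumeName = E'O\'Brien-Full-01';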
- Specifying no "where" in restore causes kaboom.
- Performance: multiple spool files for a single job.
- Performance: despool attributes when despooling data (problem
  multiplexing Dir connection).
- Make restore use the in-use volume reservation algorithm.
- Look at mincore: http://insights.oetiker.ch/linux/fadvise.html
- Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark
- Add TLS to bat (should be done).
- When the Pool specifies Storage, the command-line override does not work.
- Implement wait_for_sysop() message display in wait_for_device(), which
  now prints warnings too often.
- Ensure that each device in an Autochanger has a different
  Device Index.
- Add Catalog = to Pool resource so that pools will exist
  in only one catalog -- currently Pools are "global".
- Look at sg_logs -a /dev/sg0 for getting soft errors.
- btape "test" command with Offline on Unmount = yes

   This test is essential to Bacula.

   I'm going to write one record  in file 0,
   two records in file 1,
   and three records in file 2

   02-Feb 11:00 btape: ABORTING due to ERROR in dev.c:715
   dev.c:714 Bad call to rewind. Device "LTO" (/dev/nst0) not open
   02-Feb 11:00 btape: Fatal Error because: Bacula interrupted by signal 11: Segmentation violation
   Kaboom! btape, btape got signal 11. Attempting traceback.

- Encryption -- email from Landon
   > The backup encryption algorithm is currently not configurable, and is  
   > set to AES_128_CBC in src/filed/backup.c. The encryption code  
   > supports a number of different ciphers (as well as adding arbitrary  
   > new ones) -- only a small bit of code would be required to map a  
   > configuration string value to a CRYPTO_CIPHER_* value, if anyone is  
   > interested in implementing this functionality.

- Why doesn't @"xxx abc" work in a conf file?
- Figure out some way to "automatically" backup conf changes.
- Add the OS version back to the Win32 client info.
- Restarted jobs have a NULL in the from field.
- Modify SD status command to indicate when the SD is writing
  to a DVD (the device is not open -- see bug #732).
- Look at the possibility of adding "SET NAMES UTF8" for MySQL,
  and possibly changing the blobs into varchar.
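  A rough sketch of what that could look like (hypothetical -- the table
  and column choices would have to be checked against the real schema):
    SET NAMES 'utf8';
    -- e.g. turn a blob column into a varchar so the utf8 collation applies
    ALTER TABLE Filename MODIFY Name VARCHAR(255) NOT NULL;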
- Ensure that the SD re-reads the Media record if the JobFiles
  does not match -- it may have been updated by another job.
- Look at moving the Storage directive from the Job to the
  Pool in the default conf files.
- Doc items
- Test Volume compatibility between machine architectures
- Encryption documentation
- Wrong jobbytes with query 12 (todo)
- Bare-metal recovery Windows (todo)
   

Projects:
- Pool enhancements
  - Access Mode = Read-Only, Read-Write, Unavailable, Destroyed, Offsite
  - Pool Type = Copy
  - Maximum number of scratch volumes
  - Maximum File size
  - Next Pool (already have)
  - Reclamation threshold
  - Reclamation Pool
  - Reuse delay (after all files purged from volume before it can be used)
  - Copy Pool = xx, yyy (or multiple lines).
  - Catalog = xxx
  - Allow pool selection during restore.

- Average tape size from Eric
    SELECT COALESCE(media_avg_size.volavg,0) * count(Media.MediaId) AS volmax,
           count(Media.MediaId)  AS volnum,
           sum(Media.VolBytes)   AS voltotal,
           Media.PoolId          AS PoolId,
           Media.MediaType       AS MediaType
    FROM Media
    LEFT JOIN (SELECT avg(Media.VolBytes) AS volavg,
                      Media.MediaType     AS MediaType
               FROM Media
              WHERE Media.VolStatus = 'Full'
              GROUP BY Media.MediaType
               ) AS media_avg_size ON (Media.MediaType = media_avg_size.MediaType)
    GROUP BY Media.MediaType, Media.PoolId, media_avg_size.volavg
- GUI
  - Admin
  - Management reports
  - Add doc for bweb -- especially Installation
  - Look at Webmin
     http://www.orangecrate.com/modules.php?name=News&file=article&sid=501
- Performance
  - Despool attributes in separate thread
  - Database speedups
  - Embedded MySQL
  - Check why restore repeatedly sends Rechdrs between
    each data chunk -- according to James Harper 9Jan07.
- Features
  - Better scheduling  
  - Full at least once a month, ...
  - Cancel Inc if Diff/Full running
  - More intelligent re-run
  - New/deleted file backup   
  - FD plugins
  - Incremental backup -- rsync, Stow


For next release:
- Try to fix bscan not working with multiple DVD volumes bug #912.
- Look at mondo/mindi
- Don't restore Solaris Door files:
   #define   S_IFDOOR   in st_mode.
  see: http://docs.sun.com/app/docs/doc/816-5173/6mbb8ae23?a=view#indexterm-360
- Make Bacula by default not backup tmpfs, procfs, sysfs, ...
- Fix hardlinked immutable files when linking a second file, the
  immutable flag must be removed prior to trying to link it.
- Implement Python event for backing up/restoring a file.
- Change dbcheck to tell users to use native tools for fixing
  broken databases, and to ensure they have the proper indexes.
- add udev rules for Bacula devices.
- If a job terminates, the DIR connection can close before the
  Volume info is updated, leaving the File count wrong.
- Look at why SIGPIPE during connection can cause seg fault in
  writing the daemon message, when Dir dropped to bacula:bacula
- Look at zlib 32 => 64 problems.
- Possibly turn on St. Bernard code.
- Fix bextract to restore ACLs, or better yet, use common routines.
- Do we migrate appendable Volumes?
- Remove queue.c code.
- Print warning message if LANG environment variable does not specify
  UTF-8.
- New dot commands from Arno.
  .show device=xxx lists information from one storage device, including 
     devices (I'm not even sure that information exists in the DIR...)
  .move eject device=xxx mostly the same as 'unmount xxx' but perhaps with 
     better machine-readable output like "Ok" or "Error busy"
  .move eject device=xxx toslot=yyy the same as above, but with a new 
     target slot. The catalog should be updated accordingly.
  .move transfer device=xxx fromslot=yyy toslot=zzz

Low priority:
- Article: http://www.heise.de/open/news/meldung/83231
- Article: http://www.golem.de/0701/49756.html
- Article: http://lwn.net/Articles/209809/
- Article: http://www.onlamp.com/pub/a/onlamp/2004/01/09/bacula.html
- Article: http://www.linuxdevcenter.com/pub/a/linux/2005/04/07/bacula.html
- Article: http://www.osreviews.net/reviews/admin/bacula
- Article: http://www.debianhelp.co.uk/baculaweb.htm
- Article: 
- Wikis mentioning Bacula
  http://wiki.finkproject.org/index.php/Admin:Backups
  http://wiki.linuxquestions.org/wiki/Bacula
  http://www.openpkg.org/product/packages/?package=bacula
  http://www.iterating.com/products/Bacula
  http://net-snmp.sourceforge.net/wiki/index.php/Net-snmp_extensions
  http://www.section6.net/wiki/index.php/Using_Bacula_for_Tape_Backups
  http://bacula.darwinports.com/
  http://wiki.mandriva.com/en/Releases/Corporate/Server_4/Notes#Bacula
  http://en.wikipedia.org/wiki/Bacula

- Bacula Wikis
  http://www.devco.net/pubwiki/Bacula/
  http://paramount.ind.wpi.edu/wiki/doku.php
  http://gentoo-wiki.com/HOWTO_Backup
  http://www.georglutz.de/wiki/Bacula
  http://www.clarkconnect.com/wiki/index.php?title=Modules_-_LAN_Backup/Recovery
  http://linuxwiki.de/Bacula   (in German)

- Possibly allow SD to spool even if a tape is not mounted.
- It appears to me that you have run into some sort of race
  condition where two threads want to use the same Volume and they
  were both given access.  Normally that is no problem.  However,
  one thread wanted the particular Volume in drive 0, but it was
  loaded into drive 1, so it decided to unload it from drive 1 and
  then loaded it into drive 0, while the second thread went on
  thinking that the Volume could be used in drive 1, not realizing
  that in the meantime it had been loaded into drive 0.
  I'll look at the code to see if there is some way we can avoid
  this kind of problem.  Probably the best solution is to make the
  first thread simply start using the Volume in drive 1 rather than
  transferring it to drive 0.
- Fix re-read of last block to check if job has actually written
  a block, and check if block was written by a different job
  (i.e. multiple simultaneous jobs writing).
- Figure out how to configure query.sql.  Suggestion to use m4:
    == changequote.m4 ===
    changequote(`[',`]')dnl
    ==== query.sql.in ===
    :List next 20 volumes to expire
    SELECT
        Pool.Name AS PoolName,
        Media.VolumeName,
        Media.VolStatus,
        Media.MediaType,
    ifdef([MySQL],
    [ FROM_UNIXTIME(UNIX_TIMESTAMP(Media.LastWritten) + Media.VolRetention) AS Expire, ])dnl
    ifdef([PostgreSQL],
    [ media.lastwritten + interval '1 second' * media.volretention as expire, ])dnl
      Media.LastWritten
      FROM Pool
      LEFT JOIN Media
      ON Media.PoolId=Pool.PoolId
      WHERE Media.LastWritten>0
      ORDER BY Expire
      LIMIT 20;
    ====
    Command: m4 -DMySQL changequote.m4 query.sql.in >query.sql

  The problem is that it requires m4, which is not present on all machines
  at ./configure time.
- Given all the problems with FIFOs, I think the solution is to do something a
  little different, though I will look at the code and see if there is not some
  simple solution (i.e. some bug that was introduced).  What might be a better
  solution would be to use a FIFO as a sort of "key" to tell Bacula to read and
  write data to a program rather than the FIFO.  For example, suppose you
  create a FIFO named:

     /home/kern/my-fifo

  Then, instead of backing up and restoring this file with a direct
  reference as is currently done for FIFOs, during backup Bacula would
  execute:

    /home/kern/my-fifo.backup

  and read the data that my-fifo.backup writes to stdout. For restore, Bacula
  will execute:

    /home/kern/my-fifo.restore

  and send the backed-up data to the program's stdin. These programs can
  either be executables or shell scripts and need only read/write
  stdin/stdout.

  I think this would give a lot of flexibility to the user without making any
  significant changes to Bacula.


==== SQL
# get null file
select FilenameId from Filename where Name='';
# Get list of all directories referenced in a Backup.
select Path.Path from Path,File where File.JobId=nnn and
  File.FilenameId=(FilenameId-from-above) and File.PathId=Path.PathId
  order by Path.Path ASC;
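# The two steps can also be combined into a single query against the same
# schema (nnn remains the JobId placeholder):
select Path.Path from Path,File,Filename
  where File.JobId=nnn and Filename.Name=''
  and File.FilenameId=Filename.FilenameId and File.PathId=Path.PathId
  order by Path.Path ASC;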

- Look into using Dart for testing
  http://public.kitware.com/Dart/HTML/Index.shtml

- Look into replacing autotools with cmake
  http://www.cmake.org/HTML/Index.html

=== Migration from David ===
What I'd like to see: 

Job {
  Name = "<poolname>-migrate"
  Type = Migrate
  Messages = Standard
  Pool = Default
  Migration Selection Type = LowestUtil | OldestVol | PoolOccupancy |
Client | PoolResidence | Volume | JobName | SQLquery
  Migration Selection Pattern = "regexp"
  Next Pool = <override>
}

There should be no need for a Level (migration is always Full, since you
don't calculate differential/incremental differences for migration),
Storage should be determined by the volume types in the pool, and Client
is really a selection issue.  Migration should always occur to the
NextPool defined in the pool definition. If no nextpool is defined, the
job should end with a reason of "no place to go". If Next Pool statement
is present, we override the check in the pool definition and use the
pool specified. 

Here's how I'd define Migration Selection Types: 

With Regexes:
Client  -- Migrate data from selected client only. Migration Selection
Pattern regexp provides pattern to select client names, eg ^FS00* makes
all client names starting with FS00 eligible for migration. 

Jobname -- Migrate all jobs matching the name. Migration Selection Pattern
regexp provides pattern to select jobnames existing in pool. 

Volume -- Migrate all data on specified volumes. Migration Selection
Pattern regexp provides selection criteria for volumes to be migrated.
Volumes must exist in pool to be eligible for migration. 


With Regex optional:
LowestUtil -- Identify the volume in the pool with the least data on it
and empty it. No Migration Selection Pattern required. 

OldestVol -- Identify the LRU volume with data written, and empty it. No
Migration Selection Pattern required. 

PoolOccupancy -- if pool occupancy exceeds <highmig>, migrate volumes
(starting with most full volumes) until pool occupancy drops below
<lowmig>. Pool highmig and lowmig values are in pool definition, no
Migration Selection Pattern required.


No regex:
SQLQuery -- Migrate all jobuids returned by the supplied SQL query.
Migration Selection Pattern contains SQL query to execute; should return
a list of 1 or more jobuids to migrate.

PoolResidence -- Migrate data sitting in pool for longer than
PoolResidence value in pool definition. Migration Selection Pattern
optional; if specified, override value in pool definition (value in
minutes). 


[ possibly a Python event -- kes ]
===
- Mount on an Autochanger with no tape in the drive causes:
   Automatically selected Storage: LTO-changer
   Enter autochanger drive[0]: 0
   3301 Issuing autochanger "loaded drive 0" command.
   3302 Autochanger "loaded drive 0", result: nothing loaded.
   3301 Issuing autochanger "loaded drive 0" command.
   3302 Autochanger "loaded drive 0", result: nothing loaded.
   3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because:
   Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found.
   3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted.
   If this is not a blank tape, try unmounting and remounting the Volume.
- If Drive 0 is blocked, and drive 1 is set "Autoselect=no", drive 1 will
  be used.
- Autochanger did not change volumes.  
   select * from Storage;
   +-----------+-------------+-------------+
   | StorageId | Name        | AutoChanger |
   +-----------+-------------+-------------+
   |         1 | LTO-changer |           0 |
   +-----------+-------------+-------------+
   05-May 03:50 roxie-sd: 3302 Autochanger "loaded drive 0", result is Slot 11.
   05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Warning: Director wanted Volume "LT
    Current Volume "LT0-002" not acceptable because:
    1997 Volume "LT0-002" not in catalog.
   05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Error: Autochanger Volume "LT0-002"
    Setting InChanger to zero in catalog.
   05-May 03:50 roxie-dir: Tibs.2006-05-05_03.05.02 Error: Unable to get Media record

   05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Error getting Volume i
   05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Job 530 canceled.
   05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: spool.c:249 Fatal appe
   05-May 03:49 Tibs: Tibs.2006-05-05_03.05.02 Fatal error: c:\cygwin\home\kern\bacula
   , got
     (missing)
    llist volume=LTO-002
              MediaId: 6
           VolumeName: LTO-002
                 Slot: 0
               PoolId: 1
            MediaType: LTO-2
         FirstWritten: 2006-05-05 03:11:54
          LastWritten: 2006-05-05 03:50:23
            LabelDate: 2005-12-26 16:52:40
              VolJobs: 1
             VolFiles: 0
            VolBlocks: 1
            VolMounts: 0
             VolBytes: 206
            VolErrors: 0
            VolWrites: 0
     VolCapacityBytes: 0
            VolStatus: 
              Recycle: 1
         VolRetention: 31,536,000
       VolUseDuration: 0
           MaxVolJobs: 0
          MaxVolFiles: 0
          MaxVolBytes: 0
            InChanger: 0
              EndFile: 0
             EndBlock: 0
             VolParts: 0
            LabelType: 0
            StorageId: 1

   Note VolStatus is blank!!!!!
   llist volume=LTO-003
             MediaId: 7
          VolumeName: LTO-003
                Slot: 12
              PoolId: 1
           MediaType: LTO-2
        FirstWritten: 0000-00-00 00:00:00
         LastWritten: 0000-00-00 00:00:00
           LabelDate: 2005-12-26 16:52:40
             VolJobs: 0
            VolFiles: 0
           VolBlocks: 0
           VolMounts: 0
            VolBytes: 1
           VolErrors: 0
           VolWrites: 0
    VolCapacityBytes: 0
           VolStatus: Append
             Recycle: 1
        VolRetention: 31,536,000
      VolUseDuration: 0
          MaxVolJobs: 0
         MaxVolFiles: 0
         MaxVolBytes: 0
           InChanger: 0
             EndFile: 0
            EndBlock: 0
            VolParts: 0
           LabelType: 0
           StorageId: 1
===
   mount
   Automatically selected Storage: LTO-changer
   Enter autochanger drive[0]: 0
   3301 Issuing autochanger "loaded drive 0" command.
   3302 Autochanger "loaded drive 0", result: nothing loaded.
   3301 Issuing autochanger "loaded drive 0" command.
   3302 Autochanger "loaded drive 0", result: nothing loaded.
   3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because:
   Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found.

   3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted.
   If this is not a blank tape, try unmounting and remounting the Volume.

- http://www.dwheeler.com/essays/commercial-floss.html
- Add VolumeLock to prevent all but lock holder (SD) from updating
  the Volume data (with the exception of VolumeState).
- The btape fill command does not seem to use the Autochanger
- Make Windows installer default to system disk drive.
- Look at using ioctl(FIOBMAP, ...) on Linux, and 
  DeviceIoControl(...,  FSCTL_QUERY_ALLOCATED_RANGES, ...) on
  Win32 for sparse files.
  http://www.flexhex.com/docs/articles/sparse-files.phtml
  http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html
- Directive: at <event> "command"
- Command: pycmd "command" generates "command" event.  How to
  attach to a specific job?
- Integrate Christopher's St. Bernard code.
- run_cmd() returns int should return JobId_t
- get_next_jobid_from_list() returns int should return JobId_t
- Document export LDFLAGS=-L/usr/lib64
- Don't attempt to restore from "Disabled" Volumes.
- Network error on Win32 should set Win32 error code.
- What happens when you rename a Disk Volume?
- Job retention period in a Pool (and hence Volume).  The job would
  then be migrated.
- Look at -D_FORTIFY_SOURCE=2
- Add Win32 FileSet definition somewhere
- Look at fixing restore status stats in SD.
- Look at using ioctl(FIMAP) and FIGETBSZ for sparse files.
  http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html
- Implement a mode that says when a hard read error is
  encountered, read many times (as it currently does), and if the
  block cannot be read, skip to the next block, and try again.  If
  that fails, skip to the next file and try again, ...
- Add level table:
  create table LevelType (LevelType binary(1), LevelTypeLong tinyblob);
  insert into LevelType (LevelType,LevelTypeLong) values
  ("F","Full"),
  ("D","Diff"),
  ("I","Inc");
- Show files/second in client status output.
- Add a recursive mark command (rmark) to restore.
- "Minimum Job Interval = nnn" sets minimum interval between Jobs
  of the same level and does not permit multiple simultaneous
  running of that Job (i.e. lets any previous invocation finish
  before doing Interval testing).
- Look at simplifying File exclusions.
- New directive "Delete purged Volumes"
- new pool XXX with ScratchPoolId = MyScratchPool's PoolId and
  let it fill itself, and RecyclePoolId = XXX's PoolId so I can
  see if it becomes stable and I just have to supervise
  MyScratchPool
- If I want to remove this pool, I set RecyclePoolId = MyScratchPool's
  PoolId, and when it is empty remove it.
- Figure out how to recycle Scratch volumes back to the Scratch Pool.
- Add Volume=SCRTCH
- Allow Check Labels to be used with Bacula labels.
- "Resuming" a failed backup (lost line for example) by using the
  failed backup as a sort of "base" job.
- Look at NDMP
- Email the user x days before the tape needs changing.
- Command to show next tape that will be used for a job even
  if the job is not scheduled.
- From: Arunav Mandal <amandal@trolltech.com>
  1. When jobs are running and Bacula for some reason crashes or if I do a 
  restart, it should remember the jobs it was running before it crashed or 
  restarted; as of now I lose all jobs if I restart it.

  2. When spooling, if the client is disconnected midway (for instance a 
  laptop), Bacula completely discards the spool. It would be nice if it 
  could write that spool to tape so there would be some backups for that 
  client, if not all.

  3. We have around 150 client machines; it would be nice to have an option 
  to upgrade the Bacula version on all the client machines automatically.

  4. At least one connection should be reserved for bconsole, so that at 
  heavy load I can still connect to the director via bconsole, which 
  sometimes I can't.

  5. Another important missing feature: say at 10am I manually started a 
  backup of client abc, and it was a full backup since client abc has no 
  backup history, and at 10.30am Bacula again automatically started a backup 
  of client abc as that was in the schedule. So now we have two Full 
  backups of the same client, and if we again try to start a full backup of 
  client abc, Bacula won't complain. That should be fixed.

- Fix bpipe.c so that it does not modify results pointer.
  ***FIXME*** calling sequence should be changed.
- For Windows disaster recovery see http://unattended.sf.net/
- regardless of the retention period, Bacula will not prune the
  last Full, Diff, or Inc File data until a month after the
  retention period for the last Full backup that was done.
- update volume=xxx --- add status=Full
- Remove old spool files on startup.
- Exclude SD spool/working directory.
- Refuse to prune last valid Full backup. Same goes for Catalog.
- Python:
  - Make a callback when Rerun failed levels is called.
  - Give Python program access to Scheduled jobs.
  - Add setting Volume State via Python.
  - Python script to save with Python, not save, save with Bacula.
  - Python script to do backup.
  - What events?
  - Change the Priority, Client, Storage, JobStatus (error) 
    at the start of a job.
- Why is SpoolDirectory = /home/bacula/spool;  not reported
  as an error when writing a DVD?
- Make bootstrap file handle multiple MediaTypes (SD)
- Remove all old Device resource code in Dir and code to pass it
  back in SD -- better, rework it to pass back device statistics.
- Check locking of resources -- be sure to lock devices where previously
  resources were locked. 
- The last part is left in the spool dir.


- In restore don't compare byte count on a raw device -- directory
  entry does not contain bytes.
=== rate design
  Keep jcr->last_rate and jcr->last_runtime, compute the instantaneous
  rate from the deltas since the last sample, and smooth it with a
  4-sample exponentially weighted moving average:
  rate = (bytes - last_bytes) / (runtime - last_runtime)
  MA = (last_MA * 3 + rate) / 4
- Max Vols limit in Pool off by one?
- Implement Files/Bytes,... stats for restore job.
- Implement Total Bytes Written, ... for restore job.
- Despool attributes simultaneously with data in a separate
  thread, rejoined at end of data spooling.
- Implement new Console commands to allow offlining/reserving drives,
  and possibly manipulating the autochanger (much asked for).
- Add start/end date editing in messages (%t %T, %e?) ...
- Add ClientDefs similar to JobDefs.
- Print more info when bextract -p accepts a bad block.
- Fix FD JobType to be set before RunBeforeJob in FD.
- Look at adding full Volume and Pool information to a Volume 
  label so that bscan can get *all* the info. 
- If the user puts "Purge Oldest Volume = yes" or "Recycle Oldest Volume = yes"
  and there is only one volume in the pool, refuse to do it -- otherwise
  he fills the Volume, then immediately starts reusing it.
- Implement copies and stripes.
- Add history file to console.
- Each file on tape creates a JobMedia record. Peter has 4 million
  files spread over 10000 tape files and four tapes. A restore takes
  16 hours to build the restore list.
- Add an option to check if the file size changed during backup.
- Make sure SD deletes spool files on error exit.
- Delete old spool files when SD starts.
- When labeling tapes, if you enter 000026, Bacula uses
  the tape index rather than the Volume name 000026.
- Add offline tape command to Bacula console.
- Bug: 
  Enter MediaId or Volume name: 32
  Enter new Volume name: DLT-20Dec04
  Automatically selected Pool: Default
  Connecting to Storage daemon DLTDrive at 192.168.68.104:9103 ...
  Sending relabel command from "DLT-28Jun03" to "DLT-20Dec04" ...
  block.c:552 Write error at 0:0 on device /dev/nst0. ERR=Bad file descriptor.
  Error writing final EOF to tape. This tape may not be readable.
  dev.c:1207 ioctl MTWEOF error on /dev/nst0. ERR=Permission denied.
  askdir.c:219 NULL Volume name. This shouldn't happen!!!
  3912 Failed to label Volume: ERR=dev.c:1207 ioctl MTWEOF error on /dev/nst0. ERR=Permission denied.
  Label command failed for Volume DLT-20Dec04.
  Do not forget to mount the drive!!!
- Bug: if a job is manually scheduled to run later, it does not appear
  in any status report and cannot be cancelled.

==== Keeping track of deleted/new files ====
- To mark files as deleted, run essentially a Verify to disk, and
  when a file is found missing (MarkId != JobId), then create
  a new File record with FileIndex == -1. This could be done
  by the FD at the same time as the backup.

     My "trick" for keeping track of deletions is the following.
     Assuming the user turns on this option, after all the files
     have been backed up, but before the job has terminated, the
     FD will make a pass through all the files and send their
     names to the DIR (*exactly* the same as what a Verify job
     currently does).  This will probably be done at the same
     time the files are being sent to the SD avoiding a second
     pass.  The DIR will then compare that to what is stored in
     the catalog.  Any files in the catalog but not in what the
     FD sent will receive a catalog File entry that indicates
     that at that point in time the file was deleted.  This is
     either transmitted to the FD or simultaneously computed in
     the FD, so that the FD can put a record on the tape
     indicating that the file has been deleted at this point.
     A delete file entry could potentially be one with a FileIndex
     of 0 or perhaps -1 (need to check if FileIndex is used for
     some other thing as many of the Bacula fields are "overloaded"
     in the SD).
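
     In SQL terms, the DIR-side comparison might look roughly like the
     sketch below. This is only an illustration; it assumes the names
     the FD sent have been loaded into a hypothetical temporary table
     FdFiles(PathId, FilenameId), and prev_jobid stands for the JobId
     of the reference backup:

       SELECT File.PathId, File.FilenameId
         FROM File
        WHERE File.JobId = prev_jobid     -- last backup of this client
          AND NOT EXISTS (SELECT 1 FROM FdFiles
                           WHERE FdFiles.PathId = File.PathId
                             AND FdFiles.FilenameId = File.FilenameId);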

     During a restore, any file initially picked up by some
     backup (Full, ...) then subsequently having a File entry
     marked "delete" will be removed from the tree, so will not
     be restored.  If a file with the same name is later OK it
     will be inserted in the tree -- this already happens.  All
     will be consistent except for possible changes during the
     running of the FD.

     Since I'm on the subject, some of you may be wondering what
     the utility of the in memory tree is if you are going to
     restore everything (at least it comes up from time to time
     on the list).  Well, it is still *very* useful because it
     allows only the last item found for a particular filename
     (full path) to be entered into the tree, and thus if a file
     is backed up 10 times, only the last copy will be restored.
     I recently (last Friday) restored a complete directory, and
     the Full and all the Differential and Incremental backups
     spanned 3 Volumes.  The first Volume was not even mounted
     because all the files had been updated and hence backed up
     since the Full backup was made.  In this case, the tree
     saved me a *lot* of time.

     Make sure this information is stored on the tape too so
     that it can be restored directly from the tape.

     All the code (with the exception of formally generating and
     saving the delete file entries) already exists in the Verify
     Catalog command.  It explicitly recognizes added/deleted files since
     the last InitCatalog.  It is more or less a "simple" matter of
     taking that code and adapting it slightly to work for backups.

  Comments from Martin Simmons (I think they are all covered):
  Ok, that should cover the basics.  There are few issues though:

  - Restore will depend on the catalog.  I think it is better to include the
  extra data in the backup as well, so it can be seen by bscan and bextract.

  - I'm not sure if it will preserve multiple hard links to the same inode.  Or
  maybe adding or removing links will cause the data to be dumped again?

  - I'm not sure if it will handle renamed directories.  Possibly it will work
  by dumping the whole tree under a renamed directory?

  - It remains to be seen how the backup performance of the DIR will be
  affected when comparing the catalog for a large filesystem.

==== 
From David:
How about introducing a Type = MgmtPolicy job type? That job type would
be responsible for scanning the Bacula environment looking for specific
conditions, and submitting the appropriate jobs for implementing said
policy, eg: 

Job {
   Name = "Migration-Policy"
   Type = MgmtPolicy
   Policy Selection Job Type = Migrate
   Scope = "<keyword> <operator> <regexp>"
   Threshold = "<keyword> <operator> <regexp>"
   Job Template = <template-name>
}

Where <keyword> is any legal job keyword, <operator> is a comparison
operator (=,<,>,!=, logical operators AND/OR/NOT) and <regexp> is an
appropriate regexp. I could see an argument for Scope and Threshold
being SQL queries if we want to support full flexibility. The
Migration-Policy job would then get scheduled as frequently as a site
felt necessary (suggested default: every 15 minutes). 

Example: 

Job {
   Name = "Migration-Policy"
   Type = MgmtPolicy
   Policy Selection Job Type = Migration
   Scope = "Pool=*"
   Threshold = "Migration Selection Type = LowestUtil"
   Job Template = "MigrationTemplate"
}

would select all pools for examination and generate a job based on
MigrationTemplate to automatically select the volume with the lowest
usage and migrate its contents to the nextpool defined for that pool. 

This policy abstraction would be really handy for adjusting the behavior
of Bacula according to site-selectable criteria (one thing that pops
into mind is Amanda's ability to automatically adjust backup levels
depending on various criteria).


=====

Regression tests:
- Add Pool/Storage override regression test.
- Add delete JobId to regression.
- Add a regression test for dbcheck.  
- New test to add bscan to four-concurrent-jobs regression,
  i.e. after the four-concurrent jobs zap the
  database as is done in the bscan-test, then use bscan to
  restore the database, do a restore and compare with the
  original.
- Add restore of specific JobId to regression (item 3
  on the restore prompt)
- Add IPv6 to regression
- Add database test to regression. Test each function like delete,
  purge, ...

- AntiVir can slow down backups on Win32 systems. 
- Win32 systems with FAT32 can be much slower than NTFS for
  more than 1000 files per directory.


1.37 Possibilities:
- A HOLD command to stop all jobs from starting.
- A PAUSE command to pause all running jobs ==> release the
  drive.
- Media Type = LTO,LTO-2,LTO-3
  Media Type Read = LTO,LTO2,LTO3
  Media Type Write = LTO2, LTO3

=== From Carsten Menke <bootsy52@gmx.net>

Following is a list of what I think, in the situations I'm faced with, 
could be useful enhancements to bacula, which I'm certain other users will 
benefit from as well.

1. NextJob/NextJobs Directive within a Job Resource in the form of
    NextJobs = job1,job2.

    Why:
    I currently solved the problem of running multiple jobs one after
    another by setting the Max Wait Time for a job to 8 hours and giving
    the jobs different Priorities. However, there are scenarios where
    one job depends directly on another job, so if the former job fails
    the job after it need not run, while other jobs should perhaps
    still run despite that.

Example:
  A Backup Job and a Verify job: if the backup job fails there is no need to
  run the verify job, as the backup job already failed. However, one may like
  to back up the Catalog to disk even though the main backup job failed.

Notes:
  I see that this is related to the Event Handlers which are on the ToDo
  list, also it is maybe a good idea to check for the return value and
  execute different actions based on the return value


3. offline capability to bconsole

    Why:
    Currently I use a script which I execute within the last Job via the
    RunAfterJob Directive, to release and eject the tape.
    So I have to call bconsole "release=Storage-Name" and afterwards
    mt -f /dev/nst0 eject to get the tape out.

    If I have multiple Storage Devices, then these may not be /dev/nst0 and
    I have to modify the script or call it with parameters etc.
    This would actually not be needed, as everything is already defined
    in bacula-sd.conf, and if I can invoke bconsole with the
    storage name via $1 in the script then I'm done and information is
    not duplicated.

4. %s for Storage Name added to the chars being substituted in "RunAfterJob"

    Why:

    For the reason mentioned in 3. to have the ability to call a
    script with /scripts/foobar %s and in the script use $1
    to pass the Storage Name to bconsole

5. Setting Volume State within a Job Resource

    Why:
    Instead of using "Maximum Volume Jobs" in the Pool Resource,
    I would have the possibility to define
    in a Job Resource that after this certain job is run, the Volume State
    should be set to "Volume State = Used"; this gives more flexibility (IMHO).

6. Localization of Bacula Messages

    Why:
    Unfortunately many, many people I work with don't speak English very well.
    So if at least the reporting messages were localized, then they
    would understand that they have to change the tape, etc.

    I volunteer to do the German translations, and if I can convince my wife,
    also French and Moore (a West African language).

7. OK, this is evil, probably bound to security risks and maybe not possible
    due to the design of bacula.

    Implementation of backticks ( `command` ) for shell command execution in
    the "Label Format" Directive.

Why:

    Currently I have defined BACULA_DAY_OF_WEEK="day1|day2..." resulting in
    Label Format = "HolyBackup-${BACULA_DAY_OF_WEEK[${WeekDay}]}". If I could
    use backticks, then I could use Label Format = "HolyBackup-`date +%A`" to
    have the localized name for the day of the week appended to the
    format string. Then I have the tape labeled automatically with the
    weekday name in the correct language.
==========
-  Yes, that is surely the case. I probably should turn those into Warning
   errors. In addition, you just made me think that it might not be bad to
   add an option to check the file size after backing up the file and
   report if it changes. This would be done as an option because it would
   add extra overhead.
 
   Kern, good idea.  If you do that, mention in the output: file 
   shrunk, or file expanded, just to make it obvious to the user 
   (without having to refer to the file size) just how the file size 
   changed.
 
   Would this option be for all files, or just one file?  Or a fileset?
- Make output from status use html table tags for nicely 
  presenting in a browser.
- Can one write tapes faster with 8192 byte block sizes?
- Document security problems with the same password for everyone in
  rpm and Win32 releases.
- Browse generations of files.
- I've seen an error when my catalog's File table fills up.  I
   then have to recreate the File table with a larger maximum row
   size.  Relevant information is at
   http://dev.mysql.com/doc/mysql/en/Full_table.html ; I think the
   "Installing and Configuring MySQL" chapter should talk a bit
   about this potential problem, and recommend a solution.
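   The fix described on that page is to raise the MyISAM limits, e.g.
   (the numbers here are only illustrative):
     ALTER TABLE File MAX_ROWS=200000000 AVG_ROW_LENGTH=120;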
- For Solaris must use POSIX awk.
- Want speed of writing to tape while despooling.
- Supported autochanger:
OS: Linux
Man.: HP
Media: LTO-2
Model: SSL1016
Slots: 16
Cap: 200GB
- Supported drive:
  Wangtek 6525ES (SCSI-1 QIC drive, 525MB), under Linux 2.4.something, 
  bacula 1.36.0/1 works with blocksize 16k INSIDE bacula-sd.conf.
- Add regex from http://www.pcre.org to Bacula for Win32.
- Use only shell tools, not make, in the CDROM package.
- Does an Include within an Include work?
- Implement a Pool of type Cleaning?
- Implement VolReadTime and VolWriteTime in SD
- Modify Backing up Your Database to include a bootstrap file.
- Think about making certain database errors fatal.
- Look at correcting the time jump in the scheduler for daylight
  savings time changes.
- Add a "real" timer to network connections.
- Promote to Full = Time period 
- Check dates entered by user for correctness (month/day/... ranges)
- Compress restore Volume listing by date and first file.
- Look at patches/bacula_db.b2z postgresql that loops during restore.
  See Gregory Wright.
- Perhaps add read/write programs and/or plugins to FileSets.
- How to handle backing up portables ...
- Add some sort of guaranteed Interval for upgrading jobs.
- Can we write the state file after every job terminates? On Win32
  the system crashes and the state file is not updated.
- Limit bandwidth

Documentation to do: (any release a little bit at a time)
- Doc to do unmount before removing magazine.
- Alternative to static linking: "ldd prog" lists all libraries; save them,
  restore them, and point LD_LIBRARY_PATH to them.
- Document add "</dev/null >/dev/null 2>&1" to the bacula-fd command line
- Document query file format.
- Add more documentation for bsr files.
- Document problems with Verify and pruning.
- Document how to use multiple databases.
- VXA drives have a "cleaning required"
  indicator, but Exabyte recommends preventive cleaning after every 75
  hours of operation.
  From Phil:
    In this context, it should be noted that Exabyte has a command-line
    vxatool utility available for free download. (The current version is
    vxatool-3.72.) It can get diagnostic info, read, write and erase tapes,
    test the drive, unload tapes, change drive settings, flash new firmware,
    etc.
    Of particular interest in this context is that vxatool <device> -i will
    report, among other details, the time since last cleaning in tape motion
    minutes. This information can be retrieved (and settings changed, for
    that matter) through the generic-SCSI device even when Bacula has the
    regular tape device locked. (Needless to say, I don't recommend
    changing tape settings while a job is running.)
- Lookup HP cleaning recommendations.
- Lookup HP tape replacement recommendations (see trouble shooting autochanger)
- Document doing table repair


===================================
- Add macro expansions in JobDefs.
  Run Before Job = "SomeFile %{Level} %{Client}"
  Write Bootstrap="/some/dir/%{JobName}_%{Client}.bsr"
- Use non-blocking network I/O but if no data is available, use
  select().
- Use gather write() for network I/O.
- Autorestart on crash.
- Add bandwidth limiting.
- Add acks every once in a while from the SD to keep
  the line from timing out.
- When an error in input occurs and conio beeps, you can back
  up through the prompt.
- Detect fixed tape block mode during positioning by looking at
  block numbers in btape "test".  Possibly adjust in Bacula.
- Fix list volumes to output volume retention in some other
  units, perhaps via a directive.
- Allow Simultaneous Priorities = yes  => run up to Max concurrent jobs even
  with multiple priorities.
- If you use restore replace=never, the directory attributes for
  non-existent directories will not be restored properly.

- see lzma401.zip in others directory for new compression
  algorithm/library.
- Allow the user to select JobType for manual pruning/purging.
- bscan does not put first of two volumes back with all info in
  bscan-test.
- Implement the FreeBSD nodump flag in chflags.
- Figure out how to make named console messages go only to that
  console and to the non-restricted console (new console class?).
- Make restricted console prompt for password if *ask* is set or
  perhaps if password is undefined.
- Implement "from ISO-date/time every x hours/days/weeks/months" in
  schedules.

==== from Marc Schoechlin
- the help command should be more verbose
  (it should explain the parameters of the different 
  commands in detail)
  -> it's time-consuming to consult the manual any time
     you need a special parameter
  -> maybe it's easier to maintain this if the
     descriptions of those commands are outsourced to
     a certain file
- the cd command should allow complete paths,
  i.e. cd /foo/bar/foo/bar
  -> if a customer mails me the path to a certain file,
     it's faster to enter the specified directory
- if the password is not configured in bconsole.conf
  you should be asked for it.
  -> sometimes you'd like to do a restore on a customer machine
     which shouldn't know the password for bacula.
  -> adding the password to the file makes it easy for admins
     to forget to remove the password after usage
  -> security aspects:
     the protection of that file is less important
- long listed output of commands should be scrollable
  like the unix more/less command does
  -> if someone runs 200 or more machines, the lists could
     be a little long and complex
- command output should be shown column by column
  to reduce scrolling and to increase clarity
  -> see last item
- lsmark should list the selected files with full
  paths
- wildcards for selecting files and directories would be nice
- any action should be interruptible with CTRL+C
- command expansion would be pretty cool
====
- When the replace Never option is set, new directory permissions
  are not restored. See bug 213. To fix this requires creating a
  list of newly restored directories so that those directory 
  permissions *can* be restored.
- Add prune all command
- Document fact that purge can destroy a part of a restore by purging
  one volume while others remain valid -- perhaps mark Jobs.
- Add multiple-media-types.txt
- look at mxt-changer.html
- Make ? do a help command (no return needed).
- Implement restore directory.
- Document streams and how to implement them.
- Try not to re-backup a file if a new hard link is added.
- Add feature to backup hard links only, but not the data.
- Fix stream handling to be simpler.
- Add Priority and Bootstrap to Run a Job.
- Eliminate Restore "Run Restore Job" prompt by allowing new "run command
  to be issued"
- Remove View FileSet button from Run a Job dialog.
- Handle prompt for restore job at end of Restore command.
- Add display of total selected files to Restore window.
- Add tree pane to left of window.
- Add progress meter.
- Max wait time or max run time causes seg fault -- see runtime-bug.txt
- Add message to user to check for fixed block size when the forward
  space test fails in btape.
- When unmarking a directory check if all files below are unmarked and
  then remove the + flag -- in the restore tree.
- Possibly implement: Action = Unmount Device="TapeDrive1" in Admin jobs.
- Setup lrrd graphs: (http://www.linpro.no/projects/lrrd/) Mike Acar.
- Revisit the question of multiple Volumes (disk) on a single device.
- Add a block copy option to bcopy.
- Finish work on Gnome restore GUI.
- Fix "llist jobid=xx" where no fileset or client exists.
- For each job type (Admin, Restore, ...) require only the really necessary
  fields.
- Pass Director resource name as an option to the Console.
- Add a "batch" mode to the Console (no unsolicited queries, ...).
- Add a .list all files in the restore tree (probably also a list all files)
  Do both a long and short form.
- Allow browsing the catalog to see all versions of a file (with 
  stat data on each file).
- Restore attributes of directory if replace=never set but directory
  did not exist.
- Use SHA1 on authentication if possible.
- See comtest-xxx.zip for Windows code to talk to USB.
- Add John's appended files:
   Appended = { /files/server/logs/http/*log }
   and such files would be treated as follows. On a FULL backup, they would
   be backed up like any other file. On an INCREMENTAL backup, where a
   previous INCREMENTAL or FULL was already in the catalogue and the length
   of the file was greater than the length of the last backup, only the data
   added since the last backup will be dumped. On an INCREMENTAL backup, if
   the length of the file is less than the length of the file with the same
   name last backed up, the complete file is dumped. On Windows systems, with
   creation dates of files, we can be even smarter about this and not count
   entirely upon the length. On a restore, the full and all incrementals
   since it will be applied in sequence to restore the file.
- Check new HAVE_WIN32 open bits.    
- Check if the tape has moved before writing.  
- Handling removable disks -- see below:
- Keep track of tape use time, and report when cleaning is necessary.
- Add FromClient and ToClient keywords on restore command (or
  BackupClient RestoreClient).
- Implement a JobSet, which groups any number of jobs. If the
  JobSet is started, all the jobs are started together.
  Allow Pool, Level, and Schedule overrides.
- Enhance cancel to timeout BSOCK packets after a specific delay.
- Do scheduling by UTC using gmtime_r() in run_conf, scheduler, and
  ua_status!!!  Thanks to Alan Brown for this tip.
- Look at updating Volume Jobs so that Max Volume Jobs = 1 will work
  correctly for multiple simultaneous jobs.
- Correct code so that FileSet MD5 is calculated for < and | filename   
  generation.
- Implement the Media record flag that indicates that the Volume does disk 
  addressing.
- Implement VolAddr, which is used when Volume is addressed like a disk,
  and form it from VolFile and VolBlock.
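  One possible packing, assuming VolFile and VolBlock each fit in 32 bits
  (illustrative only):

    #include <stdint.h>

    static inline uint64_t make_voladdr(uint32_t file, uint32_t block)
    {
       return ((uint64_t)file << 32) | block;   /* file:block packed */
    }
    static inline uint32_t voladdr_file(uint64_t a)  { return a >> 32; }
    static inline uint32_t voladdr_block(uint64_t a) { return (uint32_t)a; }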
- Make multiple restore jobs for multiple media types specifying 
  the proper storage type.
- Fix fast block rejection (stored/read_record.c:118). It passes a null
  pointer (rec) to try_repositioning().
- Look at extracting Win data from BackupRead.
- Implement RestoreJobRetention? Maybe better "JobRetention" in a Job,
  which would take precedence over the Catalog "JobRetention".
- Implement Label Format in Add and Label console commands.
- Possibly up network buffers to 65K. Put on variable.
- Put email tape request delays on one or more variables. The user wants
  to cancel the job after a certain time interval. Maximum Mount Wait
  in Job, Client, Device, Pool, or Volume?
  Is it possible to make this a directive which is *optional* in multiple
  resources, like Level? If so, I think I'd make it an optional directive
  in Job, Client, and Pool, with precedence such that Job overrides Client
  which in turn overrides Pool.
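  The proposed precedence rule in miniature (C, illustrative; zero means
  the directive is not set):

    static int max_mount_wait(int job_val, int client_val, int pool_val)
    {
       if (job_val)    return job_val;    /* Job overrides Client */
       if (client_val) return client_val; /* Client overrides Pool */
       return pool_val;
    }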

- New Storage specifications:
  - Want to write to multiple storage devices simultaneously
  - Want to write to multiple storage devices sequentially (in one job)
  - Want to read/write simultaneously
  - Key is MediaType -- it must match

  Passed to SD as a sort of BSR record called Storage Specification
    Record or SSR.
    SSR                    
      Next -> Next SSR
      MediaType -> Next MediaType
      Pool -> Next Pool
      Device -> Next Device
  Job Resource
     Allow multiple Storage specifications
     New flags
        One Archive = yes
        One Device = yes
        One Storage = yes
        One MediaType = yes
        One Pool = yes
  Storage
     Allow Multiple Pool specifications (note, Pool currently
       in Job resource).
     Allow Multiple MediaType specifications in Dir conf
     Allow Multiple Device specifications in Dir conf
     Perhaps keep this in a single SSR
  Tie a Volume to a specific device by using a MediaType that 
    is contained in only one device.
  In SD allow Device to have Multiple MediaTypes
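  A literal C rendering of the SSR chain sketched above; the field types
  are guesses for illustration:

    struct SSR {
       struct SSR *next;          /* next Storage Specification Record */
       char media_type[128];      /* MediaType -- must match */
       char pool_name[128];       /* Pool to draw Volumes from */
       char device_name[128];     /* Device to use in the SD */
    };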

- Ideas from Jerry Scharf:
  First let's point out some big pluses that bacula has for this
        it's open source
        more importantly it's active. Thank you so much for that
        even more important, it's not flaky
        it has an open access catalog, opening many possibilities
        it's pushing toward heterogeneous systems capability
  big things:
   Macintosh file client
        macs are an interesting niche, but I fear a server is a rathole
   working bare iron recovery for windows
   the option for  inc/diff backups not reset on fileset revision
        a) use both change and inode update time against base time
        b) do the full catalog check (expensive but accurate)
   sizing guide (how much system is needed to back up N systems/files)
   consultants on using bacula in building a disaster recovery system
   an integration guide
        or how to get at fancy things that one could do with bacula
   logwatch code for bacula logs (or similar)
   linux distro inclusion of bacula (brings good and bad, but necessary)
   win2k/XP server capability (icky but you asked)
   support for Oracle database ??
===
- Look at adding SQL server and Exchange support for Windows. 
- Make dev->file and dev->block_num signed integers so that -1 can
  be an invalid value which happens with BSR.
- Create VolAddr for disk files in place of VolFile and VolBlock. This
  is needed to properly specify ranges.
- Add progress of files/bytes to SD and FD.
- Print a warning message if FileId > 4 billion.
- do a "messages" before the first prompt in Console
- Client does not show busy during Estimate command.
- Implement Console mtx commands.
- Implement a Mount Command and an Unmount Command where
  the users could specify a system command to be performed
  to do the mount, after which Bacula could attempt to
  read the device. This is for removable media such as a CD-ROM.
  - Most likely, this mount command would be invoked explicitly
  by the user using the current Console "mount" and "unmount" 
  commands -- the Storage Daemon would do the right thing 
  depending on the exact nature of the device.
  - As with tape drives, when Bacula wanted a new removable
  disk mounted, it would unmount the old one, and send a message
  to the user, who would then use "mount" as described above 
  once he had actually inserted the disk.
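  A hypothetical Device resource for such a setup -- the Mount Command and
  Unmount Command directive names are invented here for illustration:

    Device {
      Name = "CDROM-0"
      Media Type = CDROM
      Archive Device = /mnt/cdrom
      Removable Media = yes
      Mount Command = "/bin/mount -t iso9660 /dev/cdrom /mnt/cdrom"
      Unmount Command = "/bin/umount /mnt/cdrom"
    }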
- Implement dump/print label to UA
- Spool to disk only when the tape is full, then when a tape is hung move
  it to tape.
- bextract is sending everything to the log file ****FIXME****
- Allow multiple Storage specifications (or multiple names on
  a single Storage specification) in the Job record. Thus a job 
  can be backed up to a number of storage devices.
- Implement some way for the File daemon to contact the Director
  to start a job or pass its DHCP-obtained IP address.
- Implement a query tape prompt/replace feature for a console
- Copy console @ code to gnome2-console
- Make tree walk routines like cd, ls, ... more user friendly
  by handling spaces better.
- Make sure that Bacula rechecks the tape after the 20 min wait.
- Set IO_NOWAIT on Bacula TCP/IP packets.
- Try doing a raw partition backup and restore by mounting a
  Windows partition.
- From Lars Kellers:
    Yes, it would make the request for new tapes highly automatic. If a
    tape is empty, bacula reads the barcodes (native or simulated), and if
    an unused tape is found, it runs the label command with all the
    necessary parameters.

    By the way, can bacula automatically "move" an empty/purged volume, say
    in the "short" pool, to the "long" pool if this pool runs out of volume
    space?
- What to do about "list files job=xxx".
- Look at how fuser works and at /proc/PID/fd -- that is how Nic found the
  file descriptor leak in Bacula.
- Implement WrapCounters in Counters.
- Add heartbeat from FD to SD if hb interval expires.
- Can we dynamically change FileSets?
- If a pool is specified to the label command and Label Format is specified,
  automatically generate the Volume name.
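  For example (Label Format is an existing Pool directive; generating the
  name automatically at label time would be the new part):

    Pool {
      Name = Default
      Pool Type = Backup
      Label Format = "Vol-"   # label pool=Default could propose Vol-0001, ...
    }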
- Why can't SQL do the filename sort for restore?
- Add ExhaustiveRestoreSearch.
- Look at the possibility of loading only the necessary 
  data into the restore tree (i.e. do it one directory at a
  time as the user walks through the tree).
- Possibly use the hash code if the user selects all for a restore command.
- Fix "restore all" to bypass building the tree.
- Prohibit backing up archive device (findlib/find_one.c:128)
- Implement Release Device in the Job resource to unmount a drive.
- Implement Acquire Device in the Job resource to mount a drive,
  be sure this works with admin jobs so that the user can get
  prompted to insert the correct tape.  Possibly some way to say to
  run the job but don't save the files.
- Make things like list where a file is saved case independent for
  Windows.
- Use autochanger to handle multiple devices.
- Implement a Recycle command
- Start working on Base jobs.
- Implement UnsavedFiles DB record.
- From Phil Stracchino:
  It would probably be a per-client option, and would be called
  something like, say, "Automatically purge obsoleted jobs".  What it
  would do is, when you successfully complete a Differential backup of a
  client, it would automatically purge all Incremental backups for that
  client that are rendered redundant by that Differential.  Likewise,
  when a Full backup on a client completed, it would automatically purge
  all Differential and Incremental jobs obsoleted by that Full backup.
  This would let people minimize the number of tapes they're keeping on
  hand without having to master the art of retention times.
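  A sketch of the catalog query such a feature might issue after a
  successful Differential; the column names follow the Bacula Job table,
  but the query itself is only illustrative:

    /* %lu = ClientId, %s = StartTime of the new Differential */
    const char *obsolete_incrementals =
       "SELECT JobId FROM Job"
       " WHERE ClientId=%lu AND Type='B' AND Level='I'"
       "   AND JobStatus='T' AND StartTime < '%s'";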
- When doing a Backup send all attributes back to the Director, who
  would then figure out what files have been deleted.
- Currently in mount.c:236 the SD simply creates a Volume. It should have
  explicit permission to do so.  It should also mark the tape in error
  if there is an error.
- Cancel waiting for Client connect in SD if FD goes away.

- Implement timeout in response() when it should come quickly.
- Implement a Slot priority (loaded/not loaded).
- Implement "vacation" Incremental only saves.
- Implement create "FileSet"?
- Add prefixlinks to where or not where absolute links to FD.
- Issue message to mount a new tape before the rewind.
- Simplified client job initiation for portables.
- If SD cannot open a drive, make it periodically retry.
- Add more of the config info to the tape label.

- Refine SD waiting output:
    Device is being positioned
    >     Device is being positioned for append
    >     Device is being positioned to file x
    > 
- Figure out some way to estimate output size and to avoid splitting
  a backup across two Volumes -- this could be useful for writing CDROMs
  where you really prefer not to have it split -- not serious.
- Have SD compute MD5 or SHA1 and compare to what FD computes.
- Make VolumeToCatalog calculate an MD5 or SHA1 from the 
  actual data on the Volume and compare it.                  
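  In outline, the SD would fold each data record back through a digest as
  it rescans the Volume and compare the result; an OpenSSL-based sketch
  (illustrative only):

    #include <openssl/sha.h>
    #include <string.h>

    /* Compare a recomputed SHA1 against the digest the FD reported. */
    int sha1_matches(const unsigned char *fd_digest,
                     const unsigned char **recs, const size_t *lens,
                     int nrecs)
    {
       SHA_CTX ctx;
       unsigned char d[SHA_DIGEST_LENGTH];
       SHA1_Init(&ctx);
       for (int i = 0; i < nrecs; i++) {
          SHA1_Update(&ctx, recs[i], lens[i]);   /* one record at a time */
       }
       SHA1_Final(d, &ctx);
       return memcmp(d, fd_digest, SHA_DIGEST_LENGTH) == 0;
    }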
- Implement Bacula plugins -- design API
- Make bcopy read through bad tape records.
- Program files (i.e. execute a program to read/write files).
  Pass read date of last backup, size of file last time.
- Add Signature type to File DB record.
- CD into subdirectory when open()ing files for backup to
  speed up things.  Test with testfind().
- Priority job to go to top of list.
- Why are save/restore of device different sizes (sparse?)   Yup! Fix it.
- Implement some way for the Console to dynamically create a job.
- Solaris -I on tar for include list
- Need a verbose mode in restore, perhaps to bsr.
- bscan without -v is too quiet -- perhaps show jobs.
- Add code to reject whole blocks if not wanted on restore.
- Check if we can increase Bacula FD priority in Win2000
- Make sure the MaxVolFiles is fully implemented in SD
- Check if both CatalogFiles and UseCatalog are set to SD.
- Possibly add email to Watchdog if drive is unmounted too
  long and a job is waiting on the drive.
- After unmount, if restore job started, ask to mount.
- Add UA rc and history files.
- put termcap (used by console) in ./configure and
  allow --with-termcap-dir.
- Fix Autoprune for Volumes to respect need for full save.
- Compare tape to Client files (attributes, or attributes and data) 
- Make all database Ids 64 bit.
- Allow console commands to detach or run in background.
- Add SD message variables to control operator wait time
  - Maximum Operator Wait
  - Minimum Message Interval
  - Maximum Message Interval
- Send Operator message when cannot read tape label.
- Verify level=Volume (scan only), level=Data (compare of data to file).
  Verify level=Catalog, level=InitCatalog
- Events file
- Add keyword search to show command in Console.
- Events : tape has more than xxx bytes.
- Complete code in Bacula Resources -- this will permit
  reading a new config file at any time.
- Handle ctl-c in Console
- Implement script driven addition of File daemon to config files.
- Think about how to make Bacula work better with File (non-tape) archives.
- Write Unix emulator for Windows.
- Put memory utilization in Status output of each daemon
  if full status requested or if some level of debug on.
- Make database type selectable by .conf files i.e. at runtime
- Set flag for uname -a.  Add to Volume label.
- Restore files modified after date
- SET LD_RUN_PATH=$HOME/mysql/lib/mysql
- Remove duplicate fields from jcr (e.g. jcr.level and jcr.jr.Level, ...).
- Time out a job or terminate it if the link goes down, or reopen the link and query.
- Concept of precious tapes (cannot be reused).
- Make bcopy copy with a single tape drive.
- Permit changing ownership during restore.

- From Phil:
  > My suggestion:  Add a feature on the systray menu-icon menu to request
  > an immediate backup now.  This would be useful for laptop users who may
  > not be on the network when the regular scheduled backup is run.
  > 
  > My wife's suggestion: Add a setting to the win32 client to allow it to
  > shut down the machine after backup is complete (after, of course,
  > displaying a "System will shut down in one minute, click here to cancel"
  > warning dialog).  This would be useful for sites that want user
  > workstations to be shut down overnight to save power.
  > 

- Autolabel should be specified by DIR instead of SD.
- Storage daemon    
  - Add media capacity
  - AutoScan (check checksum of tape)
  - Format command = "format /dev/nst0"
  - MaxRewindTime
  - MinRewindTime
  - MaxBufferSize
  - Seek resolution (usually corresponds to buffer size)
  - EODErrorCode=ENOSPC or code
  - Partial Read error code
  - Partial write error code
  - Nonformatted read error
  - Nonformatted write error
  - WriteProtected error
  - IOTimeout
  - OpenRetries
  - OpenTimeout
  - IgnoreCloseErrors=yes
  - Tape=yes
  - NoRewind=yes
- Pool
  - Maxwrites
  - Recycle period
- Job
  - MaxWarnings
  - MaxErrors (job?)
=====
- FD sends unsaved file list to Director at end of job (see
  RFC below).
- File daemon should build list of files skipped, and then
  at end of save retry and report any errors.
- Write a Storage daemon that uses pipes and
  standard Unix programs to write to the tape.
  See afbackup.
- Need something that monitors the JCR queue and
  times out jobs by asking the daemons where they are.
- Enhance Jmsg code to permit buffering and saving to disk.
- device driver = "xxxx" for drives.
- Verify from Volume
- Ensure that /dev/null works
- Need report class for messages. Perhaps
  report resource where report=group of messages
- enhance scan_attrib and rename scan_jobtype, and
  fill in code for "since" option 
- Director needs a time after which the report status is sent
  anyway -- or better yet, a retry time for the job.
- Don't reschedule a job if previous incarnation is still running.
- Some way to automatically back up everything is needed????
- Need a structure for pending actions:
  - buffered messages
  - termination status (part of buffered msgs?)
- Drive management
  Read, Write, Clean, Delete
- Login to Bacula; Bacula users with different permissions:
   owner, group, user, quotas
- Store info on each file system type (probably in the job header on tape).
  This could be the output of df, or perhaps some sort of /etc/mtab record.

========= ideas ===============
From: "Jerry K. Schieffer" <jerry@skylinetechnology.com>
To: <kern@sibbald.com>
Subject: RE: [Bacula-users] future large programming jobs
Date: Thu, 26 Feb 2004 11:34:54 -0600

I noticed the subject thread and thought I would offer the following
merely as sources of ideas, i.e. something to think about, not even as
strong as a request.  In my former life (before retiring) I often
dealt with backups and storage management issues/products as a
developer and as a consultant.  I am currently migrating my personal
network from amanda to bacula specifically because of the ability to
cross media boundaries when storing backups.
Are you familiar with the commercial product called ADSM (I think IBM
now sells it under the Tivoli label)?  It has a couple of interesting
ideas that may apply to the following topics.

1. Migration:  Consider that when you need to restore a system, there
may be pressure to hurry.  If all the information for a single client
can eventually end up on the same media (and in chronological order),
the restore is facilitated by not having to search past information
from other clients.  ADSM has the concept of "client affinity" that
may be associated with its storage pools.  It seems to me that this
concept (as an optional feature) might fit in your architecture for
migration.

ADSM also has the concept of defining one or more storage pools as
"copy pools" (almost mirrors, but only in the sense of contents).
These pools provide the ability to have duplicate data stored both
onsite and offsite.  The copy process can be scheduled to be handled
by their storage manager during periods when there is no backup
activity.  Again, the migration process might be a place to consider
implementing something like this.

>
> It strikes me that it would be very nice to be able to do things like
> have the Job(s) backing up the machines run, and once they have all
> completed, start a migration job to copy the data from disk Volumes to
> a tape library and then to offsite storage. Maybe this can already be
> done with some careful scheduling and Job prioritization; the events
> mechanism described below would probably make it very easy.

This is the goal. In the first step (before events), you simply schedule
the Migration to tape later.

2. Base jobs:  In ADSM, each copy of each stored file is tracked in
the database.  Once a file (unique by path and metadata such as dates,
size, ownership, etc.) is in a copy pool, no more copies are made.  In
other words, when you start ADSM, it begins like your concept of a
base job.  After that it is in the "incremental" mode.  You can
configure the number of "generations" of files to be retained, plus a
retention date after which even old generations are purged.  The
database tracks the contents of media and projects the percentage of
each volume that is valid.  When the valid content of a volume drops
below a configured percentage, the valid data are migrated to another
volume and the old volume is marked as empty.  Note, this requires
ADSM to have an idea of the contents of a client, i.e. marking the
database when an existing file was deleted, but this would solve your
issue of restoring a client without restoring deleted files.

This is pretty far from what bacula now does, but if you are going to
rip things up for Base jobs,.....
Also, the benefits of this are huge for very large shops, especially
with media robots, but are a pain for shops with manual media
mounting.

>
> Base jobs sound pretty useful, but I'm not dying for them.

Nobody is dying for them, but when you see what it does, you will die
without it.

3. Restoring deleted files:  Since I think my comments in (2) above
have low probability of implementation, I'll also suggest that you
could approach the issue of deleted files by a mechanism of having the
fd report to the dir a list of all files on the client for every
backup job.  The dir could note in the database entry for each file
the date that the file was seen.  Then if a restore as of date X takes
place, only files that exist from before X until after X would be
restored.  Probably the major cost here is the extra date container in
each row of the files table.

Thanks for "listening".  I hope some of this helps.  If you want to
contact me, please send me an email - I read some but not all of the
mailing list traffic and might miss a reply there.

Please accept my compliments for bacula.  It is doing a great job for
me!!  I sympathize with you in the need to wrestle with excellence in
execution vs. excellence in feature inclusion.

Regards,
Jerry Schieffer

==============================

Longer term to do:
- Design a hierarchical storage scheme for Bacula: Migration and Clone.
- Implement FSM (File System Modules).
- Audit M_ error codes to ensure they are correct and consistent.
- Add variable break characters to lex analyzer.
  Either a bit mask or a string of chars so that
  the caller can change the break characters.
- Make a single T_BREAK to replace T_COMMA, etc.
- Ensure that File daemon and Storage daemon can
  continue a save if the Director goes down (this
  is NOT currently the case). Must detect socket error,
  buffer messages for later. 
- Enhance time/duration input to allow multiple qualifiers e.g. 3d2h
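  A minimal C parser for such compound durations (a qualifier is required
  after every number; illustrative only):

    #include <stdlib.h>

    /* "3d2h" -> seconds; returns -1 on malformed input. */
    long parse_duration(const char *s)
    {
       long total = 0;
       while (*s) {
          char *end;
          long n = strtol(s, &end, 10);
          if (end == s) return -1;          /* no digits */
          switch (*end) {
          case 'd': total += n * 86400; break;
          case 'h': total += n * 3600;  break;
          case 'm': total += n * 60;    break;
          case 's': total += n;         break;
          default:  return -1;              /* unknown qualifier */
          }
          s = end + 1;
       }
       return total;
    }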
- Add ability to backup to two Storage devices (two SD sessions) at
  the same time -- e.g. onsite, offsite.
- Add the ability to consolidate old backup sets (basically do a restore
  to tape and appropriately update the catalog). Compress Volume sets.
  Might need to spool via a file if only one drive is available.
- Compress or consolidate Volumes of old, possibly deleted files. Perhaps
  some way to do so with every volume that has less than x% valid
  files.


Migration: Move a backup from one Volume to another
Clone:     Copy a backup -- two Volumes


======================================================
        Base Jobs design
It is somewhat like a Full save that becomes an incremental relative to
the Base job (or jobs): only non-base files (and changed base files)
are actually written.
Need:
- A Base backup is the same as a Full backup, just a different type.
- New BaseFiles table that contains:
    BaseId - index
    BaseJobId - Base JobId referenced for this FileId (needed ???)
    JobId - JobId currently running
    FileId - File not backed up, exists in Base Job
    FileIndex - FileIndex from Base Job.
  i.e. for each base file that exists but is not saved because
  it has not changed, the File daemon sends the JobId, BaseId,
  FileId, FileIndex back to the Director who creates the DB entry.
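  The tuple the FD would send back for each unchanged base file, rendered
  as a C struct (field widths are guesses):

    #include <stdint.h>

    struct base_file_ref {
       uint32_t JobId;       /* job currently running */
       uint32_t BaseId;      /* BaseFiles index */
       uint64_t FileId;      /* catalog File row of the base copy */
       uint32_t FileIndex;   /* FileIndex from the Base job */
    };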
- To initiate a Base save, the Director sends the FD 
  the FileId, and full filename for each file in the Base.
- When the FD finds a Base file, he requests the Director to
  send him the full File entry (stat packet plus MD5), or
  conversely, the FD sends it to the Director and the Director
  says yes or no. This can be quite rapid if the FileId is kept
  by the FD for each Base Filename.          
- It is probably better to have the comparison done by the FD
  despite the fact that the File entry must be sent across the
  network.
- An alternative would be to send the FD the whole File entry
  from the start. The disadvantage is that it requires a lot of
  space. The advantage is that it requires less communications
  during the save.
- The Job record must be updated to indicate that one or more
  Bases were used.
- At end of Job, FD returns:
   1. Count of base files/bytes not written to tape (i.e. matches)
   2. Count of base files that were saved, i.e. had changed.
- No tape record would be written for a Base file that matches, in the
  same way that no tape record is written for Incremental jobs where
  the file is not saved because it is unchanged.
- On a restore, all the Base file records must explicitly be
  found from the BaseFiles table. I.e. for each Full save that is marked
  to have one or more Base Jobs, search the BaseFile for all occurrences
  of JobId.
- An optimization might be to make the BaseFile have:
     JobId
     BaseId
     FileId
  plus
     FileIndex
  This would avoid the need to explicitly fetch each File record for
  the Base job.  The Base Job record will be fetched to get the
  VolSessionId and VolSessionTime.
=========================================================  


========================================================== 
    Unsaved File design
For each Incremental job that is run, there may be files that
were found but not saved because they were locked (this applies
only to Windows). Such a system could send back to the Director
a list of Unsaved files.
Need:
- New UnSavedFiles table that contains:
  JobId
  PathId
  FilenameId
- Then in the next Incremental job, the list of Unsaved Files will be
  fed to the FD, which will ensure that they are explicitly chosen even
  if the standard date/time check would not have selected them.
=============================================================


=====
   Multiple drive autochanger data:  see Alan Brown
   >   mtx -f xxx unload
   Storage Element 1 is Already Full (drive 0 was empty)
   Unloading Data Transfer Element into Storage Element 1...source Element
   Address 480 is Empty

   (drive 0 was empty and so was slot 1)
   >   mtx -f xxx load 15 0
   no response, just returns to the command prompt when complete.
   >   mtx -f xxx status
   Storage Changer /dev/changer:2 Drives, 60 Slots ( 2 Import/Export )
   Data Transfer Element 0:Full (Storage Element 15 Loaded):VolumeTag = HX001
   Data Transfer Element 1:Empty
         Storage Element 1:Empty
         Storage Element 2:Full :VolumeTag=HX002
         Storage Element 3:Full :VolumeTag=HX003
         Storage Element 4:Full :VolumeTag=HX004
         Storage Element 5:Full :VolumeTag=HX005
         Storage Element 6:Full :VolumeTag=HX006
         Storage Element 7:Full :VolumeTag=HX007
         Storage Element 8:Full :VolumeTag=HX008
         Storage Element 9:Full :VolumeTag=HX009
         Storage Element 10:Full :VolumeTag=HX010
         Storage Element 11:Empty
         Storage Element 12:Empty
         Storage Element 13:Empty
         Storage Element 14:Empty
         Storage Element 15:Empty
         Storage Element 16:Empty....
         Storage Element 28:Empty
         Storage Element 29:Full :VolumeTag=CLNU01L1
         Storage Element 30:Empty....
         Storage Element 57:Empty
         Storage Element 58:Full :VolumeTag=NEX261L2
         Storage Element 59 IMPORT/EXPORT:Empty
         Storage Element 60 IMPORT/EXPORT:Empty
   $  mtx -f xxx unload
   Unloading Data Transfer Element into Storage Element 15...done

   (just to verify it remembers where it came from; however it can be
    overridden with mtx unload {slotnumber} to go to any storage slot.)
   Configuration wise:
   There needs to be a table of drive # to devices somewhere - If there are
   multiple changers or drives there may not be a 1:1 correspondance between
   changer drive number and system device name - and depending on the way the
   drives are hooked up to scsi busses, they may not be linearly numbered
   from an offset point either.something like 

   Autochanger drives = 2
   Autochanger drive 0 = /dev/nst1
   Autochanger drive 1 = /dev/nst2
   IMHO, it would be _safest_ to use explicit mtx unload commands at all
   times, not just for multidrive changers. For a 1 drive changer, that's
   just:

   mtx load xx 0
   mtx unload xx 0

   MTX's manpage (1.2.15):
         unload [<slotnum>] [ <drivenum> ]
                    Unloads media from drive  <drivenum>  into  slot
                    <slotnum>. If <drivenum> is omitted, defaults to
                    drive 0 (as do all commands).  If  <slotnum>  is
                    omitted, defaults to the slot that the drive was
                    loaded from. Note that there's currently no  way
                    to  say  'unload  drive 1's media to the slot it
                    came from', other than to  explicitly  use  that
                     slot number as the destination.
====

====
SCSI info:
FreeBSD
undef# camcontrol devlist
<WANGTEK 51000  SCSI M74H 12B3>    at scbus0 target 2 lun 0 (pass0,sa0)
<ARCHIVE 4586XX 28887-XXX 4BGD>    at scbus0 target 4 lun 0 (pass1,sa1)
<ARCHIVE 4586XX 28887-XXX 4BGD>    at scbus0 target 4 lun 1 (pass2)

tapeinfo -f /dev/sg0 with a bad tape in drive 1:
[kern@rufus mtx-1.2.17kes]$ ./tapeinfo -f /dev/sg0
Product Type: Tape Drive
Vendor ID: 'HP      '
Product ID: 'C5713A          '
Revision: 'H107'
Attached Changer: No
TapeAlert[3]:   Hard Error: Uncorrectable read/write error.
TapeAlert[20]:    Clean Now: The tape drive needs cleaning NOW.
MinBlock:1
MaxBlock:16777215
SCSI ID: 5
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x26
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x20
DeCompType: 0x0
Block Position: 0
=====

====
   Handling removable disks

   From: Karl Cunningham <karlc@keckec.com>

   My backups are only to hard disk these days, in removable bays. This is my
   idea of how a backup to hard disk would work more smoothly. Some of these
   things Bacula does already, but I mention them for completeness. If others
   have better ways to do this, I'd like to hear about it.

   1. Accommodate several disks, rotated similarly to how tapes are.  Identified
   by partition volume ID or perhaps by the name of a subdirectory.
   2. Abort & notify the admin if the wrong disk is in the bay.
   3. Write backups to different subdirectories for each machine to be backed
   up.
   4. Volumes (files) get created as needed in the proper subdirectory, one
   for each backup.
   5. When a disk is recycled, remove or zero all old backup files. This is
   important as the disk being recycled may be close to full. This may be
   better done manually since the backup files for many machines may be
   scattered in many subdirectories.
====


=== Done
- Why the heck doesn't bacula drop root privileges before connecting to
  the DB?
- Look at using posix_fadvise(2) for backups -- see bug #751.
  Possibly add the code at findlib/bfile.c:795
/* TCP socket options */
#define TCP_KEEPIDLE            4       /* Start keeplives after this period */
- Fix bnet_connect() code to set a timer and to use time to
  measure the time.
- Implement 4th argument to make_catalog_backup that passes hostname.
- Test FIFO backup/restore -- make regression
- Please mount volume "xxx" on Storage device ... should also list
  Pool and MediaType in case user needs to create a new volume.
- On restore add Restore Client, Original Client.
01-Apr 00:42 rufus-dir: Start Backup JobId 55, Job=kernsave.2007-04-01_00.42.48
01-Apr 00:42 rufus-sd: Python SD JobStart: JobId=55 Client=Rufus
01-Apr 00:42 rufus-dir: Created new Volume "Full0001" in catalog.
01-Apr 00:42 rufus-dir: Using Device "File"
01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
01-Apr 00:42 rufus-sd: Please mount Volume "Full0001" on Storage Device "File" (/tmp) for Job kernsave.2007-04-01_00.42.48
01-Apr 00:44 rufus-sd: Wrote label to prelabeled Volume "Full0001" on device "File" (/tmp)
- Check if gnome-console works with TLS.
- The director seg faulted when I omitted the pool directive from a
  job resource.  I was experimenting and thought it redundant that I had
  specified Pool, Full Backup Pool, and Differential Backup Pool, but
  apparently not.  This happened when I removed the pool directive and
  started the director.
- Add Where: client:/.... to restore job report.
- Ensure that moving a purged Volume in ua_purge.c to the RecyclePool
  does the right thing.
- FD-SD quick disconnect
- Building the in memory restore tree is slow.