LSTSRV-L Archives

LISTSERV Site Administrators' Forum

Eric Thomas <[log in to unmask]>
Mon, 13 May 1996 18:40:45 +0200
On Fri, 10 May 1996 18:10:34 EDT "Steven P. Roder"
<[log in to unmask]> said:
 
>The problem here is that VM:backup detects changes to the data as it
>is being dumped, so it rewinds the tapes, and even remounts tapes, if
>needed, and tries again, for a configurable number of times, after
>which the disk or filespace is either skipped, or dumped physically,
 
This is seriously brain dead. Since tapes are at best almost as fast as
the disks, your chances of succeeding in making that kind of backup on
a large active minidisk are close to zero. With the 500-1500 cyl disks
that were mentioned, this algorithm is just a sophisticated way to do
CP SLEEP 180 MIN, and then make a physical backup, which does *not*
have better integrity. Not only do you waste a lot of time, but the
resulting backup has sub-optimal integrity, compared to what could have
been achieved with file-level retries.
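
To put rough numbers on "close to zero", here is a back-of-the-envelope
sketch (Python); the dump time, change rate and per-file time below are
invented for illustration, not measurements, and treating changes as
independent arrivals is only an approximation:

  import math

  dump_minutes = 30.0                  # assumed time to dump a large minidisk
  mean_minutes_between_changes = 3.0   # assumed activity on a busy disk

  # If any change during the pass forces a restart, the chance of a
  # clean whole-disk pass falls off exponentially with the dump time:
  p_whole_disk = math.exp(-dump_minutes / mean_minutes_between_changes)
  print(f"P(clean whole-disk pass) = {p_whole_disk:.6f}")   # ~0.000045

  # With per-file retries, only the time spent on the one changing file
  # matters, so each retry has a realistic chance of succeeding:
  per_file_minutes = 0.05
  p_per_file = math.exp(-per_file_minutes / mean_minutes_between_changes)
  print(f"P(clean per-file pass)   = {p_per_file:.4f}")     # ~0.98

Retrying three or five times barely moves the first number, which is
why the whole exercise degenerates into the CP SLEEP scenario above.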
 
>Our filestores are in SFS, which exacerbates the problem, since the
>dumps are a bit slower.
 
But with SFS the file wouldn't change during the backup (that is, SFS
will shadow the file so that the copy you see retains integrity). Of
course, if VMBACKUP queries SFS explicitly for new changes, that's
another story. Also, on a notebook SFS directory, only a handful of
files will be changing during the backup operation.
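
For what it's worth, the shadowing idea itself is easy to picture; the
toy model below (Python) shows only the general concept, a reader keeps
the blocks it opened while writers produce a new version, and is not a
description of how SFS actually implements it:

  class ShadowedFile:
      def __init__(self, blocks):
          self.current = list(blocks)   # latest committed version

      def open_for_backup(self):
          # the backup works from a frozen copy of the block list
          return list(self.current)

      def update(self, index, data):
          # writers build a new version; the frozen copy is untouched
          new_version = list(self.current)
          new_version[index] = data
          self.current = new_version

  f = ShadowedFile(["old-1", "old-2"])
  snapshot = f.open_for_backup()
  f.update(0, "new-1")
  print(snapshot)    # ['old-1', 'old-2'] -- backup still sees a consistent file
  print(f.current)   # ['new-1', 'old-2']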
 
Anyway, this is a completely idiotic algorithm. What good is a DDR-like
backup of a very active minidisk? The files that don't change will be
OK (unless the directory data is corrupt in the dump, which *can*
happen, although it is very unlikely), and the files that change will
give you an "error 3 reading file" or will contain garbage, and the
only way to know which files were affected is to review them manually.
A file-level backup with file-level retries (which could be grouped for
small files, for better performance) could have at least identified the
corrupt files for your verification.
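
Something along these lines would have been enough; the names and the
in-memory stand-ins below are hypothetical, and the grouping of small
files into a single retry unit is left out to keep the sketch short:

  MAX_RETRIES = 3

  files = {"PROFILE EXEC": "...", "NOTEBOOK 9605": "..."}   # toy data
  mod_count = {name: 0 for name in files}                   # bumped by writers

  def backup_file(name, tape):
      for _ in range(MAX_RETRIES):
          before = mod_count[name]
          data = files[name]               # read the file
          if mod_count[name] == before:    # unchanged while we read it?
              tape.append((name, data))
              return True
          # changed under us: retry just this one file, not the whole disk
      return False

  def backup_all(names, tape):
      # files that never produced a clean copy are reported explicitly,
      # instead of silently ending up as garbage in a physical dump
      return [n for n in names if not backup_file(n, tape)]

  tape = []
  print("needs manual verification:", backup_all(list(files), tape))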
 
  Eric
