It seems my userid is still active, and by night EB0UB011 is staff-free
( :-) ), so I'll try to raise what I find an important topic before leaving.
I would have liked to discuss this some time after 1.5j was distributed, but
this is possibly my last access to the net for a while.
Now that LISTSERV 1.5j is to be released soon, we'll be able to have peered
filelists at will. I think it is time to try to organize the current mess of
servers on the net. Last year, when I started discovering EARN and BITNET, I
spent about two months of my spare time finding out what servers existed,
what they contained, etc. (see Chris Condon's work on that, and ask him if
you don't believe me :-)). Each server has a different syntax and offers
different services; most of them don't have Internet capabilities, and the
format of their directories is seldom informative. With LISTSERV these
services can be migrated with great advantage (some have already made the
migration, e.g. TCSSERVE). But I feel that migrating to LISTSERV is not
sufficient. What I'd like to see is a standardized set of well-documented
LISTSERV filelists, one for each topic (possibly with some children in a tree
structure). For example, there could be a REXXUTIL FILELIST with all those
nice utilities that operate on REXX programs (like REXXCOMP et al), another
(this one probably forming a tree structure) for Kermit programs, another for
VM SYSPROGS, etc. All those filelists could be peered between the sites
willing to host a copy; a central directory of which filelists exist and
where they can be found should be maintained (an extract of it could probably
replace the current fileserver section of BITNET SERVERS).
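To make the idea concrete, here is a minimal sketch (in modern Python, with
made-up filelist and node names) of what such a central directory could look
like: each standardized filelist maps to the peer sites hosting a copy, and
filelists can name children to form the topic tree.

```python
# Hypothetical central directory of standardized filelists.
# Each entry: filelist name -> sites hosting a copy, plus child filelists
# that hang below it in the topic tree (all names are invented examples).
directory = {
    "REXXUTIL":  {"hosts": ["NODE-A", "NODE-B"], "children": []},
    "KERMIT":    {"hosts": ["NODE-C"],           "children": ["KERMIT-VM"]},
    "KERMIT-VM": {"hosts": ["NODE-C", "NODE-D"], "children": []},
}

def sites_for(filelist):
    """Return the sites hosting a copy of the given filelist, or None."""
    entry = directory.get(filelist)
    return entry["hosts"] if entry else None
```

A user (or a server exit) could then answer "where can I get files from
filelist X?" with a single lookup instead of polling every server.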
Some trivial user exits to redirect GETs to the nearest server could be
written to reduce traffic; and perhaps an approach like Eric's with
PEERS FILELIST could also be taken: we could define a 'server backbone'
(perhaps different from the current backbone) and store *all* the directories
on each server, along with information about where the real files are hosted
for each filelist. Then every server in the backbone would virtually offer
the complete library while hosting only a part of it; users would be able to
request any file in the standard library (which will be huge) without having
to search every server, and would hardly notice whether or not the file they
requested was physically on the server they sent their command to.
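The redirection idea above can be sketched as follows (again in Python, with
invented node names and hop counts): every backbone server keeps the full
catalog of which sites physically store each file, and forwards a GET to the
closest one.

```python
# Hypothetical catalog kept on every backbone server:
# file name -> sites that physically store it (all names invented).
catalog = {
    "REXXCOMP EXEC": ["NODE-A", "NODE-D"],
    "KERMIT MODULE": ["NODE-C"],
}

# Network distance (in hops) from *this* server to each hosting site;
# in practice this would come from the network routing tables.
hops = {"NODE-A": 3, "NODE-C": 1, "NODE-D": 5}

def redirect_get(filename):
    """Return the nearest site holding the file, or None if unknown."""
    sites = catalog.get(filename)
    if not sites:
        return None
    return min(sites, key=lambda site: hops[site])
```

From the user's point of view the whole library appears to live on the server
they asked, even though each backbone member stores only its own share.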
I realize all that would represent an *enormous* amount of work (one of the
most boring parts being contacting file owners, getting permissions,
explaining how to maintain the files remotely, etc.), but I'm sure there will
be people willing to give part of their time to this project. I'm afraid it
won't work without serious coordination, though.
That's one of my dreams. I'm sure that if someone shares it, I'll find it
started when I'm back on the net, thanks to the good work of all of you.
Jose Maria