> (1) The default limit should be raised to at least 5 times the current
> (256kb) limit
The current limit is already very high: if every single user in the network
were allowed to order 1.2 Mb (5 x 256 kb) of data a day from each server, it
would be like having no limit at all.
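For scale, the proposed figure is 5 x 256 kb = 1280 kb, roughly 1.2 Mb per
user per day. A per-user daily quota check of the kind under discussion could
be sketched as follows (hypothetical names only, not actual LISTSERV code):

```python
from collections import defaultdict

DAILY_LIMIT = 5 * 256 * 1024  # proposed limit: 5 x 256 kb ~= 1.2 Mb/user/day

# bytes ordered so far, keyed by (user, day)
usage = defaultdict(int)

def may_order(user, day, size):
    """Return True (and charge the quota) if sending `size` more bytes
    keeps `user` under the daily limit; False otherwise."""
    if usage[(user, day)] + size > DAILY_LIMIT:
        return False
    usage[(user, day)] += size
    return True
```

The point of the arithmetic is Eric's: at that ceiling the check almost never
fires for a human user, so in practice it behaves like no limit at all.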
> In addition, a whole package should be sent when requested,
That is right, and I'll do something about it as soon as the EARN business is
done with. As I might already have said (or forgotten to say), I won't
distribute any change to LISTSERV until after the EARN business is over,
because some people have insinuated that I would make sure to introduce
serious bugs in LISTSERV before I hand it over to EARN. By giving the present
FIX15O1 version, unchanged since 17-Jan-89, to EARN, I will make sure that
nobody can seriously pretend I have acted in such a way.
> (2) The limits should not be time-wise, but should be based on the number of
> links crossed (...)
That's too complicated, both to code and to explain to an end-user. That would
also eat a lot of CPU time. In any case, LSV$GETQ is an exit so that you can
code whatever you want into it.
> (3) Some sort of mechanism should be put into place so that you must
> confirm that you want a specific file a second time during a 24 hour period,
> and this confirm mechanism will ensure that you cannot just specify
> 'override' on every command, but that it must be in response to a Listserv
> advisory
It must be possible to specify 'override' on any command, because you might
have a service machine or some such requesting a file from LISTSERV, and you
don't want it to have to wait for a mail reply to the command (messages can't
be trusted, as you know), read it, resend the command with the confirmation,
and only then get the file. I had planned to do this, but the problem is
remembering who got what file and when, which is why I wanted to make it part
of the file statistics support that I planned to introduce some day (all of
that was before EARN decided they needed to own LISTSERV, of course).
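The bookkeeping Eric describes - remember who got what file and when, advise
on a repeat request within 24 hours, but let 'override' bypass the advisory -
could be sketched like this (an illustration with invented names, not a
description of LISTSERV internals):

```python
import time

CONFIRM_WINDOW = 24 * 60 * 60  # 24 hours, in seconds

# (user, file) -> time of last delivery
last_sent = {}

def request_file(user, fname, override=False, now=None):
    """Return "SEND" or "ADVISORY" for a file request, recording deliveries.

    A repeat request within 24 hours gets an advisory asking for
    confirmation, unless 'override' was given (e.g. by a service machine
    that cannot read mail replies).
    """
    now = time.time() if now is None else now
    key = (user, fname)
    repeat = key in last_sent and now - last_sent[key] < CONFIRM_WINDOW
    if repeat and not override:
        return "ADVISORY"   # ask the user to confirm the repeat request
    last_sent[key] = now    # remember who got what file, and when
    return "SEND"
```

The `last_sent` table is exactly the per-user, per-file history whose storage
cost is the obstacle Eric mentions.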
> (4) If some sort of regional association of backbone sites could be formed
> (...)
>
> (a) The sub-server that has the file because of space considerations at the
> main (regional) server site, or;
>
> (b) A server that is closer to the site that is requesting the file than the
> regional server, and the files have been sub-stored on that server, or the
> site is in fact a closer regional server.
(b) might be down at the time of the request, which might be the very reason
the request was not sent there in the first place. An override option is
therefore required. In any case this is a LOT of work - having to keep track
of which server has which file, keeping that information up to date
regardless of the order in which the updates arrive, handling lost files,
etc... You're asking for something as complicated as the UDD database, and
that takes time to implement.
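The hard part Eric singles out - keeping the catalogue correct regardless of
the order in which updates arrive - is the classic last-writer-wins problem:
each update must carry a timestamp or version so that a late-arriving, stale
update can be discarded. A minimal illustration (hypothetical names, nothing
like actual server code):

```python
# catalogue: file name -> (update timestamp, set of servers holding the file)
catalog = {}

def apply_update(fname, stamp, servers):
    """Apply a replication update, ignoring stale ones arriving out of order."""
    cur = catalog.get(fname)
    if cur is None or stamp > cur[0]:
        catalog[fname] = (stamp, set(servers))

apply_update("X FILE", 2, {"SERVA", "SERVB"})
apply_update("X FILE", 1, {"SERVA"})  # late, stale update: discarded
```

Even this toy version assumes every update carries a trustworthy stamp and
that no update is silently lost - the two assumptions the letter says cannot
be made on the network as it stood.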
> All files have an origin point and are sent to this origin by the person
> making the updates.
What if it is down? How can I update the other peers?
> This Listserv then sends the update job to all other regional servers
> housing the file in question. The Regional server then updates all its local
> sub-servers.
What is a "local sub-server"? Who defines to which region a given server
belongs? What happens if network delays cause a server to be in no region, or
more than one region, for a transient but non-negligible amount of time?
Eric