Jose Maria,
The problem is not that easy. Providing the filemode is useless unless the
filename and filetype match. That is, if DMTAXM LCLUPD01 is stored under the
name 00002301 RSCSMODS F1, putting a filemode of "F" in the filelist really
does not help a bit... ;-) It would help, of course, if the fileids were
identical, but this would not be the case under normal circumstances. Remember,
the filelist owner has no control whatsoever over the real fileid the file will
have, except that filelists will always have a fileid of "filename FILELIST",
and the same goes for $PACKAGE files. Other files will be assigned harmless
fileids. I have always found the filemode information on the Netserv filelists
very puzzling, because it is completely irrelevant outside the Netserv node.
That's why I remove it from the display in NETLIST.
There is also a technical problem. The filemode info will NOT be provided by
the filelist owner when he does a PUT command, since he has no control over it.
So
it will have to be generated either when the filelist is stored or when it is
ordered by the user. The latter would probably treble the GET command's CPU
requirements :-( The former would not work on those filelists which are shipped
with the code since the PUT command is never performed :-( So as you can see
there is no obvious solution.
All:
There is a missing quote in LSV_CNTRL EXEC line 48, after "origin", which
causes LISTSERV to crash whenever the exec is invoked.
There is a bug in LSVSRVID EXEC, which crashes for nodenames containing a '-'
character. This is very surprising and unexpected: you can do:
a = 'zeg-sdhfj'
b.a = 12
say b.a
but you can't use "say Value('b.'a)", presumably because the constructed name
contains a '-' and so is not a valid symbol. That's not consistent, I think.
Oh well.
Chris:
About the priority issue, there is nothing to do, I think. The 80,000-recs file
can be sent via RSCS with prior 0, so sending it via LISTSERV with prior 0 is
possible too. Note that files > 10,000 recs can't be sent via DISTRIBUTE.
As for splitting large jobs... You may remember the conversation on RSCSMODS
about large files. My last suggestion was that RSCS automatically forward
large files to a server which would split them and send the pieces to a server
of the same ilk (as near to the destination as possible), which would then
rebuild the file. Several people suggested that LISTSERV be used for that, and
that I write the code...
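For what it's worth, the mechanics of the split/rebuild step are simple
enough. Here is a minimal sketch (in Python for brevity, though LISTSERV is of
course REXX; the FILEMAX value and the piece-header layout are assumptions of
mine, not the real formats):

```python
# Sketch of the split/rebuild scheme: the sending server cuts a large file
# into numbered pieces no bigger than FILEMAX records, and the server nearest
# the destination rebuilds the file once every piece has arrived.
# All names and the piece-header format here are hypothetical.

FILEMAX = 10000  # assumed max records per piece (cf. the DISTRIBUTE limit)

def split_file(fileid, records, max_recs=FILEMAX):
    """Cut 'records' into pieces, each tagged with its position and total."""
    total = (len(records) + max_recs - 1) // max_recs
    pieces = []
    for n in range(total):
        chunk = records[n * max_recs:(n + 1) * max_recs]
        # the header identifies the original file and this piece's position
        pieces.append({"fileid": fileid, "part": n + 1, "of": total,
                       "records": chunk})
    return pieces

def rebuild_file(pieces):
    """Reassemble once all pieces are in; they may arrive in any order."""
    total = pieces[0]["of"]
    if len(pieces) != total:
        raise ValueError("still waiting for %d piece(s)"
                         % (total - len(pieces)))
    out = []
    for p in sorted(pieces, key=lambda p: p["part"]):
        out.extend(p["records"])
    return out
```

The point is that the large file would travel as several small jobs, each
under the limit, and tie up spool space only at the two endpoint servers.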
Pros:
1) There are 41 LISTSERVs in the world at present. Quite a comfortable number.
2) All the LISTSERVs could be made to accept the split jobs, while the actual
generation of those jobs would be an installation option which would require
RSCS mods anyway. It would be supplied as a set of local commands.
3) LISTSERV is able to determine the nearest server to any given node.
4) It could be made automatic with DISTRIBUTE, i.e. a large file sent via
DISTRIBUTE would be automatically split up.
5) It could be implemented in two steps: (1) provide the feature in LISTSERV,
test it; (2) when it has been thoroughly tested, modify RSCS to cause the
files to be processed that way.
Cons:
1) LISTSERV only has 2,500 blocks of disk space, not enough for large files.
(However, the split jobs sent to it would not exceed the 'FILEMAX' limit, so
there is no problem here.)
2) This would cause LISTSERV to eat more CPU and do more I/Os. It might also
require a larger A-disk (for temp info about pending files) at hub nodes.
Some people might not agree about this.
3) This is not the prime purpose of LISTSERV.
4) It would make LISTSERV unavailable for long periods of time while a large
file is being reconstructed. Say the server stops replying to commands for 1
minute: users might send commands again and again, and eventually contact
staff people who might FORCE the server.
5) It wouldn't be very easy for me to work on that, since we're a dead-end
node. Also, such a program would unavoidably require 'real scale' tests (for
SIFO problems, etc), which means sending 4,000-recs junk files on the network.
If this has to be done for the good of the network I don't mind, but some
people might react differently, and I'm certain I'd get blamed by the local
authorities for it. I won't run those tests myself and would rather have them
done on REAL links which are internal to a given institution, something we
don't have here (*sigh* wish we had a couple of 3081s running VM and MVS for
the scientists' jobs...)
So what's your opinion about this? Note that I'm not discussing political
problems but technical ones; let's keep the political crap for later,
please!! :-)
Eric