LSTSRV-L Archives

LISTSERV Site Administrators' Forum

Stan Horwitz <[log in to unmask]>
Tue, 22 Sep 1992 21:24:26 EDT
On another list, I made some statements about the efficiency, or lack
thereof, of the way Listserv postings are distributed on individual hosts
after they pass through Bitnet. Now I am wondering whether our Listserv,
or our VM system itself, is being run properly in this regard. If this is
a problem peculiar to this system, I would certainly like to know about
it and have our systems people fix it.
 
If, for example, 200 people on this system are signed up to a particular
Listserv distribution list, our system ends up holding 200 copies of
every posting sent to that list, one for each subscriber. There are, in
fact, several lists here which are very popular and very prolific, so
this example is realistic. Are we doing something very wrong on our
system? It seems wasteful to keep many copies of the same posting in this
system's spool area when one centrally accessible copy would do quite
nicely. My understanding, however, is that this is just the way things
are done on VM and that there is nothing anyone can do to change the
situation. Perhaps I am wrong. If so, I'd like to know so I can ask our
systems people to fix it.
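To make the waste concrete, here is a back-of-the-envelope sketch in
Python; every figure except the 200 subscribers is a made-up assumption,
chosen only to illustrate the scale:
 
    # Rough illustration of per-subscriber spool duplication.
    # Only SUBSCRIBERS comes from the example above; the traffic
    # and size figures are hypothetical.
    SUBSCRIBERS = 200        # local readers of one popular list
    POSTINGS_PER_DAY = 50    # traffic on a prolific list
    AVG_SIZE_KB = 4          # typical size of one posting
 
    per_copy = POSTINGS_PER_DAY * AVG_SIZE_KB   # one reader's daily share
    duplicated = SUBSCRIBERS * per_copy         # one spool file per reader
    shared = per_copy                           # a single central copy
 
    print(f"spool used with per-reader copies: {duplicated:,} KB/day")
    print(f"spool used with one shared copy:   {shared:,} KB/day")
    print(f"overhead factor: {duplicated // shared}x")
 
With these assumed numbers, the per-reader scheme costs 200 times the
spool of a single shared copy; the factor is simply the subscriber count.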
 
The reason I am posting this here is that several people thought I was
insane when this topic came up elsewhere. You know how insanity is; the
victim is always the last to know. :) I heard that some sites have
implemented a bulletin board system to centralize incoming Listserv
postings. Is this what most sites do? Maybe there's an outside chance
that it's not necessary. I could probably implement that type of facility
here, but it would take a while because I'm already too busy with other
matters. Is there perhaps some not commonly known way of improving the
internal mechanism on VM systems to change the mode of distribution of
incoming list postings?
 
From my account, I can easily see these replicated postings in our
system's reader, so this is definitely not a figment of my imagination,
as some people on another distribution list implied. I even wrote a SAS
program a couple of years ago which summarizes our spool file situation.
A year or two ago, on our previous system, I often had to use this
program to help head off problems resulting from the large volume of
spooled files here. We still have many users who frequently hold several
hundred spooled files each; the maximum is something like 30,000. If
there's a way to improve this situation without imposing any user
restrictions, I would certainly like to know. With about 3,000 users on
this system, this situation can cause problems, although they have been
rare lately thanks to our bigger system.
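For anyone curious what such a summary looks like, here is a loose
sketch of the idea in Python rather than SAS; the input record format
("owner origin size", one spool file per line) is hypothetical, not any
real CP/VM listing:
 
    # Sketch of a spool summarizer: report which users hold the
    # most spool files and which postings are replicated most widely.
    import sys
    from collections import Counter
 
    files_per_owner = Counter()   # spool files held by each user
    copies_per_post = Counter()   # identical (origin, size) postings
 
    for line in sys.stdin:
        fields = line.split()
        if len(fields) != 3:      # skip blank or malformed records
            continue
        owner, origin, size = fields
        files_per_owner[owner] += 1
        copies_per_post[(origin, size)] += 1
 
    print("Users holding the most spool files:")
    for owner, count in files_per_owner.most_common(10):
        print(f"  {owner:<10} {count:>7}")
 
    print("Postings replicated across the most readers:")
    for (origin, size), copies in copies_per_post.most_common(10):
        print(f"  {origin:<12} {size:>9} bytes  x{copies}")
 
The second report is the interesting one here: a posting that shows up
with the same origin and size in hundreds of readers is exactly the
duplication I am asking about.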
 
My disclaimer follows. Some of you probably saw my postings elsewhere
regarding Listserv. Let's not continue that thread here, or privately.
For the record, I happen to like and respect Listserv a lot. We intend to
keep Listserv and Bitnet at Temple. It's also time to toot my own horn,
since I am feeling a little defensive right now. It is accurate to say
that the reason we have continued to run a Listserv here at Temple is
almost entirely a result of my efforts at popularizing our Listserv in
the past. The view among some managers here a year or two ago was that
Listserv was silly because it had little to do with academic pursuits. It
took a lot of posturing on my part to encourage those who held this view
to reconsider. My goal now is simple: I want an answer to the above
question. I will not participate in any long-winded debates on anything
at this time. Any responses to my question will be appreciated.
 
Take care everyone,
Stan Horwitz
Listserv Postmaster
Temple University
Acknowledge to: OASIS@TEMPLEVM (or VM.TEMPLE.EDU)
