LSTSRV-L Archives

LISTSERV Site Administrators' Forum

Bill Verity <[log in to unmask]>
Wed, 16 Nov 2005 14:57:20 -0500
Before we got off the backbone, I did notice that web performance was OK even when I had a huge X-SPAM backlog. The backlog was in .mail files, which were all just sitting there waiting to be handed to sendmail. So my mail turnaround times were unacceptable, but the web response for owners was acceptable.
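For anyone who wants to gauge the same kind of backlog, a quick check is just to count the queued .mail files. This is only a sketch; the spool path below is hypothetical and will differ per install:

    # count queued outbound .mail files in the LISTSERV spool
    # adjust the path to wherever your spool actually lives
    ls -1 /home/listserv/spool/*.mail 2>/dev/null | wc -l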

At 8:12 PM +0100 11/16/05, Eric Thomas wrote:
>I think we have an assumption in this thread that the X-SPAM jobs are the
>cause of Paul's problems, and that Paul is forced to accept poor web
>response as a result of this legacy spam prevention function. Based on my
>knowledge of the code, I would challenge this assumption. Just because there
>are a lot of X-SPAM jobs doesn't necessarily mean that they tie up LISTSERV.
>There is an easy way to find out: simply add DEBUG_TIME_ALL=1 to your
>configuration. This will make LISTSERV log execution timing for every
>command it processes. There is *some* overhead outside of what is captured
>by DEBUG_TIME_ALL=1, but it is not very much. I went looking for a really
>old backbone site where I have administrator access, and the oldest I could
>find was a 733MHz Intel server, where I get the following timings:
>
>16 Nov 2005 19:32:50 From [log in to unmask]: X-SPAM [redacted]
>16 Nov 2005 19:32:50 -> Registered.
>VCPU=0.000 TCPU=0.015 T-V=0.015 SIO=6 PGIO=0 Elapsed: 0.016 sec (93.8%)
>
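A minimal sketch of turning that on, assuming a unix install where the site configuration lives in go.user (on other platforms the same DEBUG_TIME_ALL=1 setting would go in the site configuration file instead):

    # go.user -- add the timing switch and export it, then restart LISTSERV
    DEBUG_TIME_ALL=1
    export DEBUG_TIME_ALL

Once LISTSERV has been restarted, timing lines like the ones above should start appearing in the LISTSERV log for each job it processes.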
>The job took less than two hundredths of a second. Let's say three
>hundredths of a second, counting the overhead that is not measured by
>DEBUG_TIME_ALL=1, although it's probably less than that. You would need to
>process 15 of these jobs in a row to introduce a delay of about half a
>second, which is about the minimum a web user is going to notice. And that
>is on hardware that is around 5 years old.
>
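Spelled out, the arithmetic behind that estimate (using the padded figure of 0.03 sec per job):

    15 jobs x 0.03 sec/job = 0.45 sec

which is right around the half-second delay Eric cites as the minimum a web user will notice.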
>Of course, there might be special circumstances at ND. For one thing, my
>test was on 14.4 and the server only has a handful of lists.
>
>I don't really have anything against adding an option to discard X-SPAM
>jobs, but frankly, I doubt that it will make any difference, especially on
>recent hardware. I am more interested in figuring out why Paul is
>experiencing poor web performance than in disabling X-SPAM simply because it
>is old and only catches a fraction of the spam. Speaking as the
>administrator of the old server on which I ran these benchmarks, I am quite
>happy to spend a few hundredths of a second a few times per minute to catch
>even just one spam every day before it hits a list. The cost of the
>aggregated horsepower for all these X-SPAM jobs is far less than the
>manpower cost of even one single spam getting to a large number of people
>who get distracted and have to discard it. Besides, it's not like I have to
>pay for the CPU time by the hour, or get a partial refund from the
>manufacturer if I maintain a certain idle time - I wish :-)
>
>As for jobs having priority over interactive requests, this is definitely
>not the way it is meant to be. If indeed this is what is happening at ND, it
>is a bug and needs to be fixed.
>
>  Eric


-- 
Bad spellers of the world...UNTIE!!!

Bill Verity - 814-865-4758  Fax: 814-863-7049
215A Computer Building - Information Technology Services, Penn State University
At the office - on my Mac, of course ;-)
