On 27 Feb 2008 at 16:25, Valdis Kletnieks wrote:

>Back when we were still running 1.8mumble on an AIX box, we had the *opposite*
>problem - very large archives would cause the indexer to blow out on memory,
>so the solution was to split them up - it could handle 2 100M files in sequence
>using less memory than 1 200M file (or whatever it was that caused the kablam).

Back about 3 years ago I also went through the hassle of having a server that
went catatonic in the middle of the night, apparently due to memory problems,
and I'd been down a similar road many years before with VM/CMS.  In my own
experience, working with much smaller systems than would be found today, the
problems often showed up during archive searches, but seemed to be the result
of an *aggregate* memory shortage.  On VM/CMS we could stretch our uptime by
pruning signup files.  That last round (on Maelstrom - DEC VMS) resisted all
of the old tricks, but the crashes stopped - permanently - as soon as I
(copied for the owner and...) trimmed my three largest changelogs.

-Kary