I've implemented a DIST2 file cache to avoid useless LSVBITFD calls. This is a RECFM V file, BITEARN DISTSUM, which contains a "nodeid server_number" pair in each of its records. It does not contain any "server path" information, so there may still be a few LSVBITFD calls (at worst one per backbone server).

I sent a DIST2 job to our French NADs list (about 100 users). The first time (with the cache file erased) it took about 60 sec of CPU. The second time it took only 45 sec, which roughly corresponds to saving 100 LSVBITFD calls at 0.15 sec each. Now, this is quite small in relative value, because all 100 recipients get serviced by my LISTSERV. I XEDITed the cache file to move them all to LISTSERV@CEARN, and the same job was processed in only 5 sec (with only one output file being sent to CEARN) :-) The LSVBITFD saving would be the same in both cases, i.e. if the recipients had really been going to CEARN, the job would have taken 20 sec the first time. In the average case I think we can expect a 50% performance improvement for medium to large distributions (otherwise the time to compute the "server path" will be 90% of the total time). This only affects the "entry" server, which is the one spending the CPU time for all the calculations (with DIST2).

The cache file is built dynamically, i.e. when a new node is calculated, it is entered in the cache file. Domain addresses also get into the cache file, to avoid NAMEFIND calls on DOMAIN NAMES. The cache therefore does not take a lot of CPU time at once (which would be the case if I generated entries for all the nodes when receiving a new version of PEERS NAMES), and the file will not contain "useless" information which the server will never need to refer to. This will also help keep it smaller (I doubt it will ever reach 15k bytes, even on sites like CEARN or BITNIC).
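The lazy-fill idea above can be sketched roughly like this (a Python sketch of the general technique only, under my own assumptions; the class, the method names, and the stand-in lookup are all hypothetical and are not LISTSERV code):

```python
# Sketch of a lazily-built node cache: memoize the expensive
# per-node server lookup (the LSVBITFD call in the real thing),
# adding an entry only the first time a node is actually seen.

class NodeCache:
    def __init__(self):
        self._cache = {}   # nodeid -> server_number
        self.misses = 0    # count of expensive lookups performed

    def server_for(self, nodeid, expensive_lookup):
        """Return the server number for nodeid, caching the result."""
        if nodeid not in self._cache:
            self.misses += 1
            self._cache[nodeid] = expensive_lookup(nodeid)
        return self._cache[nodeid]

def fake_lookup(nodeid):
    # Hypothetical stand-in for the real LSVBITFD call;
    # pretend every node is served by server number 7.
    return 7

cache = NodeCache()
recipients = ["FRULM11", "CEARN", "FRULM11"]   # one node repeated
servers = [cache.server_for(n, fake_lookup) for n in recipients]
# only 2 expensive lookups were done for 3 recipients
```

As in the real cache file, only nodes that are actually referenced ever get an entry, so the cache stays small and costs nothing up front.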
The cache file is kept in storage (LSVFILER KEEP) during the execution of LSVDIST2 and released before exit, so that no virtual storage is wasted, as would be the case with a GLOBALV solution.

  Eric