Sometime recently, the listserv at CUVMA became the listserv at CUVMB. Last night, I did a review of the I-KERMIT list. Without waiting for the results, I disconnected and went home. About 10:00 P.M. I got a call from our operations staff saying they were getting console messages about the spool file limit being exceeded for my userid. Since we don't run with spool file limits, that means I had exactly 9999 spool files (the VM/XA limit). Not good!

I dialed in, and they were all copies of the I-KERMIT list from various peers. I purged all the files, but they continued to come in at a rate of about one every two seconds. I removed some of the peer links to try to break the loop, but the major part of the loop appears to be Columbia looping with itself. I just tried to call them on the phone, but they don't open until 9:00.

I would appreciate it if all sites which host the I-KERMIT list would check their listserv and purge all rdr files requesting a review of I-KERMIT, as well as kill all outbound copies of I-KERMIT NODEID headed for UGA.

Currently the list topology looks like this:

*
* PEERS= I-KERMIT@BNANDP11,I-KERMIT@UTORONTO,I-KERMIT@CLVM
* PEERS= I$KERMIT@CUVMA,I-KERMIT@DEARN,I-KERMIT@EB0UB011
* PEERS= I-KERMIT@RUTVM1,I-KERMIT@VTVM1,I-KERMIT@VTVM2
* PEERS= IKD@FINHUTC,I-KERMIT@HEARN,I-KERMIT@MARIST
* PEERS= I-KERMIT@TCSVM,I-KERMIT@UBVM,I-KERMIT@UGA
*
* List Topology
*
*       UTORONTO    CLVM       HEARN
*           |         |          |
*         UBVM     MARIST      DEARN
*           |         |          |
*     UGA-----VTVM2-----CUVMA-----EB0UB011-----FINHUTC
*      |        |         |          |
*    TCSVM   VTVM1    RUTVM1    BNANDP11

Later today, when my reader clears out, I will get the list headers from all sites and change CUVMA to CUVMB. I hope that will fix the problem. All BITNET core sites should also delete all copies of I-KERMIT NODEID (or I$KERMIT CUVMB or IKD FINHUTC) headed for node UGA.

Eric, how hard would it be to check for some number of review commands on a given list in a given time, and reject those which appear to be looping?

Harold
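P.S. The check I'm asking Eric about could amount to a sliding-window counter: count review commands per list, and reject any that arrive after the window fills up. A minimal sketch in Python follows; the class name, limit, and window values are all hypothetical illustrations, not anything LISTSERV actually implements.

```python
from collections import defaultdict, deque

class ReviewLoopGuard:
    """Reject a review command for a list when more than `limit`
    such commands arrive within `window` seconds (hypothetical sketch)."""

    def __init__(self, limit=5, window=3600):
        self.limit = limit                # max review commands per window
        self.window = window              # window length in seconds
        self.seen = defaultdict(deque)    # list name -> arrival timestamps

    def allow(self, list_name, now):
        times = self.seen[list_name]
        # Drop timestamps that have aged out of the window.
        while times and now - times[0] > self.window:
            times.popleft()
        if len(times) >= self.limit:
            return False                  # looks like a loop; reject
        times.append(now)
        return True
```

With `limit=3` and `window=60`, a fourth review of the same list inside a minute would be rejected, while reviews of other lists, or reviews arriving after the window clears, still go through.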