LSTSRV-L Archives

LISTSERV Site Administrators' Forum

From: Michael Wagner +49 228 303 245 <WAGNER@DBNGMD21>
Date: Mon, 7 Mar 88 17:13:00 CET

  I'm not sure how I received this, since it seems to have been from
  the JES2 list, which I am not a member of.  Eric Thomas seems to
  have re-distributed it to the LSTSRV-L group, although I'm not
  sure why.  I am not used to seeing RFC821 in headers, so I'm not
  really sure what happened (I also haven't a clue what a LINKSWT
  FILE is.  Eric?  What did you do?).
 
  None the less, I know something about the question, so I thought
  I'd answer.
 
> From:         Hilaire De Turck <SYSTHDT@BLEKUL21>
> Subject:      Performance considerations for a CTC connection.
>
> At our site we have a CTC connection between an MVS/XA system and a
> VM computer.
 
  Just to make this a little more explicit, I am assuming this means:
  - the nodes are BLEKUL21 and BLEKUL11, whose descriptions I looked up
    in the nodes database.
  - they are 2 separate physical machines (rather than MVS under VM)
  - CTCA between them (and not 3088)
  - NJE/NJI protocols spoken over the connection (and not VTAM)
  - MVS/XA/JES2 at one end
  - VM/RSCS at the other end
 
> I have ... questions concerning this CTC:
 
> 1) Which JES2 parameters do I have to take into account for speeding
> up the connection?
 
In a similar situation (non-XA), I did some performance studies, and
found that the CTCA connection was vastly underutilized compared to
our expectations.  We never had more than 10% utilization on the
CTCA.  It seems that our expectations did not take into account some
practical realities of the situation.  They are, in rough order:
 
1. the bandwidth of a single VM spool volume (assume 3380) being
   driven by a single virtual machine with no other traffic is about
   80-120K/sec (by comparison, the CTCA bandwidth is an order of
   magnitude higher).  Moreover, this degrades badly under load (I
   can explain this if people care, but it is a bit long for this
   note).  No I/O compute overlap is allowed.
 
2. the bandwidth of an MVS spool volume (again 3380) being driven by
   MVS is somewhat higher, perhaps twice as high.  This is available
   if you have turned on double buffering to the spool and if you
   are using track-celling (probably a wise idea anyway if you have
   any modern fast printers such as lasers).  This degrades more
   gradually under load.  Some I/O compute overlap is allowed.
 
3. Protocols: CTCAs are half duplex, and the protocol over such
   connections seems to be lock-step.  This means that there is
   almost no processing overlap anywhere in the transfer.  A
   transfer from System A to System B proceeds as follows:
 
   a. get some data from spool A.  Wait for the spool.
   b. compress it into NJE 'tanks' to save 'precious' time on the NJE
      connection
   c. send the block (which can't be bigger than 3.7K or so) over the
      link
   d. take a CPU interrupt on CPU B.
   e. decompress the block.
   f. write it to the spool.  Probably wait for the spool.
   g. acknowledge it (send back an 'ack')
   h. take a CPU interrupt on CPU A.
   i. repeat until done :-)
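
Pulling points 1-3 above together, a rough back-of-envelope model shows
why the CTCA ends up so underutilized.  The Python sketch below is only
illustrative; every figure in it (spool rates, CTCA rate, per-block
overhead) is an assumption based on the estimates above, not a
measurement:

  # Rough model of one lock-step block transfer from System A (VM)
  # to System B (MVS).  All figures are assumptions, not measurements.
  BLOCK     = 3.7 * 1024      # max block size over the link, in bytes
  VM_SPOOL  = 100 * 1024      # single VM spool volume, ~80-120K/sec
  MVS_SPOOL = 200 * 1024      # MVS spool, roughly twice the VM rate
  CTCA      = 1024 * 1024     # CTCA raw rate, an order of magnitude more
  OVERHEAD  = 0.002           # compress, interrupts, decompress, ack (sec)

  # Lock-step means the per-block times simply add; nothing overlaps.
  per_block = (BLOCK / VM_SPOOL     # a. read the block from spool A
               + BLOCK / CTCA       # c. send it over the link
               + BLOCK / MVS_SPOOL  # f. write it to spool B
               + OVERHEAD)          # b, d, e, g, h: everything else

  print("effective rate:   %3.0f K/sec" % (BLOCK / per_block / 1024))
  print("CTCA utilization: %3.0f%%" % (100 * BLOCK / CTCA / per_block))

The exact output depends on the guessed overhead, but the shape does
not: the spool term dominates, so the link sits mostly idle, which is
in the same ballpark as the under-10% utilization mentioned above.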
 
In summary, tuning JES has little to do with the problem.  The VM
spool and the requirement for a non-overlapped protocol are the major
bottlenecks.  The only significant exception is the JES RTAM buffer
size, which should be as large as possible.  However, this is
difficult, because it has implications for all RTAM 'stations', and
some are restricted to smaller buffers.
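
To see why a larger buffer matters, the same kind of arithmetic helps:
every block pays a fixed handshake cost (interrupt, decompress, ack),
so bigger blocks amortize it.  The link rate and per-block cost below
are again just assumptions for illustration, not measured values:

  # Illustrative only: larger blocks amortize the fixed per-block
  # handshake cost of the lock-step protocol.  Spool limits ignored.
  LINK      = 1024 * 1024   # assumed CTCA raw rate, bytes/sec
  HANDSHAKE = 0.005         # assumed fixed cost per block, seconds

  for kb in (1, 2, 4, 8, 16, 32):
      block = kb * 1024
      rate = block / (block / LINK + HANDSHAKE)
      print("%2dK blocks -> %4.0f K/sec on the link" % (kb, rate / 1024))

The spool still caps the end-to-end rate, of course, which is why the
buffer size is only an exception and not the whole fix.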
 
Solutions: It isn't clear what one wants to fix in this situation.
120K/sec is already quite fast for most data transfers.  In
our case, we had a CRAY on one side, and BIG files were normal.  We
saw the following solutions:
 
1. Improve the VM spool.  Separate paging and spooling.  Dedicate the
   spool device to spooling (well, you can put DUMP space there too).
 
2. Use VTAM over the connection and allow multiple blocks to be in
   the VTAM 'queue' at the same time.  This allows at least RSCS to
   overlap its processing with the rest of the processing (JES
   already does significant overlap on its own).
 
3. With those 2 done, there might be some point in looking at JES
   tuning issues.
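
For number 2, the gain from letting several blocks be outstanding at
once can be sketched with the same assumed stage times as above: once
the stages overlap, throughput is limited by the slowest single stage
rather than by the sum of all of them.

  # Lock-step vs. overlapped transfer, same assumed stage times as above.
  BLOCK = 3.7 * 1024                         # bytes per block
  stages = {                                 # seconds per block (assumed)
      "read VM spool":   BLOCK / (100 * 1024),
      "send over CTCA":  BLOCK / (1024 * 1024),
      "write MVS spool": BLOCK / (200 * 1024),
      "interrupts/ack":  0.002,
  }

  lockstep   = BLOCK / sum(stages.values())  # every stage waits on the rest
  overlapped = BLOCK / max(stages.values())  # slowest stage sets the pace

  print("lock-step:  %3.0f K/sec" % (lockstep / 1024))
  print("overlapped: %3.0f K/sec" % (overlapped / 1024))

With these made-up numbers the overlapped case runs straight into the
VM spool limit, which is why number 1 comes first on the list.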
 
Hope this helps.
 
Michael
