II. Programming Languages and Language Extensions

A. Introduction

  1. Conflicting interests: existing code, existing programmers, run-time efficiency, human efficiency, portability, marketability.

  2. Sequential languages plus message-passing
    • Machine-specific primitives
    • Research projects: P4, PICL, PARMACS, Chameleon
    • Parallel Virtual Machine (PVM)
    • Message-Passing Interface (MPI)

  3. Fortran-based languages
    • Fortran 90. New standard; includes array operations. Compiler availability?
    • Vienna Fortran. Research project; source-to-source translation from Vienna Fortran to Fortran 77 plus explicit message passing.
    • Fortran D. Research project; extensions to Fortran 77 or 90 to allow data distribution and parallel loop execution; translation to Fortran 77 plus message passing.
    • High Performance Fortran (HPF). Emerging standard; extensions to Fortran 90; heavily influenced by Fortran D.
    • Fortran M. Research project; extensions to Fortran 77 for functional (task) parallelism.

  4. Object-oriented languages
    • Parallel C++ (PC++). Data-parallel extension to C++.
    • Compositional C++ (CC++). Its strength seems to be task parallelism.
    • HPC++. Research project attempting to combine best of PC++ and CC++.

  5. Others
    • Strand. Prolog-like parallel language.
    • Sisal. A parallel functional (single-assignment) language.
    • Program Composition Notation (PCN). Research project; C-like language for combining simple components in parallel blocks; heavily influenced CC++.
    • Linda. Coordination language for parallelism via a virtual shared memory paradigm.
    • Occam. A simple language for distributed computing based on the Communicating Sequential Processes (CSP) paradigm; transputer is optimized for occam.

B. PVM (also see related PVM links)

  1. Introduction (see Geist, et al., 1994)

    • The challenge of heterogeneous distributed computing:
      • architecture
      • data format
      • operating system
      • computational speed
      • machine load
      • network load

    • The advantages of heterogeneous distributed computing:
      • Use existing hardware.
      • Use most appropriate hardware.
      • Virtual computer can be easily upgraded.
      • Program development on familiar platforms.
      • Exploit stability of workstations.
      • Facilitate collaborative work.

    • Some principles underlying PVM:
      • User-configured host pool.
      • Translucent access to hardware.
      • Process-based computation.
      • Explicit message-passing model.
      • Heterogeneity supported.
      • Multiprocessor support.

    • There are two major components in the system:
      • A daemon which runs on every machine in the virtual machine.
      • A library of routines (for message passing, spawning and coordinating tasks, modifying the virtual machine, etc.).

  2. Programming in PVM

C. MPI (also see related MPI links)

  1. Introduction

    • MPI is the product of an accelerated community standardization effort (the MPI Forum).

    • Goals:
      • Establish a widely used standard for writing message-passing programs.
      • Be practical, portable, efficient, and flexible.
      • "... demonstrate that users need not compromise among efficiency, portability, and functionality." (Gropp et al., 1995).
      • Help people writing parallel libraries.

    • Target machines: from scalable multiprocessors to networks of heterogeneous workstations.

  2. Programming in MPI

    • Familiar stuff

      • Initialization, identification, termination
          MPI_INIT (ierror)
          MPI_COMM_RANK (comm, rank, ierror)
          MPI_COMM_SIZE (comm, size, ierror)
          MPI_FINALIZE (ierror)
        

      • Sends and receives
          MPI_SEND (buf, count, datatype, dest, tag, 
                    comm, ierror)
          MPI_RECV (buf, count, datatype, source, tag,
                    comm, status, ierror)
        

      • Global operations
          MPI_BCAST (buffer, count, datatype, root, 
                     comm, ierror)
          MPI_REDUCE (sendbuf, recvbuf, count, datatype, op, 
                      root, comm, ierror)
          MPI_BARRIER (comm, ierror)
          MPI_GATHER (sendbuf, sendcount, sendtype, 
                      recvbuf, recvcount, recvtype,
                      root, comm, ierror)
          MPI_SCATTER (sendbuf, sendcount, sendtype, 
                       recvbuf, recvcount, recvtype,
                       root, comm, ierror)
        
    • New stuff

      • Communicators
          MPI_COMM_GROUP (comm, group, ierror)
          MPI_GROUP_INCL (group, n, ranks, newgroup, ierror)
          MPI_COMM_CREATE (comm, group, newcomm, ierror)
        

      • Derived types
          MPI_TYPE_CONTIGUOUS (count, oldtype, newtype, ierror)
          MPI_TYPE_VECTOR (count, blocklength, stride, 
                           oldtype, newtype, ierror)
          MPI_TYPE_INDEXED (count, blocklengths, displacements, 
                            oldtype, newtype, ierror)
          MPI_TYPE_STRUCT (count, blocklengths, displacements, 
                           oldtypes, newtype, ierror)
        
      • Topologies
          MPI_CART_CREATE (comm_old, ndims, dims, periods,
                           reorder, comm_cart, ierror)
          MPI_CART_COORDS (comm, rank, maxdims, coords, ierror)
        
  3. More on sending messages

    • MPI_SEND(buf, count, datatype, dest, tag, comm, ierror)
      Standard-mode send.
      Returns when buf may be reused.
      The implementation may or may not buffer the message internally; the call blocks if there is no place to put the data.

    • MPI_BSEND(buf, count, datatype, dest, tag, comm, ierror) 
      Buffered send.
      Same as MPI_SEND except (local) buffering is done and user must supply the buffer using MPI_BUFFER_ATTACH.

    • MPI_ISEND(buf, count, datatype, dest, tag, comm, request, ierror)
      Nonblocking send.
      Returns immediately; the user must not modify buf until MPI_WAIT or MPI_TEST indicates the message has been sent.

    • MPI_SSEND(buf, count, datatype, dest, tag, comm, ierror)
      Synchronous send.
      Returns only after the destination has begun receiving the message (i.e., the matching receive has started).

D. High Performance Fortran (HPF) (also see related HPF links)



CS6404 class account (cs6404@ei.cs.vt.edu)