Lecture 24

Review/Preview

   we have seen:  basic message passing primitives
                  basic interaction patterns (filter, client/server, peer)

   this week we will look at:  languages and libraries
                               parallel computing paradigms


SR language -- Section 8.7 and Chapter 9 of SR book

   we have seen the following combination in SR:

      call name(actuals)        procedure name(formals)
                                  body
                                end

   and we have seen

      call name(actuals)        op name(formals) ...
                                proc name(formal ids)
                                  body
                                end

   there is another combination:  send and receive
   there is even call and receive [and call and in, as we'll see later]

   thus, SR provides the following combinations of primitives

       invocation     implementation    effect

       call           proc (procedure)  procedure call
       send           receive           message passing (asynchronous)
       send           proc              fork a process
       call           receive           synchronous message passing
        [call]         [in]              [rendezvous]


MPI (Message Passing Interface) library -- Section 7.8

   one program, a copy of which is loaded onto every node
   SPMD programming style (single program, multiple data)
      [although one can use the process id (rank) to get task parallelism]

   basic primitives and order of usage:

      MPI_Init          initialize
      MPI_Comm_size     number of processes
      MPI_Comm_rank     my id (rank)
        ...
      MPI_Send          several arguments for
      MPI_Recv            data and tags
        ...
      MPI_Finalize      finalize

   see the exchange program in Figure 7.17; a minimal sketch follows
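
   to make the order of usage concrete, here is a minimal C sketch (not the
   book's exchange program; the value 42 and tag 0 are arbitrary choices):

      /* compile with mpicc, run with mpirun -np 2 */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char *argv[]) {
          int size, rank, value;
          MPI_Status status;

          MPI_Init(&argc, &argv);                  /* initialize */
          MPI_Comm_size(MPI_COMM_WORLD, &size);    /* number of processes */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* my id (rank) */

          if (rank == 0) {     /* same program everywhere; rank picks the role */
              value = 42;
              /* arguments: buffer, count, type, destination, tag, communicator */
              MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              /* arguments: buffer, count, type, source, tag, communicator, status */
              MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
              printf("process %d of %d received %d\n", rank, size, value);
          }

          MPI_Finalize();                          /* finalize */
          return 0;
      }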


Java language and library -- Section 7.9

   message passing supported by the java.net package:
      datagrams, UDP, unreliable
      sockets, TCP, reliable connections (like channels)

   connect to them -- by host "name" and port number

   use them -- variety of methods
               Section 7.9 uses streams, readLine() and println()

   see Figures 7.18 and 7.19

   [note:  the SR implementation uses sockets in its run-time system (RTS)]
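
   Figures 7.18 and 7.19 are in Java; as a language-neutral sketch of the
   same pattern -- connect by "name" and port, then read lines from a
   stream -- here is a minimal TCP client in C (the host "localhost" and
   port 12345 are placeholders, and error checks are omitted):

      #include <stdio.h>
      #include <string.h>
      #include <netdb.h>
      #include <sys/socket.h>

      int main(void) {
          struct addrinfo hints, *server;
          char line[256];

          /* look up the server by name and port, like new Socket(name, port) */
          memset(&hints, 0, sizeof(hints));
          hints.ai_family = AF_INET;
          hints.ai_socktype = SOCK_STREAM;         /* TCP: reliable connection */
          getaddrinfo("localhost", "12345", &hints, &server);

          int sock = socket(server->ai_family, server->ai_socktype,
                            server->ai_protocol);
          connect(sock, server->ai_addr, server->ai_addrlen);

          /* wrap the socket in a stream to read lines, like readLine() */
          FILE *in = fdopen(sock, "r");
          if (fgets(line, sizeof(line), in) != NULL)
              printf("got: %s", line);

          fclose(in);
          freeaddrinfo(server);
          return 0;
      }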


Paradigms for Process Interaction in Distributed Programs

   recall the exchanging values programs; we saw three organizations

      coordinator      an example of manager/workers style
      symmetric        an example of heartbeat style
      ring             an example of pipeline style


Manager/Workers paradigm -- Section 9.1

   also known as distributed bag of tasks or work farm model

   recall the bag-of-tasks paradigm:

                  shared bag
  
         worker1    ...     workerN

         each worker gets a task, does it, and perhaps produces new tasks

      requirement for using the bag of tasks -- independent tasks
         static number -- primes, matrix mult., words program in homework
         dynamic number -- recursive parallelism

      advantages:  scalability and load balancing

   distributed implementation

          bag -- manager process

          workers as before, but they send messages to the manager
            and receive new tasks from the manager

           manager is a server process with two kinds of operations:
             get a task and deposit a result

          how can the manager detect termination?
             every worker is waiting to get a new task and the bag is empty
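
   as a concrete sketch of this organization, here is a small C/MPI program
   for a static number of tasks; the tags (TAG_RESULT, TAG_TASK, TAG_DONE)
   and the trivial task (squaring an integer) are invented for illustration,
   not taken from the book:

      #include <stdio.h>
      #include <mpi.h>

      #define NUM_TASKS  20
      #define TAG_RESULT 1   /* worker returns a result (and asks for work) */
      #define TAG_TASK   2   /* manager hands out a task */
      #define TAG_DONE   3   /* manager tells a worker to stop */

      int main(int argc, char *argv[]) {
          int size, rank;
          MPI_Status status;
          MPI_Init(&argc, &argv);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {                      /* manager: holds the bag */
              int next = 0, idle = 0, msg;
              while (idle < size - 1) {         /* until every worker is idle */
                  MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT,
                           MPI_COMM_WORLD, &status);
                  if (msg >= 0) printf("result: %d\n", msg);
                  if (next < NUM_TASKS) {       /* bag not empty: send a task */
                      MPI_Send(&next, 1, MPI_INT, status.MPI_SOURCE, TAG_TASK,
                               MPI_COMM_WORLD);
                      next++;
                  } else {                      /* bag empty: retire the worker */
                      MPI_Send(&next, 1, MPI_INT, status.MPI_SOURCE, TAG_DONE,
                               MPI_COMM_WORLD);
                      idle++;
                  }
              }
          } else {                              /* worker */
              int task, result = -1;            /* -1 means "no result yet" */
              for (;;) {
                  MPI_Send(&result, 1, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
                  MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG,
                           MPI_COMM_WORLD, &status);
                  if (status.MPI_TAG == TAG_DONE) break;
                  result = task * task;         /* "do" the task */
              }
          }
          MPI_Finalize();
          return 0;
      }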


Linda -- Section 7.7

   Linda is called a coordination language
   it is both a language and a kind of library
   it makes it easy to implement replicated workers with a "shared" bag

   basic idea is that processes share what is called tuple space

   tuple:  ("tag", values)

   primitives:  out ("tag", values)       -- deposit a tuple
                in ("tag", vars/values)   -- remove a matching tuple (blocks)
                rd ("tag", vars/values)   -- read a matching tuple, leave it in place
                eval ("tag", f(...))      -- fork a process to compute a tuple
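
   to see the primitives working together, here is a small bag-of-tasks
   sketch in C-Linda (it needs a Linda compiler, not plain C; the tuple tags
   "task" and "result", the four workers, and the squaring task are all
   invented for illustration):

      #include <stdio.h>

      int worker() {
          int t;
          while (1) {
              in("task", ?t);             /* remove a task tuple (blocks) */
              if (t < 0) break;           /* negative task = stop signal */
              out("result", t, t*t);      /* deposit the result */
          }
          return 0;
      }

      int real_main() {                   /* entry point in some C-Linda
                                             systems; plain main() in others */
          int i, t, r;
          for (i = 0; i < 4; i++)
              eval("worker", worker());   /* create four worker processes */
          for (i = 0; i < 20; i++)
              out("task", i);             /* fill the bag */
          for (i = 0; i < 20; i++) {
              in("result", ?t, ?r);       /* collect results, in any order */
              printf("%d squared is %d\n", t, r);
          }
          for (i = 0; i < 4; i++)
              out("task", -1);            /* one stop signal per worker */
          return 0;
      }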

   I usually do a few of the small examples in Section 7.7

   see the primes program in Figure 7.16