Ways to Use the Book

The chapters in Parts 1 and 2 of the book are ordered historically --- that is to say, by the order in which mechanisms were developed. Part 1 emphasizes multithreaded programming using shared variables. Part 2 emphasizes distributed programming using message passing, RPC, and rendezvous. I describe aspects of parallel programming in every chapter, but I wait until Part 3 to provide more in-depth coverage. I have organized the book in this way so that Chapter 11 can cover both shared-variable and message-passing solutions for three representative parallel applications.

A class could cover the chapters and sections pretty much in the order they occur in the book. Alternatively, a class could be organized around the three main themes: multithreaded, distributed, and parallel programming. Below I give course outlines for semester-length classes using both organizations. Then I describe variations that would be appropriate for a quarter-length class or for an advanced graduate class.

Organization 1 -- By Chapters

My course at Arizona covers the chapters in pretty much the order they appear in the book. The main departure from this order is that I cover the shared-variable parts of Chapter 11 after Chapter 3, and the message-passing parts later. An outline of my current organization follows. The syllabus for Spring 2000 contains more details, including how long I spend on each topic. See also the Classroom Use section of the Preface.

Basic concepts -- Chapter 1
Introduction to the SR language -- SR book
Processes and synchronization -- Chapter 2
Critical sections and locks -- Sections 3.1 to 3.3
Barriers, data parallel algorithms, bag of tasks -- Sections 3.4 to 3.6
Parallel programming concepts and applications -- intro to Part 3 and parts of Chapter 11
Semaphores and Pthreads library -- Chapter 4
Multiprocessor implementations -- Chapter 6
Monitors and Java -- Chapter 5
Message passing -- Sections 7.1 to 7.4
RPC and rendezvous -- Sections 8.1 to 8.3
Programming distributed systems: Ada, SR, Java -- case studies sections
Distributed parallel computing and MPI library -- Sections 9.1 to 9.3; parts of Chapter 11
Distributed implementations -- Chapter 10
Distributed computing paradigms -- Sections 9.4 to 9.7

Students do four homework assignments and two projects. They also have two in-class exams. The homework assignments contain both problems from the text and one or two small programs (using the various languages and tools). I encourage students to work in teams of two on the projects. The first project implements, tests, and evaluates a shared-variable parallel program; all students do the same assignment. The second project covers some aspect of distributed programming in depth. Students choose their own second project, subject to my approval, using Exercise 7.26 as a starting point for ideas. Most implement something like a distributed game-playing program or banking system (complete with user interface); others do distributed parallel computing experiments; a few each term write a paper that examines some topic in depth. Students who do an implementation project give me a demonstration of what they have done; this is usually very interesting.

Computing Equipment. My class has the use of two small-scale shared-memory multiprocessors. One is used for program development and initial experimentation; the second can be reserved so that students can do accurate timings. We also have a network of workstations that can be used for distributed computing projects.

Organization 2 -- By Topic (Multithreaded, Parallel, Distributed)

I have thought about using this organization; if I try it sometime, I will organize the class roughly as follows:

Basic concepts -- Chapter 1
Introduction to the SR language -- SR book
Processes and synchronization -- Chapter 2
Critical sections and locks -- Sections 3.1 to 3.3
Semaphores and Pthreads library -- Chapter 4
Monitors and Java -- Chapter 5
Multiprocessor implementations -- Chapter 6
Overview of parallel computing -- Chapters 1 and 11
Barriers, data parallel algorithms, bag of tasks -- Sections 3.4 to 3.6
Semaphores and Pthreads library -- Sections 4.1, 4.2, and 4.6
Message passing and MPI library -- Sections 7.1, 7.2, 7.4, and 7.8
Distributed parallel computing -- Sections 9.1 to 9.3 and Chapter 11
Clients and servers using message passing -- Section 7.3
RPC and rendezvous -- Sections 8.1 to 8.3
Programming distributed systems: Ada, SR, Java sockets and RMI -- case studies sections
Distributed implementations -- Chapter 10
Distributed computing paradigms -- Sections 9.4 to 9.7

Multithreaded programming should logically come first, but parallel and distributed programming can be covered in either order.

Variations on Organization 2

My class is a semester in length, so there is time to cover all three major topics. In addition, my class includes both undergraduate and graduate students, so it is important to cover multithreaded programming -- especially those topics not usually covered in an operating systems course. (Most undergraduates in my class are taking OS at the same time; the graduate students have all had an OS class.)

I hope eventually to be able to offer separate classes for undergraduate and graduate students. I will cover pretty much the same material in the undergraduate class, but I will cover slightly fewer topics. In the graduate class I will emphasize parallel and distributed programming and supplement the text with additional parallel computing applications.

A quarter-long class would have to move quickly to cover all three main topics. A quarter is 2/3 as long as a semester, so it would be natural to cover any two of the three main topics. Since most students will already have taken an operating systems class, it would be natural to emphasize parallel and distributed programming. However, such a class will probably still want to cover some material from Part 1 of the book -- in particular, barriers, the parallel computing techniques in Chapter 3, Pthreads, and multiprocessor hardware and implementation issues.


Last updated December 18, 2002