Glossary

Assertion

A predicate that characterizes a program state. When an assertion is placed before a statement in a program, it specifies that the predicate must be true every time the program is ready to execute that statement.

At-Most-Once Property

An attribute of an assignment statement x = e in which either (1) x is not read by another process and e contains at most one reference to a variable changed by another process, or (2) x is not written by another process and e contains no references to variables changed by other processes. Such an assignment statement will appear to execute as an atomic action.
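
As a sketch of why the property matters, the Python function below simulates one interleaving of two processes each executing x = x + 1 on a shared x. That assignment violates at-most-once (x is both read and written by the other process), so it need not execute atomically; the hypothetical locals a and b stand for each process's private register.

```python
def lost_update():
    """Simulate an interleaving of two non-atomic increments of shared x."""
    x = 0
    a = x        # process A loads x        (a == 0)
    b = x        # process B loads x        (b == 0)
    x = a + 1    # process A stores a + 1   (x == 1)
    x = b + 1    # process B stores b + 1   (x == 1: A's increment is lost)
    return x
```

Two increments, but the final value is 1: one update is lost because the loads and stores interleaved.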

Atomic Action

A sequence of one or more statements that appears to execute as a single, indivisible action. A fine-grained atomic action is one that can be implemented directly by a single machine instruction. A coarse-grained atomic action is implemented using critical section protocols.

Bag-of-Tasks Paradigm

A parallel computing method in which tasks are placed in a bag shared by worker processes. Each worker repeatedly takes a task, executes it, and possibly generates new tasks that it puts into the bag. The computation has terminated when the bag is empty and the workers are idle.
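
A minimal Python sketch of the paradigm (the function name and task representation are illustrative, not from the text): the bag is a shared queue, a sublist found by a worker becomes a new task put back in the bag, and queue.join() detects the termination condition---the bag is empty and all workers are idle.

```python
import queue
import threading

def nested_sum(nested, num_workers=4):
    """Sum the numbers in an arbitrarily nested list using a bag of tasks."""
    bag = queue.Queue()
    total = 0
    lock = threading.Lock()
    bag.put(nested)                        # each task is one list

    def worker():
        nonlocal total
        while True:
            task = bag.get()
            for item in task:
                if isinstance(item, list):
                    bag.put(item)          # worker generates a new task
                else:
                    with lock:
                        total += item
            bag.task_done()

    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()
    bag.join()                             # bag empty and workers idle
    return total
```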

Barrier

A synchronization point that all processes must reach before any are allowed to proceed.
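
A sketch using Python's threading.Barrier (names are illustrative): each worker records which phase it is in, and the barrier guarantees that no worker starts phase p+1 until every worker has finished phase p.

```python
import threading

def phased(num_workers=3, num_phases=2):
    """Run workers through phases separated by a barrier; return the log."""
    barrier = threading.Barrier(num_workers)
    lock = threading.Lock()
    log = []

    def worker(wid):
        for phase in range(num_phases):
            with lock:
                log.append((phase, wid))   # work for this phase
            barrier.wait()                 # the synchronization point

    workers = [threading.Thread(target=worker, args=(i,))
               for i in range(num_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return log
```

In the returned log, every phase-0 entry precedes every phase-1 entry, whatever order the workers ran in.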

Broadcast Algorithm

A method for disseminating information or making decisions in a distributed program. For decision making, each process broadcasts requests and acknowledgements to all other processes and maintains an ordered message queue that it uses to decide when its request is the oldest.

Busy Waiting

An implementation of synchronization in which a process repeatedly executes a loop waiting for a Boolean condition B to be true. This is often programmed as while (!B) skip;. When a process is busy waiting, it is also said to be spinning.
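
A sketch of busy waiting in Python (this relies on CPython's interpreter making the write to flag visible to the spinning thread; in general, spinning wastes processor cycles and a blocking primitive is usually preferable):

```python
import threading

flag = False        # the Boolean condition B, made true by another process
result = []

def spinner():
    while not flag: # busy wait: while (!B) skip;
        pass        # the process spins, consuming CPU time
    result.append("proceeded")

t = threading.Thread(target=spinner)
t.start()
flag = True         # another process makes B true
t.join()
```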

Client/Server Program

A process interaction pattern in a distributed program. A server process manages a resource and implements operations on that resource. A client makes a request of the server by invoking one of its operations.

Concurrent Program

A program containing two or more processes. Each process is a sequential program. See also Distributed Program and Parallel Program.

Condition Synchronization

A type of synchronization that involves delaying a process until some Boolean condition B is true. This is implemented by having one process wait for an event that is signaled by another process.
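
A sketch using Python's threading.Event, which blocks the delayed process instead of spinning (names are illustrative): the event plays the role of the Boolean condition B.

```python
import threading

ready = threading.Event()   # represents the Boolean condition B
log = []

def delayed():
    ready.wait()            # delay until B is true (blocks; no spinning)
    log.append("go")

t = threading.Thread(target=delayed)
t.start()
ready.set()                 # the other process signals that B is now true
t.join()
```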

Conditional Atomic Action

An atomic action that must delay until the state satisfies some Boolean condition B. This is programmed as ⟨await (B) S;⟩, where the angle brackets bracket the atomic action.

Context Switch

The act of switching a processor from executing one process to executing another. This is called a context switch because the state of each process is called its context. A context switch is performed in a kernel by a routine that is called a dispatcher or scheduler.

Covering Condition

A synchronization technique used with monitors. A process signals a condition variable when it is possible that waiting processes might be able to proceed.
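
A sketch of a covering condition in Python, with threading.Condition playing the monitor's role (the Allocator class is illustrative): release() cannot tell which pending request it satisfies, so it wakes every waiter and lets each re-check its own predicate.

```python
import threading

class Allocator:
    """Monitor managing units of a shared resource."""
    def __init__(self, units):
        self.units = units
        self.cond = threading.Condition()

    def request(self, k):
        with self.cond:
            while self.units < k:       # each waiter re-checks its own B
                self.cond.wait()
            self.units -= k

    def release(self, k):
        with self.cond:
            self.units += k
            self.cond.notify_all()      # covering condition: wake everyone

alloc = Allocator(1)
done = []
t = threading.Thread(target=lambda: (alloc.request(2), done.append(True)))
t.start()
alloc.release(1)                        # now units == 2; the waiter proceeds
t.join()
```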

Critical Section

A sequence of statements that must be executed with mutual exclusion with respect to critical sections in other processes that reference the same shared variables. Critical section protocols are used to implement coarse-grained atomic actions.
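
A sketch in Python: four threads increment a shared counter, and a mutual exclusion lock around the critical section ensures no updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(10000):
        with lock:          # entry/exit protocol around the critical section
            counter += 1    # critical section: references shared counter

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock the increments could interleave and the final count would usually fall short of 40000.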

Data Parallel Program

A program in which each process executes the same actions---usually at the same time---on different parts of shared data.

Deadlock

A state in which two or more processes are waiting for each other, in a so-called deadly embrace.

Distributed Program

A program in which processes communicate using message passing, remote procedure call, or rendezvous. Usually the processes execute on different processors.

Distributed Shared Memory (DSM)

A software implementation of a shared address space on a distributed-memory multiprocessor or a network of processors.

Exclusion of Configurations

A method for proving safety properties such as mutual exclusion and absence of deadlock. Process P[1] cannot be in a state satisfying assertion A[1] at the same time process P[2] is in a state satisfying assertion A[2] if (A[1] and A[2]) == false.

Fairness

An attribute of a scheduler or algorithm that guarantees that every delayed process gets a chance to proceed; see also Scheduling Policy.

False Sharing

A situation in which two processes reference different variables that are stored in the same cache line or DSM page and at least one of the processes writes into its variable.

Filter Process

A process that receives (reads) from one or more input channels, computes a function of the input, and sends (writes) results to one or more output channels. A filter process is both a producer and a consumer, and it can be used in a pipeline.
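
As a sketch, Python generators model filters connected in an open pipeline (function names are illustrative): each stage consumes its input channel and produces an output channel for the next stage.

```python
def produce(n):                 # producer: feeds the first filter
    for i in range(n):
        yield i

def square(source):             # filter: consumes one channel, produces another
    for x in source:
        yield x * x

def keep_even(source):          # another filter in the series
    for x in source:
        if x % 2 == 0:
            yield x

pipeline = keep_even(square(produce(6)))
output = list(pipeline)         # the squares of 0..5 that are even
```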

Global Invariant

A predicate that is true in every visible program state---namely, before and after every atomic action.

Heartbeat Algorithm

A process interaction paradigm in distributed programs. Each process repeatedly executes three phases: send messages to other processes, receive messages from others, then compute with local data and data received in messages.

History

The sequence of states, or actions, resulting from one execution of a program; a history is also called a trace.

Independent Statements

Two statements in different processes that do not write into the same variables and that do not read variables written by the other. Independent statements will not interfere with each other if they are executed in parallel.

Interacting Peers

A process interaction pattern in distributed programs in which processes are equals---executing the same code and interacting with each other to exchange information.

Interference

The result of two processes reading and writing shared variables in an unpredictable order and hence with unpredictable results; see also Noninterference.

Iterative Parallelism

A type of parallelism in which each process executes a loop that manipulates a part of the program's data. Often results from parallelizing loops in sequential programs.

Kernel

A collection of data structures and primitive operations---uninterruptible procedures---that manages processes, schedules them on processors, and implements high-level communication and synchronization operations such as semaphores or message passing.

Livelock

A situation in which a process is spinning while waiting for a condition that will never become true. Livelock is the busy-waiting analog of deadlock.

Liveness Property

A property of a program that asserts that something good will eventually happen---namely, that the program eventually reaches a good state. Termination and eventual entry into a critical section are examples of liveness properties.

Load Balancing

The act of assigning each process (and processor) in a parallel program an approximately equal amount of work.

Lock

A variable that is used to protect a critical section. A lock is set when some process is executing in a critical section; otherwise it is clear.

Loop Invariant

A predicate that is true before and after every execution of the statements in a loop.
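
A small worked example (the assertions simply check the invariant at run time): for a summation loop, the invariant total == sum(a[:i]) holds before every iteration and again when the loop terminates with i == len(a), which yields the desired result.

```python
def sum_list(a):
    total, i = 0, 0
    # loop invariant: total == sum(a[:i]) and 0 <= i <= len(a)
    while i < len(a):
        assert total == sum(a[:i]) and 0 <= i <= len(a)
        total += a[i]
        i += 1
    assert total == sum(a[:i])   # invariant still holds after the loop
    return total
```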

Manager/Workers Paradigm

A distributed implementation of the bag-of-tasks paradigm. A manager process implements the bag and gathers results; workers interact with the manager to get tasks and to return results.

Metacomputer

A set of computers linked by high-speed networks and a software infrastructure that presents the illusion of a single computing resource. Sometimes called a networked virtual supercomputer.

Multiple Instruction, Multiple Data (MIMD) Multiprocessor

A hardware architecture in which there are multiple independent processors and each executes a separate program. Also called an asynchronous multiprocessor.

Multicomputer

A MIMD multiprocessor with distributed memory. Hypercube machines are examples of multicomputers.

Multithreaded Program

A program containing multiple processes or threads. Synonymous with concurrent program, although multithreaded program often means that there are more threads than processors and hence that the threads take turns executing on the processors.

Mutual Exclusion

A type of synchronization that ensures that statements in different processes cannot execute at the same time. See also Critical Section.

Nested Monitor Call

A call from one monitor to a second monitor. When a process makes a nested call, the call is said to be open if the process releases exclusion in the first monitor. The call is said to be closed if the process retains exclusion in the first monitor.

Noninterference

A relation between an atomic action a in one process and a critical assertion C in another process. Execution of a does not interfere with C if it leaves C true, assuming that C is already true.

Parallel Program

A concurrent program in which each process executes on its own processor, and hence the processes execute in parallel. (However, the phrase parallel program is sometimes used to mean any concurrent program.)

Partial Correctness

A property of a program that computes the desired result, assuming the program terminates.

Passing the Baton

A synchronization technique that is used with semaphores. When one process decides that another process should be awakened, it signals a semaphore on which that process is waiting. This signal has the effect of passing a baton to the second process; the baton conveys permission to execute with mutual exclusion.
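
A sketch of the technique in Python (the Resource class is illustrative): the entry semaphore e is the baton. A process that makes the delay condition true releases the delay semaphore without releasing e, so the awakened process resumes already holding mutual exclusion.

```python
import threading

class Resource:
    """n units of a resource; acquire() delays until a unit is available."""
    def __init__(self, n):
        self.n = n
        self.waiting = 0
        self.e = threading.Semaphore(1)      # entry semaphore (the baton)
        self.delay = threading.Semaphore(0)  # delayed processes block here

    def acquire(self):
        self.e.acquire()
        while self.n == 0:
            self.waiting += 1
            self.e.release()
            self.delay.acquire()   # on wakeup, this process holds the baton
        self.n -= 1
        self._pass_baton()

    def release(self):
        self.e.acquire()
        self.n += 1
        self._pass_baton()

    def _pass_baton(self):
        if self.n > 0 and self.waiting > 0:
            self.waiting -= 1
            self.delay.release()   # pass the baton: e stays held
        else:
            self.e.release()       # no one to wake: open the entry

r = Resource(0)
done = []
t = threading.Thread(target=lambda: (r.acquire(), done.append(True)))
t.start()
r.release()                        # makes the condition true, passes the baton
t.join()
```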

Pipeline

A set of processes that are connected in series so that the output produced by one process is the input consumed by the next process. A pipeline is open if the ends are not connected to other processes. A pipeline is circular if the ends are closed on each other to form a circle. A pipeline is closed if it is circular and there is an extra coordinator process that produces input for the first worker process in the pipeline and that consumes the output of the last worker process.

Postcondition

An assertion that is true when statement S finishes execution.

Precondition

An assertion that is true when statement S starts execution.

Probe/Echo Algorithm

A process interaction paradigm in distributed programs. A probe is used to disseminate information from one process to all others; an echo is used to gather information.

Producer/Consumer Interaction

An interaction between two processes in which one process produces data that is consumed by the other. See also Filter Process and Pipeline.

Proof Outline

A program interspersed with enough assertions to convince the reader that the program is correct. A complete proof outline has an assertion before and after every statement.

Race Condition

A situation in a shared-variable concurrent program in which one process writes a variable that a second process reads, but the first process continues execution---namely, races ahead---and changes the variable again before the second process sees the result of the first change. This usually leads to an incorrectly synchronized program.

Recursive Parallelism

A type of parallelism that results from making recursive calls in parallel. Often results from parallelizing sequential programs that use the divide-and-conquer paradigm to solve smaller and smaller problems.

Replicated Servers Paradigm

A process interaction pattern in distributed programs in which there are multiple instances of a server process. Each server manages a part or a copy of some shared resource; the servers interact to ensure that the resource is kept in a consistent state.

Safety Property

A property of a program that asserts that nothing bad will ever happen---namely, that the program never enters a bad state. Partial correctness, mutual exclusion, and absence of deadlock are examples of safety properties.

Scheduling Policy

A policy that determines which action gets to execute next---namely, the order in which processes execute. A scheduling policy is unconditionally fair if unconditional atomic actions eventually get to execute. It is weakly fair if conditional atomic actions eventually get to execute if the delay condition becomes true and remains true. It is strongly fair if conditional atomic actions eventually get to execute if the delay condition is infinitely often true.

Single Instruction, Multiple Data (SIMD) Multiprocessor

A hardware architecture in which there is a single instruction stream that is executed in lockstep by every processor, which operates on its own local data. Also called a synchronous multiprocessor.

Single Program, Multiple Data (SPMD)

A programming style in which one process is coded. A copy of the process executes on every processor; each copy has its own private data. Usually there is a way for each process to determine its own identity---which is sometimes called its rank.

Speedup

The ratio T[1]/T[p] of the execution time T[1] of a sequential program on one processor to the execution time T[p] of a parallel program on p processors.

Spin Lock

A Boolean variable that is used in conjunction with busy waiting to protect a critical section. A process that wants to enter a critical section spins until the lock is clear.
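
A sketch in Python, where a non-blocking try-acquire stands in for a hardware test-and-set instruction (the SpinLock class is illustrative; real spinning wastes processor time, so this is a model rather than a recommendation):

```python
import threading

class SpinLock:
    """Spin lock model: try-acquire plays the role of test-and-set."""
    def __init__(self):
        self._bit = threading.Lock()       # models the lock bit

    def acquire(self):
        while not self._bit.acquire(blocking=False):
            pass                           # spin until the lock is clear

    def release(self):
        self._bit.release()

lock = SpinLock()
count = 0

def worker():
    global count
    for _ in range(5000):
        lock.acquire()
        count += 1                         # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```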

State of a Program

The value of every program variable at a point in time.

Symmetric Multiprocessor (SMP)

A shared-memory multiprocessor architecture in which processors are identical and every processor can access every memory word in the same amount of time.

Synchronization

An interaction between processes that controls the order in which the processes execute. See also Mutual Exclusion and Condition Synchronization.

Task Parallel Program

A program in which each process executes a separate task, and hence each process is a different sequential program.

Thread

A sequential program that has its own thread of control---or context---and hence that can execute concurrently with other threads. Threads compete for time on the same processor or they may execute in parallel on separate processors.

Token-Passing Algorithm

A process interaction pattern in distributed programs in which tokens are used to convey permission or to gather information about the global state.

Total Correctness

A property of a program that computes the desired result and terminates.

Triple

A programming logic formula having the form { P } S { Q }, where P and Q are predicates and S is a statement list. The interpretation of a triple is: If execution of S starts in a state satisfying P and if S terminates, then the final state will satisfy Q.
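
For example, the triple { x == 0 } x = x + 1; { x == 1 } is valid. The run-time check below mirrors that reading of a triple with assertions (the function name is illustrative):

```python
def increment(x):
    assert x == 0       # precondition P:  { x == 0 }
    x = x + 1           # statement S:     x = x + 1;
    assert x == 1       # postcondition Q: { x == 1 }
    return x
```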

Unconditional Atomic Action

An atomic action that does not have a delay condition. This is programmed as ⟨S;⟩ and might be implemented as a single machine instruction.