Pete's Log: I like reading! part 5

Entry #1150, (Coding, Hacking, & CS stuff)
(posted when I was 23 years old.)

Parallel Symbolic Computing
by Robert Halstead, Jr., Computer Magazine, August 1986

This paper introduces the Multilisp language and the "future" language construct. In Multilisp, a future is the only means by which a new thread of execution can be started. Its use is simple: (future X) spawns a new thread of execution which begins computing the value of X while immediately returning an object of type future, which can be passed around. When X is computed, the thread computing it dies and updates the value of the future. If any thread tries to touch the future (i.e. directly use its value) before the future has been computed, it is suspended until the value of the future is known. A means for lazy evaluation of futures is provided by the "delay" keyword, which causes a future object to be generated, but no thread is spawned to compute the value until a thread touches the future object.
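To make the semantics concrete, here is a rough analogue in modern Python rather than Multilisp (everything below, slow_square and the Delay class, is my own illustration, not code from the paper):

    from concurrent.futures import ThreadPoolExecutor

    def slow_square(x):
        # stand-in for "X": some computation worth running concurrently
        return x * x

    # (future X): start computing X now and get a placeholder back immediately
    with ThreadPoolExecutor() as pool:
        fut = pool.submit(slow_square, 7)
        # ... fut can be passed around like any other object here ...
        # Touching the future (using its value) blocks until the value is known.
        print(fut.result() + 1)   # 50

    # (delay X): no thread is spawned; X is computed only when the object is
    # first touched. A one-shot memoized thunk is the closest cheap analogue.
    class Delay:
        def __init__(self, thunk):
            self._thunk, self._value, self._done = thunk, None, False
        def touch(self):
            if not self._done:
                self._value, self._done = self._thunk(), True
            return self._value

    lazy = Delay(lambda: slow_square(7))
    print(lazy.touch())   # 49, computed only at this point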

This paper was worth reading because it offers a brief introduction to Multilisp and the future construct, both of which are mentioned frequently in MTA literature, especially in dataflow literature.

In addition to presenting Multilisp, the author also presents some other interesting observations and facts. A comparison of the needs of numerical and symbolic computation is given. Numerical code frequently lends itself to simple parallelization techniques, particularly using SIMD techniques. Vectors and matrices are frequent datatypes in numerical code, and divide-and-conquer algorithms generally work well on them. Symbolic computing, on the other hand, does not lend itself so easily to SIMD techniques, requiring other means of finding and exploiting concurrency. Futures are the solution provided by the author.
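To get a feel for how futures fit pointer-chasing symbolic code, here is a little divide-and-conquer sketch of mine (not from the paper): summing the leaves of a tree by spawning a future for one branch and recursing on the other in the current thread, using a thread-per-future helper in the spirit of Multilisp's model.

    import threading

    class Future:
        # minimal thread-per-future placeholder, in the spirit of (future X)
        def __init__(self, thunk):
            self._value = None
            self._thread = threading.Thread(target=self._run, args=(thunk,))
            self._thread.start()
        def _run(self, thunk):
            self._value = thunk()
        def touch(self):
            # block until the spawned thread has produced the value
            self._thread.join()
            return self._value

    def tree_sum(node):
        if isinstance(node, tuple):               # internal node: (left, right)
            left, right = node
            f = Future(lambda: tree_sum(left))    # compute one branch concurrently
            r = tree_sum(right)                   # compute the other branch here
            return f.touch() + r
        return node                               # leaf: a plain number

    print(tree_sum(((1, 2), (3, (4, 5)))))        # 15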

Also discussed is the issue of scheduling parallel programs. The distinction between mandatory work and speculative work is made. Given sufficient mandatory work, a scheduler should be able to make efficient use of resources. In some cases preference may need to be given to work units known to be on the critical path, but in general, scheduling work known to be mandatory for the computation to finish always gets you closer to your goal. However, if there is not enough mandatory work to spread across all processing resources, then speculative work may be employed to increase system utilization. In the speculative approach, we eagerly spawn tasks before we even know whether we will need their results. If we do end up needing a result, we've won, because it's already available. If we never need it, we've wasted cycles. However, if those cycles would otherwise have been spent idle, we've not lost anything (unless speculative scheduling introduces too much overhead ...).
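Here is a toy sketch of the speculative side of that tradeoff, again in Python with made-up names (expensive_check stands in for work we may or may not end up needing):

    from concurrent.futures import ThreadPoolExecutor
    import time

    def expensive_check(x):
        time.sleep(0.5)                # pretend this is real work
        return x % 2 == 0

    with ThreadPoolExecutor() as pool:
        # Mandatory work is scarce, so speculatively start the check before
        # we know whether its answer will actually be needed.
        speculative = pool.submit(expensive_check, 42)

        need_it = True                 # only discovered later, after other work
        if need_it:
            # We won: the result is largely (or fully) computed by now.
            print(speculative.result())
        else:
            # We only lost cycles an otherwise idle worker spent on it.
            speculative.cancel()       # best effort; no-op once it has started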

Another interesting topic is that of nondeterminism. We don't want nondeterminism, so the author's techniques for dealing with it are of little practical value to us.

How do you handle exceptions? In Multilisp a task computing a future may generate an exception. But the future object may have already been spread far and wide, and the task that initially decided to create the future object may be gone. What to do? In Multilisp, the future object is given an "error" value, and any task that touches the future will generate an exception of its own.
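Python's concurrent.futures behaves analogously, which makes for a handy illustration (this is Python, not Multilisp): the exception is stored in the future object and re-raised in whichever task eventually touches it.

    from concurrent.futures import ThreadPoolExecutor

    def compute():
        raise ValueError("error inside the future's computation")

    with ThreadPoolExecutor() as pool:
        fut = pool.submit(compute)     # the spawning task can move on (or die)
        # ... later, possibly far from where the future was created ...
        try:
            fut.result()               # "touching" the future
        except ValueError as e:
            print("touching the future re-raised:", e)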

When the paper was published, the results available indicated that Multilisp was generally pretty slow. Not surprising. But the idea of a future is still cool.

Finally, the author suggests that research efforts in parallel computing, and in symbolic parallel computing in particular, should take into consideration all aspects of development: language and runtime environment design, algorithm and application development, and hardware design and implementation.