This book sets out the principles of parallel computing in a way that will be useful to students and potential users alike. It covers both conventional and neural computers. The content is arranged hierarchically: the book explains why, where, and how parallel computing is used; the fundamental paradigms employed in the field; how systems are programmed or trained; technical aspects, including connectivity and processing-element complexity; and how system performance is estimated (and why doing so is difficult). The penultimate chapter comprises a set of case studies of archetypal parallel computers, each written by an individual closely connected with the system in question. The final chapter draws the various aspects of parallel computing together into a taxonomy of systems.
Reviews
"...this book on parallel computer architectures is novel for its extensive coverage of neural networks and its survey of numerous parallel systems developed in the United Kingdom." Michael J. Quinn, IEEE Parallel & Distribued Technology