
An introduction to parallel programming /

by Pacheco, Peter S.
Material type: Book
Publisher: Amsterdam ; Boston : Morgan Kaufmann, c2011
Description: xix, 370 p. : ill. ; 25 cm.
ISBN: 9780123742605 (hardback); 0123742609
Subject(s): Parallel programming (Computer science)
Online resources: WorldCat details
Contents:
Machine-generated contents note:
1 Why Parallel Computing: 1.1 Why We Need Ever-Increasing Performance; 1.2 Why We're Building Parallel Systems; 1.3 Why We Need to Write Parallel Programs; 1.4 How Do We Write Parallel Programs?; 1.5 What We'll Be Doing; 1.6 Concurrent, Parallel, Distributed; 1.7 The Rest of the Book; 1.8 A Word of Warning; 1.9 Typographical Conventions; 1.10 Summary; 1.11 Exercises
2 Parallel Hardware and Parallel Software: 2.1 Some Background; 2.2 Modifications to the von Neumann Model; 2.3 Parallel Hardware; 2.4 Parallel Software; 2.5 Input and Output; 2.6 Performance; 2.7 Parallel Program Design; 2.8 Writing and Running Parallel Programs; 2.9 Assumptions; 2.10 Summary; 2.11 Exercises
3 Distributed Memory Programming with MPI: 3.1 Getting Started; 3.2 The Trapezoidal Rule in MPI; 3.3 Dealing with I/O; 3.4 Collective Communication; 3.5 MPI Derived Datatypes; 3.7 A Parallel Sorting Algorithm; 3.8 Summary; 3.9 Exercises; 3.10 Programming Assignments
4 Shared Memory Programming with Pthreads: 4.1 Processes, Threads and Pthreads; 4.2 Hello, World; 4.3 Matrix-Vector Multiplication; 4.4 Critical Sections; 4.5 Busy-Waiting; 4.6 Mutexes; 4.7 Producer-Consumer Synchronization and Semaphores; 4.8 Barriers and Condition Variables; 4.9 Read-Write Locks; 4.10 Caches, Cache-Coherence, and False Sharing; 4.11 Thread-Safety; 4.12 Summary; 4.13 Exercises; 4.14 Programming Assignments
5 Shared Memory Programming with OpenMP: 5.1 Getting Started; 5.2 The Trapezoidal Rule; 5.3 Scope of Variables; 5.4 The Reduction Clause; 5.5 The Parallel For Directive; 5.6 More About Loops in OpenMP: Sorting; 5.7 Scheduling Loops; 5.8 Producers and Consumers; 5.9 Caches, Cache-Coherence, and False Sharing; 5.10 Thread-Safety; 5.11 Summary; 5.12 Exercises; 5.13 Programming Assignments
6 Parallel Program Development: 6.1 Two N-Body Solvers; 6.2 Tree Search; 6.3 A Word of Caution; 6.4 Which API?; 6.5 Summary; 6.6 Exercises; 6.7 Programming Assignments
7 Where to Go from Here
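The chapters are built around hands-on examples in C using MPI, Pthreads, and OpenMP. As a flavor of the Chapter 3 material (the trapezoidal rule with collective communication), here is a minimal sketch, not code from the book; the integrand f, the interval [0, 1], the trapezoid count, and the assumption that the process count evenly divides the number of trapezoids are all illustrative choices.

/* trap_mpi.c -- illustrative sketch of trapezoidal-rule integration with MPI.
 * Assumptions (not taken from the book): f(x) = x*x, interval [0, 1],
 * n = 1024 trapezoids, and the number of processes divides n evenly. */
#include <stdio.h>
#include <mpi.h>

static double f(double x) { return x * x; }   /* example integrand */

/* Serial trapezoidal rule on the local subinterval [left, right]. */
static double trap(double left, double right, int n, double h) {
    double sum = (f(left) + f(right)) / 2.0;
    for (int i = 1; i < n; i++)
        sum += f(left + i * h);
    return sum * h;
}

int main(void) {
    int rank, size;
    double a = 0.0, b = 1.0;   /* global interval            */
    int n = 1024;              /* total number of trapezoids */

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double h = (b - a) / n;          /* width of one trapezoid */
    int local_n = n / size;          /* trapezoids per process */
    double local_a = a + rank * local_n * h;
    double local_b = local_a + local_n * h;

    double local_int = trap(local_a, local_b, local_n, h);
    double total_int = 0.0;

    /* Collective communication (cf. Section 3.4): sum the partial
     * integrals onto process 0. */
    MPI_Reduce(&local_int, &total_int, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("Estimated integral on [%g, %g]: %.10f\n", a, b, total_int);

    MPI_Finalize();
    return 0;
}

With an MPI implementation installed, a sketch like this would typically be compiled with mpicc and launched with mpiexec -n <number of processes>; Chapters 4 and 5 develop the same style of computation with Pthreads and OpenMP instead.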
Item type | Location | Collection | Call number | Copy number | Status
Text | Reserve Section | Non Fiction | 005.275 PAI 2011 | C-1 | Not For Loan
Text | Circulation Section | Non Fiction | 005.275 PAI 2011 | C-2 | Available

Includes bibliographical references (p. 357-359) and index.


Course reserves: Computer Science & Engineering (Sagar Shahanawaz)

