Parallel Computing
Description:
Exploration of parallelism is far more than a question of scientific significance. Even the building of the ancient pyramids involved concurrent cooperation, workload balancing, pipelining and resource scheduling, all of which fall under the broad umbrella of parallelism. This is yet more evidence that the computer lags behind human intelligence as long as it devotedly carries out its computation tasks one by one. However proud of that we may be, computers are catching up, pushed by a growing number of scientists who are increasingly discontented with long program execution times and with distributed, data-intensive computations. Parallel computing is the savior for those anxious scientists: it provides enormous computing power, support for very large-scale distributed data warehouses, and high performance and efficiency. It achieves this through multiple threads of control and through the sharing of both heterogeneous and homogeneous resources. Parallel computing has three main aspects: algorithms and applications; programming methods, languages and environments; and parallel machines and architectures. Its future looks bright, especially after the emergence of Grid technology, which provides a middleware layer on top of which parallel programs can run and communicate with each other directly and transparently. Some problems remain to be solved, however, such as efficient scheduling algorithms and automatic parallelization. One day, perhaps, everyone will be able to rely solely on his own and his friends' i386 machines (if any still exist) for all kinds of complex computation, finished within seconds, without ever knowing that a whole world is working, cooperating and communicating behind the scenes.
Enablers:
- The demand for high-performance, data-intensive computation comes less from home PCs than from large institutions, which need to share and cooperate on data and computation resources
Inhibitors:
- Interprocess communication bottlenecks, especially with low bandwidth and high latency
- Lack of application-level tools, including programming models and execution environments
- The hierarchical nature of parallel program structure, which limits its application on common flat platforms
- The productivity gap between currently available hardware platforms and programming paradigms
Paradigms:
- Parallel machine organization:
- Processor array
- Shared memory multiprocessors
- Distributed memory multiprocessors
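The memory-organization distinction above can be sketched in ordinary Python (a hedged, CPython-specific illustration, standard library only): threads inside one process share an address space, much as processors do on a shared-memory multiprocessor, while separate processes have no common state and must exchange explicit messages, as on a distributed-memory machine.

```python
# Shared-memory style: threads see one address space (needs a lock).
# Distributed-memory style: processes exchange explicit messages (a Queue).
import threading
from multiprocessing import Process, Queue

counter = {"value": 0}          # state shared by every thread
lock = threading.Lock()

def shared_worker():
    for _ in range(1000):
        with lock:              # shared memory requires synchronization
            counter["value"] += 1

def message_worker(out):
    # No shared state: the only way to deliver a result is a message.
    out.put(sum(range(1000)))

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter["value"])     # 2000, observed directly in shared memory

    q = Queue()
    p = Process(target=message_worker, args=(q,))
    p.start()
    print(q.get())              # 499500, received as a message
    p.join()
```

The lock and the queue are the essence of the trade-off: shared memory makes data cheap to reach but must be guarded, while message passing makes communication explicit and therefore costly but safe.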
- Flynn's taxonomy:
- SISD (Single Instruction, Single Data): traditional uniprocessors
- SIMD (Single Instruction, Multiple Data): processor arrays
- MISD (Multiple Instruction, Single Data): rarely realized; arguably nonexistent
- MIMD (Multiple Instruction, Multiple Data): multiprocessors and multicomputers
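The SIMD idea can be approximated in software: one instruction stream (here, a single Python function) applied to many data elements. This is only an analogy, since true SIMD is a hardware feature executing lanes in lockstep, but the programming pattern is the same; the sketch below uses a standard-library process pool as the "lanes".

```python
# SIMD in spirit: one "instruction stream" (the function) applied to many
# data elements at once. A process pool is only a software analogy for the
# lockstep lanes of a real SIMD machine.
from multiprocessing import Pool

def scale(x):
    """The single instruction stream every lane executes."""
    return 2 * x

if __name__ == "__main__":
    with Pool(4) as pool:                        # four "lanes"
        print(pool.map(scale, [1, 2, 3, 4]))     # [2, 4, 6, 8]
```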
- General phases in designing and building parallel programs:
- Partitioning
- Communication
- Agglomeration
- Mapping
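These four phases (Foster's PCAM methodology) can be sketched even for something as simple as a parallel sum; the worker count and chunking below are illustrative choices, not part of the methodology itself.

```python
# Foster's PCAM phases applied to a parallel sum (illustrative sketch).
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Partitioning: the finest-grained tasks are the individual additions.
    # Agglomeration: group them into a few coarse chunks so per-task
    # overhead does not swamp the useful work.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Mapping: assign each chunk to a worker process.
    # Communication: chunks are sent out and partial sums come back.
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    return sum(partials)        # combine the partial results

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))   # 499500
```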
Experts:
Professor Henri Bal, Vrije Universiteit Amsterdam, http://www.cs.vu.nl/~bal
Dr. Thilo Kielmann, Vrije Universiteit Amsterdam, http://www.cs.vu.nl/~kielmann/
Professor Ian Foster, University of Chicago, http://www-fp.mcs.anl.gov/~foster/
Dutch grid, http://www.dutchgrid.nl
Gridlab, http://www.gridlab.org
Globus, http://www.globus.org
Global Grid Forum, http://www.ggf.org
UCSB, CMU, UC Berkeley, Monash University, Cambridge University, Stanford University
Timing: