Difference between revisions of "Parallel Computing"
Revision as of 21:56, 16 March 2005
Description:
Exploration of parallelism is far more than a question of scientific curiosity. Even the building of the ancient pyramids involved concurrent cooperation, workload balancing, pipelining, and resource scheduling, all of which fall squarely within the realm of large-scale parallelism. In this sense the computer, dutifully carrying out its tasks one at a time, has long lagged behind human organization. Computers are catching up, however, pushed by a growing number of scientists discontented with long program execution times and with distributed, data-intensive computations. Parallel computing is the remedy for those anxious scientists: it offers enormous computing power, support for very large distributed data warehouses, and high performance and efficiency. It achieves this through multiple threads of control and through the sharing of both heterogeneous and homogeneous resources.

Parallel computing has three main aspects: algorithms and applications; programming methods, languages, and environments; and parallel machines and architectures. Its future looks bright, especially after the emergence of Grid technology, which provides a middleware layer on top of which parallel programs can run and communicate with one another directly and transparently. Some problems remain to be solved, however, such as efficient scheduling algorithms and automatic parallelization. One day, perhaps, everyone will be able to rely on his own and his friends' i386 machines (if any remain) to carry out all kinds of complex computation within seconds, without ever knowing about the whole world working, cooperating, and communicating behind the scenes.
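The "multiple threads of control" idea mentioned above can be illustrated with a minimal sketch (not from the original article): one computation is split into chunks that worker processes evaluate concurrently, and the partial results are combined at the end. The function names and chunking scheme here are illustrative assumptions, using only Python's standard multiprocessing module.

```python
# Minimal sketch: divide a sum over range(0, n) among several worker
# processes, then combine the partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- the work given to one worker."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split range(0, n) into one chunk per worker and sum in parallel."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))
```

On a machine with several processors the chunks genuinely run concurrently; the same decomposition pattern (partition, compute, combine) underlies much larger distributed computations, including those coordinated by Grid middleware.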
Enablers:
- Interprocess communication bottlenecks, especially with low bandwidth and high latency
- Lack of application-level tools, including programming models and execution environments
- The hierarchical nature of program structure limits its application on common flat platforms
- The productivity gap between currently available hardware platforms and programming paradigms
- The need for high-performance and data-intensive computation lies less with home PCs than with large institutions, which need to share data and computation resources and to cooperate
Inhibitors:
Paradigms:
Experts:
Timing: