
Cloud computing is emerging as an alternative to supercomputers for some of the high-performance computing (HPC) applications that do not require a fully dedicated machine. With the current trend of increasing the number of cores in modern computers, society faces the problem of high energy dissipation, while performance gains are limited by communication delays. High-performance computing has become essential to the ability of enterprises, scientific researchers, and government institutions to generate new discoveries and to deliver innovative products and services. Indeed, high-performance computers are contributing significantly to scientific progress, industrial competitiveness, national security, and quality of life.

This book covers state-of-the-art techniques and innovations in high-performance computing (HPC), written by eminent researchers and practitioners from around the world. The book presents new trends in high-performance computing aimed at reducing execution time. It begins with the DiamondTorre algorithm for high-performance wave modeling. Efficient algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth when they are implemented with traditional algorithms. The numerical solution of the wave equation is considered, using a finite difference scheme with a cross stencil and a high order of approximation. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU (general-purpose graphics processing unit) memory hierarchy and parallelism.
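To make the stencil idea concrete, the following is a minimal sketch of a second-order cross-stencil (leapfrog) update for the 1-D wave equation. It illustrates only the underlying finite-difference scheme, not the DiamondTorre traversal itself; all names and the Courant number used are illustrative.

```python
import numpy as np

def wave_step(u_prev, u_curr, c2dt2_dx2):
    """One leapfrog time step of the 1-D wave equation. The cross
    stencil couples each point to its spatial neighbours at the
    current time level and to its own value at the previous level."""
    u_next = np.empty_like(u_curr)
    # Interior points: standard second-order cross-stencil update
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + c2dt2_dx2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    # Fixed (Dirichlet) boundaries
    u_next[0] = u_next[-1] = 0.0
    return u_next

# Usage: propagate a Gaussian pulse for 100 steps
n = 101
x = np.linspace(0.0, 1.0, n)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)   # initial displacement
u1 = u0.copy()                          # zero initial velocity
for _ in range(100):
    u0, u1 = u1, wave_step(u0, u1, 0.25)  # squared Courant number 0.25
```

When implemented naively, each time step streams the whole array through memory, which is why such schemes are bandwidth-bound; DiamondTorre reorders the space-time traversal to reuse data held close to the GPU's compute units.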

Next, the book presents a consistency model for runtime objects in the Open Community Runtime. Task-based runtime systems are seen as a way to deal with increasing heterogeneity and performance variability in modern high-performance computing systems. The Open Community Runtime is an example of such a system, with the benefit of an open specification that defines its interface and behavior.

The book also explores how to utilize the highly parallel capabilities of modern commodity graphics processing units (GPUs) to improve the performance of security tools operating at the network, storage, and memory levels, and how these tools can offload work from the CPU whenever possible. Modern GPUs have proven highly effective and efficient at accelerating computation- and memory-intensive workloads.

The book follows with a miss-aware LLC buffer management strategy for heterogeneous multi-core systems. Such systems present new challenges by integrating graphics processing units (GPUs) on the same die as the CPU cores. To reduce data transfer between main memory and video memory and improve system efficiency, researchers proposed the shared last-level cache (LLC) architecture, which stores data shared between the CPU and GPU in the LLC to speed up program execution and reduce memory reads and writes.
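The core idea of a task-based runtime is that work is expressed as a graph of tasks with explicit dependencies, and the runtime schedules each task once its inputs are ready. The sketch below illustrates this pattern with Python's standard thread pool; it is loosely analogous to event-driven tasks in the Open Community Runtime, but the names and the tiny API here are illustrative, not the OCR interface.

```python
from concurrent.futures import ThreadPoolExecutor

def run_task_graph(tasks, deps):
    """tasks: name -> callable; deps: name -> list of prerequisite
    task names whose results are passed as arguments. Each task is
    submitted to the pool only after its inputs are available."""
    futures = {}
    with ThreadPoolExecutor() as pool:
        def submit(name):
            if name in futures:
                return futures[name]
            # Resolve prerequisites first, then schedule this task
            inputs = [submit(d).result() for d in deps.get(name, [])]
            futures[name] = pool.submit(tasks[name], *inputs)
            return futures[name]
        for name in tasks:
            submit(name)
        return {n: f.result() for n, f in futures.items()}

# Example: task "c" consumes the results of independent tasks "a" and "b"
graph = run_task_graph(
    {"a": lambda: 2, "b": lambda: 3, "c": lambda x, y: x + y},
    {"c": ["a", "b"]},
)
# graph["c"] == 5
```

Because dependencies are explicit, independent tasks such as "a" and "b" can run on whatever resource is free, which is what lets such runtimes tolerate heterogeneity and performance variability.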

Finally, the book studies the main technological challenges of real-time biofeedback in sport. Communication and processing are identified as the two main potential obstacles for high-performance real-time biofeedback systems. Special attention is given to the role of high-performance computing, with some detail on the possible use of the DataFlow computing paradigm. Motion tracking systems, combined with biomechanical biofeedback, help accelerate motor learning. Requirements for the various parameters important in real-time biofeedback applications are discussed, with special focus on feedback-loop delays. Overall, the book should aid researchers in positioning new models and modeling approaches in relation to the state of the art. It may also be of interest to practitioners and educators seeking an overview of developments in this area.
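The feedback-loop delay constraint can be thought of as a simple budget: the sum of per-stage latencies along the sensing-to-feedback path must stay within the time window in which feedback still feels concurrent with the movement. The stage names, latency figures, and budget below are hypothetical illustrations, not values from the book.

```python
def loop_delay_ok(latencies_ms, budget_ms):
    """End-to-end feedback-loop delay is the sum of the per-stage
    latencies; return the total and whether it fits the budget."""
    total = sum(latencies_ms.values())
    return total, total <= budget_ms

# Hypothetical stage latencies (ms) for a wearable biofeedback loop
stages = {"sensing": 5.0, "wireless_tx": 8.0,
          "processing": 12.0, "rendering": 10.0}
total, ok = loop_delay_ok(stages, budget_ms=100.0)
# total == 35.0, ok is True
```

Framing the requirement this way shows why communication and processing dominate the analysis: they are typically the largest and most variable terms in the sum.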