
Evaluating the Performance of Parallel Computing in Hybrid Models

EasyChair Preprint no. 10529

58 pages
Date: July 10, 2023


This paper summarizes two years of my research, starting from basic theory and continuing to the latest results. In the first year I evaluated the performance of hybrid MPI programming; in the second year I worked on designing a better parallel computing model.
I analyzed the papers published so far on MPI hybrid programming, the LogP model, the LogGP model, and their variants, gaining a better understanding of the history, hot topics, and open difficulties of research in this field.
I also surveyed the theoretical and hardware aspects of parallel computing as a whole, examining why computers have developed as they have and where the bottlenecks lie. This lays the foundation for the research presented afterward.
In terms of fine-grained hybrid MPI programming, I conducted experiments on supercomputing systems and found that fine-grained MPI is not necessarily faster, contrary to the claims of some papers.
I also compared the performance of OpenMP and Pthreads in certain scenarios and found that OpenMP runs faster, even though Pthreads is the lower-level interface.
I summarize the variants of LogP and LogGP and, based on my experiments, propose two additional parameters that should be considered for fine-grained lock-contention problems.
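For reference, the baseline models being extended are usually written as follows: under LogP (latency $L$, per-message overhead $o$, gap $g$, processor count $P$) a small message costs sender overhead plus wire latency plus receiver overhead, and LogGP adds a per-byte gap $G$ for long messages of $k$ bytes:

\[
T_{\mathrm{LogP}} = o + L + o,
\qquad
T_{\mathrm{LogGP}}(k) = o + (k-1)\,G + L + o .
\]

The two lock-contention parameters proposed in this work would enter on top of these terms; their exact form is given in the paper itself, not in this abstract.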

Keyphrases: LogGP, LogP, MPI, MPI hybrid programming, OpenMP, parallel computing model, PRAM, Pthreads

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
  @booklet{EasyChair:10529,
    author = {Yongyu Chen},
    title = {Evaluating the Performance of Parallel Computing in Hybrid Models},
    howpublished = {EasyChair Preprint no. 10529},
    year = {EasyChair, 2023}}