The Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to work on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core set of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran.
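To give a concrete flavor of those routines, here is a minimal hello-world sketch in C. The routine names (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize) are part of the MPI standard; the compile and launch commands shown in the comments are assumptions that depend on the MPI implementation installed on your system.

```c
/* Minimal MPI hello world.
   Typically compiled with a wrapper compiler, e.g. `mpicc hello.c -o hello`,
   and launched with something like `mpirun -np 4 ./hello`. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank (id) of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         /* clean up before exiting */
    return 0;
}
```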
In high-performance computing, CUDA and OpenACC are widely used to accelerate programs. In this post, I mainly make notes about OpenACC and CUDA.
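As a quick illustration of the directive-based style OpenACC uses, the sketch below offloads a SAXPY loop in C. It is only a sketch under the assumption of an OpenACC-capable compiler (for example, NVIDIA's nvc invoked roughly as `nvc -acc saxpy.c`); without such a flag the pragma is ignored and the loop runs serially on the CPU.

```c
/* SAXPY (y = a*x + y) with an OpenACC directive. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Ask the compiler to parallelize this loop on the accelerator,
       copying x in and copying y both in and out. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);   /* expected 4.0 */
    return 0;
}
```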
In the following series of blog posts, I will introduce applications of high-performance computing. In this post, I introduce the basic usage of two supercomputers: Edison in the USA and Tianhe in China.
In this post, I mainly make notes about some specialized deep neural networks (DNNs) and give a guide to building DNN architectures efficiently, along with hyperparameter selection and tuning.
In this post, I make notes about Spark MLlib. The post is largely based on the open tutorial available here.