[section:introduction Introduction]

Boost.MPI is a library for message passing in high-performance
parallel applications. A Boost.MPI program is one or more processes
that can communicate either via sending and receiving individual
messages (point-to-point communication) or by coordinating as a group
(collective communication). Unlike communication in threaded
environments or using a shared-memory library, Boost.MPI processes can
be spread across many different machines, possibly with different
operating systems and underlying architectures.
  
Boost.MPI is not a completely new parallel programming
library. Rather, it is a C++-friendly interface to the standard
Message Passing Interface (_MPI_), the most popular library interface
for high-performance, distributed computing. MPI defines
a library interface, available from C, Fortran, and C++, for which
there are many _MPI_implementations_. Although there exist C++
bindings for MPI, they offer little functionality over the C
bindings. The Boost.MPI library provides an alternative C++ interface
to MPI that better supports modern C++ development styles, including
complete support for user-defined data types and C++ Standard Library
types, arbitrary function objects for collective algorithms, and the
use of modern C++ library techniques to maintain maximal
efficiency.
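
For a first taste of the interface, here is a minimal sketch of a
Boost.MPI program that reports the rank and size of every process. It
assumes an installed MPI implementation and a launcher such as
`mpirun`; the exact build and launch commands depend on that
implementation.

    #include <boost/mpi/environment.hpp>
    #include <boost/mpi/communicator.hpp>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main()
    {
      // Initializes MPI on construction and finalizes it on destruction.
      mpi::environment env;

      // The default communicator spans all processes in the job.
      mpi::communicator world;

      std::cout << "I am process " << world.rank() << " of "
                << world.size() << "." << std::endl;
      return 0;
    }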
  
At present, Boost.MPI supports the majority of functionality in MPI
1.1. The thin abstractions in Boost.MPI allow one to easily combine it
with calls to the underlying C MPI library. Boost.MPI currently
supports:
  
* Communicators: Boost.MPI supports the creation,
  destruction, cloning, and splitting of MPI communicators, along with
  manipulation of process groups.
* Point-to-point communication: Boost.MPI supports
  point-to-point communication of primitive and user-defined data
  types with send and receive operations, with blocking and
  non-blocking interfaces (a short example follows this list).
* Collective communication: Boost.MPI supports collective
  operations such as [funcref boost::mpi::reduce `reduce`]
  and [funcref boost::mpi::gather `gather`] with both
  built-in and user-defined data types and function objects.
* MPI Datatypes: Boost.MPI can build MPI data types for
  user-defined types using the _Serialization_ library.
* Separating structure from content: Boost.MPI can transfer the shape
  (or "skeleton") of complex data structures (lists, maps,
  etc.) and then separately transfer their content. This facility
  optimizes for cases where the data within a large, static data
  structure needs to be transmitted many times.
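
As a short example (a sketch rather than a complete treatment of these
facilities), the program below combines the point-to-point and
collective operations listed above: process 0 sends a `std::string` to
process 1, and every process then contributes its rank to a sum
collected on process 0 via [funcref boost::mpi::reduce `reduce`] with a
standard function object. It assumes the job is launched with at least
two processes.

    #include <boost/mpi.hpp>
    #include <boost/serialization/string.hpp>
    #include <functional>
    #include <iostream>
    #include <string>

    namespace mpi = boost::mpi;

    int main()
    {
      mpi::environment env;
      mpi::communicator world;

      // Point-to-point: send a C++ Standard Library type from process 0
      // to process 1. Requires at least two processes.
      if (world.rank() == 0) {
        world.send(1, 0, std::string("Hello from process 0"));
      } else if (world.rank() == 1) {
        std::string message;
        world.recv(0, 0, message);
        std::cout << message << std::endl;
      }

      // Collective: sum the ranks of all processes onto process 0, using
      // a standard function object as the reduction operation.
      if (world.rank() == 0) {
        int sum = 0;
        mpi::reduce(world, world.rank(), sum, std::plus<int>(), 0);
        std::cout << "Sum of all ranks: " << sum << std::endl;
      } else {
        mpi::reduce(world, world.rank(), std::plus<int>(), 0);
      }

      return 0;
    }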
  
Boost.MPI can be accessed either through its native C++ bindings, or
through its alternative, [link mpi.python Python interface].

[endsect:introduction]