  [/
        Copyright Oliver Kowalke 2016.
   Distributed under the Boost Software License, Version 1.0.
      (See accompanying file LICENSE_1_0.txt or copy at
            http://www.boost.org/LICENSE_1_0.txt
  ]
  
  [/ import path is relative to this .qbk file]
  [import ../examples/work_sharing.cpp]
  
  [#migration]
  [section:migration Migrating fibers between threads]
  
  [heading Overview]
  
  Each fiber owns a stack and manages its execution state, including all
  registers and CPU flags, the instruction pointer and the stack pointer. That
  means, in general, a fiber is not bound to a specific thread.[footnote The
  ["main] fiber on each thread, that is, the fiber on which the thread is
  launched, cannot migrate to any other thread. Also __boost_fiber__ implicitly
  creates a dispatcher fiber for each thread [mdash] this cannot migrate
either.][superscript ,][footnote Of course it would be problematic to migrate a
  fiber that relies on [link thread_local_storage thread-local storage].]
  
Migrating a fiber from a logical CPU with a heavy workload to another
logical CPU with a lighter workload might speed up the overall execution.
Note that in the case of NUMA architectures, it is not always advisable to
migrate a fiber between threads. Suppose fiber ['f] is running on logical CPU
['cpu0], which belongs to NUMA node ['node0]. The data of ['f] are allocated on
the physical memory located at ['node0]. Migrating the fiber from ['cpu0] to
another logical CPU ['cpuX], which is part of a different NUMA node ['nodeX],
might reduce the performance of the application due to the increased latency of
memory accesses.
  
Only fibers that are contained in __algo__'s ready queue can migrate between
  threads. You cannot migrate a running fiber, nor one that is __blocked__. You
  cannot migrate a fiber if its [member_link context..is_context] method returns
  `true` for `pinned_context`.
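
In code the check takes the fiber's [class_link context] pointer (a minimal
sketch, using the same test the work_sharing example below uses):

    if ( ctx->is_context( boost::fibers::type::pinned_context) ) {
        // the thread's main or dispatcher fiber: it must not migrate
    }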
  
  In __boost_fiber__ a fiber is migrated by invoking __context_detach__ on the
  thread from which the fiber migrates and __context_attach__ on the thread to
  which the fiber migrates.
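
In outline, the two halves of the hand-off look like this (a condensed sketch
of what the work_sharing example, shown later in this section, does; `ctx` is
the migrating fiber's [class_link context] pointer):

    // on the thread from which the fiber migrates:
    ctx->detach();

    // later, on the thread to which the fiber migrates:
    boost::fibers::context::active()->attach( ctx);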
  
  Thus, fiber migration is accomplished by sharing state between instances of a
user-coded __algo__ implementation running on different threads. The fiber's
original thread calls [member_link algorithm..awakened], passing the
fiber's [class_link context][^*]. The `awakened()` implementation calls
  __context_detach__.
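
Condensed to its essentials, such an `awakened()` might look like this (the
annotated original from work_sharing.cpp appears later in this section;
['rqueue_] is the ready queue shared between threads, ['lqueue_] a
scheduler-local queue for fibers that must stay on this thread):

    virtual void awakened( boost::fibers::context * ctx) noexcept {
        if ( ctx->is_context( boost::fibers::type::pinned_context) ) {
            lqueue_.push_back( * ctx);  // main/dispatcher fiber: keep it local
        } else {
            ctx->detach();              // unbind from this thread's scheduler
            std::unique_lock< std::mutex > lk( rqueue_mtx_);
            rqueue_.push( ctx);         // publish to the shared ready queue
        }
    }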
  
  At some later point, when the same or a different thread calls [member_link
  algorithm..pick_next], the `pick_next()` implementation selects a ready
  fiber and calls __context_attach__ on it before returning it.
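
The matching `pick_next()`, again condensed from the example:

    virtual boost::fibers::context * pick_next() noexcept {
        boost::fibers::context * ctx = nullptr;
        std::unique_lock< std::mutex > lk( rqueue_mtx_);
        if ( ! rqueue_.empty() ) {
            ctx = rqueue_.front();      // may have been running on another thread
            rqueue_.pop();
            lk.unlock();
            boost::fibers::context::active()->attach( ctx); // bind to this thread
        } else {
            lk.unlock();
            if ( ! lqueue_.empty() ) {  // fall back to this thread's pinned fibers
                ctx = & lqueue_.front();
                lqueue_.pop_front();
            }
        }
        return ctx;
    }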
  
  As stated above, a `context` for which `is_context(pinned_context) == true`
  must never be passed to either __context_detach__ or __context_attach__. It
  may only be returned from `pick_next()` called by the ['same] thread that
  passed that context to `awakened()`.
  
  [heading Example of work sharing]
  
  In the example [@../../examples/work_sharing.cpp work_sharing.cpp]
multiple worker fibers are created on the main thread. Each fiber is given a
character as a parameter at construction and prints that character ten times.
Between iterations the fiber calls __yield__, which puts the fiber in the ready
queue of the fiber-scheduler ['shared_ready_queue] running in the current
thread.
The next fiber ready to be executed is dequeued from the shared ready queue
and resumed by ['shared_ready_queue] running on ['any participating thread].
  
All instances of ['shared_ready_queue] share one global concurrent queue, used
as the ready queue. This mechanism shares all worker fibers among all instances
of ['shared_ready_queue], and thus among all participating threads.
  
  
  [heading Setup of threads and fibers]
  
In `main()` the fiber-scheduler is installed, and the worker fibers and the
threads are launched.
  
  [main_ws]
  
  The start of the threads is synchronized with a barrier. The main fiber of
each thread (including the main thread) is suspended until all worker fibers are
  complete. When the main fiber returns from __cond_wait__, the thread
  terminates: the main thread joins all other threads.
  
  [thread_fn_ws]
  
Each worker fiber executes the function `whatevah()` with the character `me` as
its parameter. The fiber yields in a loop and prints a message if it was migrated
  to another thread.
  
  [fiber_fn_ws]
  
  
  [heading Scheduling fibers]
  
  The fiber scheduler `shared_ready_queue` is like `round_robin`, except that it
  shares a common ready queue among all participating threads. A thread
  participates in this pool by executing [function_link use_scheduling_algorithm]
  before any other __boost_fiber__ operation.
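
In this example every participating thread, the main thread included, makes
that one call:

    boost::fibers::use_scheduling_algorithm< shared_ready_queue >();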
  
The important point about the ready queue is that it's a class static, common
to all instances of `shared_ready_queue`.
Fibers that are enqueued via __algo_awakened__ (fibers that are ready to be
resumed) are thus available to all threads.
It is required to reserve a separate, scheduler-specific queue for the thread's
main fiber and dispatcher fibers: these may ['not] be shared between threads!
When we're passed either of these fibers, push it there instead of onto the
shared queue: it would be Bad News for thread B to retrieve and attempt to
execute thread A's main fiber.
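
The members implied by this design might be declared along these lines (a
sketch; see work_sharing.cpp for the exact definitions):

    class shared_ready_queue : public boost::fibers::algo::algorithm {
        typedef std::queue< boost::fibers::context * >      rqueue_type;
        typedef boost::fibers::scheduler::ready_queue_type  lqueue_type;

        static rqueue_type  rqueue_;     // ready queue shared by all threads (class static)
        static std::mutex   rqueue_mtx_; // guards rqueue_
        lqueue_type         lqueue_;     // per-scheduler queue for pinned fibers
        // ... awakened(), pick_next(), has_ready_fibers(), etc.
    };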
  
  [awakened_ws]
  
When __algo_pick_next__ gets called on one of the participating threads, a fiber
is dequeued from ['rqueue_] and will be resumed on that thread.
  
  [pick_next_ws]
  
  
  The source code above is found in
  [@../../examples/work_sharing.cpp work_sharing.cpp].
  
  [endsect]