Interface | Description |
---|---|
AllGather<T> | MPI AllGather operator. |
AllReduce<T> | MPI AllReduce operator. |
Broadcast | MPI Broadcast operator. |
Broadcast.Receiver<T> | Receivers, or non-roots. |
Broadcast.Sender<T> | Sender, or root. |
Gather | MPI Gather operator. |
Gather.Receiver<T> | Receiver, or root. |
Gather.Sender<T> | Senders, or non-roots. |
GroupCommOperator | |
Reduce | MPI Reduce operator. |
Reduce.Receiver<T> | Receiver, or root. |
Reduce.ReduceFunction<T> | Interface for a reduce function: takes in an Iterable<T> and returns a T. |
Reduce.Sender<T> | Senders, or non-roots. |
ReduceScatter<T> | MPI ReduceScatter operator. |
Scatter | MPI Scatter operator. |
Scatter.Receiver<T> | Receivers, or non-roots. |
Scatter.Sender<T> | Sender, or root. |
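The Reduce.ReduceFunction<T> shape described above, a function that folds an Iterable<T> into a single T, can be sketched as follows. The interface here is a local stand-in mirroring that shape, not the actual REEF import, and the sum reduction is just an illustrative choice.

```java
import java.util.List;

public class ReduceFunctionSketch {

  // Local stand-in mirroring the shape of Reduce.ReduceFunction<T>:
  // fold an Iterable<T> into a single T.
  interface ReduceFunction<T> {
    T apply(Iterable<T> elements);
  }

  // Example reduction: integer sum (the MPI_SUM analogue).
  static final ReduceFunction<Integer> SUM = elements -> {
    int acc = 0;
    for (final int e : elements) {
      acc += e;
    }
    return acc;
  };

  public static void main(final String[] args) {
    System.out.println(SUM.apply(List.of(1, 2, 3, 4))); // prints 10
  }
}
```

The same shape accommodates min, max, concatenation, or any other associative combine step; the operator only needs the fold, not the order in which partial results arrive.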
Class | Description |
---|---|
AbstractGroupCommOperator | |

A Configuration is used to instantiate these operators. It is the
responsibility of the Driver, the primary agent in the control plane, to
configure these operators: that is, to denote who the sender is, who the
receivers are, which Codec needs to be used, and so on, for an operation
like Scatter with the root node acting as the sender and the other nodes
as receivers.
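The role assignment the Driver makes for a Scatter, root as sender, everyone else as receiver, can be sketched as below. All names here (ScatterRoles, assignRoles, the task IDs) are illustrative stand-ins, not REEF API.

```java
import java.util.List;

public class ScatterRolesSketch {

  // Hypothetical description of who sends and who receives in one Scatter.
  record ScatterRoles(String sender, List<String> receivers) {}

  // The root task is the sender; every other task becomes a receiver.
  static ScatterRoles assignRoles(final String rootId, final List<String> taskIds) {
    final List<String> receivers =
        taskIds.stream().filter(id -> !id.equals(rootId)).toList();
    return new ScatterRoles(rootId, receivers);
  }

  public static void main(final String[] args) {
    final ScatterRoles roles =
        assignRoles("Task-0", List.of("Task-0", "Task-1", "Task-2"));
    System.out.println(roles);
  }
}
```

In the real system this decision is expressed through the operator Configuration rather than a plain data class, but the control-plane choice is the same: one sender, a set of receivers, and a Codec for the payload.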
One thing implicit in MPI operations is the ordering of processors based
on their ranks, which determines the order of operations. For example, if
we scatter an array of 10 elements across 10 processors, which processor
gets the 1st entry, and so on, is determined by rank.

In our case there are no ranks associated with tasks. Instead, by default
we use the lexicographic order of the task IDs. This ordering can also be
overridden in the send/receive/apply function calls.

Copyright © 2017 The Apache Software Foundation. All rights reserved.