API Reference

Communicators

chainermn.create_communicator(communicator_name='hierarchical', mpi_comm=None)

Create a ChainerMN communicator.

Different communicators provide different approaches to communication, so they have different performance characteristics. The default communicator, hierarchical, is generally expected to perform well across a variety of environments, so you need not change communicators in most cases. However, choosing a proper communicator may give better performance. The following communicators are available.

Name             CPU  GPU  NCCL               Recommended Use Cases
naive            OK   OK                      Testing on CPU mode
hierarchical          OK   Required           Each node has a single NIC or HCA
two_dimensional       OK   Required           Each node has multiple NICs or HCAs
single_node           OK   Required           Single node with multiple GPUs
flat                  OK                      N/A
pure_nccl             OK   Required (>= v2)   (see the note below)

pure_nccl is recommended when NCCL2 is available in the environment, but its support is still experimental.
Parameters:
  • communicator_name – The name of the communicator (naive, flat, hierarchical, two_dimensional, pure_nccl, or single_node)
  • mpi_comm – MPI4py communicator
Returns:

ChainerMN communicator
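
A minimal usage sketch, assuming the script is launched through MPI (e.g., with mpiexec) so that each process obtains its own rank:

import chainermn

# Create the default communicator; leaving mpi_comm as None uses the
# default MPI communicator provided by mpi4py.
comm = chainermn.create_communicator('hierarchical')

# Each MPI process can query its rank and the total number of processes.
print('rank {} of {}'.format(comm.rank, comm.size))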

Optimizers and Evaluators

chainermn.create_multi_node_optimizer(actual_optimizer, communicator)

Create a multi node optimizer from a Chainer optimizer.

Parameters:
  • actual_optimizer – Chainer optimizer (e.g., chainer.optimizers.Adam).
  • communicator – ChainerMN communicator.
Returns:

The multi node optimizer based on actual_optimizer.
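
A hedged sketch of the typical pattern (MyModel is a placeholder chainer.Chain; the surrounding training loop is omitted):

import chainer
import chainermn

comm = chainermn.create_communicator('hierarchical')
model = MyModel()  # MyModel is a placeholder chainer.Chain

# The wrapped optimizer exchanges gradients across processes through comm
# before each update, and is set up like a normal Chainer optimizer.
optimizer = chainermn.create_multi_node_optimizer(
    chainer.optimizers.Adam(), comm)
optimizer.setup(model)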

chainermn.create_multi_node_evaluator(actual_evaluator, communicator)

Create a multi node evaluator from a normal evaluator.

Parameters:
  • actual_evaluator – evaluator (e.g., chainer.training.extensions.Evaluator)
  • communicator – ChainerMN communicator
Returns:

The multi node evaluator based on actual_evaluator.
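
A sketch of the usual pattern inside a training script (val_iter, model, trainer, and comm are placeholders assumed to be defined elsewhere):

from chainer.training import extensions
import chainermn

# Wrap a normal evaluator so that evaluation results are shared
# across all processes in the communicator.
evaluator = extensions.Evaluator(val_iter, model)
evaluator = chainermn.create_multi_node_evaluator(evaluator, comm)
trainer.extend(evaluator)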

Dataset Utilities

chainermn.scatter_dataset(dataset, comm, root=0, shuffle=False, seed=None)

Scatter the given dataset to the workers in the communicator.

The dataset of worker 0 (i.e., the worker whose comm.rank is 0) is scattered to all workers. The datasets given on the other workers are ignored. The dataset is split into sub datasets of almost equal sizes and scattered to the workers. To create a sub dataset, chainer.datasets.SubDataset is used.

Parameters:
  • dataset – A dataset (e.g., list, numpy.ndarray, chainer.datasets.TupleDataset, ...).
  • comm – ChainerMN communicator or MPI4py communicator.
  • shuffle (bool) – If True, the order of examples is shuffled before being scattered.
  • root (int) – The root process of the scatter operation.
  • seed (int) – Seed of the random generator used for the permutation of indexes. If an integer convertible to a 32-bit unsigned integer is specified, it is guaranteed that each sample in the given dataset always belongs to a specific subset. If None, the permutation is changed randomly.
Returns:

Scattered dataset.
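
A sketch of the typical pattern, where only the root process loads the data (MNIST via chainer.datasets.get_mnist is used purely for illustration):

import chainer
import chainermn

comm = chainermn.create_communicator('hierarchical')

# Only the root process (rank 0) needs to load the full dataset;
# whatever the other workers pass is ignored.
if comm.rank == 0:
    train, test = chainer.datasets.get_mnist()
else:
    train, test = None, None

train = chainermn.scatter_dataset(train, comm, shuffle=True)
test = chainermn.scatter_dataset(test, comm, shuffle=True)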

chainermn.datasets.create_empty_dataset(dataset)

Creates an empty dataset for models with no inputs and outputs.

This function generates an empty dataset, i.e., __getitem__() only returns None. The generated dataset is compatible with the original one. Such datasets are used for models which neither take any inputs nor return any outputs; a typical example is a model whose forward() starts with chainermn.functions.recv() and ends with chainermn.functions.send().

Parameters:
  • dataset – Dataset to convert.
Returns:

Dataset consisting of only the patterns in the original one.

Return type:

TransformDataset
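
A brief sketch, assuming train is an ordinary dataset and the model's forward() begins with chainermn.functions.recv() and ends with chainermn.functions.send():

import chainer
import chainermn

# The actual examples are never read; the empty dataset only drives the
# iterator for the same number of iterations, since its items are None.
empty_train = chainermn.datasets.create_empty_dataset(train)
train_iter = chainer.iterators.SerialIterator(empty_train, batch_size=100)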

Functions

chainermn.functions.send(x, communicator, rank, tag=0)

Send elements to the target process.

This function returns a dummy variable only holding the computational graph. If backward() is invoked by this dummy variable, it will try to receive gradients from the target process and send them back to the parent nodes.

Parameters:
  • x (Variable) – Variable holding a matrix which you would like to send.
  • communicator (chainer.communicators.CommunicatorBase) – ChainerMN communicator.
  • rank (int) – Target process specifier.
  • tag (int) – Optional message ID (MPI feature).
Returns:

A dummy variable with no actual data, only holding the computational graph. Please refer to chainermn.functions.pseudo_connect for details.

Return type:

Variable
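
A minimal sketch (x, comm, and the target rank are assumptions):

# Keep the returned dummy variable in the computational graph.
phi = chainermn.functions.send(x, comm, rank=1)
# phi holds no data; invoking backward through it later receives the
# gradient of x from the rank=1 process.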

chainermn.functions.recv(communicator, rank, delegate_variable=None, tag=0, device=-1)

Receive elements from the target process.

This function returns the data received from the target process. If backward() is invoked, it will try to send gradients to the target process.

Note

If you define a non-connected computational graph on one process, you have to use delegate_variable to specify the output of the previous computational graph component. Otherwise backward() does not work well. Please refer to chainermn.functions.pseudo_connect for details.

Parameters:
  • communicator (chainer.communicators.CommunicatorBase) – ChainerMN communicator.
  • rank (int) – Target process specifier.
  • delegate_variable (chainer.Variable) – Pointer to the other non-connected component.
  • tag (int) – Optional message ID (MPI feature).
  • device (int) – Target device specifier.
Returns:

Data received from the target process. If backward() is invoked by this variable, it will send gradients to the target process.

Return type:

Variable
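
A hedged sketch of a point-to-point forward pass between two processes (some_link, another_link, and the choice of ranks 0 and 1 are placeholders):

import chainermn.functions

def forward(x, comm):
    if comm.rank == 0:
        h = some_link(x)  # placeholder computation on rank 0
        # send returns a delegate variable holding the graph on this side.
        return chainermn.functions.send(h, comm, rank=1)
    else:
        # recv builds the receiving side of the graph; a later backward()
        # on the loss sends the gradients back to rank 0.
        h = chainermn.functions.recv(comm, rank=0)
        return another_link(h)  # placeholder computation on rank 1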

chainermn.functions.pseudo_connect(delegate_variable, *actual_variables)

Connect independent connected graph components.

This function returns the received arguments directly, except for the first delegate_variable. In backward computation, it returns the received gradients directly, adding a zero grad corresponding to delegate_variable. The details of delegate_variable are described in the following notes.

Note

In the model-parallel framework, models on each process might have many non-connected components. Here we call a given graph non-connected when multiple inter-process communications are needed for its computation. Consider the following example:

class ConnectedGraph(chainermn.MultiNodeChainList):

    def __init__(self, comm):
        super(ConnectedGraph, self).__init__(comm)
        self.add_link(ConnectedGraphSub(), rank_in=3, rank_out=1)

This model receives inputs from the rank=3 process and sends its outputs to the rank=1 process. The entire graph can be seen as one connected component ConnectedGraphSub. Please refer to the documentation of MultiNodeChainList for details.

On the other hand, see the next example:

class NonConnectedGraph(chainermn.MultiNodeChainList):

    def __init__(self, comm):
        super(NonConnectedGraph, self).__init__(comm)
        self.add_link(NonConnectedGraphSubA(), rank_in=3, rank_out=1)
        self.add_link(NonConnectedGraphSubB(), rank_in=1, rank_out=2)

This model consists of two components: first, NonConnectedGraphSubA receives inputs from the rank=3 process and sends its outputs to the rank=1 process; then NonConnectedGraphSubB receives inputs from the rank=1 process and sends its outputs to the rank=2 process. Since multiple inter-process communications are invoked between NonConnectedGraphSubA and NonConnectedGraphSubB, the graph is regarded as non-connected.

Such non-connected models can be problematic in backward computation. Chainer traces back the computational graph from the output variable; however, a naive implementation of chainermn.functions.recv does not take any inputs but instead receives data by MPI_Recv, so the backward path vanishes there.

To prevent this, dummy variables, which we call delegate_variable, are used. In principle, chainermn.functions.send does not need to return any outputs because it sends data to the other process by MPI_Send. However, in our implementation chainermn.functions.send returns a dummy / empty variable, called delegate_variable. This variable does not hold any data; it is used only to retain the backward computation path. We can guarantee the backward computation simply by passing delegate_variable to the next chainermn.functions.recv (chainermn.functions.recv has an optional argument to receive delegate_variable), as in the sketch below.
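
For instance, a sketch of passing the delegate variable of a preceding send into the next recv on the same process (h, comm, and the ranks are assumptions):

phi = chainermn.functions.send(h, comm, rank=1)
# Passing phi keeps the earlier graph component reachable from the variable
# returned by recv, so a single backward() call covers both components.
x = chainermn.functions.recv(comm, rank=1, delegate_variable=phi)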

Note

In some cases the intermediate graph component returns model outputs. See the next example:

class NonConnectedGraph2(chainermn.MultiNodeChainList):

    def __init__(self, comm):
        super(NonConnectedGraph2, self).__init__(comm)
        self.add_link(NonConnectedGraphSubA(), rank_in=1, rank_out=None)
        self.add_link(NonConnectedGraphSubB(), rank_in=None, rank_out=1)

This model first receives inputs from the rank=1 process and produces model outputs (specified by rank_out=None) in NonConnectedGraphSubA. Then, using the model inputs (specified by rank_in=None), NonConnectedGraphSubB sends its outputs to the rank=1 process. Since MultiNodeChainList.__call__ returns the outputs of the last component (in this case, the outputs of NonConnectedGraphSubB), a naive implementation cannot return the value of NonConnectedGraphSubA as the model output. In this case, pseudo_connect should be used.

pseudo_connect takes two arguments. The first one, delegate_variable, is what we explained in the note above; in this case, the returned value of NonConnectedGraphSubB corresponds to delegate_variable. The second one, actual_variables, is “what we want delegate_variable to imitate”. In NonConnectedGraph2, we obtain the returned value of NonConnectedGraphSubB as the model output, but what we actually want is the returned value of NonConnectedGraphSubA. At the same time, we want to trace back this resulting variable in backward computation. Using pseudo_connect, we can make a variable whose data is the same as the returned value of NonConnectedGraphSubA, and which traces back NonConnectedGraphSubB first.

pseudo_connect should also be used in some pathological cases, for example, when multiple chainermn.functions.send calls occur sequentially.

Parameters:
  • delegate_variable (chainer.Variable) – Pointer to the previous non-connected graph component.
  • actual_variables (tuple of chainer.Variable) – Actual values which delegate_variable imitates.
Returns:

A variable with the given values combined with the delegating variable.

Return type:

Variable
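
A sketch of the NonConnectedGraph2 case described in the notes above (phi_b and y_a are illustrative names for the delegate variable returned by NonConnectedGraphSubB and the actual output of NonConnectedGraphSubA, respectively):

# The result carries the data of y_a (the desired model output), while its
# backward pass first traces the graph behind phi_b and then that of y_a.
y = chainermn.functions.pseudo_connect(phi_b, y_a)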