Remote actors in Theron
Theron 5 supports distributed computing: actors in separate applications, running on remote hosts connected by a network, can exchange messages. Messages are sent to remote actors using the same syntax as to local ones.
This page gives an overview of remote actor support in Theron 5. See the Client/Server tutorial for a worked example.
Remote actor support is currently implemented using Crossroads.io (libxs), a portable message-based network transport library. Because this introduces a dependency on a third-party library, support for remote actors is not enabled by default and must be explicitly enabled in the build.
To enable remote actor support, install and build a local copy of Crossroads.io and then enable its use in the Theron build. For more details see the Getting Started guide, which includes help on building with Crossroads.io under GCC and Visual Studio.
Theron's networking support is designed to be implementation-agnostic, relying only on lowest-common-denominator functionality. Use of the network API is hidden behind a small number of well-defined interfaces, with the intention that these could in future be implemented with other network libraries.
At the core of the distributed computing support is a new Theron::EndPoint class, which represents a network location, or endpoint, within a network of connected hosts. Although EndPoints are essential in distributed applications, they are optional in non-distributed ones, so existing non-distributed code continues to work unchanged.
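As a minimal sketch of what this looks like in code (the name and address here are invented example values; check the exact constructor signature against the Theron 5 reference):

```cpp
#include <Theron/Theron.h>

int main()
{
    // Construct an EndPoint representing this application's location on
    // the network. "client" is a user-chosen, globally unique name; the
    // address is an example local address and port to bind.
    Theron::EndPoint endPoint("client", "tcp://192.168.10.104:5555");
    return 0;
}
```

Note that this requires a Theron build with Crossroads.io support enabled, as described above.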
Frameworks and Receivers can now be tied to an EndPoint on construction. Tying a Framework to an EndPoint effectively ties all the actors hosted within that Framework to the same EndPoint. Actors and Receivers tied to an EndPoint can send messages to, and receive messages from, remote entities on the network.
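For example, a sketch assuming the Theron 5 constructor overloads that accept an EndPoint (all names and the address are illustrative):

```cpp
#include <Theron/Theron.h>

int main()
{
    Theron::EndPoint endPoint("client", "tcp://192.168.10.104:5555");

    // A Framework tied to the EndPoint; all actors hosted within this
    // Framework are effectively tied to the same EndPoint.
    Theron::Framework framework(endPoint);

    // A Receiver tied to the same EndPoint, with a user-defined name.
    Theron::Receiver receiver(endPoint, "client_receiver");
    return 0;
}
```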
EndPoints are bound on construction to a local network address and port. Once constructed, they can be connected to EndPoints in other, remote applications. Those remote EndPoints are identified by the network addresses and ports to which they are bound.
The EndPoint within each participating application must be explicitly connected to all other EndPoints. Users are expected to have their own schemes for identifying remote hosts and obtaining their addresses; for example, the addresses may be known in advance.
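A sketch of connecting a local EndPoint to a remote one (both addresses are invented example values; the remote address is assumed to be known in advance):

```cpp
#include <Theron/Theron.h>

int main()
{
    // Bind the local EndPoint to its own address and port.
    Theron::EndPoint endPoint("client", "tcp://192.168.10.104:5555");

    // Explicitly connect it to a remote EndPoint, identified by the
    // address and port to which that remote EndPoint is bound.
    endPoint.Connect("tcp://192.168.10.105:5556");
    return 0;
}
```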
Actors and Receivers can now be assigned user-defined names on construction. These names serve as their addresses. Assigning a specific name to an actor allows remote actors to send it a message without knowing where it is located. For that reason the user-defined names are expected to be globally unique.
Naming Actors and Receivers is optional: if no name is given, they are assigned automatically generated names on construction. These generated names are prefixed with the names of the EndPoint (and, for Actors, the Framework) where they are hosted. As long as EndPoint names are kept globally unique, the automatically generated Actor and Receiver names will be unique too.
The advantage of user-defined names is that they can be known a priori, by common agreement, without needing to be discovered at runtime.
Typically, at least one actor must be given a user-defined name, so that actors on remote hosts can initiate communication. There is no way to discover the name of an automatically named actor in a remote process except by receiving a message from it.
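Putting the pieces together, here is a sketch of a server-side actor given the agreed name "server" so that remote clients can address it without runtime discovery. All names and addresses are illustrative, and the message type is a plain int for simplicity:

```cpp
#include <Theron/Theron.h>

// A trivial actor given a user-defined name so that remote senders can
// address it by common agreement, without discovering it at runtime.
class Server : public Theron::Actor
{
public:
    explicit Server(Theron::Framework &framework) :
        Theron::Actor(framework, "server")
    {
        RegisterHandler(this, &Server::Handle);
    }

private:
    void Handle(const int &message, const Theron::Address from)
    {
        Send(message, from);    // Echo the message back to the sender.
    }
};

int main()
{
    // Bind this application's EndPoint and host the named actor.
    Theron::EndPoint endPoint("server_endpoint", "tcp://192.168.10.105:5556");
    Theron::Framework framework(endPoint, "server_framework");
    Server server(framework);

    // A remote client whose EndPoint has connected to this one can now
    // address the actor as Theron::Address("server").
    return 0;
}
```

On the client side, a message would then be sent with something like framework.Send(42, receiver.GetAddress(), Theron::Address("server")), using the name agreed a priori.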
See Client/Server in the tutorial for a worked example.