Whether built for evaluation or to host production applications, a small Deis cluster (three to five nodes) can reasonably accept the platform's default behavior, wherein the Control Plane, Data Plane, and Router Mesh are not isolated from one another. (See Architecture.) This means Control Plane components such as the Controller or Database are eligible to run on any node, as are the Router Mesh and Data Plane components such as Logspout, Publisher, and deployed applications.
In larger clusters, however, nodes are more easily thought of as a commodity. Operators may scale clusters out to meet demand or in to conserve resources. In such cases, it is beneficial to isolate the Control Plane, which has no significant need to scale (and optionally, the Router Mesh), to a small, fixed set of nodes that are exempt from such scaling events. This eliminates the possibility that Control Plane components running on a decommissioned node will experience downtime as they are rescheduled. It also reserves the resources of a large (and possibly dynamic) pool of nodes for the workloads most likely to scale: applications.
The key to isolating the Control Plane, Data Plane, and Router Mesh is Fleet metadata. Although Deis supports alternate schedulers, Deis components themselves are all scheduled via Fleet.
Deis configures the Fleet daemon executing on each node at the time of
provisioning via cloud-config. Within that configuration, it is possible to tag
nodes with metadata in the form of key/value pairs to arbitrarily describe
attributes of the node. For instance, an operator may tag a node with
ssd=true to indicate that the node's volumes use solid-state disks.
```
#cloud-config
---
coreos:
  fleet:
    metadata: ssd=true
# ...
```
When scheduling a unit of work via Fleet, it is also possible to annotate that unit with metadata that a node must possess in order to be eligible to host it. Continuing the previous example, to restrict a unit of work to only those nodes equipped with SSDs, the unit may be annotated as follows:
```
# ...
[X-Fleet]
MachineMetadata="ssd=true"
```
Deis takes advantage of this very mechanism to establish which nodes are eligible to host each of the Control Plane, Data Plane, and Router Mesh.
To configure a Fleet node as eligible to host Control Plane components, the following cloud-config may be used:
```
#cloud-config
---
coreos:
  fleet:
    metadata: controlPlane=true
```
Similarly, the metadata dataPlane=true or routerMesh=true may be used to establish eligibility to host components of the Data Plane (including applications) or the Router Mesh, respectively.
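For example, following the same pattern, a node intended to host only deployed applications and other Data Plane components might be configured with the Data Plane metadata alone:

```
#cloud-config
---
coreos:
  fleet:
    metadata: dataPlane=true
```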
It is also possible to configure nodes as eligible to host two or even all three of the Control Plane, Data Plane, and Router Mesh. In fact, this is the default behavior described by Deis’ included cloud-config.
```
#cloud-config
---
coreos:
  fleet:
    metadata: controlPlane=true,dataPlane=true,routerMesh=true
```
Note that isolating the planes as described here requires subsets of a cluster's nodes to be configured differently from one another (with different metadata). Deis provisioning scripts do not currently account for this, so managing a separate cloud-config for each subset of nodes in the cluster is left as an exercise for the advanced operator.
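Once nodes are up, the metadata each node advertises can be confirmed with fleetctl. The output below is purely illustrative; machine IDs, addresses, and metadata will vary by cluster:

```
$ fleetctl list-machines
MACHINE      IP           METADATA
148a18ff...  10.0.0.10    controlPlane=true
9ab6fd91...  10.0.0.11    dataPlane=true,routerMesh=true
```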
To complement the cloud-config described above, Deis 1.10.0 and later are capable of seamlessly “decorating” the Fleet units for each Deis platform component with the metadata that describes where each unit may be hosted.
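For illustration only, a decorated Control Plane unit such as the Controller's would carry an [X-Fleet] requirement along these lines (the actual unit contents are generated and managed by deisctl):

```
# Illustrative excerpt of a decorated Control Plane unit
[X-Fleet]
MachineMetadata="controlPlane=true"
```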
For backwards compatibility with Deis clusters provisioned using versions of Deis older than 1.10.0, decorating the platform's units with metadata is opt-in. Nodes in older clusters are guaranteed to lack the metadata indicating which components they are eligible to host, so decorated units would be ineligible to run anywhere within such a cluster.
To opt in, use the following:
```
$ deisctl config platform set enablePlacementOptions=true
```