The University of Quality now uses orcharhino (see ‘How does orcharhino arrive at the customer?’) for the entire campus. Originally a project purely for administering central services, it now involves almost all institutes, concentrating what used to be a collection of tools and approaches in a single tool. In the course of this process, it has become apparent that the individual institutions involved (department chairs, administrative units) need to be able to continue to map the original separation between institutes and departments.
orcharhino primarily provides two mechanisms to map this separation: organizations and locations. The organizations are, for example, the university (covering the central services and the department infrastructure), the administration, and the hospital. Locations sit one level below. The administration makes no further distinctions here; the hospital has opted for a division into infrastructure and teaching operations. For the organization university, there are locations with the names of the individual departments, data centers, clusters, and media. Every object in orcharhino is assigned an organization and a location, which makes it possible both to filter lists and to assign permissions.
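The effect of tagging every object with an organization and a location can be sketched in a few lines of Python. This is only an illustration of the filtering idea, not orcharhino's actual data model; the host, organization, and location names are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical model: every object carries an organization and a location.
@dataclass
class Host:
    name: str
    organization: str
    location: str

hosts = [
    Host("web01", "University", "Data Center"),
    Host("node17", "University", "Cluster"),
    Host("his01", "Hospital", "Infrastructure"),
]

def filter_hosts(hosts, organization=None, location=None):
    """Return only the hosts matching the given organization and/or location."""
    return [
        h for h in hosts
        if (organization is None or h.organization == organization)
        and (location is None or h.location == location)
    ]

print([h.name for h in filter_hosts(hosts, organization="University")])
# prints ['web01', 'node17']
```

The same predicate that filters a list view can back a permission check: a department administrator's scope is simply a fixed organization/location pair applied to every query.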
For user authentication, orcharhino is connected to the central LDAP server of the university and also to that of the hospital. When a user logs on to the web frontend, the system assigns them to a user group according to their group membership in LDAP. Department administrators, for example, only see their own computers and can only manage those.
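The group assignment amounts to a lookup from LDAP group DNs to user groups on the orcharhino side. The following sketch shows the principle only; the group DNs and user-group names are assumptions for illustration, not the university's actual directory layout.

```python
# Illustrative mapping from LDAP group DNs to orcharhino user groups.
# All DNs and group names below are invented examples.
GROUP_MAP = {
    "cn=physics-admins,ou=groups,dc=uni,dc=example": "Physics Department Admins",
    "cn=it-center,ou=groups,dc=uni,dc=example": "Central IT",
}

def user_groups(ldap_member_of):
    """Map a user's LDAP group memberships to known user groups,
    silently ignoring groups that have no counterpart."""
    return [GROUP_MAP[dn] for dn in ldap_member_of if dn in GROUP_MAP]

print(user_groups(["cn=physics-admins,ou=groups,dc=uni,dc=example"]))
# prints ['Physics Department Admins']
```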
The managed objects are primarily the hosts, because this is where administrators do most of their work, but other objects such as subnets can also have their access permissions restricted. Admins can then register newly created hosts only in certain IP ranges.
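Such an IP-range restriction boils down to a boundary check, which Python's standard `ipaddress` module expresses directly. The range limits here are documentation addresses chosen for the example, not the university's real allocations.

```python
import ipaddress

# Illustrative permitted range for one admin unit (example addresses).
ALLOWED_FROM = ipaddress.ip_address("192.0.2.100")
ALLOWED_TO = ipaddress.ip_address("192.0.2.199")

def may_register(ip_str):
    """True if a new host's address falls inside the permitted range."""
    ip = ipaddress.ip_address(ip_str)
    return ALLOWED_FROM <= ip <= ALLOWED_TO

print(may_register("192.0.2.150"))  # prints True
print(may_register("192.0.2.50"))   # prints False
```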
Each administration unit has a class C network available. Almost all computers receive public IPs; only the computing cluster is located in a private network, and there is a separate DMZ for certain servers. The cluster network is highly isolated, similar to a DMZ. Only two nodes are reachable from the university network and can establish connections to the outside: a login node for users and the orcharhino proxy. The latter is a separate instance and belongs to the orcharhino infrastructure. It does not have its own web GUI but serves as a link to networks to which orcharhino has no direct access, and it mirrors repositories where this is advisable to minimize bandwidth usage. It also receives Puppet reports and serves as a proxy for Ansible calls. The services running on the orcharhino proxy resemble those of orcharhino itself, but in a subordinate role: Pulp synchronizes selected repositories from orcharhino (in this case Debian) and supplies the nodes with packages, and the proxy acts as DHCP and name server and provides, via PXE and TFTP, the data needed to install new nodes.
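The DHCP/PXE/TFTP part of such a proxy can be illustrated with an ISC dhcpd configuration fragment. The subnet, addresses, and boot-loader file name below are assumptions for illustration, not the actual cluster setup:

```
# /etc/dhcp/dhcpd.conf on the orcharhino proxy (illustrative values)
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;
  option domain-name-servers 10.0.0.2;   # the proxy's own name server
  next-server 10.0.0.2;                  # TFTP server for PXE boot
  filename "pxelinux.0";                 # boot loader fetched via TFTP
}
```

A booting node thus receives its address, its name server, and the location of the boot loader in a single DHCP answer, all pointing back at the proxy.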
Besides the cluster, there are currently eight subnets attached to orcharhino. These include a central network hosting the university's server services (intranet, mail server, etc.) and several institute networks where institutes operate their own central services or small local computing clusters, as in physics or chemistry.
In all networks, an orcharhino proxy runs as a link between the hosts and orcharhino. Since the network load has been manageable so far, almost all hosts use orcharhino directly as their package source, but they have to use their respective orcharhino proxy as HTTP proxy because there is no direct route: orcharhino itself sits in an internal network. Workstations are so far excluded from central administration via orcharhino, although this is being considered at least for the Linux pools.
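On a Debian host, routing package traffic through such a proxy can be done with a small APT configuration fragment. The proxy host name and port here are assumptions for illustration:

```
# /etc/apt/apt.conf.d/80proxy (illustrative host name and port)
Acquire::http::Proxy "http://orcharhino-proxy.example.org:8080/";
```

With this in place, `apt` fetches all HTTP repository data via the proxy even though the repository URLs still point at orcharhino itself.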
Value and Vision operates a comparable infrastructure but has predominantly private networks that customers access via VPN. The administrators automatically create a separate organization for each customer. Hosts authenticate themselves to orcharhino via SSL certificates and therefore only see the repositories permitted to them. Some customers store self-developed software there and can thus ensure that no competitor can easily access it.
Finesse-Bank has a central network in the main branch and subnets in the remote locations, where orcharhino proxies mirror orcharhino's repositories. Operation of orcharhino lies entirely with the infrastructure team, so no fine-grained rights management at the user level is necessary. The bank does, however, impose stricter guidelines on the network itself, which is actually outside the scope of what orcharhino manages. The network hardware can nevertheless be controlled via Ansible, which in turn can be managed through orcharhino.
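A minimal Ansible playbook of the kind that could be run against such network hardware might look as follows. The inventory group, the assumption of Cisco IOS devices, and the NTP server address are all illustrative; the actual bank setup is not documented here.

```yaml
# Illustrative playbook; group name, module choice, and address are assumptions.
- name: Enforce a baseline on branch network switches
  hosts: branch_switches
  gather_facts: false
  tasks:
    - name: Ensure NTP points at the central server
      cisco.ios.ios_config:
        lines:
          - ntp server 10.1.0.10
```

Because orcharhino can trigger Ansible runs, such a playbook brings the switches under the same operational umbrella as the hosts, even though they are not managed objects in orcharhino itself.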