Deploying hosts is one of the core functions of orcharhino, along with configuration management and lifecycle management. orcharhino offers various ways to supply new hosts with information during installation, eliminating the need for human interaction.
Value and Vision frequently needs to deploy new hosts, often in large numbers, in response to customer requests. Usually these are virtual machines, but some customers prefer bare-metal servers. In all cases, Value and Vision’s employees prepare the new host in the orcharhino web interface. For virtual machines, for example, they define the performance characteristics (CPU, RAM, network, storage). They also determine which network the host should be placed in so that customers can reach it. Further specifications include the operating system and the configuration management in use, including roles/classes/states. Value and Vision must be prepared for almost every eventuality, which is why they keep data and clients for almost all operating systems that orcharhino can manage:
- AlmaLinux 8
- Amazon Linux 2
- CentOS Linux 7 and CentOS Stream 8
- Debian 9, 10 and 11
- Oracle Linux 7 and 8
- Red Hat Enterprise Linux (RHEL) 7 and 8
- Rocky Linux 8
- SUSE Linux Enterprise Server 12 SP3, 12 SP4, 12 SP5, 15, 15 SP1, 15 SP2 and 15 SP3
- Ubuntu 18.04 and 20.04
Linux hosts require a kernel and an initial RAM disk (initrd), which they obtain by PXE booting from a boot image. Windows hosts can boot from a special minimal image. After that, various mechanisms install the operating system automatically. On Linux, the established mechanisms are Kickstart/Anaconda (Red Hat family), AutoYaST (SUSE), and Preseed (Debian family). In all cases, a program retrieves the basic configuration via HTTP. Templates written in advance by orcharhino admins then generate the Kickstart, AutoYaST, or Preseed definition from the configuration parameters. The rendered template contains, for example, the network configuration of the respective host, password hashes, a basic package selection, and instructions on how to prepare the configuration management.
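To make this concrete, a rendered Kickstart definition might look roughly like the following sketch. The hostname, addresses, package list, and password hash are illustrative placeholders, not output of an actual orcharhino template:

```
# Illustrative Kickstart fragment rendered from a provisioning template
# (all values are examples)
network --bootproto=static --ip=192.0.2.10 --netmask=255.255.255.0 \
        --gateway=192.0.2.1 --hostname=web01.example.com
rootpw --iscrypted $6$examplesalt$examplehash

%packages
@core
openssh-server
%end

%post
# prepare the configuration management, e.g. install the agent
# and enroll the host with its configuration master
%end
```

AutoYaST and Preseed definitions carry the same kind of information, just in XML or debconf syntax respectively.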
orcharhino can start virtual machines directly by addressing the hypervisor API. Most bare-metal servers are started by orcharhino via BMC (iLO/IPMI/iDRAC) after their network interfaces have been connected to the switch in the correct VLAN. At Value and Vision, all hosts (physical and virtual) are connected to at least two networks: a production network and an installation network. In the beginning, only the installation network matters. orcharhino or its orcharhino proxies act as DHCP servers and deliver all necessary files for PXE booting via TFTP.
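For orientation, the DHCP side of a PXE boot boils down to a fragment like the following, shown here in ISC DHCP syntax with illustrative addresses; orcharhino manages this configuration itself, so this is only a sketch of what happens under the hood:

```
# Illustrative dhcpd fragment for the installation network (example addresses)
subnet 192.168.100.0 netmask 255.255.255.0 {
  range 192.168.100.50 192.168.100.200;
  next-server 192.168.100.2;   # TFTP server: the orcharhino host or its proxy
  filename "pxelinux.0";       # boot loader the host fetches via TFTP
}
```

The new host broadcasts a DHCP request on the installation network, receives an address plus the `next-server` and `filename` hints, fetches the boot loader, kernel, and initrd via TFTP, and then continues the installation over HTTP as described above.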
After a few minutes, the new hosts are basically set up and reboot. The central configuration management then takes further steps and installs additional software. For example, some customers want web servers. An Ansible role written for this purpose installs the necessary packages, creates configuration files, and sets up firewall rules. From now on, customers can use their system.
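An Ansible role like the web-server role mentioned above could contain tasks along these lines. This is a minimal sketch assuming a Red Hat-family host; the package name, template file, and firewall services are assumptions, not Value and Vision's actual role:

```
# roles/webserver/tasks/main.yml (illustrative sketch)
- name: Install the web server package
  ansible.builtin.package:
    name: httpd
    state: present

- name: Deploy the virtual host configuration
  ansible.builtin.template:
    src: vhost.conf.j2
    dest: /etc/httpd/conf.d/vhost.conf
  notify: restart httpd

- name: Open the firewall for web traffic
  ansible.posix.firewalld:
    service: "{{ item }}"
    permanent: true
    state: enabled
  loop: [http, https]
```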
At Finesse Bank, the process is similar but more manageable, because the only intended operating system is SUSE Linux Enterprise Server, and there are fewer application scenarios. In addition, a test environment runs parallel to the production environment, in which admins simulate adjustments to the infrastructure. Only when everything runs smoothly there do the admins promote the changes to the next level.
The University of Quality, however, has an extra trick up its sleeve. Some applications require Windows servers to run. These, too, are installed and managed via orcharhino. For this purpose, there is a golden image, which is stored on the virtualization hosts and can be used directly. In addition, there is a customizable network installation that uses templates, similar to kickstarting or preseeding on Linux.
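For the template-based Windows network installation, the rendered result is an unattended-setup answer file. The fragment below is only an illustrative sketch of that file format with an example computer name, not the actual template:

```
<!-- Illustrative fragment of a Windows unattended-setup answer file -->
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral" versionScope="nonSxS">
      <ComputerName>WIN-APP01</ComputerName>
    </component>
  </settings>
</unattend>
```

As with Kickstart or Preseed, the template fills in host-specific values such as the computer name and network settings at render time.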