We got our PXE systems up and running on the Razor Microkernel. Next step is to image them!
Imaging with Razor is a two-step process:
- Define some tags to classify systems.
- Define some policies to image systems on the basis of their tags.
Step 1: Tags
Razor "tags" are essentially a rule-based system for classifying machines. You set up rules ahead of time, and then Razor automatically tags systems as they are discovered. For example, this rule says that any system with <2G RAM should get the 'small' tag:
[root@razor log]# razor create-tag --name small --rule '["<", ["num", ["fact", "memorysize_mb"]], 2048]'
From http://localhost:8150/api/collections/tags/small:
name: small
rule: ["<", ["num", ["fact", "memorysize_mb"]], 2048]
nodes: 0
policies: 0
command: http://localhost:8150/api/collections/commands/1
The next time the 1G VM checks in, it will have the 'small' tag applied:
[root@razor log]# razor nodes
From http://localhost:8150/api/collections/nodes:
+-------+-------------------+--------+--------+----------------+
| name | dhcp_mac | tags | policy | metadata count |
+-------+-------------------+--------+--------+----------------+
| node1 | 08:00:27:0c:fd:f4 | small | --- | 0 |
+-------+-------------------+--------+--------+----------------+
| node2 | 08:00:27:43:84:1d | (none) | --- | 0 |
+-------+-------------------+--------+--------+----------------+
...
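Conceptually, a tag rule is a small prefix-notation expression tree that Razor evaluates against each node's facts. Here's a minimal Python sketch of how a rule like the one above might be evaluated; the operator handling is illustrative and covers only a fraction of Razor's actual rule grammar:

```python
# Minimal, illustrative evaluator for Razor-style tag rules.
# Rules are prefix-notation lists: [op, arg1, arg2, ...]; anything that
# isn't a list is treated as a literal. This models only a handful of
# operators -- Razor's real rule grammar is considerably richer.

def evaluate(rule, facts):
    if not isinstance(rule, list):
        return rule  # literal number or string
    op, *args = rule
    vals = [evaluate(a, facts) for a in args]
    if op == "fact":
        return facts[vals[0]]       # look up a node fact by name
    if op == "num":
        return float(vals[0])       # facts arrive as strings; coerce
    if op == "<":
        return vals[0] < vals[1]
    if op == ">":
        return vals[0] > vals[1]
    if op == "in":
        return vals[0] in vals[1:]  # membership in a fixed list
    raise ValueError(f"unknown operator: {op}")

small_rule = ["<", ["num", ["fact", "memorysize_mb"]], 2048]
print(evaluate(small_rule, {"memorysize_mb": "1024"}))  # True  -> tagged 'small'
print(evaluate(small_rule, {"memorysize_mb": "4096"}))  # False
```

The nice property of this design is that rules are just data, so they can be stored, listed, and re-evaluated every time a node checks in with fresh facts.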
Similarly, we can define a rule that tags all systems with more than 2G of RAM as 'large' (note that a node with exactly 2048 MB would match neither rule):
[root@razor log]# razor create-tag --name large --rule '[">", ["num", ["fact", "memorysize_mb"]], 2048]'
From http://localhost:8150/api/collections/tags/large:
name: large
rule: [">", ["num", ["fact", "memorysize_mb"]], 2048]
nodes: 0
policies: 0
command: http://localhost:8150/api/collections/commands/2
and then, the next time the 4G node checks in...
[root@razor log]# razor nodes
From http://localhost:8150/api/collections/nodes:
+-------+-------------------+-------+--------+----------------+
| name | dhcp_mac | tags | policy | metadata count |
+-------+-------------------+-------+--------+----------------+
| node1 | 08:00:27:0c:fd:f4 | small | --- | 0 |
+-------+-------------------+-------+--------+----------------+
| node2 | 08:00:27:43:84:1d | large | --- | 0 |
+-------+-------------------+-------+--------+----------------+
...
Pretty neat, huh?
One minor shortcoming of Razor is that you can't arbitrarily tag a set of servers; tag application is entirely rule-based. This adds a little bit of complication to the common use case of "I got this bunch of servers I just brought online and I know exactly what I want to use them for". You can fake that functionality using the 'in' operator and a list of MAC addresses:
razor create-tag --name my-set-of-servers \
--rule '["in", ["fact", "macaddress"], "de:ea:db:ee:f0:00", "de:ea:db:ee:f0:01"]'
This seems to be a popular use case, as later editions of Razor introduced the has_macaddress and has_macaddress_like operators to support this type of rule.
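Under the hood that 'in' rule is just a membership test: the node's macaddress fact is checked against the fixed list baked into the rule. A standalone sketch (the rule structure modeled here is simplified):

```python
# Simplified model of the 'in' tag rule above: does the node's macaddress
# fact appear in the list of MACs baked into the rule?
rule = ["in", ["fact", "macaddress"], "de:ea:db:ee:f0:00", "de:ea:db:ee:f0:01"]

def rule_matches(rule, facts):
    # Unpack ["in", ["fact", <name>], <mac>, <mac>, ...]
    _, (_, fact_name), *allowed = rule
    return facts[fact_name] in allowed

print(rule_matches(rule, {"macaddress": "de:ea:db:ee:f0:01"}))  # True
print(rule_matches(rule, {"macaddress": "08:00:27:0c:fd:f4"}))  # False
```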
Step 2: Policies
The second half of setting up Razor for system imaging is to define policies, which tell Razor what to install and how to install it. Policies are triggered via tag matching, which automatically applies the appropriate policy to machines with a particular set of tags. For purposes of this demonstration let's assume that we want to install Ubuntu on small nodes and CentOS on large nodes.
Before we can create a policy we need to identify a few things:
- What collection of bits will be used to image the systems?
- What are the mechanics for actually laying the bits down?
- How will the handoff from Razor to a configuration management system be handled?
Bullets one and two are handled via the creation of a repository. The basic form of the command is
razor create-repo --name <name> --task <task> [--iso-url <url> | --url <url>]
One choice which needs to be made at this point is whether the Razor server is going to serve up the bits directly or simply point systems to another location. If you select --iso-url the Razor server will download the ISO and unpack it; make sure you have ample free disk space. --url will cause the Razor server to point to the specified address rather than serving up the content directly.
The other thing you need to do is specify a task, which provides Razor with the instructions on how to bootstrap the automated installation process. Task creation is somewhat involved and not for the faint-of-heart but, thankfully, Razor comes with a bunch of pre-defined tasks for common operating systems:
[root@razor ~]# razor tasks
From http://localhost:8150/api/collections/tasks:
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
| name | description | base | boot_seq |
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
| centos | CentOS Generic Installer | redhat | 1: boot_install, default: boot_local |
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
...
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
| windows/8pro | Microsoft Windows 8 Professional | windows | 1: boot_wim, default: boot_local |
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
...
So let's set up repos for CentOS 7 and Ubuntu Xenial, since there are pre-defined tasks for both of those:
[root@razor ~]# razor create-repo --name centos-7 --task centos/7 --iso-url http://centos.s.uw.edu/centos/7.6.1810/isos/x86_64/CentOS-7-x86_64-DVD-1810.iso
From http://localhost:8150/api/collections/repos/centos-7:
name: centos-7
iso_url: http://centos.s.uw.edu/centos/7.6.1810/isos/x86_64/CentOS-7-x86_64-DVD-1810.iso
url: ---
task: centos/7
command: http://localhost:8150/api/collections/commands/6
[root@razor ~]# razor create-repo --name ubuntu-xenial --task ubuntu/xenial --iso-url http://releases.ubuntu.com/16.04/ubuntu-16.04.6-server-amd64.iso
From http://localhost:8150/api/collections/repos/ubuntu-xenial:
name: ubuntu-xenial
iso_url: http://releases.ubuntu.com/16.04/ubuntu-16.04.6-server-amd64.iso
url: ---
task: ubuntu/xenial
command: http://localhost:8150/api/collections/commands/11
We now have two repos, centos-7 and ubuntu-xenial, that can be referenced in policies. The Razor server will download and unpack the associated ISOs in the background.
The other item we have to consider for a policy is the Razor → configuration management system handoff. Razor handles this by means of brokers and supports several popular configuration management systems (namely Puppet and Chef) out of the box (see razor create-broker --help for a complete listing). Additionally, if you want to integrate with a different system like Salt or Ansible, Razor allows you to write your own brokers.
I'm going to keep things simple and just create a no-op broker:
[root@razor ~]# razor create-broker --name=noop --broker-type=noop
From http://localhost:8150/api/collections/brokers/noop:
name: noop
broker_type: noop
configuration: {}
policies: 0
command: http://localhost:8150/api/collections/commands/3
This type of broker doesn't attempt any sort of handoff; it's basically just a placeholder.
Alright, we've got repositories and a broker, let's create the policies:
razor create-policy --name small-nodes --repo ubuntu-xenial --broker noop --tag small --hostname 'ubuntu${id}.localdomain' --root-password not_secure
From http://localhost:8150/api/collections/policies/small-nodes:
name: small-nodes
repo: ubuntu-xenial
task: ubuntu/xenial
broker: noop
enabled: true
max_count:
tags: small
nodes: 0
command: http://localhost:8150/api/collections/commands/12
The policy 'small-nodes' will install Ubuntu Xenial on any node with the 'small' tag. The host will be named according to its ID and have the specified root password. Doing it again for CentOS:
[root@razor etc]# razor create-policy --name large-nodes --repo centos-7 --broker noop --tag large --hostname 'centos${id}.localdomain' --root-password not_secure
From http://localhost:8150/api/collections/policies/large-nodes:
name: large-nodes
repo: centos-7
task: centos/7
broker: noop
enabled: true
max_count:
tags: large
nodes: 0
command: http://localhost:8150/api/collections/commands/13
Same deal for the most part: Nodes with the tag 'large' will get CentOS 7 and the associated hostname.
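The ${id} in the hostname patterns above is expanded per node. Conceptually this is plain template substitution; a quick sketch (this helper is hypothetical, for illustration only, not Razor's actual implementation):

```python
from string import Template

def expand_hostname(pattern, node_id):
    # Substitute ${id} in a Razor-style hostname pattern.
    # Hypothetical helper for illustration; not Razor's actual code.
    return Template(pattern).substitute(id=node_id)

print(expand_hostname("ubuntu${id}.localdomain", 1))  # ubuntu1.localdomain
print(expand_hostname("centos${id}.localdomain", 2))  # centos2.localdomain
```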
No additional steps are needed to kick off imaging. The next time either host checks in it will have the policy applied and will start the appropriate imaging process. For example:
[root@razor ~]# razor nodes
From http://localhost:8150/api/collections/nodes:
+-------+-------------------+-------+-------------+----------------+
| name | dhcp_mac | tags | policy | metadata count |
+-------+-------------------+-------+-------------+----------------+
| node1 | 08:00:27:0c:fd:f4 | small | small-nodes | 0 |
+-------+-------------------+-------+-------------+----------------+
| node2 | 08:00:27:43:84:1d | large | --- | 0 |
+-------+-------------------+-------+-------------+----------------+
...
node1 has completed its scheduled check-in and has had the small-nodes policy applied. If you're watching the system's console, it should reboot and go into the Ubuntu installation process.
When initially working through this process I got
ipxe no configuration methods succeeded
FATAL: INT18: BOOT FAILURE
on the console. Per the suggestion at http://ipxe.org/err/040ee1, a hard reboot temporarily solved the problem. A permanent fix, at least in the case of VirtualBox, is to disable the "Enable I/O APIC" feature for the VM.
No further intervention should be required at this point; both VMs should come up with the appropriate operating systems and host names. Here's what I ended up with:
[root@razor etc]# razor nodes
From http://localhost:8150/api/collections/nodes:
+-------+-------------------+-------+-------------+----------------+
| name | dhcp_mac | tags | policy | metadata count |
+-------+-------------------+-------+-------------+----------------+
| node1 | 08:00:27:0c:fd:f4 | small | small-nodes | 1 |
+-------+-------------------+-------+-------------+----------------+
| node2 | 08:00:27:43:84:1d | large | large-nodes | 1 |
+-------+-------------------+-------+-------------+----------------+
Note that, in addition to listing a policy, the table also shows that both VMs have some metadata defined now. Let's see what it is:
[root@razor etc]# razor nodes node1
From http://localhost:8150/api/collections/nodes/node1:
name: node1
dhcp_mac: 08:00:27:0c:fd:f4
state:
    installed: small-nodes
    installed_at: 2019-06-07T13:39:30-07:00
    stage: boot_local
policy: small-nodes
last_checkin: 2019-06-07T13:13:21-07:00
metadata:
    ip: 192.168.15.74
tags: small
...
In this case the metadata lists the IP assigned to the host.
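Since every razor command is backed by the REST API visible in the output URLs, the same node details can also be consumed programmatically. Here's a sketch of pulling the IP out of a node document; the JSON below is hand-built to mirror the fields printed above, and the API's exact response shape is an assumption:

```python
import json

# Hand-built node document mirroring the fields 'razor nodes node1' prints.
# The exact JSON returned by /api/collections/nodes/node1 is an assumption.
node_json = '''
{
  "name": "node1",
  "dhcp_mac": "08:00:27:0c:fd:f4",
  "policy": "small-nodes",
  "metadata": {"ip": "192.168.15.74"},
  "tags": ["small"]
}
'''

node = json.loads(node_json)
print(node["metadata"].get("ip"))  # 192.168.15.74
```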
And that's what imaging with Razor looks like, modulo some configuration management stuff that I decided to elide. That's it for the present; I expect that I'll write up one more post with some concluding thoughts in the near future.