Automated KVM VM Provisioning with Ansible and OSBuild on RHEL9

Introduction

I used to provision my homelab virtual machines by hand, but since I tinker a lot and constantly spin up new VMs, I eventually got tired of the manual work and decided to automate the process with Ansible and OSBuild.

When I started looking into automating the provisioning, I was surprised by how few examples combine Ansible with OSBuild for KVM environments. Most tutorials don't focus on KVM at all, and since I run a RHEL homelab I wanted something built on Red Hat's own tooling.

The Solution

The result is an Ansible playbook that provisions fully configured KVM virtual machines on a bare-metal RHEL9 server. My setup uses OSBuild (Image Builder) with Jinja2 templates to create customized VM images. Whenever I want a new VM, all I have to do is log into AAP (Ansible Automation Platform) and hit the “Launch template” button; the template’s survey then prompts me for all the values the playbook needs, such as:

  • FQDN
  • IP address
  • CPU
  • RAM

Provisioning a VM by hand used to take me roughly 30 minutes; now it takes around 5.
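When the job launches, the survey answers are passed to the playbook as extra vars. As a sketch (the variable names are the ones the playbook uses; the values here are just illustrative):

```yaml
# Illustrative survey answers handed to provision.yml as extra vars;
# these exact values are made up for the example.
vm_name: "app01.home.arpa"
vm_ip: "10.10.10.50"
vm_vcpus: 2
vm_memory: 4096   # MiB; the VM XML template multiplies this by 1024 for KiB
```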

Here's what the template's workflow looks like:

flowchart LR
    subgraph A[Ansible Playbook]
        CHECK[Check DNS A & PTR Records]
        BLUEPRINT[Render Blueprint]
        PUSH[Push Blueprint & Compose Image]
        DOWNLOAD[Download & Extract Image]
        CUSTOMIZE[Customize Image]
        DEFINE[Define VM in libvirt]
        START[Start VM]
        REGISTER[Register VM in AAP Inventory]
        CLEANUP[Cleanup Temporary Files]

        CHECK --> BLUEPRINT
        BLUEPRINT --> PUSH
        PUSH --> DOWNLOAD
        DOWNLOAD --> CUSTOMIZE
        CUSTOMIZE --> DEFINE
        DEFINE --> START
        START --> REGISTER
        REGISTER --> CLEANUP
    end

The Main Playbook

The playbook follows the workflow shown in the diagram above. Here’s how each step works:

Note: You’ll notice that most of the sensitive variables (like passwords, SSH keys, and controller credentials) are not visible in the playbook below. This is because I use Ansible Vault to securely store all sensitive data in the repository. If you’re new to Ansible Vault, I’ve written a comprehensive guide on how to set it up.
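For context, here's a sketch of what the vaulted variables might look like; the names are taken from how the playbook references them, while every value is a placeholder:

```yaml
# group_vars/all/vault.yml (decrypted view) -- placeholder values only
vm_root_pass: "changeme"
ansible_user_password_hash: "$6$examplesalt$examplehash"
ansible_ssh_public_key: "ssh-ed25519 AAAAC3... ansible@home.arpa"
controller_host: "aap.home.arpa"
controller_username: "admin"
controller_password: "changeme"
inventory_name: "Homelab"
```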

Step 1: Check DNS A & PTR Records

Before creating any VM, the playbook validates that both forward (A) and reverse (PTR) DNS records exist for the hostname and IP address. This prevents conflicts and ensures proper name resolution.

- name: Check if DNS A record exists for the VM name
  ansible.builtin.set_fact:
    dns_a_record: "{{ lookup('community.general.dig', vm_name + '/A') }}"

- name: Check if DNS PTR record exists for the VM IP
  ansible.builtin.set_fact:
    dns_ptr_record: "{{ lookup('community.general.dig', vm_ip + '/PTR') }}"

- name: Fail if DNS A record does not exist
  ansible.builtin.fail:
    msg: "DNS A record for {{ vm_name }} does not exist."
  when: dns_a_record | length == 0 or dns_a_record == 'NXDOMAIN'

- name: Fail if DNS PTR record does not exist
  ansible.builtin.fail:
    msg: "DNS PTR record for {{ vm_ip }} does not exist."
  when: dns_ptr_record | length == 0 or dns_ptr_record == 'NXDOMAIN'

Step 2: Render Blueprint

The playbook creates a temporary OSBuild blueprint file from the Jinja2 template, customized with the VM-specific variables from the AAP survey.

- name: Create temporary blueprint file
  ansible.builtin.template:
    src: "{{ blueprint_template }}"
    dest: "/tmp/rhel9-{{ vm_name }}.toml"
    mode: '0600'

Step 3: Push Blueprint & Compose Image

This step pushes the blueprint to Image Builder and starts the composition process. The playbook extracts the composition UUID for tracking progress.

- name: Push blueprint to Image Builder
  ansible.builtin.command:
    cmd: composer-cli blueprints push "/tmp/rhel9-{{ vm_name }}.toml"
  register: blueprint_push
  changed_when: blueprint_push.rc == 0
  notify: cleanup

- name: Start image composition
  ansible.builtin.command:
    cmd: composer-cli compose start rhel9 qcow2
  register: compose_start
  changed_when: compose_start.rc == 0
  failed_when: compose_start.rc != 0 or 'FAILED' in compose_start.stdout

- name: Extract compose UUID
  ansible.builtin.set_fact:
    compose_uuid: >-
      {{ compose_start.stdout |
      regex_search('([a-f0-9-]{36})') }}

- name: Wait for image composition to finish
  ansible.builtin.command:
    cmd: composer-cli compose info "{{ compose_uuid }}"
  register: compose_status
  until: "'FINISHED' in compose_status.stdout"
  retries: 60
  delay: 30
  timeout: 1800
  changed_when: false
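As a side note on the UUID extraction: `composer-cli compose start` prints a line like `Compose <uuid> added to the queue` (format assumed from my runs), and the `regex_search('([a-f0-9-]{36})')` filter pulls the UUID out of that text. The same match can be sanity-checked outside Ansible:

```shell
# Simulated composer-cli output; the UUID below is made up for the example.
out='Compose 4c3ebf04-7c42-49b1-b276-ba1c0b7c14cc added to the queue'

# Same character class and length as the playbook's regex_search filter.
uuid=$(printf '%s\n' "$out" | grep -oE '[a-f0-9-]{36}' | head -n1)
echo "$uuid"
```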

Step 4: Download & Extract Image

Once the composition is complete, the playbook downloads the results and extracts the qcow2 image file.

- name: Download compose results
  ansible.builtin.command:
    cmd: composer-cli compose results "{{ compose_uuid }}"
    chdir: "{{ iso_dir }}"
  register: compose_download
  changed_when: compose_download.rc == 0

- name: Extract image from tarball
  ansible.builtin.unarchive:
    src: "{{ iso_dir }}/{{ compose_uuid }}.tar"
    dest: "{{ iso_dir }}"
    remote_src: true

- name: Copy composed image to images directory
  ansible.builtin.copy:
    dest: "{{ images_dir }}/{{ vm_name }}.qcow2"
    src: "{{ iso_dir }}/{{ compose_uuid }}-disk.qcow2"
    force: false
    remote_src: true
    mode: '0600'
  notify: cleanup

Step 5: Customize Image

Using virt-customize, the playbook injects VM-specific configurations like hostname, root password, and static network configuration.

- name: Configure the image
  ansible.builtin.command: >-
    virt-customize -a {{ images_dir }}/{{ vm_name }}.qcow2
    --hostname {{ vm_name }}
    --root-password password:{{ vm_root_pass }}
  register: virt_customize_result
  changed_when: virt_customize_result.rc == 0

- name: Render static network configuration file
  ansible.builtin.template:
    src: "templates/eth0.nmconnection.j2"
    dest: "/tmp/eth0.nmconnection"
    mode: '0600'
  notify: cleanup

- name: Inject static IP config into image
  ansible.builtin.command: >-
    virt-customize -a {{ images_dir }}/{{ vm_name }}.qcow2
    --copy-in /tmp/eth0.nmconnection:/etc/NetworkManager/system-connections/
    --run-command 'chmod 600 /etc/NetworkManager/system-connections/eth0.nmconnection'
    --run-command 'restorecon -v /etc/NetworkManager/system-connections/eth0.nmconnection'
  register: virt_customize_network_result
  changed_when: virt_customize_network_result.rc == 0

Step 6: Define VM in libvirt

The playbook generates a unique MAC address and UUID, then defines the VM in libvirt using the XML template.

- name: Generate MAC address
  ansible.builtin.set_fact:
    vm_mac: >-
      {{ '52:54:00' |
      community.general.random_mac(seed=vm_name) }}

- name: Generate UUID
  ansible.builtin.set_fact:
    vm_uuid: "{{ lookup('pipe', 'uuidgen') }}"

- name: Define VM from template
  community.libvirt.virt:
    command: define
    xml: >-
      {{ lookup('template',
      'templates/vm-template.xml.j2') }}

Step 7: Start VM

The VM is started and configured to auto-start on host boot. The playbook includes retry logic to handle any startup delays.

- name: Ensure VM is started
  community.libvirt.virt:
    name: "{{ vm_name }}"
    state: running
    autostart: true
  register: vm_start_results
  until: "vm_start_results is success"
  retries: 15
  delay: 2

Step 8: Register VM in AAP Inventory

Finally, the VM is automatically added to the AAP inventory, making it available for future automation workflows.

- name: Ensure the VM exists as a Host on Ansible Automation Controller
  awx.awx.host:
    controller_host: "{{ controller_host }}"
    controller_username: "{{ controller_username }}"
    controller_password: "{{ controller_password }}"
    inventory: "{{ inventory_name }}"
    name: "{{ vm_name }}"
    state: present
    enabled: true

Step 9: Cleanup Temporary Files

The playbook uses handlers to clean up all temporary files and remove the composition from Image Builder, keeping the system tidy.

handlers:
  - name: Clean up temporary files
    ansible.builtin.file:
      path: "{{ item }}"
      state: absent
    loop:
      - "{{ iso_dir }}/{{ compose_uuid }}.tar"
      - "{{ iso_dir }}/{{ compose_uuid }}-disk.qcow2"
      - "{{ iso_dir }}/{{ compose_uuid }}.json"
      - "{{ iso_dir }}/logs"
    listen: cleanup

  - name: Remove temporary blueprint file
    ansible.builtin.file:
      path: "/tmp/rhel9-{{ vm_name }}.toml"
      state: absent
    listen: cleanup

  - name: Remove temporary network config
    ansible.builtin.file:
      path: /tmp/eth0.nmconnection
      state: absent
    listen: cleanup

  - name: Clean up composed image from Image Builder
    ansible.builtin.command:
      cmd: "composer-cli compose delete {{ compose_uuid }}"
    register: compose_delete
    changed_when: compose_delete.rc == 0
    failed_when: compose_delete.rc != 0
    listen: cleanup

  - name: Clean up blueprint from Image Builder
    ansible.builtin.command:
      cmd: "composer-cli blueprints delete rhel9"
    register: blueprint_delete
    changed_when: blueprint_delete.rc == 0
    failed_when: blueprint_delete.rc != 0
    listen: cleanup

Complete Playbook

If you prefer to see the entire playbook as a single file, here it is:

---
- name: Provision VM
  hosts: kvm.home.arpa
  gather_facts: true
  become: true
  vars_files: "../group_vars/all/vault.yml"
  vars:
    iso_dir: "/var/lib/libvirt/iso"
    images_dir: "/var/lib/libvirt/images"
    # vm_name, vm_ip, vm_vcpus and vm_memory arrive as extra vars from the AAP survey
    blueprint_template: "templates/rhel9.toml.j2"
  tasks:
    - name: List all VMs
      community.libvirt.virt:
        command: list_vms
      register: existing_vms
      changed_when: false

    - name: Check if DNS A record exists for the VM name
      ansible.builtin.set_fact:
        dns_a_record: "{{ lookup('community.general.dig', vm_name + '/A') }}"

    - name: Check if DNS PTR record exists for the VM IP
      ansible.builtin.set_fact:
        dns_ptr_record: "{{ lookup('community.general.dig', vm_ip + '/PTR') }}"

    - name: Fail if DNS A record does not exist
      ansible.builtin.fail:
        msg: "DNS A record for {{ vm_name }} does not exist."
      when: dns_a_record | length == 0 or dns_a_record == 'NXDOMAIN'

    - name: Fail if DNS PTR record does not exist
      ansible.builtin.fail:
        msg: "DNS PTR record for {{ vm_ip }} does not exist."
      when: dns_ptr_record | length == 0 or dns_ptr_record == 'NXDOMAIN'

    - name: Create VM if it doesn't exist
      when: "vm_name not in existing_vms.list_vms"
      block:
        - name: Create temporary blueprint file
          ansible.builtin.template:
            src: "{{ blueprint_template }}"
            dest: "/tmp/rhel9-{{ vm_name }}.toml"
            mode: '0600'

        - name: Push blueprint to Image Builder
          ansible.builtin.command:
            cmd: composer-cli blueprints push "/tmp/rhel9-{{ vm_name }}.toml"
          register: blueprint_push
          changed_when: blueprint_push.rc == 0
          notify: cleanup

        - name: Start image composition
          ansible.builtin.command:
            cmd: composer-cli compose start rhel9 qcow2
          register: compose_start
          changed_when: compose_start.rc == 0
          failed_when: compose_start.rc != 0 or 'FAILED' in compose_start.stdout

        - name: Extract compose UUID
          ansible.builtin.set_fact:
            compose_uuid: >-
              {{ compose_start.stdout |
              regex_search('([a-f0-9-]{36})') }}

        - name: Wait for image composition to finish
          ansible.builtin.command:
            cmd: composer-cli compose info "{{ compose_uuid }}"
          register: compose_status
          until: "'FINISHED' in compose_status.stdout"
          retries: 60
          delay: 30
          timeout: 1800
          changed_when: false

        - name: Download compose results
          ansible.builtin.command:
            cmd: composer-cli compose results "{{ compose_uuid }}"
            chdir: "{{ iso_dir }}"
          register: compose_download
          changed_when: compose_download.rc == 0

        - name: Extract image from tarball
          ansible.builtin.unarchive:
            src: "{{ iso_dir }}/{{ compose_uuid }}.tar"
            dest: "{{ iso_dir }}"
            remote_src: true

        - name: Copy composed image to images directory
          ansible.builtin.copy:
            dest: "{{ images_dir }}/{{ vm_name }}.qcow2"
            src: "{{ iso_dir }}/{{ compose_uuid }}-disk.qcow2"
            force: false
            remote_src: true
            mode: '0600'
          notify: cleanup

        - name: Configure the image
          ansible.builtin.command: >-
            virt-customize -a {{ images_dir }}/{{ vm_name }}.qcow2
            --hostname {{ vm_name }}
            --root-password password:{{ vm_root_pass }}
          register: virt_customize_result
          changed_when: virt_customize_result.rc == 0

        - name: Render static network configuration file
          ansible.builtin.template:
            src: "templates/eth0.nmconnection.j2"
            dest: "/tmp/eth0.nmconnection"
            mode: '0600'
          notify: cleanup

        - name: Inject static IP config into image
          ansible.builtin.command: >-
            virt-customize -a {{ images_dir }}/{{ vm_name }}.qcow2
            --copy-in /tmp/eth0.nmconnection:/etc/NetworkManager/system-connections/
            --run-command 'chmod 600 /etc/NetworkManager/system-connections/eth0.nmconnection'
            --run-command 'restorecon -v /etc/NetworkManager/system-connections/eth0.nmconnection'
          register: virt_customize_network_result
          changed_when: virt_customize_network_result.rc == 0

        - name: Generate MAC address
          ansible.builtin.set_fact:
            vm_mac: >-
              {{ '52:54:00' |
              community.general.random_mac(seed=vm_name) }}

        - name: Generate UUID
          ansible.builtin.set_fact:
            vm_uuid: "{{ lookup('pipe', 'uuidgen') }}"

        - name: Define VM from template
          community.libvirt.virt:
            command: define
            xml: >-
              {{ lookup('template',
              'templates/vm-template.xml.j2') }}

    - name: Ensure VM is started
      community.libvirt.virt:
        name: "{{ vm_name }}"
        state: running
        autostart: true
      register: vm_start_results
      until: "vm_start_results is success"
      retries: 15
      delay: 2

    - name: Ensure the VM exists as a Host on Ansible Automation Controller
      awx.awx.host:
        controller_host: "{{ controller_host }}"
        controller_username: "{{ controller_username }}"
        controller_password: "{{ controller_password }}"
        inventory: "{{ inventory_name }}"
        name: "{{ vm_name }}"
        state: present
        enabled: true
  handlers:
    - name: Clean up temporary files
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - "{{ iso_dir }}/{{ compose_uuid }}.tar"
        - "{{ iso_dir }}/{{ compose_uuid }}-disk.qcow2"
        - "{{ iso_dir }}/{{ compose_uuid }}.json"
        - "{{ iso_dir }}/logs"
      listen: cleanup

    - name: Remove temporary blueprint file
      ansible.builtin.file:
        path: "/tmp/rhel9-{{ vm_name }}.toml"
        state: absent
      listen: cleanup

    - name: Remove temporary network config
      ansible.builtin.file:
        path: /tmp/eth0.nmconnection
        state: absent
      listen: cleanup

    - name: Clean up composed image from Image Builder
      ansible.builtin.command:
        cmd: "composer-cli compose delete {{ compose_uuid }}"
      register: compose_delete
      changed_when: compose_delete.rc == 0
      failed_when: compose_delete.rc != 0
      listen: cleanup

    - name: Clean up blueprint from Image Builder
      ansible.builtin.command:
        cmd: "composer-cli blueprints delete rhel9"
      register: blueprint_delete
      changed_when: blueprint_delete.rc == 0
      failed_when: blueprint_delete.rc != 0
      listen: cleanup

Template Files

The playbook uses several Jinja2 templates to configure the VMs:

OSBuild Blueprint Template

The rhel9.toml.j2 template defines the VM image specification:

name = "rhel9"
description = "home.arpa rhel9 golden image"
version = "0.0.1"
modules = []
groups = []
distro = "rhel-9"

packages = [
    { name = "firewalld" },
    { name = "firewalld-filesystem" },
    { name = "python3-firewall" },
    { name = "rhc-worker-playbook" },
    { name = "ansible-core" }
]

[customizations]
hostname = "{{ vm_name }}"

[[customizations.sshkey]]
user = "ansible"
key = "{{ ansible_ssh_public_key }}"

[[customizations.user]]
name = "ansible"
password = "{{ ansible_user_password_hash }}"
key = "{{ ansible_ssh_public_key }}"
groups = ["wheel"]

[customizations.locale]
keyboard = "fi"

[customizations.firewall]
[customizations.firewall.services]
enabled = ["ssh"]

[customizations.services]
enabled = ["firewalld"]

[customizations.timezone]
timezone = "Europe/Helsinki"
ntpservers = ["fi.pool.ntp.org"]

[[customizations.disk.partitions]]
type = "plain"
label = "boot"
mountpoint = "/boot"
fs_type = "xfs"
minsize = "1 GiB"

[[customizations.disk.partitions]]
type = "lvm"
name = "sysvg"
minsize = "20 GiB"

[[customizations.disk.partitions.logical_volumes]]
name = "rootlv"
mountpoint = "/"
label = "root"
fs_type = "xfs"
minsize = "10 GiB"

[[customizations.disk.partitions.logical_volumes]]
name = "varlv"
mountpoint = "/var"
label = "var"
fs_type = "xfs"
minsize = "5 GiB"

[[customizations.disk.partitions.logical_volumes]]
name = "homelv"
mountpoint = "/home"
label = "home"
fs_type = "xfs"
minsize = "3 GiB"

[[customizations.disk.partitions.logical_volumes]]
name = "tmplv"
mountpoint = "/tmp"
label = "tmp"
fs_type = "xfs"
minsize = "1 GiB"

Network Configuration Template

The eth0.nmconnection.j2 template creates a NetworkManager connection profile for static IP configuration:

[connection]
id=eth0
uuid={{ lookup('pipe', 'uuidgen') }}
type=ethernet
autoconnect=true
interface-name=eth0
timestamp={{ ansible_date_time.epoch }}

[ethernet]

[ipv4]
address1={{ vm_ip }}/24
dns=10.10.10.1;
dns-search=home.arpa;
gateway=10.10.10.1
method=manual

[ipv6]
addr-gen-mode=default
method=auto

[proxy]

VM Definition Template

The vm-template.xml.j2 template defines the libvirt VM configuration (just a big blob of XML with barely any variables):

<domain type='kvm'>
 <name>{{ vm_name }}</name>
 <uuid>{{ vm_uuid }}</uuid>
 <metadata xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0" xmlns:cockpit_machines="https://github.com/cockpit-project/cockpit-machines">
  <libosinfo:libosinfo>
   <libosinfo:os id="http://redhat.com/rhel/9.6"/>
  </libosinfo:libosinfo>
  <cockpit_machines:data>
   <cockpit_machines:has_install_phase>false</cockpit_machines:has_install_phase>
   <cockpit_machines:install_source_type>file</cockpit_machines:install_source_type>
   <cockpit_machines:install_source>/var/lib/libvirt/iso/rhel-9.6-x86_64-boot.iso</cockpit_machines:install_source>
   <cockpit_machines:os_variant>rhel9.6</cockpit_machines:os_variant>
  </cockpit_machines:data>
 </metadata>

 <memory unit='KiB'>{{ vm_memory * 1024 }}</memory>
 <currentMemory unit='KiB'>{{ vm_memory * 1024 }}</currentMemory>
 <vcpu placement='static'>{{ vm_vcpus }}</vcpu>

 <os>
  <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
  <boot dev='hd'/>
 </os>

 <features>
  <acpi/>
  <apic/>
 </features>

 <cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
 </cpu>

 <clock offset='utc'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <timer name='hpet' present='no'/>
 </clock>

 <on_poweroff>destroy</on_poweroff>
 <on_reboot>restart</on_reboot>
 <on_crash>destroy</on_crash>

 <pm>
  <suspend-to-mem enabled='no'/>
  <suspend-to-disk enabled='no'/>
 </pm>

 <devices>
  <emulator>/usr/libexec/qemu-kvm</emulator>
  <disk type='file' device='disk'>
   <driver name='qemu' type='qcow2' discard='unmap'/>
   <source file='{{ images_dir }}/{{ vm_name }}.qcow2'/>
   <target dev='vda' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </disk>

  <disk type='file' device='cdrom'>
   <driver name='qemu' type='raw'/>
   <target dev='sda' bus='sata'/>
   <readonly/>
   <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </disk>

  <controller type='usb' index='0' model='qemu-xhci' ports='15'>
   <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </controller>

  <controller type='pci' index='0' model='pcie-root'/>
  <controller type='pci' index='1' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='1' port='0x10'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
  </controller>

  <controller type='pci' index='2' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='2' port='0x11'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
  </controller>

  <controller type='pci' index='3' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='3' port='0x12'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
  </controller>

  <controller type='pci' index='4' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='4' port='0x13'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
  </controller>

  <controller type='pci' index='5' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='5' port='0x14'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
  </controller>

  <controller type='pci' index='6' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='6' port='0x15'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
  </controller>

  <controller type='pci' index='7' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='7' port='0x16'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
  </controller>

  <controller type='pci' index='8' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='8' port='0x17'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
  </controller>

  <controller type='pci' index='9' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='9' port='0x18'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
  </controller>

  <controller type='pci' index='10' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='10' port='0x19'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
  </controller>

  <controller type='pci' index='11' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='11' port='0x1a'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
  </controller>

  <controller type='pci' index='12' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='12' port='0x1b'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
  </controller>

  <controller type='pci' index='13' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='13' port='0x1c'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
  </controller>

  <controller type='pci' index='14' model='pcie-root-port'>
   <model name='pcie-root-port'/>
   <target chassis='14' port='0x1d'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
  </controller>

  <controller type='sata' index='0'>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
  </controller>

  <controller type='virtio-serial' index='0'>
   <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </controller>

  <interface type='bridge'>
   <mac address='{{ vm_mac }}'/>
   <source bridge='br0'/>
   <model type='virtio'/>
   <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </interface>

  <serial type='pty'>
   <target type='isa-serial' port='0'>
    <model name='isa-serial'/>
   </target>
  </serial>

  <console type='pty'>
   <target type='serial' port='0'/>
  </console>

  <channel type='unix'>
   <target type='virtio' name='org.qemu.guest_agent.0'/>
   <address type='virtio-serial' controller='0' bus='0' port='1'/>
  </channel>

  <input type='tablet' bus='usb'>
   <address type='usb' bus='0' port='1'/>
  </input>

  <input type='mouse' bus='ps2'/>
  <input type='keyboard' bus='ps2'/>

  <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
   <listen type='address' address='127.0.0.1'/>
  </graphics>

  <audio id='1' type='none'/>
  <video>
   <model type='virtio' heads='1' primary='yes'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
  </video>

  <watchdog model='itco' action='reset'/>
  <memballoon model='virtio'>
   <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </memballoon>

  <rng model='virtio'>
   <backend model='random'>/dev/urandom</backend>
   <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </rng>
 </devices>
</domain>

Further Development

In the future, I plan to expand the survey with things like:

  • gateway
  • network mask
  • disk space
  • LVM partitioning scheme
  • custom packages

The beauty of using Jinja2 templates for the OSBuild blueprint is that adding these new parameters is really simple. I just need to add the variables to the template and expose them through the AAP survey.
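For instance, letting the survey add extra packages could look something like this in rhel9.toml.j2; the `vm_extra_packages` variable is hypothetical and would come from a new survey question as a comma-separated string:

```jinja
packages = [
    { name = "firewalld" },
    { name = "ansible-core" },
{# vm_extra_packages is a hypothetical survey answer like "vim,htop,mlocate" #}
{% for pkg in vm_extra_packages.split(',') if pkg | trim %}
    { name = "{{ pkg | trim }}" },
{% endfor %}
]
```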

Resources

These two resources gave me the idea to start using OSBuild and Blueprint files for server provisioning:

Conclusion

If you’re running a RHEL-flavored (Fedora, CentOS, Rocky, etc.) homelab and are looking to level up your provisioning game, I highly recommend giving this approach a try.

But if you thought that was everything… it’s only the first step!

flowchart TD
    subgraph provision-workflow["AAP provision-workflow"]
        SURVEY["AAP Survey"]
        PROVISION["provision.yml<br/>VM Creation & Setup"]
        REGISTER["register.yml<br/>Red Hat Subscription"]
        IPA["ipa.yml<br/>Domain Join & Groups"]
        SSH["ssh.yml<br/>Security Hardening"]
        TUNED["tuned.yml<br/>Performance Profiles"]
        SNMP["snmp.yml<br/>Monitoring Setup"]
        PACKAGES["packages.yml<br/>Essential Tools"]

        SURVEY --> PROVISION
        PROVISION --> REGISTER
        REGISTER --> IPA
        IPA --> SSH
        SSH --> TUNED
        TUNED --> SNMP
        SNMP --> PACKAGES
    end
    style SURVEY fill:#2596be

The provision.yml playbook shown is actually just the beginning of my complete VM provisioning pipeline. In my AAP setup, I’ve built a workflow that takes a VM from initial creation all the way to a ready state without any manual intervention.

After the initial provisioning completes, the workflow automatically continues with:

  • register.yml - Red Hat subscription registration
  • ipa.yml - IPA client installation, domain joining, and user group configuration
  • ssh.yml - SSH security hardening
  • tuned.yml - Performance tuning with appropriate VM profiles
  • snmp.yml - SNMP configuration and LibreNMS monitoring setup
  • packages.yml - Installation of useful packages (vim, htop, mlocate, etc.)

This means when I click “Launch template” in AAP, I get a fully configured, secured, monitored, and production-ready VM in 10 minutes.