Why

A colleague of mine recently asked me whether I have a short 101 on how to build EEs in a regular pipeline (as in, using something like GitHub Actions, GitLab CI, etc.). The target audience is people who might have built an EE manually at some point, but have not yet really been exposed to container-y things. So, here’s the gist, with some internal bits removed.

Required Software:

  1. a container runtime, e.g. podman (something to build container images)

  2. ansible-builder (at the time of writing, v3 is the current version); see the snippet below for one way to install the tooling
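
For illustration, one way to get both on a Fedora/RHEL-like workstation (the package and pip module names are the standard ones, but your distribution and preferred install method may differ):

# install a container runtime and ansible-builder (illustrative)
sudo dnf install -y podman
python3 -m pip install --user ansible-builder
# quick sanity check of both tools
podman --version
ansible-builder --version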

How It Works - High Level

ansible-builder allows you to define what’s needed in your EE and for your EE build. This usually consists of:

  • Base image to start from

  • Dependencies:

    • RPMs

    • Python modules

    • Collections and/or roles

  • Additional files you might require

  • Additional build steps you might need to inject

  • Arguments you might need for the build

    • Galaxy options

    • Package manager options

    • …​

ansible-builder essentially creates a Containerfile or Dockerfile and a corresponding build context, which contains generated scripts, any additional files you might have specified, etc. If you inspect the Containerfile, you’ll quickly see that it’s a multi-stage build with the following stages:

  1. base

  2. galaxy

  3. builder

  4. final

The result of each stage is the basis for the next stage. For each stage, injection points (prepend and append) are available in the EE definition file in case you have specific requirements.
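
If you want to see this for yourself, you can generate the build context without building the image and inspect the result (commands are illustrative and assume you run them in an EE repo as described in the next section):

# generate only the build context, without building the image
ansible-builder create
# the context directory contains the Containerfile plus generated scripts and copied files
ls context/
# the FROM lines show the base, galaxy, builder and final stages
# (the file may be named Dockerfile instead, depending on your options)
grep '^FROM' context/Containerfile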

Preparation

  1. Create a Git Repository for the EE.

  2. Create an execution-environment.yml

    version: 3 # (1)
    
    # build_arg_defaults: {} # (2)
    
    dependencies:
      galaxy: requirements.yml # (3)
      python: requirements.txt # (4)
      system: bindep.txt # (5)
    
    images:
      base_image:
        name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9:latest # (6)
    
    options:
      package_manager_path: /usr/bin/microdnf # (7)
    
    additional_build_steps:
      prepend_base:
        - RUN echo This is a prepend base command!

      prepend_galaxy:
        # Environment variables used for Galaxy client configurations
        - ENV ANSIBLE_GALAXY_SERVER_LIST=automation_hub
        - ...
        # define a custom build arg env passthru - we still also have to pass
        # `--build-arg ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN` to get it to pick it up from the env
        - ARG ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN # (8)

      append_final:
        - RUN echo This is a post-install command!
        - RUN ls -la /etc
    <1> The version of the EE definition format that `ansible-builder` should use.
    <2> See https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/creating_and_consuming_execution_environments/index#ref-build-args-base-image[docs].
    <3> Galaxy dependencies go in there. See below.
    <4> Python dependencies go in there. See below.
    <5> RPMs you need to install go in there. See below.
    <6> Ideally, use a supported base image, and preferably always start with a minimal EE.
    <7> Depends on your base image. The minimal EE uses `microdnf`.
    <8> If you do not allow anonymous downloads in your PAH, do not put the token in your repo files. Do not put an `ansible.cfg` into your repo files containing tokens. It's sensitive information, treat it as such.
  3. Galaxy dependencies (requirements.yml):

    collections:
      - name: community.general
        version: 9.1.0
      - name: ansible.posix
        version: 1.5.4
  4. Python dependencies (requirements.txt):

    kubernetes==30.1.0
    kubernetes-validate==1.30.0
    PyYAML==6.0.1
  5. System dependencies (bindep.txt):

    some-package [platform:rpm]

    Note: this file uses the bindep format. A slightly larger illustration follows after this list.
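
For completeness, a slightly larger bindep sketch (the package names are just examples; bindep knows package names and platform markers, but no versions):

# bindep.txt - system packages to install into the EE (example entries)
git-core [platform:rpm]
rsync [platform:rpm]
unzip [platform:rpm]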

What needs updating?

  • The base image is updated regularly. 'latest' is a floating tag, so latest today is likely a different base image than latest in four weeks. If you want higher reproducibility, consider pinning the base image by digest (see the snippet after this list).

  • Collection versions: you probably want to update these at some point. You could use ranges like >=1.2.3; however, this increases the chance that you no longer know which versions were actually tested with your automation code, and when.

  • Python dependencies: same as for collections.

  • RPMs: there are no versions in the bindep format, so even if the base image has not changed, newer versions of the RPMs might get pulled in.
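
As an illustration of pinning by digest, the base_image entry in execution-environment.yml would then look roughly like this (the digest is a placeholder, not a real one):

images:
  base_image:
    # pinned by digest instead of the floating `latest` tag (placeholder digest)
    name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9@sha256:<digest-of-the-image-you-tested>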

How to Build EEs

Locally, for quick testing purposes, you can just use ansible-builder build <options>. E.g.

ansible-builder build

You can pass options like -t <your-registry>/<repo>/<yourimage>:<your-tag> or --build-arg as required. -h and the docs are, as usual, your friends.
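
For example, tying this back to callout <8> above, the Automation Hub token can stay out of the repo and be supplied from the environment at build time (registry, image name and token are placeholders):

# the token comes from your shell or a CI secret, never from a file in the repo
export ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN='<your token>'
ansible-builder build \
  -t <your-registry>/<repo>/<yourimage>:<your-tag> \
  --build-arg ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN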

It is strongly recommended, however, to have a proper process in place. EEs are essentially part of your automation, just like code. They need proper lifecycling and release management.

Below are a few samples of minimalistic pipelines (as in, they do not cover anything other than actually building the EE), mostly from homelabs and personal projects, to give you an idea of a starting point.

GitLab

Note (do not ignore)

This sample is from a homelab, where

  1. ansible-builder is installed via pip in a podman-in-podman runner.

  2. the base image is an upstream one.

  3. Dependencies are automatically updated via a bot, hence the version pinning and the use of digests.

EE name: ee-example
Repo name: ee-example

Repo:

$ tree
.
├── build-requirements.txt
├── execution-environment.yml
├── renovate.json
├── requirements.txt
└── requirements.yml

$ cat build-requirements.txt
ansible-builder==3.0.1
# workaround https://github.com/ansible/ansible-builder/issues/644
setuptools==69.5.1

$ cat .gitlab-ci.yml
---
variables:
  EE_IMAGE_NAME: quay.apps.ocp4.example.com/demo/example-ee

stages:
  - lint
  - build

ee-schema-lint:
  stage: lint
  tags:
    - podman
  image: quay.io/ansible/creator-ee:latest@sha256:335246cec55cde727eba150a4bcaa735bb2bc51d24d7410ff26a512f83871877
  script:
    - echo ${CI_PROJECT_NAME}
    - ansible-lint execution-environment.yml

ee-build:
  image: quay.io/buildah/stable:latest@sha256:3a214f9cf3a32063391a61861ab94423bf3c948963618fa5ac0d3d8469c6c772
  stage: build
  tags:
    - podman
  script:
    - dnf install python3-pip -y
    - pip3 install -r build-requirements.txt
    - ansible-builder create
    - cd context ; buildah --storage-driver=vfs build -f Dockerfile -t ${EE_IMAGE_NAME}:${CI_COMMIT_SHORT_SHA} -t ${EE_IMAGE_NAME}:latest
    - buildah login -u='$app' -p="${QUAY_TOKEN}" quay.apps.ocp4.example.com
    - buildah --storage-driver=vfs push ${EE_IMAGE_NAME}:latest
    - buildah --storage-driver=vfs push ${EE_IMAGE_NAME}:${CI_COMMIT_SHORT_SHA}
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "main"'
  artifacts:
    when: always
    expire_in: 1d

GitHub

Note

The referenced repo actually contains more than one EE since it’s just a personal repo. It has one general workflow for building an EE, which is used by the concrete workflows to avoid duplication.

The workflow calling the reusable build workflow:
name: ee-dig | build and push

on:
  push:
    paths:
      - ee-dig/**
    branches:
      - main
  workflow_dispatch:
  schedule:
    - cron: '0 0 * * 0' # weekly

jobs:
  call-build-and-push-latest:
    uses: ./.github/workflows/build-latest.yml
    with:
      subdir: ee-dig
    secrets: inherit
The reusable build workflow itself:
name: REUSE | build and push latest

on:
  workflow_call:
    inputs:
      subdir:
        required: true
        type: string
    secrets:
      REGISTRY_USERNAME:
        required: true
      REGISTRY_PASSWORD:
        required: true

jobs:
  build-and-push-latest:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v4.1.7
    - name: Install ansible-builder
      run: pip install -r requirements.txt
    - name: Create context
      run: ansible-builder create
      working-directory: ${{ inputs.subdir }}
    - name: Build Image with Buildah
      id: build-image
      uses: redhat-actions/buildah-build@v2
      with:
        image: ${{ inputs.subdir }}
        tags: latest ${{ github.sha }}
        containerfiles: |
          ./${{ inputs.subdir }}/context/Containerfile
        context: ${{ inputs.subdir }}/context
    - name: Push to quay.io
      uses: redhat-actions/push-to-registry@v2
      with:
        image: ${{ steps.build-image.outputs.image }}
        tags: ${{ steps.build-image.outputs.tags }}
        registry: quay.io/jaeichle
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}

Notes

  • Git repository: one repository for all EEs a team maintains vs. one repo per EE. Lifecycling is cleaner with one repo per EE: you can then use Git tags and versions properly, or even think about semantic releases, etc. (see the small example after this list).

  • PAH and collection downloads: you want to discuss within your team / with the customer whether anonymous downloads might be a good option.
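
As a small illustration of the lifecycling point: with one repo per EE, a Git tag can directly become the image tag (the image name is a placeholder):

# build the EE and tag the image with the most recent Git tag of the repo
ansible-builder build -t <your-registry>/<repo>/ee-example:"$(git describe --tags)"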