Ansible Tower on Minishift
So, in order to have a lightweight and standardized (well, as lightweight as it gets) development environment for Ansible Tower on OpenShift, I was testing the deployment of Tower on minishift.
From a Tower perspective, that’s nothing special: you just run the installer as you would on OpenShift. However, minishift seems to have a problem with mounting secrets as volumes with a subPath. Strange thing - and not something I figured out right away. Luckily, it works for config maps - yay!
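For illustration, this is the kind of volume setup that failed for me on minishift while working fine on OpenShift - a single key from a secret, mounted via subPath. The pod and all names here are made up for this minimal example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-test
spec:
  containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        # mounting a single file out of the volume via subPath
        # is what trips minishift up when the volume is a secret
        - name: secret-key
          mountPath: /etc/tower/SECRET_KEY
          subPath: SECRET_KEY
  volumes:
    - name: secret-key
      secret:
        secretName: my-secrets
        items:
          - key: secret_key
            path: SECRET_KEY
```

Swap the `secret` volume for a `configMap` one and the very same mount goes through.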
Since it’s a local development environment, no secret is really super-secret - at least it shouldn’t be. So patching the installer bundle a little bit seemed like a viable workaround.
Here’s what I had to do so that everything still deploys the same way on OpenShift, but also works on minishift:
- add a variable to the `inventory` file, so that we know we are on minishift:

```ini
on_minishift=True
```
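For context, the variable simply goes alongside the other installer settings in the inventory. Roughly like this - the surrounding content is just a placeholder, only the last line is the actual addition:

```ini
[all:vars]
# ... existing installer variables stay as they are ...
on_minishift=True
```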
- when Tower is deployed on OpenShift, a management pod gets temporarily deployed to set up the database. This pod mounts said secret, so on minishift we want to use a configmap instead. Therefore we change the file `roles/kubernetes/templates/management-pod.yml.j2` as follows:
```yaml
{% if on_minishift is not defined or on_minishift == False %}
    - name: {{ kubernetes_deployment_name }}-secret-key
      secret:
        secretName: "{{ kubernetes_deployment_name }}-secrets"
        items:
          - key: secret_key
            path: SECRET_KEY
{% else %}
    - name: {{ kubernetes_deployment_name }}-secret-key
      configMap:
        name: "minishift-secrets-hack"
        items:
          - key: secret_key
            path: SECRET_KEY
{% endif %}
```
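Note that only the volume definition changes; the `volumeMounts` entry in the container spec that consumes it stays as it is, subPath and all, because the subPath mount is fine once the volume is backed by a configmap. From memory it looks roughly like the following - treat it as a sketch, not a verbatim copy of the template:

```yaml
        volumeMounts:
          - name: "{{ kubernetes_deployment_name }}-secret-key"
            mountPath: /etc/tower/SECRET_KEY
            subPath: SECRET_KEY
            readOnly: true
```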
- we do the same for `roles/kubernetes/templates/deployment.yml.j2`:
```yaml
{% if on_minishift is not defined or on_minishift == False %}
    - name: {{ kubernetes_deployment_name }}-secret-key
      secret:
        secretName: "{{ kubernetes_deployment_name }}-secrets"
        items:
          - key: secret_key
            path: SECRET_KEY
{% else %}
    - name: {{ kubernetes_deployment_name }}-secret-key
      configMap:
        name: minishift-secrets-hack
        items:
          - key: secret_key
            path: SECRET_KEY
{% endif %}
```
- apparently I need a ConfigMap called “minishift-secrets-hack”. So I created a file `roles/kubernetes/templates/configmap_minishift_hack.yml.j2` with the following contents:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: minishift-secrets-hack
  namespace: {{ kubernetes_namespace }}
data:
  secret_key: "{{ secret_key | b64encode }}"
```
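For illustration, with a hypothetical namespace of `tower` and a secret key of `foobar`, the rendered manifest would come out like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: minishift-secrets-hack
  namespace: tower
data:
  # "foobar" base64-encoded, mirroring what the installer
  # writes into the secret manifest on OpenShift
  secret_key: "Zm9vYmFy"
```

One quirk worth knowing: unlike a secret, a configmap is mounted verbatim, so the `SECRET_KEY` file will actually contain the base64 string rather than the decoded value. For a throwaway dev instance that’s fine - the key just has to be a consistent string.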
- just having this template is not enough: it also has to be rendered, and the resulting configmap has to be created in your minishift project. So we need tasks for that, and since I put them in a separate file, we have to import those tasks as well (see the next step). The tasks are:
```yaml
- name: Render deployment templates
  set_fact:
    "{{ item }}": "{{ lookup('template', item + '.yml.j2') }}"
  with_items:
    - 'configmap_minishift_hack'

- name: Apply Deployment
  shell: |
    echo {{ item | quote }} | {{ kubectl_or_oc }} apply -f -
  with_items:
    - "{{ configmap_minishift_hack }}"
```
- I put those in a file called `roles/kubernetes/tasks/minishift-fix.yaml`. Finally, the tasks have to be imported in `roles/kubernetes/tasks/main.yml`:
```yaml
- name: fix minishift setup
  include_tasks: minishift-fix.yaml
  when: on_minishift | default(False)
```
The drawback is, of course, that patching the installer means you have to redo it for every new release. But to be honest, this kind of development setup isn’t really the common use case, so it’s fine by me for now.