Installing OpenShift Origin via Ansible on Fedora 25

While many people pointed me to one of the virtualized setups of OpenShift, I wanted something on bare metal in order to eventually test out KubeVirt. Just running

oc cluster up

as some people suggested did not work, as it assumes the prerequisites are already set up; the Docker registry was one that I tripped over. So I decided to give openshift-ansible a test run. Here are my notes.
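For reference, the registry prerequisite that tripped me up is the Docker daemon needing to trust OpenShift's default service network as an insecure registry. A minimal sketch of that fix, assuming the daemon is configured via /etc/docker/daemon.json and no such file exists yet (tee overwrites it):

# Trust the 172.30.0.0/16 service network, where the integrated
# registry lives, as an insecure registry.
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "insecure-registries": ["172.30.0.0/16"]
}
EOF
sudo systemctl restart docker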

SSH and Ansible have been set up and used for upstream Kubernetes testing on this machine. Kubernetes has been removed, though there might be artifacts left behind that are not explicitly listed here.

There is no ~/.kube directory; a leftover one has messed me up elsewhere in the past.
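A quick sanity check before starting, just to catch leftover credentials:

ls -ld ~/.kube 2>/dev/null || echo "no ~/.kube, good to go"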

 
First, clone the openshift-ansible repository:

git clone https://github.com/openshift/openshift-ansible

I have two nodes for the cluster. My head node is munchlax, and dialga the compute node.

Ansible is pointed at Python 3 in the inventory below, so the interpreter and its YAML bindings need to be installed on the nodes:
sudo yum install python3 --best --allowerasing
sudo yum install python3-yaml
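To confirm the YAML bindings are visible to the interpreter Ansible will use:

python3 -c "import yaml; print(yaml.__version__)"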

I created a local inventory file that looks like this:

[all]
munchlax
dialga

[all:vars]
ansible_ssh_user=ansible
containerized=true
openshift_deployment_type=origin
ansible_python_interpreter=/usr/bin/python3
openshift_release=v1.5
openshift_image_tag=v1.5.0

[masters]
munchlax

[masters:vars]
ansible_become=true

[nodes]
dialga openshift_node_labels="{'region': 'infra'}"

[nodes:vars]
ansible_become=true

 

Note that, while it might seem silly to specify

ansible_become=true

for each of the groups instead of under [all:vars], specifying it there will break the deployment: it then forces the installer's local actions through sudo as well, and those should not be run as root.
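Before kicking off the full playbook, it is worth confirming that Ansible can reach both nodes and escalate where expected. A minimal check, assuming the inventory above is saved as inventory.ini:

# Connectivity, using the Python 3 interpreter from the inventory
ansible -i inventory.ini all -m ping
# Privilege escalation for the groups that set ansible_become
ansible -i inventory.ini nodes -b -m command -a whoami   # expect: root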

I’m still working on getting the version values right, but these seemed to work with a couple of workarounds. I’ve posted a diff at the end.

The value openshift_node_labels="{'region': 'infra'}" is used to specify where the registry is installed.
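Once the cluster is up, standard kubectl can confirm the label landed where expected:

kubectl get nodes --show-labels    # dialga should carry region=infra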

To run the install, I ran:

ansible-playbook -vvvi /home/ayoung/devel/local-openshift-ansible/inventory.ini /home/ayoung/devel/openshift-ansible/playbooks/byo/config.yml

To test the cluster:


ssh ansible@munchlax

[ansible@munchlax ~]$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-deploy   0/1       Pending   0          31m
registry-console-1-g4qml   1/1       Running   0          31m
router-4-deploy            0/1       Pending   0          32m
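The two Pending deploy pods are the ones to watch. If they stay Pending, the usual first step is to ask the scheduler why; a sketch, assuming the registry pods are in the current namespace:

kubectl describe pod docker-registry-1-deploy    # check the Events section
kubectl get events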

Update: I also needed one commit from a pull request:

commit 75da091c3e917dc3cd673d4fd201c1b2606132f2
Author: Jeff Peeler 
Date:   Fri May 12 18:51:26 2017 -0400

    Fix python3 error in repoquery
    
    Explicitly convert from bytes to string so that splitting the string is
    successful. This change works with python 2 as well.
    
    Closes #4182
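That commit had not yet landed in master when I ran this, so it has to be brought into the local clone by hand. One way, assuming the commit is reachable from the remote:

cd ~/devel/openshift-ansible
git fetch origin
git cherry-pick 75da091c3e917dc3cd673d4fd201c1b2606132f2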

Here are the changes from master I had to make by hand:

  1. The certificate allocation used the unsupported expire_days option, which I removed.
  2. The Ansible sysctl module has a known issue under Python 3, so I converted that task to run the CLI directly.
  3. The version check between the container and RPM versions was too strict and unpassable on my system, so I commented it out.

diff --git a/roles/openshift_hosted/tasks/registry/secure.yml b/roles/openshift_hosted/tasks/registry/secure.yml
index 29c164f..5134fdd 100644
--- a/roles/openshift_hosted/tasks/registry/secure.yml
+++ b/roles/openshift_hosted/tasks/registry/secure.yml
@@ -58,7 +58,7 @@
     - "{{ docker_registry_route_hostname }}"
     cert: "{{ openshift_master_config_dir }}/registry.crt"
     key: "{{ openshift_master_config_dir }}/registry.key"
-    expire_days: "{{ openshift_hosted_registry_cert_expire_days if openshift_version | oo_version_gte_3_5_or_1_5(openshift.common.deployment_type) | bool else omit }}"
+#    expire_days: "{{ openshift_hosted_registry_cert_expire_days if openshift_version | oo_version_gte_3_5_or_1_5(openshift.common.deployment_type) | bool else omit }}"
   register: server_cert_out

 - name: Create the secret for the registry certificates
diff --git a/roles/openshift_node/tasks/main.yml b/roles/openshift_node/tasks/main.yml
index 656874f..e2e187b 100644
--- a/roles/openshift_node/tasks/main.yml
+++ b/roles/openshift_node/tasks/main.yml
@@ -105,7 +105,12 @@
 # startup, but if the network service is restarted this setting is
 # lost. Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1372388
 - name: Persist net.ipv4.ip_forward sysctl entry
-  sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes state=present reload=yes
+  command: sysctl -w net.ipv4.ip_forward=1
+
+- name: reload for net.ipv4.ip_forward sysctl entry
+  command: sysctl -p/etc/sysctl.conf
+
+

 - name: Start and enable openvswitch service
   systemd:
diff --git a/roles/openshift_version/tasks/main.yml b/roles/openshift_version/tasks/main.yml
index 2e9b4ca..cc14453 100644
--- a/roles/openshift_version/tasks/main.yml
+++ b/roles/openshift_version/tasks/main.yml
@@ -99,11 +99,11 @@
     when: not rpm_results.results.package_found
   - set_fact:
       openshift_rpm_version: "{{ rpm_results.results.versions.available_versions.0 | default('0.0', True) }}"
-  - name: Fail if rpm version and docker image version are different
-    fail:
-      msg: "OCP rpm version {{ openshift_rpm_version }} is different from OCP image version {{ openshift_version }}"
+#  - name: Fail if rpm version and docker image version are different
+#    fail:
+#      msg: "OCP rpm version {{ openshift_rpm_version }} is different from OCP image version {{ openshift_version }}"
     # Both versions have the same string representation
-    when: openshift_rpm_version != openshift_version
+#    when: openshift_rpm_version != openshift_version
   when: is_containerized | bool

 # Warn if the user has provided an openshift_image_tag but is not doing a containerized install
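After the run, the sysctl workaround can be verified with the same inventory:

ansible -i inventory.ini nodes -b -m command -a "sysctl net.ipv4.ip_forward"
# expected on each node: net.ipv4.ip_forward = 1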
