When you enroll CloudForms with an IdM Server, you do not automatically get the HTTPS certificates from that server. It takes a deliberate additional step to do so.
Since I am using Ansible to provision the server, I have a task to enroll it with the IPA server. The enrollment is handled by appliance_console_cli. My Ansible tasks look like this:
```yaml
- name: Install IPA Client packages
  tags:
    - ipaclient
  yum: name=ipa-client,ipa-admintools,python-memcached state=present

- name: Set nameserver
  tags:
    - ipaclient
  lineinfile:
    path: /etc/sysconfig/network-scripts/ifcfg-eth0
    line: DNS1={{ nameserver }}

- name: Setup resolv.conf
  tags:
    - ipaclient
  template: src=resolv.conf.j2 dest=/etc/resolv.conf

- name: ipa-client
  shell: >
    appliance_console_cli
    --host cfme.{{ ipa_domain }}
    --ipaserver idm.{{ ipa_domain }}
    --iparealm {{ ipa_realm }}
    --ipaprincipal admin
    --ipapassword {{ ipa_server_password }}
  tags:
    - ipaclient
  args:
    # creates is a shell module argument, so it belongs under args
    creates: /etc/ipa/default.conf
```
This does the following:
- Installs the packages required to run IPA client
- Tells the network layer to use the specified DNS value the next time it updates resolv.conf
- Forces an immediate update to resolv.conf with the nameserver needed for IPA client installation (a sketch of the resolv.conf.j2 template follows this list)
- Uses the console script to run ipa-client-install with the appropriate parameters
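The resolv.conf.j2 template itself is not shown above. A minimal sketch, assuming the same nameserver and ipa_domain variables used in the tasks, might look like this:

```
# resolv.conf.j2 -- minimal sketch; the search domain is assumed to be the IPA domain
search {{ ipa_domain }}
nameserver {{ nameserver }}
```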
Running getcert list afterwards shows 0 certificates being tracked.
I did not see an option on the appliance_console Text UI to update the certificates, but there is an option using the CLI.
Just running it dumps a stack trace.
```
[root@cfme ~]# appliance_console_cli --http-cert
creating ssl certificates
ipa: ERROR: did not receive Kerberos credentials
/opt/rh/cfme-gemset/gems/awesome_spawn-1.4.1/lib/awesome_spawn.rb:105:in `run!': /usr/bin/ipa exit code: 1 (AwesomeSpawn::CommandResultError)
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/principal.rb:42:in `request'
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/principal.rb:22:in `register'
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/certificate.rb:43:in `request'
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/certificate_authority.rb:109:in `configure_http'
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/certificate_authority.rb:47:in `activate'
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:339:in `install_certs'
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:185:in `run'
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:425:in `parse'
    from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/bin/appliance_console_cli:7:in `'
    from /opt/rh/cfme-gemset/bin/appliance_console_cli:23:in `load'
    from /opt/rh/cfme-gemset/bin/appliance_console_cli:23:in `'
```
But it does tell you the problem right up front. It is easy enough to kinit and run the script.
```
[root@cfme ~]# kinit admin
Password for admin@AYOUNG.RDUSALAB:
[root@cfme ~]# appliance_console_cli --http-cert
creating ssl certificates
configuring apache to use new certs
certificate result: http: complete
[root@cfme ~]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180308034344':
    status: MONITORING
    stuck: no
    key pair storage: type=FILE,location='/var/www/miq/vmdb/certs/server.cer.key'
    certificate: type=FILE,location='/var/www/miq/vmdb/certs/server.cer'
    CA: IPA
    issuer: CN=server
    subject: CN=server
    expires: 2021-02-21 16:28:07 UTC
    pre-save command:
    post-save command: chmod 644 /var/www/miq/vmdb/certs/server.cer /var/www/miq/vmdb/certs/root.crt
    track: yes
    auto-renew: yes
```
However, it does not automatically restart the server. So you would get an error like this:
```
[root@cfme ~]# curl https://`hostname`
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
```
Use the CLI to restart the server:
```
appliance_console_cli --server=restart
[root@cfme ~]# curl https://`hostname`
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
```
Hmmm, still no love. What is wrong? Let's check getcert:
```
[root@cfme ~]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180308034344':
    status: MONITORING
    stuck: no
    key pair storage: type=FILE,location='/var/www/miq/vmdb/certs/server.cer.key'
    certificate: type=FILE,location='/var/www/miq/vmdb/certs/server.cer'
    CA: IPA
    issuer: CN=server
    subject: CN=server
    expires: 2021-02-21 16:28:07 UTC
    pre-save command:
    post-save command: chmod 644 /var/www/miq/vmdb/certs/server.cer /var/www/miq/vmdb/certs/root.crt
    track: yes
    auto-renew: yes
```
The problem can be seen in the issuer: CN=server field. Instead of generating a new signing request and getting a certificate from the IPA CA, it reused the existing self-signed certificate.

Let's get rid of the old cert and try again:
```
[root@cfme ~]# getcert stop-tracking -i 20180308034344
Request "20180308034344" removed.
[root@cfme ~]# sudo mv /var/www/miq/vmdb/certs/server.cer /tmp
[root@cfme ~]# appliance_console_cli --http-cert
creating ssl certificates
configuring apache to use new certs
certificate result: http: complete
[root@cfme ~]# getcert list
Number of certificates and requests being tracked: 1.
Request ID '20180308035841':
    status: MONITORING
    stuck: no
    key pair storage: type=FILE,location='/var/www/miq/vmdb/certs/server.cer.key'
    certificate: type=FILE,location='/var/www/miq/vmdb/certs/server.cer'
    CA: IPA
    issuer: CN=Certificate Authority,O=AYOUNG.RDUSALAB
    subject: CN=cfme.ayoung.rdusalab,O=AYOUNG.RDUSALAB
    expires: 2020-03-08 03:58:42 UTC
    dns: cfme.ayoung.rdusalab
    principal name: HTTP/cfme.ayoung.rdusalab@AYOUNG.RDUSALAB
    key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
    eku: id-kp-serverAuth,id-kp-clientAuth
    pre-save command:
    post-save command: chmod 644 /var/www/miq/vmdb/certs/server.cer /var/www/miq/vmdb/certs/root.crt
    track: yes
    auto-renew: yes
```
That looks better. The issuer is now the IPA certificate authority. Restart the service and check again.
```
[root@cfme ~]# curl https://`hostname`
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Unavailable</title>
</head><body>
<h1>Service Unavailable</h1>
<p>The server is temporarily unable to service
your request due to maintenance downtime or capacity
problems. Please try again later.</p>
</body></html>
```
Crud. What did I do now?
Looking in /var/www/miq/vmdb/log/apache/ssl_error.log:
```
[Wed Mar 07 23:11:54.095648 2018] [proxy:error] [pid 15934] (111)Connection refused: AH00957: HTTP: attempt to connect to 0.0.0.0:3009 (0.0.0.0) failed
[Wed Mar 07 23:11:54.095661 2018] [proxy:error] [pid 15934] AH00959: ap_proxy_connect_backend disabling worker for (0.0.0.0) for 60s
[Wed Mar 07 23:11:54.095667 2018] [proxy_http:error] [pid 15934] [client 10.10.124.229:33360] AH01114: HTTP: failed to make connection to backend: 0.0.0.0
```
The proxy errors show Apache getting connection refused from a backend worker on port 3009, so maybe something did not get restarted. Reboot the whole server just to force everything to reinitialize. And then:
```
# curl https://`hostname`
<!DOCTYPE html>
...
```
Success.
I suspect that if I had done the certificate work prior to starting the services, I would not have had that problem. In an earlier attempt I made direct Certmonger calls, before I realized that there was a CLI option, and did not have to reboot.
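For reference, a direct Certmonger request along those lines would look roughly like this. This is an illustrative sketch, not the exact commands I ran: the file paths match what appliance_console_cli tracks above, and it assumes the HTTP service principal for the host exists in IPA (or that the host is allowed to create it).

```
# Illustrative only: request and track the HTTP certificate directly with certmonger
ipa-getcert request \
    -f /var/www/miq/vmdb/certs/server.cer \
    -k /var/www/miq/vmdb/certs/server.cer.key \
    -K HTTP/cfme.ayoung.rdusalab@AYOUNG.RDUSALAB \
    -D cfme.ayoung.rdusalab \
    -C "chmod 644 /var/www/miq/vmdb/certs/server.cer"
```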
So I can add tasks to perform these steps in my Ansible role, right after the IPA client install. Or I can use a custom getcert task. But that is a tale for a different day.
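In the meantime, here is a rough, untested sketch of what those extra tasks might look like, using the same variables as the enrollment tasks above. It is not idempotent as written, and per the troubleshooting above, the factory self-signed server.cer may need to be moved aside first.

```yaml
# Sketch only: request the HTTP certificate and restart, right after the ipa-client task.
- name: Get Kerberos credentials for the certificate request
  shell: echo '{{ ipa_server_password }}' | kinit admin
  tags:
    - ipaclient

- name: Request the HTTP certificate from IPA
  shell: appliance_console_cli --http-cert
  tags:
    - ipaclient

- name: Restart the appliance server processes
  shell: appliance_console_cli --server=restart
  tags:
    - ipaclient
```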