OK, since I did it wrong last time, I’m going to try creating a user in OpenShift and granting that user permissions to do various things.
I’m going to start by removing the ~/.kube directory on my laptop and performing operations via SSH on the master node. From my last session I can see I still have:
$ oc get users
NAME      UID                                    FULL NAME   IDENTITIES
ayoung    cca08f74-3a53-11e7-9754-1c666d8b0614               allow_all:ayoung

$ oc get identities
NAME               IDP NAME    IDP USER NAME   USER NAME   USER UID
allow_all:ayoung   allow_all   ayoung          ayoung      cca08f74-3a53-11e7-9754-1c666d8b0614
What OpenShift calls projects (perhaps taking the lead from Keystone?), Kubernetes calls namespaces:
$ oc get projects
NAME               DISPLAY NAME   STATUS
default                           Active
kube-system                       Active
logging                           Active
management-infra                  Active
openshift                         Active
openshift-infra                   Active

[ansible@munchlax ~]$ kubectl get namespaces
NAME               STATUS    AGE
default            Active    18d
kube-system        Active    18d
logging            Active    7d
management-infra   Active    10d
openshift          Active    18d
openshift-infra    Active    18d
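As a quick illustration of that equivalence, creating a project with oc also creates the underlying namespace, which kubectl can then see. This is a sketch rather than something I ran in this session, and the name “demo” is just a placeholder:

$ oc new-project demo --display-name="Demo Project"
$ kubectl get namespace demo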
According to the documentation here, I should be able to log in from my laptop, and all of the configuration files just get magically set up. Let’s see what happens:
$ oc login
Server [https://localhost:8443]: https://munchlax:8443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://munchlax:8443 (openshift)
Username: ayoung
Password:
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

Welcome! See 'oc help' to get started.
Just to make sure I sent something, I typed in the password “test”, but it could have been anything. The config file now has this:
$ cat ~/.kube
.kube/     .kube.bak/
[ayoung@ayoung541 ~]$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://munchlax:8443
  name: munchlax:8443
contexts:
- context:
    cluster: munchlax:8443
    user: ayoung/munchlax:8443
  name: /munchlax:8443/ayoung
current-context: /munchlax:8443/ayoung
kind: Config
preferences: {}
users:
- name: ayoung/munchlax:8443
  user:
    token: 4X2UAMEvy43sGgUXRAp5uU8KMyLyKiHupZg7IUp-M3Q
I’m going to resist the urge to look too closely into that token thing.
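For the record, that is the bearer token oc stored at login; a sketch of how it could be used directly against the API on the same server (not something I ran here) would be:

$ TOKEN=$(oc whoami -t)
$ curl -k -H "Authorization: Bearer $TOKEN" https://munchlax:8443/api/v1/namespaces/default/pods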
I’m going to work under the assumption that a user can be granted roles in several namespaces. Let’s see:
$ oc get namespaces
Error from server (Forbidden): User "ayoung" cannot list all namespaces in the cluster
Not a surprise. But the question I have now is “which namespace am I working with?” Let me see if I can figure it out.
$ oc get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"
and via kubectl
$ kubectl get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"
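To answer the “which namespace” question without guessing, both clients can report the current context. This is a sketch, assuming the config that oc login wrote above: oc project prints the project the context points at (or complains that none is set), and the kubectl query prints the namespace recorded in the kubeconfig; if it prints nothing, kubectl falls back to "default".

$ oc project
$ kubectl config view --minify -o jsonpath='{..namespace}'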
What role do I need to be able to get pods? Let’s start by looking at the master node again:
[ansible@munchlax ~]$ oc get ClusterRoles | wc -l
64
[ansible@munchlax ~]$ oc get Roles | wc -l
No resources found.
0
This seems a bit strange. ClusterRoles are not limited to a namespace, whereas Roles are. Why am I not seeing any roles defined?
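Part of the answer is that Roles are namespaced, so oc get Roles only looks at the current project (default, which has none), while ClusterRoles exist once for the whole cluster. A sketch of how to check every project at once:

$ oc get roles --all-namespaces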
Let’s start by figuring out who can list pods:
oadm policy who-can GET pods
Namespace: default
Verb:      GET
Resource:  pods

Users:  system:admin
        system:serviceaccount:default:deployer
        system:serviceaccount:default:router
        system:serviceaccount:management-infra:management-admin
        system:serviceaccount:openshift-infra:build-controller
        system:serviceaccount:openshift-infra:deployment-controller
        system:serviceaccount:openshift-infra:deploymentconfig-controller
        system:serviceaccount:openshift-infra:endpoint-controller
        system:serviceaccount:openshift-infra:namespace-controller
        system:serviceaccount:openshift-infra:pet-set-controller
        system:serviceaccount:openshift-infra:pv-binder-controller
        system:serviceaccount:openshift-infra:pv-recycler-controller
        system:serviceaccount:openshift-infra:statefulset-controller

Groups: system:cluster-admins
        system:cluster-readers
        system:masters
        system:nodes
And why is this? What roles are permitted to list pods?
$ oc get rolebindings
NAME                   ROLE                    USERS   GROUPS                           SERVICE ACCOUNTS     SUBJECTS
system:deployer        /system:deployer                                                 deployer, deployer
system:image-builder   /system:image-builder                                            builder, builder
system:image-puller    /system:image-puller            system:serviceaccounts:default
I don’t see anything that explains why admin would be able to list pods there. And the list is a bit thin.
Another page advises that I try the command:
oc describe clusterPolicy
But the output of that is voluminous. With a little trial and error, I discover I can do the same thing using the kubectl command and get the output in JSON, which lets me inspect it. Here is a fragment of the output.
"roles": [ { "name": "admin", "role": { "metadata": { "creationTimestamp": "2017-05-05T02:24:17Z", "name": "admin", "resourceVersion": "24", "uid": "f063233e-3139-11e7-8169-1c666d8b0614" }, "rules": [ { "apiGroups": [ "" ], "attributeRestrictions": null, "resources": [ "pods", "pods/attach", "pods/exec", "pods/portforward", "pods/proxy" ], "verbs": [ "create", "delete", "deletecollection", "get", "list", "patch", "update", "watch" ] }, |
There are many more rules, but this one shows what I want: there is a policy role named “admin” that has a rule granting access to pods via the list verb, among others.
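For anyone who wants to trim that output down to just this rule, here is a sketch of a filter, assuming jq is available and that the object name (“default”) and structure match the fragment above:

$ oc get clusterpolicy default -o json \
    | jq '.roles[] | select(.name == "admin") | .role.rules[] | select(.resources | index("pods"))'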
Let’s see if I can make my ayoung account into a cluster-reader by adding the role to the user directly.
On the master
$ oadm policy add-role-to-user cluster-reader ayoung
role "cluster-reader" added: "ayoung"
On my laptop
$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-2-z91cq    1/1       Running   3          8d
registry-console-1-g4qml   1/1       Running   3          8d
router-5-4w3zt             1/1       Running   3          8d
Back on the master, we see:
$ oadm policy who-can list pods
Namespace: default
Verb:      list
Resource:  pods

Users:  ayoung
        system:admin
        system:serviceaccount:default:deployer
        system:serviceaccount:default:router
        system:serviceaccount:management-infra:management-admin
        system:serviceaccount:openshift-infra:build-controller
        system:serviceaccount:openshift-infra:daemonset-controller
        system:serviceaccount:openshift-infra:deployment-controller
        system:serviceaccount:openshift-infra:deploymentconfig-controller
        system:serviceaccount:openshift-infra:endpoint-controller
        system:serviceaccount:openshift-infra:gc-controller
        system:serviceaccount:openshift-infra:hpa-controller
        system:serviceaccount:openshift-infra:job-controller
        system:serviceaccount:openshift-infra:namespace-controller
        system:serviceaccount:openshift-infra:pet-set-controller
        system:serviceaccount:openshift-infra:pv-attach-detach-controller
        system:serviceaccount:openshift-infra:pv-binder-controller
        system:serviceaccount:openshift-infra:pv-recycler-controller
        system:serviceaccount:openshift-infra:replicaset-controller
        system:serviceaccount:openshift-infra:replication-controller
        system:serviceaccount:openshift-infra:statefulset-controller

Groups: system:cluster-admins
        system:cluster-readers
        system:masters
        system:nodes
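One thing worth noting: cluster-reader is a cluster role, but add-role-to-user binds it only within the current project, which is why the listing above is still scoped to the default namespace. If I wanted ayoung to be able to read everything cluster-wide, the variant (not run in this session) would be:

$ oadm policy add-cluster-role-to-user cluster-reader ayoung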
And now to remove the role:
On the master
$ oadm policy remove-role-from-user cluster-reader ayoung
role "cluster-reader" removed: "ayoung"
On my laptop
$ kubectl get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"
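As a final sanity check on the master (a sketch of what I would run, not captured output), the binding in the default project should be gone as well:

$ oc get rolebindings -n default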