Script to create a cluster-admin user and approve the CSR through the k8s cluster:
K8S_USER=your_user
GROUP=cluster-admin
openssl genrsa -out "$K8S_USER.key" 2048
openssl req -new -key "$K8S_USER.key" \
  -out "$K8S_USER.csr" \
  -subj "/CN=$K8S_USER/O=$GROUP"
openssl req -in "$K8S_USER.csr" -noout -text
BASE64_CSR=$(base64 -w0 < "./$K8S_USER.csr")
cat <<EOF | kubectl apply -f-
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: $K8S_USER
spec:
  request: ${BASE64_CSR}
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 31536000 # 1 year
  usages:
    - client auth
EOF
kubectl certificate approve $K8S_USER
kubectl get csr $K8S_USER -o jsonpath='{.status.certificate}' \
| base64 --decode > ${K8S_USER}.crt
kubectl get cm kube-root-ca.crt -o jsonpath="{['data']['ca\.crt']}" \
  > ca.crt
cat <<EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: akkuyu:${K8S_USER}
subjects:
- kind: User
  name: ${K8S_USER}
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
# to revoke access, just delete the clusterrolebinding:
# kubectl delete clusterrolebinding akkuyu:${K8S_USER}
### Generate KUBECONFIG
SERVER_ADDR_PORT=https://10.125.22.89:6443
CLUSTER_NAME=test
kubectl --kubeconfig ./$K8S_USER.yaml config set-credentials $K8S_USER \
  --client-certificate=./$K8S_USER.crt \
  --client-key=./$K8S_USER.key
kubectl --kubeconfig ./$K8S_USER.yaml config set-context $K8S_USER-$CLUSTER_NAME \
  --cluster=$CLUSTER_NAME --user=$K8S_USER
kubectl --kubeconfig ./$K8S_USER.yaml config use-context $K8S_USER-$CLUSTER_NAME
kubectl --kubeconfig ./$K8S_USER.yaml config set-cluster $CLUSTER_NAME --server=$SERVER_ADDR_PORT --certificate-authority=./ca.crt
export KUBECONFIG=./$K8S_USER.yaml
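Once the certificate comes back, it's worth confirming that CN and O carry the expected user and group, since the API server reads the username from CN and the groups from O. A self-contained sketch (self-signing a throwaway cert locally instead of going through the CSR API; `demo` is a hypothetical user):

```shell
# Hypothetical local sketch: verify how CN/O map to user/group.
# Self-signed here just to inspect the subject; the real cert comes
# from the approved CSR above.
K8S_USER=demo
GROUP=cluster-admin
openssl genrsa -out "$K8S_USER.key" 2048
openssl req -new -x509 -key "$K8S_USER.key" -out "$K8S_USER.crt" \
  -days 1 -subj "/CN=$K8S_USER/O=$GROUP"
# CN is the Kubernetes username, O the group
openssl x509 -in "$K8S_USER.crt" -noout -subject -enddate
```

The same `openssl x509 -noout -subject -enddate` check works on the real $K8S_USER.crt issued by the cluster.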
 
Based on a good person’s article
Connect to the master node and create a key and certificate for the user
Let’s set env variables for master node, user and group
Permissions can also be granted to groups
MASTER_HOST=<your_master_host>
ssh $MASTER_HOST
K8S_USER=bob
GROUP=space
 
Generate the key for the user
openssl genrsa -out "$K8S_USER.key" 2048
 
Create a certificate signing request (CSR)
openssl req -new -key "$K8S_USER.key" \
  -out "$K8S_USER.csr" \
  -subj "/CN=$K8S_USER/O=$GROUP"
# several groups
# openssl req -new -key "$K8S_USER.key" \
#   -out "$K8S_USER.csr" \
#   -subj "/CN=$K8S_USER/O=$GROUP1/O=$GROUP2/O=$GROUP3"
openssl req -in "$K8S_USER.csr" -noout -text
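As a quick local check that repeated /O= fields really land in the CSR subject (hypothetical user and group names, no cluster needed):

```shell
# Sketch: repeated /O= fields become multiple group entries
# in the CSR subject (names here are made up).
openssl genrsa -out multi.key 2048
openssl req -new -key multi.key -out multi.csr \
  -subj "/CN=bob/O=space/O=dev"
# the subject line should show both O entries
openssl req -in multi.csr -noout -subject
```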
 
Sign the CSR with ca.crt and ca.key. The cluster CA certificate and key live in /etc/kubernetes/pki.
sudo openssl x509 -req -in "$K8S_USER.csr" \
  -CA /etc/kubernetes/pki/ca.crt \
  -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial \
  -out "$K8S_USER.crt" -days 720
openssl x509 -in "$K8S_USER.crt" -noout -text
exit
 
Fetch the key and cert, then delete them from the master node. Also grab the cluster's public key (which is the CA certificate :) )
K8S_USER=bob
scp $MASTER_HOST:~/$K8S_USER.crt /tmp/
scp $MASTER_HOST:~/$K8S_USER.key /tmp/
scp $MASTER_HOST:/etc/kubernetes/pki/ca.crt /tmp/
ssh $MASTER_HOST rm ~/{$K8S_USER.crt,$K8S_USER.csr,$K8S_USER.key}
 
Let’s create a config file for our user
In this file we can embed the certificates inline, but then we need to base64-encode them and rename the fields like this:
- certificate-authority -> certificate-authority-data
- client-certificate    -> client-certificate-data
- client-key            -> client-key-data
Example of producing the inline base64: cat /etc/kubernetes/pki/ca.crt | base64 -w 0
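The encoding step above can be sketched end-to-end with a dummy file (dummy-ca.crt is made up here, just to show the round trip):

```shell
# Sketch with a dummy file, just to show the encoding step.
# Note: -w 0 is GNU base64; on macOS the output is unwrapped by default.
printf 'dummy-ca-pem' > dummy-ca.crt
CA_DATA=$(base64 -w 0 < dummy-ca.crt)
echo "certificate-authority-data: $CA_DATA"
# round-trip to confirm nothing was mangled
echo "$CA_DATA" | base64 --decode
```

kubectl can also inline everything for you: kubectl config view --flatten prints a kubeconfig with the file references replaced by the *-data fields.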
K8S_USER=bob
SERVER_ADDR_PORT=https://10.125.22.89:6443
CLUSTER_NAME=test-env
kubectl --kubeconfig /tmp/$K8S_USER config set-credentials $K8S_USER \
  --client-certificate=/tmp/$K8S_USER.crt \
  --client-key=/tmp/$K8S_USER.key
kubectl --kubeconfig /tmp/$K8S_USER config set-context $K8S_USER-$CLUSTER_NAME \
  --cluster=$CLUSTER_NAME --user=$K8S_USER
kubectl --kubeconfig /tmp/$K8S_USER config use-context $K8S_USER-$CLUSTER_NAME
kubectl --kubeconfig /tmp/$K8S_USER config set-cluster $CLUSTER_NAME --server=$SERVER_ADDR_PORT --certificate-authority=/tmp/ca.crt
# [--insecure-skip-tls-verify=true] [--tls-server-name=example.com] [options]
# more options:
# kubectl config --help
# kubectl config set-cluster --help
 
Export KUBECONFIG and give it a try
export KUBECONFIG=/tmp/$K8S_USER
kubectl get po
	Error from server (Forbidden): pods is forbidden: User "bob" cannot list resource "pods" in API group "" in the namespace "default"
unset KUBECONFIG
 
In this task I only need to grant read access, so the built-in ClusterRole view fits well.
We also need to keep our cluster's RBAC API version in mind
kubectl api-resources | grep ClusterRoleBinding
	clusterrolebindings   rbac.authorization.k8s.io/v1   false   ClusterRoleBinding
 
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: $K8S_USER
subjects:
- kind: User
  name: $K8S_USER
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
 
It’s convenient to explore permissions with kubectl auth can-i ... (kubectl auth can-i --list --as $K8S_USER prints everything the user may do)
kubectl auth can-i documentation
kubectl auth can-i get pods --all-namespaces --as $K8S_USER
  yes
 
We can also revoke the user's permissions
cat << EOF | kubectl delete -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: $K8S_USER
subjects:
- kind: User
  name: $K8S_USER
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
 
Let’s check again
kubectl auth can-i create pods --all-namespaces --as $K8S_USER
  no
 
Check with a group. Note that --as bob impersonates only the user, without his groups, which is why the first can-i below answers no even though the binding targets bob's group; add --as-group space to impersonate the group too, or use the user's kubeconfig as the snippet does
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: space
subjects:
- kind: Group
  name: space
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl auth can-i get pods --all-namespaces --as bob
  no
export KUBECONFIG=/tmp/$K8S_USER
kubectl auth can-i get pods --all-namespaces
  yes
unset KUBECONFIG
cat << EOF | kubectl delete -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: space
subjects:
- kind: Group
  name: space
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
export KUBECONFIG=/tmp/$K8S_USER
kubectl auth can-i get pods --all-namespaces
  no
unset KUBECONFIG
 
Krew
Install krew to work with plugins
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
# add this line to ~/.zshrc
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
source ~/.zshrc
kubectl krew version
 
Install a plugin that shows permissions
kubectl krew install access-matrix
 
Show permissions
kubectl access-matrix --help
kubectl access-matrix --namespace default --as $K8S_USER
 
Example
We have:
- user: bob
- namespace: cassandra
- verbs: get,list,create
- resources: pods,services,sts,svc/portforward,pods/portforward,events,pods/log
Let’s generate a Role manifest in the cassandra namespace with those verbs and resources:
kubectl --namespace cassandra \
  create role dev-role \
  --verb=get,list,create \
  --resource=pods,services,sts,svc/portforward,pods/portforward,events,pods/log \
  --dry-run=client -o yaml > dev-role.yaml
 
And generate a RoleBinding manifest linking the user to the role:
kubectl -n cassandra \
  create rolebinding dev-bind \
  --role=dev-role --user=bob \
  --dry-run=client -o yaml > dev-bind.yaml
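For reference, the generated dev-role.yaml comes out roughly like this (a sketch, not verbatim kubectl output; kubectl splits the rules by API group and expands the sts/svc shortnames):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-role
  namespace: cassandra
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/portforward", "services", "services/portforward", "events"]
  verbs: ["get", "list", "create"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["get", "list", "create"]
```

Apply both files with kubectl apply -f dev-role.yaml -f dev-bind.yaml once they look right.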