
The value of Kubernetes certifications (which certification carries the most weight right now?)


The value of the CKA exam

CKA (Certified Kubernetes Administrator) is the official Kubernetes administration certification exam.

Start with market recognition. The CKA certificate is issued by the Cloud Native Computing Foundation (CNCF), and it tests whether you have the knowledge needed to administer a Kubernetes cluster. The exam is hands-on: you work directly on live clusters, so it is a real test of how solid your knowledge is and how much practical Kubernetes experience you have. It is a detailed, rigorous official technical exam. Whether judged by the issuing body, the exam format, or the difficulty, CKA is currently the authoritative certification in this area.

Next, supply and demand. With the arrival of the cloud computing era, container technology has drawn more and more attention, and in recent years Kubernetes has become the de facto standard for container orchestration. Companies are actively hiring for these skills, and the field has plenty of room to grow. In addition, once a company has deployed containers in the cloud, it may want to apply for KCSP (Kubernetes Certified Service Provider) status and join the CNCF to keep open-source resources flowing for future innovation. Per CNCF requirements, a KCSP applicant must employ at least three CKA-certified engineers. The number of CKA holders is still small, so the market opportunity is large.

In summary, whether judged by market recognition or by supply and demand, the CKA certification (and the follow-on CKS certification) is unquestionably valuable: it is both a touchstone for checking that your knowledge and experience are keeping pace with the industry and one of the certificates that enterprise roles will increasingly require.

Below are the questions from the 2021 CKA exam; they remain a good reference.

Setting up tab completion for kubectl

# check that kubectl offers bash completion
kubectl --help | grep bash
sudo -i
vim /etc/profile        # add the line: source <(kubectl completion bash)
source /etc/profile
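If editing /etc/profile is not allowed or you only want completion for your own account, the same effect can be had per shell or per user; a minimal sketch, assuming bash and that kubectl is on the PATH:

# enable completion in the current shell only
source <(kubectl completion bash)
# or persist it for the current user instead of system-wide
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc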

Question 1

Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.

Task

Create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types:

Deployment

StatefulSet

DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1.

Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.

[tom@vms20 ~]$ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployment,statefulset,daemonset
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created
[tom@vms20 ~]$ kubectl create sa cicd-token -n app-team1
serviceaccount/cicd-token created
[tom@vms20 ~]$ kubectl create rolebinding rbinding1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
rolebinding.rbac.authorization.k8s.io/rbinding1 created
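To confirm the binding behaves as the task requires, kubectl auth can-i can impersonate the ServiceAccount; a quick check along these lines (the expected yes/no outcomes follow from the verb list above):

# expect "yes": create is granted on deployments in app-team1
kubectl auth can-i create deployment -n app-team1 --as=system:serviceaccount:app-team1:cicd-token
# expect "no": delete was never granted
kubectl auth can-i delete deployment -n app-team1 --as=system:serviceaccount:app-team1:cicd-token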

Question 2

Set the configuration context: kubectl config use-context ek8s

Set the node named ek8s-node-0 (vms25) as unavailable and reschedule all pods running on that node.

[tom@vms20 ~]$ kubectl config use-context ek8s
Switched to context "ek8s".
[tom@vms20 ~]$ kubectl drain vms25.rhce.cc --ignore-daemonsets
node/vms25.rhce.cc already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-g5xk8, kube-system/kube-proxy-cfm6q
node/vms25.rhce.cc drained
# if drain reports an error, add whatever options the message asks for
# the fully-specified form:
kubectl drain vms25.rhce.cc --ignore-daemonsets --delete-local-data --force
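After the drain it is worth verifying that the node is cordoned and that only DaemonSet-managed pods are left on it; a minimal check, assuming the same node name as above:

# the node should report SchedulingDisabled
kubectl get node vms25.rhce.cc
# list anything still scheduled there (expect only DaemonSet pods such as calico-node and kube-proxy)
kubectl get pods -A -o wide --field-selector spec.nodeName=vms25.rhce.cc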

Question 3

Set the configuration context: kubectl config use-context mk8s

The existing Kubernetes cluster is running version 1.19.2. Upgrade all Kubernetes control-plane and node components on the master node only to version 1.20.1.

Also, upgrade kubelet and kubectl on the master node.

Be sure to drain the master node before the upgrade and uncordon it afterwards. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or any other addons.

# The exam uses Ubuntu; this practice environment uses CentOS
ssh vms28.rhce.cc        # if no hostname is given, use the IP directly
sudo -i
# On Ubuntu (the exam system), pin kubeadm to the task's target version:
apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.20.1-00 && apt-mark hold kubeadm
[root@vms28 ~]# kubectl drain vms28.rhce.cc --ignore-daemonsets
node/vms28.rhce.cc cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-d488p, kube-system/kube-proxy-q5tv9
evicting pod kube-system/calico-kube-controllers-65f8bc95db-6jwwt
evicting pod kube-system/coredns-6d56c8448f-b6l26
evicting pod kube-system/coredns-6d56c8448f-fw8wh
pod/coredns-6d56c8448f-fw8wh evicted
pod/calico-kube-controllers-65f8bc95db-6jwwt evicted
pod/coredns-6d56c8448f-b6l26 evicted
node/vms28.rhce.cc evicted
[root@vms28 ~]# kubeadm upgrade apply v1.20.1 --etcd-upgrade=false
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.1"
[upgrade/versions] Cluster version: v1.19.2
[upgrade/versions] kubeadm version: v1.20.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
# On CentOS (this practice environment):
yum install -y kubelet-1.20.1-0 kubectl-1.20.1-0 --disableexcludes=kubernetes
[root@vms28 ~]# systemctl daemon-reload; systemctl restart kubelet
[root@vms28 ~]# kubectl uncorndon vms28.rhce.cc        # typo: the subcommand is uncordon
[root@vms28 ~]# kubectl uncordon vms28.rhce.cc
node/vms28.rhce.cc uncordoned
[root@vms28 ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE    VERSION
vms28.rhce.cc   Ready    control-plane,master   183d   v1.20.1
vms29.rhce.cc   Ready    <none>                 183d   v1.19.2

# A more complete, standard upgrade procedure
# First switch clusters, then ssh to the cluster's master node as a regular user, then switch to root
[tom@vms20 ~]$ ssh vms28.rhce.cc
Warning: Permanently added 'vms28.rhce.cc' (ECDSA) to the list of known hosts.
[tom@vms28 ~]$ sudo -i
# Drain the node
[root@vms28 ~]# kubectl drain vms28.rhce.cc --ignore-daemonsets
node/vms28.rhce.cc already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-d488p, kube-system/kube-proxy-q5tv9
evicting pod kube-system/calico-kube-controllers-65f8bc95db-6jwwt
evicting pod kube-system/coredns-6d56c8448f-b6l26
evicting pod kube-system/coredns-6d56c8448f-fw8wh
pod/calico-kube-controllers-65f8bc95db-6jwwt evicted
pod/coredns-6d56c8448f-b6l26 evicted
pod/coredns-6d56c8448f-fw8wh evicted
node/vms28.rhce.cc evicted
# Upgrade kubeadm
[root@vms28 ~]# yum install -y kubeadm-1.20.1-0 --disableexcludes=kubernetes
# Apply the target version, skipping the etcd upgrade
[root@vms28 ~]# kubeadm upgrade apply v1.20.1 --etcd-upgrade=false
# Upgrade kubelet and kubectl
[root@vms28 ~]# yum install -y kubelet-1.20.1-0 kubectl-1.20.1-0 --disableexcludes=kubernetes
# Restart kubelet
[root@vms28 ~]# systemctl daemon-reload
[root@vms28 ~]# systemctl restart kubelet
# Mark the node schedulable again to bring it back online
[root@vms28 ~]# kubectl uncordon vms28.rhce.cc
# Check the result
[root@vms28 ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE    VERSION
vms28.rhce.cc   Ready    control-plane,master   207d   v1.20.1
vms29.rhce.cc   Ready    <none>                 207d   v1.19.2
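Between upgrading the kubeadm binary and running apply, kubeadm can preview the available upgrade targets; adding this sanity check before the apply step is a reasonable habit:

# preview what the cluster can be upgraded to with the new kubeadm binary
kubeadm upgrade plan
# then apply exactly the version the task asks for, skipping etcd as instructed
kubeadm upgrade apply v1.20.1 --etcd-upgrade=false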

Question 4

No change of configuration context is required for this item.

First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /srv/data/etcd-snapshot.db.

Creating a snapshot of the given instance is expected to complete within a few seconds. If the operation seems to hang, something may be wrong with the command; press Ctrl-C to cancel, then retry.

Next, restore the existing, previous snapshot located at /srv/data/etcd-snapshot-previous.db.

The following TLS certificates and key are provided so you can connect to the server with etcdctl:

CA certificate: /opt/KUIN00601/ca.crt

Client certificate: /opt/KUIN00601/etcd-client.crt

Client key: /opt/KUIN00601/etcd-client.key

# No context switch is needed; work directly on the console
# Set the etcdctl API version; do this as root, and note the address is https://127.0.0.1:2379
export ETCDCTL_API=3
[root@vms20 ~]# etcdctl snapshot save --help
NAME:
        snapshot save - Stores an etcd node backend snapshot to a given file
USAGE:
        etcdctl snapshot save <filename>
GLOBAL OPTIONS:
        --cacert=""                             verify certificates of TLS-enabled secure servers using this CA bundle
        --cert=""                               identify secure client using this TLS certificate file
        --command-timeout=5s                    timeout for short running command (excluding dial timeout)
        --debug[=false]                         enable client-side debug logging
        --dial-timeout=2s                       dial timeout for client connections
        -d, --discovery-srv=""                  domain name to query for SRV records describing cluster endpoints
        --endpoints=[127.0.0.1:2379]            gRPC endpoints
        --hex[=false]                           print byte strings as hex encoded strings
        --insecure-discovery[=true]             accept insecure SRV records describing cluster endpoints
        --insecure-skip-tls-verify[=false]      skip server certificate verification
        --insecure-transport[=true]             disable transport security for client connections
        --keepalive-time=2s                     keepalive time for client connections
        --keepalive-timeout=6s                  keepalive timeout for client connections
        --key=""                                identify secure client using this TLS key file
        --user=""                               username[:password] for authentication (prompt if password is not supplied)
        -w, --write-out="simple"                set the output format (fields, json, protobuf, simple, table)

etcdctl snapshot save --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" --endpoints=https://127.0.0.1:2379 /srv/data/etcd-snapshot.db
etcdctl snapshot restore --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" --endpoints=https://127.0.0.1:2379 /srv/data/etcd-snapshot-previous.db
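Once the snapshot is saved, etcdctl can print its basic metadata, which is a quick way to confirm the file is a valid snapshot; the exact table layout may vary between etcdctl versions:

# inspect the saved snapshot (hash, revision, total keys, size)
ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db --write-out=table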

Question 5

Set the configuration context: kubectl config use-context k8s

Create a new NetworkPolicy named allow-port-from-namespace to allow Pods in the existing namespace my-app to connect to port 9200 of other Pods in the same namespace.

Make sure the new NetworkPolicy:

does not allow access to Pods that are not listening on port 9200

does not allow access from Pods that are not in namespace my-app

[tom@vms20 ~]$ kubectl get ns
NAME              STATUS   AGE
app-team1         Active   188d
default           Active   188d
ing-internal      Active   186d
ingress-nginx     Active   75d
kube-node-lease   Active   188d
kube-public       Active   188d
kube-system       Active   188d
[tom@vms20 ~]$ kubectl create ns my-app
namespace/my-app created
[tom@vms20 ~]$ kubectl label ns my-app name=my-app
namespace/my-app labeled
[tom@vms20 ~]$ kubectl get ns --show-labels
NAME              STATUS   AGE     LABELS
app-team1         Active   188d    <none>
default           Active   188d    <none>
ing-internal      Active   186d    <none>
ingress-nginx     Active   75d     app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-node-lease   Active   188d    <none>
kube-public       Active   188d    <none>
kube-system       Active   188d    <none>
my-app            Active   3m32s   name=my-app
# The NetworkPolicy YAML
[tom@vms20 ~]$ cat 5-network.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}        # empty selector: applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: my-app
    ports:
    - protocol: TCP
      port: 9200
[tom@vms20 ~]$ kubectl apply -f 5-network.yaml
networkpolicy.networking.k8s.io/allow-port-from-namespace created
# On the exam this may vary: you might be asked for a policy on outgoing (egress) traffic instead
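Since the note above warns the exam may ask for the egress direction instead, a minimal egress variant under the same namespace label and port might look like the following sketch (applied via stdin here; the name and selectors mirror the ingress version):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: my-app
    ports:
    - protocol: TCP
      port: 9200
EOF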

Question 6

Set the configuration context: kubectl config use-context k8s

Reconfigure the existing deployment front-end, adding a port specification named http to expose port 80/tcp of the existing container nginx.

Create a new service named front-end-svc that exposes the container port http.

Configure the service to expose the individual Pods via a NodePort on the nodes on which they are scheduled.

# Change to the deployment's YAML (container section):
    - image: nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      ports:
      - containerPort: 80
        name: http
        protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
[tom@vms20 ~]$ kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
front-end   1/1     1            1           187d
webserver   1/1     1            1           186d
[tom@vms20 ~]$ kubectl edit deploy front-end
error: deployments.apps "front-end" is invalid
A copy of your changes has been stored to "/tmp/kubectl-edit-u41ie.yaml"
error: Edit cancelled, no valid changes were saved.
[tom@vms20 ~]$ kubectl edit deploy front-end
Edit cancelled, no changes made.
# editing as the regular user failed; retry as root
[tom@vms20 ~]$ sudo -i
[root@vms20 ~]# kubectl edit deploy front-end
deployment.apps/front-end edited
[root@vms20 ~]# kubectl expose --name=front-end-svc deployment front-end --port=80 --target-port=80 --type=NodePort
service/front-end-svc exposed
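To confirm the service really fronts the pods, check that it got a NodePort and that its endpoints are populated; a quick verification, assuming the names above:

# the service should show type NodePort and an assigned high port
kubectl get svc front-end-svc
# one endpoint should appear per ready front-end pod
kubectl get endpoints front-end-svc
# then, from outside, curl any node IP on the reported node port (placeholders)
# curl http://<node-ip>:<node-port>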

Question 7

Set the configuration context: kubectl config use-context k8s

Create a new nginx Ingress resource as follows:

Name: pong

namespace: ing-internal

Expose the service hello on path /hello, using service port 5678.

You can check the availability of the service hello with the following command, which should return hello:

curl -kL <INTERNAL_IP>/hello/

Correction:

kubectl exec -it pod1 -n ing-internal -- ls /usr/share/nginx/html/

If the hello directory is missing:

kubectl exec -it pod1 -n ing-internal -- bash

mkdir /usr/share/nginx/html/hello

echo hello > /usr/share/nginx/html/hello/index.html

exit

Then try again.

# The YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  #annotations:                                      # these two lines are commented out in the practice
  #  nginx.ingress.kubernetes.io/rewrite-target: /   # environment; keep them on the exam to get hello
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
[tom@vms20 ~]$ kubectl apply -f 7-ingress.yaml
# hello was not returned in the practice environment
kubectl get pods -n ing-internal                 # find the pod to exec into
[tom@vms20 ~]$ kubectl get ing -n ing-internal   # get the ingress address
NAME   CLASS    HOSTS   ADDRESS         PORTS   AGE
pong   <none>   *       192.168.26.23   80      13d
[tom@vms20 ~]$ kubectl exec -it ingress-nginx-controller-5774fb4dd9-l5p5f -n ingress-nginx -- bash
bash-5.0$ mkdir /usr/share/nginx/html/hello
mkdir: can't create directory '/usr/share/nginx/html/hello': No such file or directory
bash-5.0$ su -
su: must be suid to work properly
bash-5.0$ sudo mkdir /usr/share/nginx/html/hello
bash: sudo: command not found
bash-5.0$ sudo echo hello > /usr/share/nginx/html/hello/index.html
bash: /usr/share/nginx/html/hello/index.html: No such file or directory
bash-5.0$ exit
exit
command terminated with exit code 1
[tom@vms20 ~]$ curl -kL 192.168.26.23/hello/index.html
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
[tom@vms20 ~]$ curl -kL 192.168.26.23/hello
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>

Question 8

Set the configuration context: kubectl config use-context k8s

Scale the deployment webserver to 6 pods.

[tom@vms20 ~]$ kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
front-end   1/1     1            1           187d
webserver   1/1     1            1           186d
[tom@vms20 ~]$ kubectl scale deployment webserver --replicas=6
deployment.apps/webserver scaled

Question 9

Set the configuration context: kubectl config use-context k8s

Schedule a pod as follows:

Name: nginx-kusc00401

image: nginx

Node selector: disk=ssd

[tom@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[tom@vms20 ~]$ kubectl run nginx-kusc00401 --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml > 9-pod.yaml
[tom@vms20 ~]$ vim 9-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  nodeSelector:
    disk: ssd
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx-kusc00401
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[tom@vms20 ~]$ kubectl apply -f 9-pod.yaml
pod/nginx-kusc00401 created
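If no node in the practice environment carries the disk=ssd label, the pod will sit in Pending; labeling a worker and checking placement confirms the nodeSelector works (vms22.rhce.cc is just this practice cluster's worker, not part of the exam task):

# practice environment only: satisfy the nodeSelector
kubectl label node vms22.rhce.cc disk=ssd
# the pod should be Running on a labeled node
kubectl get pod nginx-kusc00401 -o wide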

Question 10

Set the configuration context: kubectl config use-context k8s

Check how many worker nodes are Ready (excluding nodes with the taint NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.

[tom@vms20 ~]$ kubectl get nodes
NAME            STATUS   ROLES                  AGE    VERSION
vms21.rhce.cc   Ready    control-plane,master   188d   v1.20.1
vms22.rhce.cc   Ready    <none>                 188d   v1.20.1
vms23.rhce.cc   Ready    <none>                 188d   v1.20.1
[tom@vms20 ~]$ kubectl describe nodes vms21.rhce.cc | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[tom@vms20 ~]$ kubectl describe nodes vms22.rhce.cc | grep Taint
Taints:             <none>
[tom@vms20 ~]$ kubectl describe nodes vms23.rhce.cc | grep Taint
Taints:             <none>
[tom@vms20 ~]$ echo 2 > /opt/KUSC00402/kusc00402.txt
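Rather than describing nodes one by one, the count can be derived in a single pipeline; a sketch that assumes all nodes are Ready and carry at most one taint, as in this cluster:

# count the nodes whose first taint line does not mention NoSchedule
kubectl describe nodes | grep Taints | grep -vc NoSchedule
# write the result straight into the answer file
kubectl describe nodes | grep Taints | grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt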

Question 11

Set the configuration context: kubectl config use-context k8s

Create a pod named kucc4 that runs one app container for each of the following images (there may be 1 to 4 images):

nginx redis memcached consul

[tom@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[tom@vms20 ~]$ kubectl run kucc4 --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml > 11-pod.yaml
[tom@vms20 ~]$ vim 11-pod.yaml
[tom@vms20 ~]$ cat 11-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: c1
    resources: {}
  - image: redis
    imagePullPolicy: IfNotPresent
    name: c2
    resources: {}
  - image: memcached
    imagePullPolicy: IfNotPresent
    name: c3
    resources: {}
  - image: consul
    imagePullPolicy: IfNotPresent
    name: c4
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[tom@vms20 ~]$ kubectl apply -f 11-pod.yaml
pod/kucc4 created

Question 12

Set the configuration context: kubectl config use-context k8s

Create a persistent volume named app-data with capacity 1Gi and access mode ReadWriteMany. The volume type is hostPath, located at /srv/app-data.

[tom@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[tom@vms20 ~]$ vim 12-pv.yaml
[tom@vms20 ~]$ kubectl apply -f 12-pv.yaml
error: error validating "12-pv.yaml": error validating data: ValidationError(PersistentVolume.spec): unknown field "hostpath" in io.k8s.api.core.v1.PersistentVolumeSpec; if you choose to ignore these errors, turn validation off with --validate=false
# the field name is case-sensitive: hostPath, not hostpath
[tom@vms20 ~]$ vim 12-pv.yaml
[tom@vms20 ~]$ kubectl apply -f 12-pv.yaml
persistentvolume/app-data created
[tom@vms20 ~]$ cat 12-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /srv/app-data
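A quick check that the PersistentVolume came up with the requested size and access mode:

# expect 1Gi, RWX, and status Available until a claim binds it
kubectl get pv app-data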

Question 13

Set the configuration context: kubectl config use-context k8s

Create a new PersistentVolumeClaim:

Name: pvvolume

class: csi-hostpath-sc

Capacity: 10Mi

Create a new pod that mounts the PersistentVolumeClaim as a volume:

Name: web-server

image: nginx

Mount path: /usr/share/nginx/html

Configure the new pod to have ReadWriteOnce access to the volume.

Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim's capacity to 70Mi and record the change.

[tom@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[tom@vms20 ~]$ vim 13-pvc.yaml
[tom@vms20 ~]$ cat 13-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvvolume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
[tom@vms20 ~]$ kubectl apply -f 13-pvc.yaml
persistentvolumeclaim/pvvolume created
[tom@vms20 ~]$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvvolume   Bound    pvc-b53cc659-6542-49b3-9dc9-c7464ae5f8c2   10Mi       RWO            csi-hostpath-sc   6s
[tom@vms20 ~]$ vim 13-pod.yaml
[tom@vms20 ~]$ cat 13-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: pvvolume
[tom@vms20 ~]$ kubectl edit pvc pvvolume --record
persistentvolumeclaim/pvvolume edited
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 70Mi
  storageClassName: csi-hostpath-sc
  volumeMode: Filesystem
  volumeName: pvc-b53cc659-6542-49b3-9dc9-c7464ae5f8c2
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 70Mi
  phase: Bound
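The task also allows kubectl patch instead of kubectl edit; a one-line equivalent of the expansion, with --record to record the change as required, would be roughly:

# expand the claim to 70Mi without opening an editor
kubectl patch pvc pvvolume --record -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'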

Question 14

Set the configuration context: kubectl config use-context k8s

Monitor the logs of pod foo and:

extract the log lines corresponding to the error unable-to-access-website

write those log lines to /opt/KUTR00101/foo

[tom@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[tom@vms20 ~]$ kubectl logs foo | grep unable-to-access-website > /opt/KUTR00101/foo
[tom@vms20 ~]$ kubectl logs foo | grep unable-to-access-website
unable-to-access-website
unable-to-access-website
unable-to-access-website
unable-to-access-website

Question 15

Set the configuration context: kubectl config use-context k8s

Without changing its existing containers, an existing pod needs to be integrated into Kubernetes's built-in logging architecture (for example, kubectl logs). Adding a streaming sidecar container is a good way to meet this requirement.

Task

Add a busybox sidecar container to the existing pod legacy-app. The new sidecar container must run the following command:

/bin/sh -c tail -n 1 -f /var/log/legacy-app.log

Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.

Do not change the existing container. Do not modify the path of the log file; both containers must access it at /var/log/legacy-app.log.

[tom@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[tom@vms20 ~]$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   4          3d11h
csi-hostpath-provisioner-0   1/1     Running   9          3d11h
csi-hostpath-resizer-0       1/1     Running   4          3d11h
csi-hostpath-snapshotter-0   1/1     Running   4          3d11h
csi-hostpath-socat-0         1/1     Running   3          3d11h
csi-hostpathplugin-0         3/3     Running   10         3d11h
foo                          1/1     Running   5          75d
front-end-86bd877494-mctkz   1/1     Running   1          12h
kucc4                        4/4     Running   4          11h
legacy-app                   1/1     Running   5          75d
nginx-kusc00401              1/1     Running   1          11h
web-server                   1/1     Running   0          4m30s
webserver-7484bc7558-26f4k   1/1     Running   1          11h
webserver-7484bc7558-4c5zb   1/1     Running   1          11h
webserver-7484bc7558-b9vfj   1/1     Running   1          11h
webserver-7484bc7558-qzmpg   1/1     Running   4          3d11h
webserver-7484bc7558-spxb9   1/1     Running   1          11h
webserver-7484bc7558-sqc56   1/1     Running   1          11h
[tom@vms20 ~]$ kubectl get pods legacy-app -o yaml > 15-pod.yaml
[tom@vms20 ~]$ kubectl get pods legacy-app -o yaml > 15-pod.yaml.bak
[tom@vms20 ~]$ ls
11-pod.yaml  12-pv.yaml  13-pod.yaml  13-pvc.yaml  15-pod.yaml  15-pod.yaml.bak  5-network.yaml  7-ingress.yaml  9-pod.yaml
[tom@vms20 ~]$ kubectl delete pod legacy-app --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "legacy-app" force deleted
# Relevant part of the edited 15-pod.yaml (status fields trimmed):
spec:
  containers:
  - env:
    - name: LOG_FILENAME
      value: /var/log/legacy-app.log
    image: docker.io/lfcert/monitor:latest
    imagePullPolicy: IfNotPresent
    name: liveness
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-5ttw5
      readOnly: true
    - name: logs
      mountPath: /var/log
  - name: busybox
    image: busybox
    command: ["sh","-c","tail -n 1 -f /var/log/legacy-app.log"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: logs
      mountPath: /var/log
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: vms23.rhce.cc
  volumes:
  - name: default-token-5ttw5
    secret:
      defaultMode: 420
      secretName: default-token-5ttw5
  - name: logs
    emptyDir: {}

The YAML file comes from kubectl get pods legacy-app -o yaml > 15-pod.yaml; it is worth making a backup copy as well.

Keep the volumes together: add the logs volume to the pod's existing volumes list.
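After editing, recreate the pod from the modified file and confirm the logs are now reachable through the sidecar; a minimal check using the names above:

# recreate the pod with the added sidecar
kubectl apply -f 15-pod.yaml
# the log stream should now be visible via the busybox container
kubectl logs legacy-app -c busybox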

Question 16

Set the configuration context: kubectl config use-context k8s

Via the pod label name=cpu-user, find the pod consuming the most CPU at runtime and write its name to the file /opt/KUTR000401/KUTR00401.txt (which already exists).

[tom@vms20 ~]$ kubectl config use-context k8s
Switched to context "k8s".
[tom@vms20 ~]$ kubectl top pods -l name=cpu-user
NAME                         CPU(cores)   MEMORY(bytes)
webserver-7484bc7558-26f4k   0m           3Mi
webserver-7484bc7558-4c5zb   0m           1Mi
webserver-7484bc7558-b9vfj   0m           1Mi
webserver-7484bc7558-qzmpg   0m           1Mi
webserver-7484bc7558-spxb9   0m           1Mi
webserver-7484bc7558-sqc56   0m           1Mi
# the answer file: /opt/KUTR000401/KUTR00401.txt
[tom@vms20 ~]$ echo webserver-7484bc7558-26f4k > /opt/KUTR000401/KUTR00401.txt
[tom@vms20 ~]$ echo webserver-7484bc7558-26f4k > /opt/KUTR00401/KUTR00401.txt
-bash: /opt/KUTR00401/KUTR00401.txt: No such file or directory
[tom@vms20 ~]$ mkdir /opt/KUTR00401/
[tom@vms20 ~]$ echo webserver-7484bc7558-26f4k > /opt/KUTR00401/KUTR00401.txt
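When the pods report different CPU values, sorting beats eyeballing; recent kubectl versions support --sort-by on top, which puts the heaviest consumer first:

# highest CPU consumer appears at the top of the list
kubectl top pods -l name=cpu-user --sort-by=cpu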

Question 17

Set the configuration context: kubectl config use-context ek8s

The Kubernetes worker node named wk8s-node-0 (vms26.rhce.cc in the practice environment) is in NotReady state. Investigate why this is happening and take appropriate action to bring the node back to Ready, ensuring that any changes you make are permanent.

You can connect to the failed node over ssh with the following command:

ssh wk8s-node-0 (vms26.rhce.cc)

You can gain elevated privileges on that node with the following command:

sudo -i

[root@vms20 ~]# kubectl config use-context ek8s
Switched to context "ek8s".
[root@vms20 ~]# kubectl get nodes
NAME            STATUS                     ROLES                  AGE    VERSION
vms24.rhce.cc   Ready                      control-plane,master   191d   v1.20.1
vms25.rhce.cc   Ready,SchedulingDisabled   <none>                 191d   v1.20.1
vms26.rhce.cc   NotReady                   <none>                 191d   v1.20.1
[root@vms20 ~]# ssh tom@vms26.rhce.cc
tom@vms26.rhce.cc's password:
Last login: Sat Apr 17 11:52:53 2021 from 192.168.26.26
[tom@vms26 ~]$ sudo -i
[root@vms26 ~]# systemctl is-active kubelet.service
unknown
[root@vms26 ~]# systemctl start kubelet
[root@vms26 ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@vms26 ~]# exit
logout
[tom@vms26 ~]$ exit
logout
Connection to vms26.rhce.cc closed.
[root@vms20 ~]# kubectl get nodes
NAME            STATUS                     ROLES                  AGE    VERSION
vms24.rhce.cc   Ready                      control-plane,master   191d   v1.20.1
vms25.rhce.cc   Ready,SchedulingDisabled   <none>                 191d   v1.20.1
vms26.rhce.cc   Ready                      <none>                 191d   v1.20.1
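In this case the kubelet had simply been stopped; when the cause is less obvious, inspecting the service state and its recent logs on the failed node is the standard first step. A diagnostic sketch:

# on the NotReady node: see why kubelet is not running
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50
# once fixed, start it and enable it so the change survives reboots
systemctl start kubelet
systemctl enable kubelet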
