

cka02 Scheduling notes

헐리 2021. 9. 11. 10:45

This post summarizes the Scheduling section of the Udemy course <Certified Kubernetes Administrator (CKA) with Practice Tests>, written up for certification study purposes.

https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests

 


 

Scheduling


□ How does scheduling work?

- We usually don't specify nodeName when we create the manifest file, because Kubernetes adds it automatically.

- The scheduler goes through all the pods and looks for those that do not have the nodeName property set.

- It then identifies the right node for the pod by running the scheduling algorithm.

- Once identified, it schedules the pod on that node by setting the nodeName property to the name of the node, which it does by creating a binding object.

- You can only specify the nodeName at creation time.

- The way to assign a node to an existing pod is to create a binding object and send a POST request to the pod's binding API, thus mimicking what the actual scheduler does.

- In the binding object you specify a target node with the name of the node, then send a POST request to the pod's binding API with the data set to the binding object in JSON format (a sketch follows below).
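
A minimal sketch of that binding request, assuming a pod named nginx in the default namespace that should be placed on node02 (the pod name, node name, and $SERVER address are illustrative):

#nginx-binding.yaml
apiVersion: v1
kind: Binding
metadata:
  name: nginx
target:
  apiVersion: v1
  kind: Node
  name: node02

#the same object sent as JSON to the pod's binding subresource ($SERVER is the API server address)
curl --header "Content-Type: application/json" --request POST --data '{"apiVersion":"v1","kind":"Binding","metadata":{"name":"nginx"},"target":{"apiVersion":"v1","kind":"Node","name":"node02"}}' http://$SERVER/api/v1/namespaces/default/pods/nginx/binding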

Filtering objects by their labels with selectors:

kubectl get pods --selector env=dev

kubectl get pod --selector env=prod,bu=finance,tier=frontend

 

[Taints and Tolerations]

- Used to set restrictions on what pods can be scheduled on a node.

kubectl taint nodes node1 app=blue:NoSchedule

- taint-effect: what happens to pods that DO NOT TOLERATE this taint? (NoSchedule | PreferNoSchedule | NoExecute)

tolerations:
- key: "app"
  operator: "Equal"
  value: "blue"
  effect: "NoSchedule"

- The scheduler does not schedule any pod on the master node, because when the Kubernetes cluster is first set up, a taint is automatically set on the master node that prevents any pods from being scheduled on it.

- kubectl describe node node01 | grep Taint

- kubectl describe node controlplane | grep Taint

- untaint (note the trailing minus): kubectl taint nodes node1 key1=value1:NoSchedule-
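
For reference, a minimal pod manifest carrying the toleration above might look like this (the pod and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-blue-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"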

 

[Node Selector]

There are two ways to select a node:

(1) using a Node Selector, which is the simpler and easier method

- spec.nodeSelector.size: Large

  - How does Kubernetes know which is the large node? The key-value pair size: Large is in fact a label assigned to the nodes; the scheduler uses these labels to match and identify the right node (a minimal manifest sketch follows below).

- kubectl label nodes <node-name> <label-key>=<label-value>
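
A minimal pod manifest using this node selector might look like the following, assuming a node has already been labeled size=Large with the command above (pod and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: data-processor-pod
spec:
  containers:
  - name: data-processor
    image: data-processor   #illustrative image name
  nodeSelector:
    size: Large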

(2) using Node Affinity (see the next section)

 

[Node Affinity]

- e.g. place the pod on a Large or Medium node

- e.g. but not on a Small node

- The primary purpose of the node affinity feature is to ensure that pods are hosted on particular nodes.

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   #or preferredDuringSchedulingIgnoredDuringExecution
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In          #or NotIn, Exists
            values:
            - Large               #or Medium, Small

 

[Resource Requirements and Limits]

- If the node does not have sufficient resources, the scheduler avoids placing the pod on that node and instead places the pod on one where sufficient resources are available.

- If there are not sufficient resources available on any of the nodes, Kubernetes holds back scheduling the pod.

  --> the pod stays in a Pending state (e.g. Insufficient cpu)

- Resource request for a container: the minimum amount of CPU or memory requested by the container, used when the scheduler tries to place the pod on a node.

  --> It uses these numbers to identify a node which has a sufficient amount of resources available.

1G  (Gigabyte) = 1,000,000,000 bytes
1Gi (Gibibyte) = 1,073,741,824 bytes

- Resource Limits: if you do not specify a limit explicitly, by default a container will be limited to consume only 1 vCPU from the node; the same goes for memory (512 Mi by default).

- The limits and requests are set for each container within the pod.

- A container can't use more CPU resources than its limit.

- However, a container can use more memory than its limit; if a pod constantly tries to consume more memory than its limit, the pod will be terminated.
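
A minimal sketch of requests and limits on a pod's container (the pod name, image, and numbers are illustrative); the LimitRange objects below show how such defaults can be set per namespace:

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp
spec:
  containers:
  - name: simple-webapp
    image: nginx
    resources:
      requests:
        memory: "1Gi"
        cpu: 1
      limits:
        memory: "2Gi"
        cpu: 2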

 

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
    
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container

Remember, you CANNOT edit specifications of an existing POD other than the below.

  • spec.containers[*].image
  • spec.initContainers[*].image
  • spec.activeDeadlineSeconds
  • spec.tolerations

To edit anything else, extract the pod definition in YAML format to a file, modify it, then delete and recreate the pod:

kubectl get pod webapp -o yaml > my-new-pod.yaml
kubectl delete pod webapp #delete the existing pod
kubectl create -f my-new-pod.yaml # create a new pod with the edited file

 

[DaemonSets]

- It helps you deploy multiple instances of a pod, but it runs exactly one copy of the pod on each node in your cluster.

- Whenever a new node is added to the cluster, a replica of the pod is automatically added to that node, and when a node is removed, the pod is automatically removed.

- The DaemonSet ensures that one copy of the pod is always present on all nodes in the cluster.

- The kube-proxy component can be deployed as a DaemonSet in the cluster (a minimal sketch follows at the end of this section).

□How does it work?

- (up to Kubernetes v1.12) the DaemonSet set the nodeName property in each pod's specification before it was created, so when the pods were created they automatically landed on the respective nodes

- (from v1.12 onwards) the DaemonSet uses the default scheduler and node affinity rules
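
A minimal DaemonSet sketch (the names and image are illustrative; a real kube-proxy or monitoring-agent DaemonSet would carry more configuration):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent   #illustrative image name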

 

[Static PODs]

- You can configure the kubelet to periodically read pod definition files from a directory on the server designated to store information about pods.

  --> the kubelet creates the pods and ensures they stay alive

  --> if the application crashes, the kubelet attempts to restart it

  --> if you remove a file from this directory, the pod is deleted automatically

□So these pods that are created by the kubelet on its own without the intervention from the API server or rest of the k8s cluster components are known as Static PODS.

□You can't create replicasets or deployments or services by placing a definition file in the designated directory

□The kubelet works at a POD level and can only understand pods

□ What is the designated folder?

- It could be any directory on the host.

1) The location of that directory is passed to the kubelet as an option while running the service

#kubelet.service
--pod-manifest-path=/etc/kubernetes/manifests

2) Another way: you could provide a path to another config file using the --config option, and define the directory path there

#kubelet.service
--config=kubeconfig.yaml

#kubeconfig.yaml
staticPodPath: /etc/kubernetes/manifests

- Since in that case we don't have an API server, there is no kubectl utility, which is why we use the docker command (see the example below).
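
For example, to check on the node whether a static pod's container is running (this assumes a Docker container runtime; on a containerd-based node you would use crictl ps instead):

docker ps
docker ps | grep static-web   #narrow down to a specific static pod; the name is illustrative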

- The kubelet can take in requests for creating pods from different inputs:

  1.   static pods folder
  2.   HTTP API endpoint

- The kubelet can create both kinds of pods (from the static pods folder and via the HTTP API endpoint) at the same time.

- If you run the kubectl get pods command, the static pods will be listed like any other pods.

  -> How is that happening?

    When the kubelet creates a static pod, if it is part of a cluster, it also creates a mirror object in the kube-apiserver.

    What you see from the kube-apiserver is just a read-only mirror of the pod.

- You can only delete a static pod by modifying or removing the files in the node's manifest folder.

□ Why would you want static pods?

- Since static pods are not dependent on the Kubernetes control plane, you can use static pods to deploy the control plane components themselves as pods on a node.

 

Static PODs:
- created by the kubelet
- used to deploy control plane components as static pods

DaemonSets:
- created through the kube-apiserver (DaemonSet controller)
- used to deploy monitoring agents, logging agents

Both are ignored by the kube-scheduler.

 

□ How many static pods? / Which pod is not a static pod?

- Static (mirror) pods show up with the node name appended to their name (e.g. ending in -controlplane); a pod whose name does not end with a node name is not a static pod.

□Create a static pod named static-busybox 

kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml

 

[Multiple Schedulers]

- In your own scheduler you can add your own custom conditions and checks.

- you can write your own k8s scheduler program, package it  and deploy it as the default scheduler or as an additional scheduler in the kubernetes cluster

- all of the other applications can go through the default scheduler

- When creating a pod or deployment, you can instruct Kubernetes to have the pod scheduled by a specific scheduler by setting the schedulerName field in the pod spec (see the example after the manifest below).

--leader-elect=true
--scheduler-name=my-custom-scheduler

- The leader-elect option is used when you have multiple copies of the scheduler running on different master nodes, i.e. a High Availability setup where multiple master nodes run the kube-scheduler process.

- if multiple copies of the same scheduler are running on different nodes, only one can be active at a time.

   --> that's  where the leader-elect option helps in choosing a leader who will lead scheduling activities.

- To get multiple schedulers working in a cluster that does not have multiple masters, set the leader-elect option to false.

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: my-scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --scheduler-name=my-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.19.0
    imagePullPolicy: IfNotPresent
    name: kube-scheduler
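
To have a pod use this custom scheduler, set schedulerName in its spec (the pod and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-scheduler
  containers:
  - name: nginx
    image: nginx

You can then check which scheduler picked the pod up with kubectl get events -o wide.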