

cka01 core concepts study notes

헐리 2021. 9. 11. 10:36

This post summarizes the content of Udemy's <Certified Kubernetes Administrator (CKA) with Practice Tests> course as I studied for the certification.

https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests

 


 

Core concepts

Attachment: core concepts.pdf (0.11MB)

[Etcd] a database that stores information in a key-value format

Etcd

default port: 2379

           --advertise-client-urls https://${internal_ip}:2379
           # this is the URL that should be configured on the kube-apiserver

□ key-value datastore (namespace: kube-system)

□ the ETCD datastore stores information regarding the cluster, such as the nodes, pods, configs, secrets, accounts, roles, role bindings, and others

□ All the information you see when you run a kubectl get command comes from the ETCD server

□ Every change to the cluster, such as adding nodes or deploying pods, is updated in the ETCD server

□ Only once it is updated in the ETCD server is the change considered complete

□ in a high-availability setup, multiple ETCD instances are spread across the master nodes
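
Since clients reach ETCD through the advertise-client-urls port above (2379), you can query it directly with etcdctl. A minimal sketch, assuming a kubeadm-style cluster with the certificates under /etc/kubernetes/pki/etcd (paths differ on other setups):

    # list the members of the ETCD cluster over the client port
    ETCDCTL_API=3 etcdctl member list \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key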

 

[kube-scheduler] identifies the right node to place a container on, based on the containers' resource requirements

Kube Scheduler

□ the kubelet (the captain of the ship) is what actually creates the pod on the ships

□ the kube-scheduler is only responsible for deciding which pod goes on which node

□ Filter Nodes ⇒ Rank Nodes ⇒ Decide

□ kube-system namespace
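
The filter and rank steps work off what a pod requests, so the scheduler has the most to go on when the pod spec declares resource requests. A minimal sketch of such a spec (the pod name, image, and request values are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
    spec:
      containers:
      - name: myapp
        image: nginx
        resources:
          requests:         # the scheduler filters and ranks nodes against these
            cpu: "500m"
            memory: 256Mi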

 

 

[controller manager]

           [Node-controller]

           [Replication-controller]

Kube Controller Manager

A controller is like an office or department within the master ship, and each has its own set of responsibilities

□ a process that continuously monitors the state of various components within the system and works towards bringing the whole system to the desired functioning state

□ continuously on the lookout for the status of the ships

□ take necessary actions to remediate the situation

After installation, it runs as a service: kube-controller-manager.service
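
On a cluster set up with kubeadm (an assumption; on a manual install you would inspect the service unit above instead), the controller manager runs as a static pod, so you can view it and its options like this:

    # control plane components appear as pods in the kube-system namespace
    kubectl get pods -n kube-system
    # the controller manager's options live in its static pod manifest
    cat /etc/kubernetes/manifests/kube-controller-manager.yaml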

 

[kube-apiserver]

- the primary management component of Kubernetes

- orchestrating all operations within the cluster

- exposes the Kubernetes API, which is used by external users to perform management operations on the cluster and by various controllers to monitor the state of the cluster and make the necessary changes

Kube-API server

the primary management component in Kubernetes

□ the center of all the different tasks that need to be performed to make a change in the cluster

□ when running a kubectl command, the kubectl utility is reaching out to the kube-apiserver

□ the kube-apiserver first authenticates and validates the request, then retrieves the data from the ETCD cluster and responds with the requested information

□ once done, the kubelet updates the status back to the API server, and the API server then updates the data back in the ETCD cluster
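
You can watch this kubectl-to-apiserver traffic yourself by raising kubectl's log verbosity; a quick sketch:

    # -v=6 and higher print the HTTP calls kubectl makes to the kube-apiserver
    kubectl get pods -v=6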

 

[container] our applications are in the form of containers, as are the different components that form the entire management system

- docker

[kubelet]

- an agent that runs on each node in a cluster

- it listens for instructions from the kube-api server and deploys or destroys containers on the nodes

- kube-apiserver periodically fetches status reports from the kubelet to monitor the state of nodes and containers on them

- kubelet is more like a captain on the ship that manages containers on the ship

Kubelet

□ They load or unload containers on the ship as instructed by the scheduler on the master

□ pull the required images and run an instance

□ They monitor the state of the pods and containers, and send back reports at regular intervals on the status of the ship and the containers on it to the kube-apiserver

□ register the node with the Kubernetes cluster
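
Unlike the other components, kubeadm does not deploy the kubelet as a pod; it runs directly on each node, typically as a systemd service. A sketch for checking it (assuming a systemd-based node):

    # the kubelet runs on the node itself, not as a pod
    systemctl status kubelet
    # follow its logs
    journalctl -u kubelet -f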

 

[kube-proxy] ensures that the necessary rules are in place on the worker nodes to allow the containers running on them to reach each other

- the pod network is what makes pods able to communicate with each other

- there are many solutions available for deploying such a network

Kube-proxy

□ Kube-proxy is a process that runs on each node in the Kubernetes cluster

□ its job is to look for new services; every time a new service is created, it creates the appropriate rules on each node to forward traffic destined for that service to the backend pods

- one way it does this is by using iptables rules

- in this case it creates an iptables rule on each node in the cluster to forward traffic heading to the IP of the service, 10.96.0.12, to the IP of the actual pod, 10.32.0.15

□ the service also gets an ip address assigned to it. whenever a pod tries to reach the service using its ip or name, it forwards the traffic to the backend pod

□ the service cannot join the pod network because the service is not an actual thing; it is not a container like a pod, so it doesn't have any interfaces or an actively listening process

□ a service is a virtual component that lives only in Kubernetes memory
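
In the default iptables mode you can see the NAT rules kube-proxy has programmed on a node. A sketch, where db-service is a hypothetical service name:

    # service forwarding rules created by kube-proxy (iptables mode)
    sudo iptables -t nat -L KUBE-SERVICES | grep db-service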

 

[pod]

□ the containers are encapsulated into a Kubernetes object known as a pod

□ a pod is a single instance of an application

□ the smallest object in Kubernetes

□ pods usually have a one-to-one relationship with the containers running the application

(to scale, add new pods; do not add additional containers to an existing pod)

□ a pod can hold multiple containers when we have a helper container doing some kind of supporting task for our web application, such as processing user-entered data, processing a file, etc.

□ when a new application container is created, the helper is also created; when it dies, the helper also dies

□ as part of the same pod, the two containers can also communicate with each other directly by referring to each other as localhost, since they share the same network space
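
A minimal pod definition pulling these pieces together (the names and the label are illustrative); create it with kubectl create -f pod-definition.yml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx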

 

[replication controller] kubectl scale rs new-replica-set --replicas=5

□ to prevent users from losing access to our application, we'd like to have more than one instance or pod running at the same time

□ the replication controller helps run multiple instances of a single pod in the cluster, thus providing high availability

□ it helps us balance the load across multiple pods on different nodes as well as scale our application when the demand increases

□ apiVersion: v1

[Replica Set]

           - apiVersion: apps/v1

           - selector: the ReplicaSet also takes matching pods created outside of it into consideration when creating the replicas
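
A sketch of a ReplicaSet definition showing how the selector ties to the pod template (the name matches the scale command above; labels are illustrative):

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: new-replica-set
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp        # which pods this ReplicaSet manages
      template:
        metadata:
          labels:
            app: myapp      # must match the selector above
        spec:
          containers:
          - name: myapp
            image: nginx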

[Deployments]

□ provide the capability to upgrade the underlying instances seamlessly using rolling updates, undo changes, and pause and resume changes as required
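
Those capabilities map onto the kubectl rollout subcommands; a sketch against the webapp deployment created in the imperative commands further down:

    kubectl rollout status deployment/webapp    # watch a rolling update
    kubectl rollout history deployment/webapp   # list revisions
    kubectl rollout undo deployment/webapp      # roll back to the previous revision
    kubectl rollout pause deployment/webapp     # pause changes
    kubectl rollout resume deployment/webapp    # resume them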

 

[Namespace]

□ each namespace can have its own set of policies that define who can do what

□ a quota of resources can be assigned to each namespace

□ mysql.connect("db-service.dev.svc.cluster.local")

           - cluster.local: the cluster's default domain

           - svc: subdomain for services

           - dev: namespace

           - db-service: service name

□ kubectl config set-context $(kubectl config current-context) --namespace=dev

□ --all-namespaces

□ ResourceQuota: limits the resources in a namespace (see the sketch after this list)

□ kubectl run redis --image=redis -n finance
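
A sketch of the ResourceQuota mentioned in this list, scoped to the dev namespace (all quota values are illustrative):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: compute-quota
      namespace: dev
    spec:
      hard:
        pods: "10"
        requests.cpu: "4"
        requests.memory: 5Gi
        limits.cpu: "10"
        limits.memory: 10Gi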

 

[Service]

□ services enable communication between various components within and outside of the application

□ services connect applications together with other applications or users

□ services enable loose coupling between microservices in our application

NodePort: the service listens on a port on the Node and forwards requests to PODs.

                  The service makes an internal POD accessible on a Port on the Node

 

NodePort

□ mapping a port on the Node to a port on the POD

□ service

           -service ip

           -service port

□ pod

           -pod ip

           -targetPort: if not specified, it is assumed to be the same as the service port

□ nodePort (30000~32767) -> service port 80 -> target port 80

□ the nodePort can also be specified explicitly

□ when we create a service without any additional configuration, k8s automatically makes the service span across all the nodes in the cluster and maps the target port to the same nodePort on every node. This way you can access your application using the IP of any node in the cluster and the same port number

□ nodeIP:nodePort
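
Putting the three ports together, a sketch of a NodePort service definition (the selector label and the 30008 nodePort are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-service
    spec:
      type: NodePort
      selector:
        app: myapp        # forwards traffic to pods with this label
      ports:
      - port: 80          # service port
        targetPort: 80    # pod port
        nodePort: 30008   # must fall in the 30000~32767 range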

 

ClusterIP: the service creates a virtual IP inside the cluster to enable communication between different services, such as a set of front-end servers and a set of back-end servers

ClusterIP

□ pod ips are not static

□ this lets us easily and effectively deploy a microservices-based application on a k8s cluster

□ each layer can scale or move as required without impacting communication between the various services

□ each service gets an IP and a name assigned to it inside the cluster, and that name is what other pods should use to access the service
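
A sketch of a ClusterIP service for a back-end tier (ClusterIP is the default type, so the type field is optional; the names are illustrative). Front-end pods can then reach it simply as back-end, or as back-end.<namespace>.svc.cluster.local across namespaces:

    apiVersion: v1
    kind: Service
    metadata:
      name: back-end
    spec:
      type: ClusterIP
      selector:
        app: back-end
      ports:
      - port: 80
        targetPort: 80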

 

LoadBalancer: it provisions a load balancer for our service in supported cloud providers

LoadBalancer

□ GCP, Azure, AWS

 

[Imperative vs Declarative]

□ ★Imperative: taxi ex) create, run, scale, replace, delete, set, expose, edit

□ Declarative: Uber ex) YAML configuration

□ --dry-run: by default, as soon as the command is run, the resource is created

□ --dry-run=client: this will not create the resource; instead, it tells you whether the resource can be created

□ -o yaml: output the resource definition in YAML

kubectl run redis --image=redis:alpine -l tier=

kubectl expose pod redis --port=6379 --name redis-service

kubectl create deployment webapp --image=kodekloud/webapp-color --replicas=3

kubectl run httpd --image=httpd:alpine --port=80 --expose
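
These flags combine into the usual exam workflow: generate a manifest imperatively, then manage it declaratively. A sketch:

    # generate a manifest without creating the resource
    kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
    # create (or later update) it declaratively
    kubectl apply -f pod.yaml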

 

[kubectl]

□ when running the apply command, if the object doesn't already exist, the object is created

□ the YAML file is converted to JSON format ⇒ last applied configuration

□ when the object is created, an object configuration similar to what we created locally is created within k8s,

□ but with additional fields to store the status of the object ⇒ live object configuration

□ do not mix the imperative and declarative approaches while managing k8s objects
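
The last applied configuration is stored as an annotation on the live object, so you can inspect it directly; a sketch (the pod name assumes the nginx pod applied above):

    # kubectl apply saves the applied manifest as JSON in this annotation
    kubectl get pod nginx -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'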
