
Reflection

cka04 Networking notes

헐리 2021. 9. 14. 08:52

This post is a summary of what I studied for the certification while taking the Udemy course <Certified Kubernetes Administrator (CKA) with Practice Tests>.

https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests

 


 

Networking

[Cluster Networking]

- master node and worker nodes have at least one network interface connected to a network

- each interface must have an address configured

- the hosts must have a unique hostname set, as well as a unique MAC address

API server (master node)                        6443
kubelet (on master and worker nodes)            10250
kube-scheduler                                  10251
kube-controller-manager                         10252
NodePort services on worker nodes (external)    30000-32767
ETCD server                                     2379
ip a | grep -B2 10.59.159.3    # find the controlplane node's network interface
arp node01                     # find the MAC address of node01
ip route show default          # IP address of the default gateway
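
A quick way to verify these ports on an actual node (a sketch; assumes netstat from net-tools is installed):

netstat -nplt | grep kube-scheduler                          # port the scheduler is listening on
netstat -npa | grep etcd | grep 2379 | grep -c ESTABLISHED   # established connections to ETCD's client port 2379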

 

[Network Addons]

installing a network plugin in the cluster

 

[Pod Networking]

□networking at the pod layer

- It expects you to implement a networking solution that solves these challenges

- k8s expects every pod to get its own unique IP address, and every pod should be able to reach every other pod within the same node using that IP address

- a networking solution (flannel, NSX, etc.) takes care of automatically assigning IP addresses and establishing connectivity between the pods on a node, as well as across different nodes, without having to configure any networks manually

 

 

1) Create a bridge network on each node and bring it up

Node1 (do the same on Node2 and Node3)

ip link add v-net-0 type bridge
ip link set dev v-net-0 up    #assign 10.244.1.0/24
ip addr add 10.244.1.1/24 dev v-net-0

2) net-script.sh

# create veth pair
ip link add ...

# Attach veth pair
ip link set ...
ip link set ...

# Assign IP address
ip -n <namespace> addr add ...
ip -n <namespace> route add ...
ip -n <namespace> link set ...
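
Filled in, the script could look roughly like this (a sketch; the names veth-red, veth-red-br, the namespace red and the addresses are illustrative, not from the lecture):

# create veth pair
ip link add veth-red type veth peer name veth-red-br

# attach veth pair: one end into the container's network namespace, the other onto the bridge
ip link set veth-red netns red
ip link set veth-red-br master v-net-0

# assign IP address and default route inside the namespace
ip -n red addr add 10.244.1.2/24 dev veth-red
ip -n red route add default via 10.244.1.1

# bring the interfaces up
ip -n red link set veth-red up
ip link set veth-red-br up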

the pods all get their own unique IP address and are able to communicate with each other on their own node

 

3) To enable pods to reach other pods on other nodes

# on node1 (192.168.1.11)

ip route add 10.244.2.2 via 192.168.1.12
ping 10.244.2.2   # works

ip route add 10.244.3.2 via 192.168.1.13
ping 10.244.3.2   # works

 

4) To simplify step 3, configure these routes on a single router and point all hosts to it as their default gateway, instead of adding routes on every node

 

Network         Gateway
10.244.1.0/24   192.168.1.11
10.244.2.0/24   192.168.1.12
10.244.3.0/24   192.168.1.13

- the pod networks on all nodes together form a single large network with the address 10.244.0.0/16

- we don't want to write and run such scripts manually in large environments, so we have CNI

- CNI tells k8s how it should call the script as soon as it creates a container

- the CNI script has an ADD section and a DEL section that take care of adding or deleting a container from the network

- after kubelet creates a container, it looks at the CNI configuration and identifies our script's name

□what kubelet does

1) looks at the CNI configuration

2) identifies our script's name

3) looks in the /opt/cni/bin directory

4) executes the scripts with the ADD command and the name and namespace ID of the container
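
In other words, kubelet's call ends up looking roughly like this (a sketch; the script name and placeholders are illustrative):

./net-script.sh add <container> <namespace>   # when a container is created
./net-script.sh del <container> <namespace>   # when a container is deleted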

 

[CNI]

-The CNI plugin must be invoked by the component within K8s that is responsible for creating containers

- Because that component must then invoke the appropriate network plugin after the container is created

- in the kubelet service file, the CNI directories are configured

ps -aux | grep kubelet
--network-plugin=cni
--cni-bin-dir=/opt/cni/bin
--cni-conf-dir=/etc/cni/net.d

In /etc/cni/net.d, a configuration file such as 10-bridge.conf is placed
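
A bridge-type CNI configuration file typically looks like the following (a sketch; the name, bridge and subnet values here are illustrative):

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}

The ipam section of this file is what the IPAM discussion further below refers to.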

 

[WeaveWorks]

-solution based on CNI

□How Weave works

-Single POD may be attached to multiple bridge networks

  -For example, you could have a pod attached to the weave bridge as well as the docker bridge created by Docker

- What path a packet takes to reach its destination depends on the routes configured on the container

- Weave makes sure that PODs get the correct route configured to reach the agent

- And the agent then takes care of the other PODs

- now, when a packet is sent from one pod to another pod on another node, weave intercepts the packet and identifies that it is on a separate network

- It then encapsulates this packet into a new one with new source and destination and sends it across the network

- once on the other side, the other weave agent retrieves the packet, decapsulates it and routes the packet to the right POD

□How do we deploy weave on k8s cluster?

- weave and its peers can be deployed as services or daemons on each node in the cluster manually, or, if k8s is already set up, an easier way is to deploy them as pods in the cluster with a kubectl apply command

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.50.0.0/16"

- Most importantly, the weave peers are deployed as a DaemonSet

- A daemonset ensures that one  pod of the given kind is deployed on all nodes in the cluster

□How do we deploy weave peers on k8s cluster?

-If you deployed your cluster with the kubeadm tool and the weave plugin, you can see the weave peers as pods
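
On a kubeadm cluster they live in the kube-system namespace; a quick check (sketch):

kubectl get pods -n kube-system -o wide | grep weave   # one weave-net pod per node
kubectl get daemonset -n kube-system                   # weave-net DaemonSet, DESIRED = number of nodes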

□IPAM (IP Address Management) -weave

- you can manage that on your own or with your own external IPAM solutions

- the CNI plugin manages IP address assignment to PODs

- plugins that manage a local IP list file: DHCP, host-local

- The CNI configuration file has a section called ipam in which we can specify the type of plugin to be used.

- along with the subnet and routes to be used

-Weave

  • weave by default allocates the IP range 10.32.0.0/12 for the entire network
  • that gives it network IPs from 10.32.0.1 to 10.47.255.254 (about 1,048,574 usable IPs)
  • the peers decide to split the IP range equally between them and assign one portion to each node
  • pod IP ranges are configurable with additional options passed while deploying the weave plugin into the cluster
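
To confirm the range actually in use, you can check the weave container's logs or the weave bridge on a node (a sketch; assumes the standard weave-net DaemonSet labels and container name):

kubectl logs -n kube-system -l name=weave-net -c weave | grep -i ipalloc-range
ip addr show weave   # run on a node: the bridge address comes from the allocated range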

[Service Networking]

- if you want a pod to access services hosted on another pod, you would always use a service

□ClusterIP

- when a service is created, it is accessible from all pods on the cluster, irrespective of which node the pods are on

- while a pod is hosted on a node, a service is hosted across the cluster; it is not bound to a specific node

- the service is only accessible from within the cluster

□NodePort

- This service also gets an IP address assigned to it and works just like ClusterIP

- It also exposes the application on a port on all nodes in the cluster, so external users or applications have access to the service
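
A quick way to see this, assuming a hypothetical deployment named web (a sketch):

kubectl expose deployment web --port=80 --type=NodePort   # creates a NodePort service for "web"
kubectl get service web                                   # shows the ClusterIP plus a node port in 30000-32767
curl http://<node-ip>:<node-port>                         # reachable from outside via any node's IP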

□How are services getting these IP addresses, how are they made available across all the nodes in the cluster, and how is a NodePort service made available to external users through a port on each node?

- Every node runs the kubelet process, which is responsible for creating pods

- Each kubelet on each node watches for changes in the cluster through the API server, and every time a new pod is to be created, it creates the pod on that node

- kubelet then invokes the CNI plugin to configure networking for that pod

- Each node also runs another component known as kube-proxy

- kube-proxy watches for changes in the cluster through the kube-apiserver, and every time a new service is to be created, kube-proxy gets into action

- Unlike pods, services are not created on each node or assigned to each node

- services are a cluster wide concept. they exist across all the nodes in the cluster

- There is no server or service really listening on the IP of the service. There are no processes or namespaces or interfaces for service. It is just a virtual object

- How do they get an IP address?

  • when we create a service object, it is assigned an IP address from a predefined range
  • the kube-proxy component running on each node gets that IP address and creates forwarding rules on each node in the cluster, saying any traffic coming to this IP (the IP of the service) should go to the IP of the pod
  • once this rule is in place, whenever a pod tries to reach the IP of the service, the traffic is forwarded to the pod's IP address, which is accessible from any node in the cluster
  • it is not just the IP; it is an IP and port combination
  • whenever services are created or deleted, the kube-proxy component creates or deletes these rules

□How are these rules created?

- kube-proxy supports different modes: userspace, where kube-proxy listens on a port for each service and proxies connections to the pods; ipvs, which creates IPVS rules; and the third and default option, which uses iptables

- The proxy mode can be set using the --proxy-mode option while configuring the kube-proxy service

kube-proxy --proxy-mode [userspace | iptables | ipvs ] ...
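
On a kubeadm-provisioned cluster, the mode can also be checked from the kube-proxy ConfigMap (a sketch; an empty mode field means the default, iptables, is in effect):

kubectl describe configmap kube-proxy -n kube-system | grep -i mode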

- the iptables rule created by kube-proxy

IP:Port 10.99.13.178:80
Forward to 10.244.1.2

- example: a DB pod and its service

pod DB                   10.244.1.2
Service DB (ClusterIP)   10.103.132.104:3306

$ kube-apiserver --service-cluster-ip-range ipNet   # Default: 10.0.0.0/24
$ ps aux | grep kube-api                            # on this cluster: --service-cluster-ip-range=10.96.0.0/12

# What is the IP Range configured for the services within the cluster
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep cluster-ip-range

service CIDR 10.96.0.0/12  : 10.96.0.0  => 10.111.255.255

pod CIDR     10.244.0.0/16 : 10.244.0.0 => 10.244.255.255

iptables -L -t nat | grep db-service

cat /var/log/kube-proxy.log

 

 

 

[Cluster DNS]

-k8s deploys a built-in DNS server by default when you setup a cluster

- As long as our cluster networking is set up correctly, pods and services can get their own IP address and can reach each other

  • pod 10.244.1.5 : test
  • pod 10.244.2.5 : web
  • service 10.107.37.188 : web-service

- whenever a service is created, the k8s DNS service creates a record for the service.

- It maps the service name to the IP address

- So within the cluster, any pod can now reach the service using its service name

- when web-service is in a separate namespace named apps, then to refer to it from the default namespace you use web-service.apps (the last part of the name is now the name of the namespace)

curl http://web-service.apps

- svc: another subdomain under which all the services are grouped together

-you can reach your application with the name web-service.apps.svc

Hostname      Namespace   Type   Root            IP Address
web-service   apps        svc    cluster.local   10.107.37.188
10-244-2-5    apps        pod    cluster.local   10.244.2.5

curl http://web-service.apps.svc.cluster.local

- for pods, k8s generates the hostname from the pod's IP address: 10.244.2.5 --> 10-244-2-5
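
To check resolution from inside a pod, for example from the test pod above (a sketch; assumes nslookup is available in the pod image):

kubectl exec test -- nslookup web-service.apps.svc.cluster.local   # should return 10.107.37.188
kubectl exec test -- nslookup 10-244-2-5.apps.pod.cluster.local    # should return 10.244.2.5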

 

[coreDNS]

- the coreDNS server is deployed as a POD in the kube-system namespace in the k8s cluster

- they are deployed as two pods for redundancy, as part of a ReplicaSet

- coreDNS needs a configuration file named Corefile

cat /etc/coredns/Corefile

- you have a number of plugins configured

- Plugins are configured for handling errors, reporting health, monitoring metrics, cache etc

- kubernetes plugin: sets the top-level domain name of the cluster

  • every record in the coredns server falls under this domain

- any record that this DNS server can't resolve is forwarded to the nameserver specified in the coreDNS pod's /etc/resolv.conf file

- that /etc/resolv.conf file is set to use the nameserver of the k8s node

- the Corefile is passed to the coreDNS pod as a ConfigMap object, so if you need to modify this configuration you can edit the ConfigMap object
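
A typical default Corefile, as seen with kubectl describe configmap coredns -n kube-system, looks roughly like this (a sketch of the usual kubeadm default; plugins may differ by version):

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}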

- coredns watches the k8s cluster for new PODs or services, and every time a pod or service is created it adds a record for it in its database

- the next step is to point the pods to the coreDNS server

- kube-dns: when we deploy the CoreDNS solution, it also creates a service to make it available to other components within the cluster; that service is named kube-dns

- the IP address of this service is configured as the nameserver on the PODs

- the DNS configurations on PODs  are done by k8s automatically when the PODs are created

- kubelet is responsible for this; its configuration contains the IP address of the DNS server and the domain

- once the pods are configured with the right nameserver, you can now resolve other pods and services

- /etc/resolv.conf file also has a search entry which is set to default.svc.cluster.local as well as svc.cluster.local and cluster.local
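
Inside a pod in the default namespace, that file typically looks like this (a sketch; 10.96.0.10 is the usual kube-dns ClusterIP on a kubeadm cluster):

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5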

- However, this file only has search entries for services, so you won't be able to reach a pod the same way

- To look up a pod, you need to specify the pod's full FQDN

 

[Ingress]

□if you were on a public cloud environment like GCP

-instead of creating a service of type NodePort for your application, you can set the type to LoadBalancer

  • k8s would still do everything that it has to do for a NodePort which is to provision a high port for the service
  • k8s also sends a request to GCP to provision a network loadbalancer for the service
  • on receiving the request, GCP would then automatically deploy a loadbalancer configured to route traffic to the service ports on all the nodes and return its information to k8s

-the LoadBalancer has an external IP that can be provided to users to access the application

-in this case we set the DNS to point to this IP, and users access the application using the URL

-each new LoadBalancer gets a new IP, you must pay for each of these load balancers, and having many load balancers can adversely affect your cloud bill

- you need yet another proxy or load balancer that can redirect traffic based on URLs to the different services

- you have to reconfigure the load balancer each time, and finally you also need to enable SSL for your applications so your users can access your application using https

- It can be done at different levels either at the application level itself or at the loadbalancer or proxy server level

□ingress

-helps your users access your application using a single externally accessible URL that you can configure to route to different services based on the URL path, while implementing SSL security as well

-even with ingress, you still need to expose it to make it accessible outside the cluster, so you still have to publish it either as a NodePort or with a LoadBalancer

-Steps

  1. Deploy (Ingress Controller needed - Nginx, HAPROXY, traefik)
  2. Configure (Ingress Resource - URL Routes, SSL certificates)
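
For the Configure step, a minimal Ingress resource could look like this (a sketch; the service name wear-service and the /wear path are illustrative, and an ingress controller must already be deployed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wear
spec:
  rules:
  - http:
      paths:
      - path: /wear
        pathType: Prefix
        backend:
          service:
            name: wear-service
            port:
              number: 8080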

-Files needed for the Deploy step

  • Deployment
  • service
  • configmap
  • auth