Overview
Kubernetes (k8s) is a container orchestration system that is all but essential for building a microservice architecture (MSA). k8s is not operated on a single node; it is deployed across multiple nodes that form a cluster. In particular, etcd is the key-value database that stores the state of every k8s component. If etcd is not made highly available (HA), a failure of the first master node takes down the entire k8s cluster. For that reason etcd itself is clustered as well, and the official k8s documentation recommends delegating the etcd setup to kubeadm.
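The arithmetic behind the three-node etcd choice can be sketched directly: etcd stays writable only while a majority of members (a quorum) is alive, so three members tolerate exactly one failure. A quick illustration in shell:

```shell
# Quorum for an etcd cluster of N members is floor(N/2)+1;
# the cluster can lose N - quorum members and stay available.
summary=""
for n in 1 3 5; do
  summary="${summary}members=$n quorum=$(( n / 2 + 1 )) tolerated=$(( (n - 1) / 2 ));"
done
echo "$summary"
```

Note that an even-sized cluster buys nothing: four members have quorum 3 and still tolerate only one failure, same as three members.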
Environment
- Virtualization: Windows 10 Pro 19H1 - Hyper-V
- OS: CentOS 7
- etcd: 2 vCore, 2 GB, 3 nodes
- master: 4 vCore, 4 GB, 3 nodes
- worker: 8 vCore, 8 GB, 3 nodes
Prerequisites
Configure the hosts file
cat > /etc/hosts << EOF
127.0.0.1 localhost
# etcd
10.10.0.25 etcd-1.k8s.io
192.168.0.25 etcd-1.k8s.io
10.10.0.26 etcd-2.k8s.io
192.168.0.26 etcd-2.k8s.io
10.10.0.27 etcd-3.k8s.io
192.168.0.27 etcd-3.k8s.io
# K8s-VIP
10.10.0.30 m.k8s.io
192.168.0.30 m.k8s.io
# K8s-Master
10.10.0.31 m-1.k8s.io
192.168.0.31 m-1.k8s.io
10.10.0.32 m-2.k8s.io
192.168.0.32 m-2.k8s.io
10.10.0.33 m-3.k8s.io
192.168.0.33 m-3.k8s.io
# K8s-Node
10.10.0.41 n-1.k8s.io
192.168.0.41 n-1.k8s.io
10.10.0.42 n-2.k8s.io
192.168.0.42 n-2.k8s.io
10.10.0.43 n-3.k8s.io
192.168.0.43 n-3.k8s.io
EOF
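Each host appears twice because the lab runs two networks (10.10.0.0/24 for service traffic, 192.168.0.0/24 for management). A self-contained sketch of pulling the etcd peer IPs back out of hosts-style lines, with sample data inlined:

```shell
# Sample hosts-file lines (service network only) inlined for illustration.
hosts='10.10.0.25 etcd-1.k8s.io
10.10.0.26 etcd-2.k8s.io
10.10.0.27 etcd-3.k8s.io'
# Keep the first column of every etcd- line; xargs joins them with spaces.
etcd_ips=$(echo "$hosts" | awk '/etcd-/ { print $1 }' | xargs)
echo "$etcd_ips"
```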
Set up haproxy, ipvsadm, and keepalived on two of the etcd nodes
Add the haproxy repository and install the packages
rpm -Uvh http://www.nosuchhost.net/~cheese/fedora/packages/epel-7/x86_64/cheese-release-7-1.noarch.rpm
yum install haproxy keepalived ipvsadm -y
sysctl configuration
cat << EOF > /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p
haproxy configuration
cat << EOF > /etc/haproxy/haproxy.cfg
global
    user haproxy
    group haproxy

defaults
    mode http
    log global
    timeout connect 3000ms
    timeout server 5000ms
    timeout client 5000ms

frontend k8s-api
    mode tcp
    option tcplog
    bind 10.10.0.30:6443
    default_backend k8s-m

backend k8s-m
    mode tcp
    balance roundrobin
    option tcp-check
    server m-1 10.10.0.31:6443 check fall 3 rise 2
    server m-2 10.10.0.32:6443 check fall 3 rise 2
    server m-3 10.10.0.33:6443 check fall 3 rise 2

listen stats
    mode http
    bind *:80
    log global
    stats enable
    stats refresh 10s
    stats show-node
    stats uri /haproxy
EOF
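`balance roundrobin` simply cycles requests across the three master API servers; a pure-shell simulation of the selection order (not haproxy itself):

```shell
# The three backend servers from the k8s-m section above.
servers=(10.10.0.31 10.10.0.32 10.10.0.33)
picks=""
for req in 0 1 2 3; do
  # roundrobin: request i goes to server i mod 3
  picks+="${servers[$(( req % 3 ))]} "
done
echo "$picks"
```

Real haproxy additionally drops a server from the rotation after 3 failed health checks (`fall 3`) and re-adds it after 2 successes (`rise 2`).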
keepalived configuration
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id ysyukr_LVS
    enable_traps
}
vrrp_sync_group VG1 {
    group {
        VI_1
    }
}
vrrp_instance VI_1 {
    state MASTER
    ! on the second node, set state BACKUP and a lower priority (e.g. 101)
    interface eth0
    lvs_sync_daemon_interface eth0
    garp_master_delay 3
    virtual_router_id 55
    priority 102
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ysyukrLVS
    }
    virtual_ipaddress {
        192.168.0.30/24 dev eth0
        10.10.0.30/24 dev eth1
    }
}
EOF
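keepalived's VRRP election boils down to "highest priority wins": the node with priority 102 holds the VIPs, and the peer takes over when its adverts stop. A sketch of the comparison (the peer's 101 is an assumption; only 102 appears in the config above):

```shell
# Hypothetical priorities: 102 for this node (as configured above),
# 101 assumed for the second haproxy/keepalived node.
nodes="etcd-1:102 etcd-2:101"
master=""; best=0
for entry in $nodes; do
  node=${entry%:*}; p=${entry#*:}
  # highest advertised priority becomes MASTER
  if [ "$p" -gt "$best" ]; then best=$p; master=$node; fi
done
echo "MASTER is $master (priority $best)"
```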
Verify the IP assignment
ip addr show
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:01:90:52 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.25/24 brd 192.168.0.255 scope global dynamic eth0
valid_lft 5385sec preferred_lft 5385sec
inet 192.168.0.30/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::215:5dff:fe01:9052/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:01:90:53 brd ff:ff:ff:ff:ff:ff
inet 10.10.0.25/24 brd 10.10.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.10.0.30/24 scope global secondary eth1
valid_lft forever preferred_lft forever
inet6 fe80::215:5dff:fe01:9053/64 scope link
valid_lft forever preferred_lft forever
Verify haproxy
netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
...
tcp 0 0 10.10.0.30:6443 0.0.0.0:* LISTEN 16315/haproxy
...
Install docker on the etcd/master/worker nodes
Add the docker repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Check the docker versions validated for k8s
URL: https://kubernetes.io/docs/setup/release/notes/
The list of validated docker versions remains unchanged. The current list is 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. (#72823, #72831)
Check the available package versions
yum list docker-ce --showduplicates | sort -r
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
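The version installed below (18.09.9) is the newest repo build that falls inside the validated 18.09 line; the selection can be scripted (version list abbreviated from the output above):

```shell
# Newest available build within the validated 18.09 series.
pick=$(printf '%s\n' 19.03.3 18.09.9 18.09.7 18.06.3 17.03.3 \
  | grep '^18\.09' | sort -V | tail -n1)
echo "$pick"
```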
Install docker
yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io
Change docker's cgroup driver to systemd
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl enable --now docker
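A malformed daemon.json keeps docker from starting at all, so it can be worth validating the file first; a minimal sketch using python3's stdlib json.tool (a temp file stands in for /etc/docker/daemon.json here):

```shell
# Write a sample config to a temp file and check it parses as JSON.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{"exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file"}
EOF
if python3 -m json.tool "$tmp" > /dev/null 2>&1; then verdict="valid"; else verdict="invalid"; fi
rm -f "$tmp"
echo "daemon.json: $verdict"
```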
iptables settings used by kube-proxy
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
modprobe br_netfilter # if the module is unloaded again after a reboot, load it from rc.local
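Instead of rc.local, br_netfilter can also be loaded at every boot via systemd's modules-load mechanism; a sketch (a temp directory stands in for /etc/modules-load.d):

```shell
# systemd-modules-load reads every *.conf under /etc/modules-load.d
# and loads the module named on each line.
moddir=$(mktemp -d)          # stand-in for /etc/modules-load.d
echo br_netfilter > "$moddir/k8s.conf"
loaded=$(cat "$moddir/k8s.conf")
echo "will load at boot: $loaded"
```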
Install kubelet, kubeadm, and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
Building etcd with kubeadm
kubelet configuration
cat << EOF > /usr/lib/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet
Generate the etcd configuration files
export HOST0=10.10.0.25
export HOST1=10.10.0.26
export HOST2=10.10.0.27
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("etcd-1" "etcd-2" "etcd-3")
for i in "${!ETCDHOSTS[@]}"; do
    HOST=${ETCDHOSTS[$i]}
    NAME=${NAMES[$i]}
    cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
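The value that matters most in the generated files is initial-cluster, which must pair every member name with its peer URL; the mapping the loop builds can be sanity-checked on its own:

```shell
# Same sample values as the exports above.
ETCDHOSTS=(10.10.0.25 10.10.0.26 10.10.0.27)
NAMES=(etcd-1 etcd-2 etcd-3)
cluster=""
for i in "${!ETCDHOSTS[@]}"; do
  cluster+="${NAMES[$i]}=https://${ETCDHOSTS[$i]}:2380,"
done
cluster=${cluster%,}   # drop the trailing comma
echo "$cluster"
```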
Generate the certificates (run on the first etcd node)
kubeadm init phase certs etcd-ca
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
Copy the certificates
scp -r /tmp/${HOST1}/* ${HOST1}:
scp -r /tmp/${HOST2}/* ${HOST2}:
# then, on each of ${HOST1} and ${HOST2}:
mv pki /etc/kubernetes/
Start the etcd cluster
# on the first etcd node (${HOST0}):
kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
# on the other etcd nodes (${HOST1}, ${HOST2}):
kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
Check the etcd cluster health
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.3.15 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://10.10.0.25:2379 cluster-health
Building the k8s cluster
Copy the etcd cluster certificates to the first master
# on the first etcd node:
scp /etc/kubernetes/pki/etcd/ca.crt 10.10.0.31:
scp /etc/kubernetes/pki/apiserver-etcd-client.crt 10.10.0.31:
scp /etc/kubernetes/pki/apiserver-etcd-client.key 10.10.0.31:
# then, on the first master node (10.10.0.31):
mkdir -p /etc/kubernetes/pki/etcd/
cp /root/ca.crt /etc/kubernetes/pki/etcd/
cp /root/apiserver-etcd-client.crt /etc/kubernetes/pki/
cp /root/apiserver-etcd-client.key /etc/kubernetes/pki/
Generate the k8s cluster configuration file
cat << EOF > /root/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
    certSANs:
    - "m.k8s.io"
controlPlaneEndpoint: "m.k8s.io:6443"
etcd:
    external:
        endpoints:
        - https://10.10.0.25:2379
        - https://10.10.0.26:2379
        - https://10.10.0.27:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
Bootstrap the k8s cluster
Depending on which pod network add-on you choose, you may need to pass a pod network CIDR to kubeadm init.
kubeadm init --config kubeadm-config.yaml --upload-certs
When the bootstrap completes, output like the following is printed.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
This copies the admin kubeconfig; without it, kubectl commands will not work.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
This is the join command for additional master nodes.
kubeadm join m.k8s.io:6443 --token 27mpuj.8l8rzl2hrwgfpxy0 \
--discovery-token-ca-cert-hash sha256:a7d08d5e503cd6e8293dacc0449ff5921834709faf048fdba5ab6d91286d5946 \
--control-plane --certificate-key 46440db821473f6b2c95dd9b9ee10bce8cbc1de4ba182dde455ec6dc3666138a
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
This is the join command for the worker nodes.
kubeadm join m.k8s.io:6443 --token 27mpuj.8l8rzl2hrwgfpxy0 \
--discovery-token-ca-cert-hash sha256:a7d08d5e503cd6e8293dacc0449ff5921834709faf048fdba5ab6d91286d5946
Copy the configuration file
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Deploy the k8s network
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Run the matching join command on each master/worker node
kubeadm join m.k8s.io:6443 --token 27mpuj.8l8rzl2hrwgfpxy0 \
--discovery-token-ca-cert-hash sha256:a7d08d5e503cd6e8293dacc0449ff5921834709faf048fdba5ab6d91286d5946 \
--control-plane --certificate-key 46440db821473f6b2c95dd9b9ee10bce8cbc1de4ba182dde455ec6dc3666138a
kubeadm join m.k8s.io:6443 --token 27mpuj.8l8rzl2hrwgfpxy0 \
--discovery-token-ca-cert-hash sha256:a7d08d5e503cd6e8293dacc0449ff5921834709faf048fdba5ab6d91286d5946
Finish by verifying the nodes
kubectl get nodes
NAME STATUS ROLES AGE VERSION
m-1.k8s.io Ready master 6h59m v1.16.2
m-2.k8s.io Ready master 6h53m v1.16.2
m-3.k8s.io Ready master 6h53m v1.16.2
n-1.k8s.io Ready <none> 6h52m v1.16.2
n-2.k8s.io Ready <none> 6h52m v1.16.2
n-3.k8s.io Ready <none> 6h52m v1.16.2