基于 k8s 搭建世界知名的 wordpress 博客(v21)
项目名称:
基于 k8s 的大型网站电商解决方案
1 项目说明
1.1 项目要求
1、需要 3 台服务器搭建 k8s 高可用集群
2、需要 3 台服务器搭建 ceph 集群
3、需要 3 台服务器搭建 MySQL 高可用集群
4、需要 1 台机器搭建 harbor 镜像仓库
5、需要 1 台机器搭建 rancher
6、需要有清晰的网站拓扑图
1.2 相关项目说明
1、k8s 集群需要做高可用,两台 master 节点和一个 node 节点
2、使用 MySQL 数据库存储网站信息,MySQL 通过 mgr 部署
3、镜像存放在 harbor 仓库,从 harbor 下载 php、nginx 的镜像
4、使用 Prometheus 监控电商平台
5、在 Grafana 可视化展示监控数据
6、搭建 efk+logstash+kafka 日志收集平台
7、基于 k8s 部署 wordpress 博客
1.3 拓扑图
参考拓扑图 1-1:基于 K8S 搭建 wordpress 的架构图
图 1:此网站的 K8S 架构图
图 2:此网站基于 K8S 架构运行的服务
图 3:博客网站架构图
图 4:基于 keepalived+Nginx 实现 MGR-MySQL 高可用集群
1.4 主机清单
| 主机名 | 角色 | IP 地址 | 硬件配置 | 备注 |
|---|---|---|---|---|
| k8s-master-1.com | K8s 控制节点 | 192.168.1.63 | 4vCPU/4G/60G | VIP:192.168.1.199 |
| k8s-master-2.com | K8s 控制节点 | 192.168.1.64 | 4vCPU/4G/60G | |
| k8s-node-1.com | K8s 工作节点 | 192.168.1.65 | 4vCPU/4G/60G | |
| harbor | harbor 私有仓库 | 192.168.1.66 | 2vCPU/2G/60G | |
| master1-admin | ceph-deploy、monitor、mds | 192.168.1.81 | 2vCPU/2G/60G | 两个硬盘 |
| node1-monitor | monitor、osd、mds | 192.168.1.82 | 2vCPU/2G/60G | 两个硬盘 |
| node2-osd | monitor、osd、mds | 192.168.1.83 | 2vCPU/2G/60G | 两个硬盘 |
| wordpress.default.svc.cluster.local | wordpress 服务 | 在 k8s 中部署/IP 随机分配 | | |
| k8s_rancher.com | Rancher | 192.168.1.51 | 4vCPU/4G/60G | 管理已存在的 k8s 集群 |
| mysql_master.com | MGR-MySQL 主从 | 192.168.1.71 | 4vCPU/4G/60G | VIP:192.168.1.190 |
| mysql_slave_1.com | MGR-MySQL 主从 | 192.168.1.72 | 4vCPU/4G/60G | |
| mysql_slave_2.com | MGR-MySQL 主从 | 192.168.1.73 | 4vCPU/4G/60G | |
1.5 安装文件包说明
文档需要的压缩包和 yaml 文件均在课程资料中,可查看和下载。
2 使用 kubeadm 部署多 master 节点 k8s 集群
本次采用 kubeadm 方式部署 Kubernetes 集群。
2.1 使用 kubeadm 部署 Kubernetes 集群
2.1.1 初始化集群环境
环境说明(CentOS 7.6):
| IP | 主机名 | 角色 | 内存 | CPU |
|---|---|---|---|---|
| 192.168.1.63 | k8s-master-1.com | master1 | 4G | 4vCPU |
| 192.168.1.64 | k8s-master-2.com | master2 | 4G | 4vCPU |
| 192.168.1.65 | k8s-node-1.com | node1 | 8G | 4vCPU |

网络:桥接
硬盘:100G 硬盘
2.1.1.1 配置静态 IP
把虚拟机或者物理机配置成静态 ip 地址,这样机器重新启动后 ip 地址也不会发生改变。以 k8s-master-1.com 主机为例,修改静态 IP:
修改/etc/sysconfig/network-scripts/ifcfg-ens33 文件,变成如下:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.1.63
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
#修改配置文件之后需要重启网络服务才能使配置生效,重启网络服务命令如下: service network restart
注:/etc/sysconfig/network-scripts/ifcfg-ens33 文件里的配置说明:
NAME=ens33 #网卡名字,跟 DEVICE 名字保持一致即可
DEVICE=ens33 #网卡设备名,大家 ip addr 可看到自己的这个网卡设备名,每个人的机器可能这个名字不一样,需要写自己的
BOOTPROTO=static #static 表示静态 ip 地址
ONBOOT=yes #开机自启动网络,必须是 yes
IPADDR=192.168.1.63 #ip 地址,需要跟自己电脑所在网段一致
NETMASK=255.255.255.0 #子网掩码,需要跟自己电脑所在网段一致
GATEWAY=192.168.1.1 #网关,在自己电脑打开 cmd,输入 ipconfig /all 可看到
DNS1=192.168.1.1 #DNS,在自己电脑打开 cmd,输入 ipconfig /all 可看到
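下面是一组简单的验证命令示例(网卡名和网关请按自己的实际环境调整),用于确认静态 IP 已经生效:
ip addr show ens33        #确认 192.168.1.63/24 已经配置在 ens33 上
ip route | grep default   #确认默认网关是 192.168.1.1
ping -c 2 192.168.1.1     #测试到网关的连通性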
2.1.1.2 配置主机名
在 192.168.1.63 上执行如下:
hostnamectl set-hostname k8s-master-1.com && bash
在 192.168.1.64 上执行如下:
hostnamectl set-hostname k8s-master-2.com && bash
在 192.168.1.65 上执行如下:
hostnamectl set-hostname k8s-node-1.com && bash
2.1.1.3 配置 hosts 文件
修改每台机器的/etc/hosts 文件,增加如下三行:
192.168.1.63 k8s-master-1.com
192.168.1.64 k8s-master-2.com
192.168.1.65 k8s-node-1.com
修改之后的 hosts 文件如下:
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.1.63 k8s-master-1.com
192.168.1.64 k8s-master-2.com
192.168.1.65 k8s-node-1.com
2.1.1.4 配置主机之间无密码登录
生成 ssh 密钥对,并把本地的 ssh 公钥文件安装到远程主机对应的账户:
[root@k8s-master-1.com ~]# ssh-keygen   #一路回车,不输入密码
[root@k8s-master-1.com ~]# ssh-copy-id k8s-master-1.com
[root@k8s-master-1.com ~]# ssh-copy-id k8s-master-2.com
[root@k8s-master-1.com ~]# ssh-copy-id k8s-node-1.com
[root@k8s-master-2.com ~]# ssh-keygen   #一路回车,不输入密码
[root@k8s-master-2.com ~]# ssh-copy-id k8s-master-1.com
[root@k8s-master-2.com ~]# ssh-copy-id k8s-master-2.com
[root@k8s-master-2.com ~]# ssh-copy-id k8s-node-1.com
2.1.1.5 关闭 firewalld 防火墙
[root@k8s-master-1.com ~]# systemctl stop firewalld ; systemctl disable firewalld [root@k8s-master-2.com ~]# systemctl stop firewalld ; systemctl disable firewalld [root@k8s-node-1.com ~]# systemctl stop firewalld ; systemctl disable firewalld
2.1.1.6 关闭 selinux
[root@k8s-master-1.com ~]# setenforce 0   #临时关闭
#永久关闭:
[root@k8s-master-1.com ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@k8s-master-2.com ~]# setenforce 0
[root@k8s-master-2.com ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@k8s-node-1.com ~]# setenforce 0
[root@k8s-node-1.com ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#注意:修改 selinux 配置文件之后,重启机器,selinux 才能永久生效
[root@k8s-master-1.com ~]#getenforce Disabled
[root@k8s-master-2.com ~]#getenforce Disabled
[root@k8s-node-1.com ~]#getenforce Disabled
2.1.1.7 关闭交换分区 swap
[root@k8s-master-1.com ~]# swapoff -a
[root@k8s-master-2.com ~]# swapoff -a
[root@k8s-node-1.com ~]# swapoff -a
永久关闭:注释 /etc/fstab 中的 swap 挂载行
[root@k8s-master-1.com ~]# vim /etc/fstab #给 swap 这行开头加一下注释#
[root@k8s-master-2.com ~]# vim /etc/fstab
[root@k8s-node-1.com ~]# vim /etc/fstab #给 swap 这行开头加一下注释#
注:如果是克隆主机,请删除网卡配置文件中的 UUID 并重启网络服务。
[root@k8s-master-1.com ~]# service network restart
[root@k8s-master-2.com ~]# service network restart
[root@k8s-node-1.com ~]# service network restart
互动 1:swap 是什么?
当内存不足时,linux 会自动使用 swap,将部分内存数据存放到磁盘中,这样会使性能下降。
互动 2:为什么要关闭 swap 交换分区?
关闭 swap 主要是为了性能考虑,设计者在设计 k8s 的时候,初衷就是要避免 swap 带来的性能损耗。如果没有关闭 swap,初始化时可以指定 --ignore-preflight-errors=Swap 忽略报错。
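一个简单的验证示例(假设已经执行了上面的关闭步骤),确认 swap 已彻底关闭:
free -m                   #Swap 一行应全部为 0
swapon --show             #没有任何输出,表示没有启用的 swap 设备
grep -i swap /etc/fstab   #确认 swap 挂载行已经被注释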
2.1.1.8 修改内核参数
[root@k8s-master-1.com ~]# modprobe br_netfilter
[root@k8s-master-1.com ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
[root@k8s-master-1.com ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@k8s-master-1.com ~]# sysctl -p /etc/sysctl.d/k8s.conf
[root@k8s-master-2.com ~]# modprobe br_netfilter
[root@k8s-master-2.com ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
[root@k8s-master-2.com ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@k8s-master-2.com ~]# sysctl -p /etc/sysctl.d/k8s.conf
[root@k8s-node-1.com ~]# modprobe br_netfilter
[root@k8s-node-1.com ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
[root@k8s-node-1.com ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@k8s-node-1.com ~]# sysctl -p /etc/sysctl.d/k8s.conf
互动 3:为什么要开启 ip_forward?
如果容器的宿主机上的 ip_forward 未打开,那么该宿主机上的容器则不能被其他宿主机访问
互动 4:为什么要开启 net.bridge.bridge-nf-call-ip6tables
默认情况下,从容器发送到默认网桥的流量,并不会被转发到外部。要开启转发,需要设置 net.bridge.bridge-nf-call-ip6tables = 1。
互动 5:为什么要加载 br_netfilter 模块?
如果没有加载 br_netfilter 模块,在 /etc/sysctl.conf 中添加:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
之后执行 sysctl -p 时会报错,提示找不到 net.bridge 相关的内核参数文件。
解决办法:
modprobe br_netfilter
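加载模块并执行 sysctl -p 之后,可以用下面的示例命令确认模块与内核参数都已生效:
lsmod | grep br_netfilter                   #确认模块已加载
sysctl net.bridge.bridge-nf-call-iptables   #期望输出 ... = 1
sysctl net.ipv4.ip_forward                  #期望输出 ... = 1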
2.1.1.9 配置阿里云 repo 源
#配置在线安装 docker 的 repo 源
[root@k8s-master-1.com ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master-2.com ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-node-1.com ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2.1.1.10 配置阿里云安装 k8s 需要的 repo 源
配置阿里云 Kubernetes yum 源:
[root@k8s-master-1.com ~]# tee /etc/yum.repos.d/kubernetes.repo <<-'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
将 k8s-master-1.com 上 Kubernetes 的 yum 源复制给 k8s-master-2.com 和 k8s-node-1.com:
[root@k8s-master-1.com ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-master-2.com:/etc/yum.repos.d/
[root@k8s-master-1.com ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node-1.com:/etc/yum.repos.d/
2.1.1.11 安装基础软件包
[root@k8s-master-1.com ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
[root@k8s-master-2.com ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
[root@k8s-node-1.com ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
2.1.1.12 配置服务器时间跟网络时间同步
在 k8s-master-1.com 上执行如下:
[root@k8s-master-1.com ~]# ntpdate cn.pool.ntp.org
[root@k8s-master-1.com ~]# crontab -e
* * * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@k8s-master-1.com ~]# service crond restart
在 k8s-master-2.com 上执行如下:
[root@k8s-master-2.com ~]# ntpdate cn.pool.ntp.org
[root@k8s-master-2.com ~]# crontab -e
* * * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@k8s-master-2.com ~]# service crond restart
在 k8s-node-1.com 上执行如下:
[root@k8s-node-1.com ~]# ntpdate cn.pool.ntp.org
[root@k8s-node-1.com ~]# crontab -e
* * * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@k8s-node-1.com ~]# service crond restart
2.1.1.13 安装 docker-ce
[root@k8s-master-1.com ~]# yum install docker-ce -y
[root@k8s-master-1.com ~]# systemctl start docker && systemctl enable docker.service
[root@k8s-master-1.com ~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-master-1.com ~]# systemctl daemon-reload
[root@k8s-master-1.com ~]# systemctl restart docker
[root@k8s-master-1.com ~]# systemctl enable docker
[root@k8s-master-1.com ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-07-11 14:22:40 CST; 7s ago
     Docs: https://docs.docker.com
 Main PID: 16748 (dockerd)
Active 是 running,表示 docker 运行正常
互动 1:为什么要指定 native.cgroupdriver=systemd?
在安装 kubernetes 的过程中,如果 kubelet 和 docker 使用的 cgroup 驱动不一致,会出现类似如下报错:
failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
即 kubelet 的 cgroup 驱动是 cgroupfs,而 docker 的是 systemd,两者不一致会导致容器无法正常启动,所以需要把两边统一成 systemd。
用 docker info 查看,期望看到:
Cgroup Driver: systemd
修改 docker 的方法:
修改或创建 /etc/docker/daemon.json,加入下面的内容:
"exec-opts": ["native.cgroupdriver=systemd"]
重启 docker 即可。
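示例:修改并重启 docker 之后,可以再确认一次 cgroup 驱动已经是 systemd:
docker info | grep -i "cgroup driver"   #期望输出 Cgroup Driver: systemd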
[root@k8s-master-2.com ~]# yum install docker-ce -y
[root@k8s-master-2.com ~]# systemctl start docker && systemctl enable docker.service [root@k8s-master-2.com ~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-master-2.com ~]# systemctl daemon-reload
[root@k8s-master-2.com ~]# systemctl restart docker
[root@k8s-master-2.com ~]# systemctl enable docker
[root@k8s-master-2.com ~]# systemctl status docker
[root@k8s-node-1.com ~]# yum install docker-ce -y
[root@k8s-node-1.com ~]# systemctl start docker && systemctl enable docker.service
[root@k8s-node-1.com ~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-node-1.com ~]# systemctl daemon-reload
[root@k8s-node-1.com ~]# systemctl restart docker
[root@k8s-node-1.com ~]# systemctl enable docker
[root@k8s-node-1.com ~]# systemctl status docker
2.1.1.14 安装 containerd
三台 k8s 节点均安装,步骤如下:
yum install containerd.io-1.6.6 -y
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
修改配置文件,打开 /etc/containerd/config.toml:
把 SystemdCgroup = false 修改成 SystemdCgroup = true
把 sandbox_image = "k8s.gcr.io/pause:3.6" 修改成 sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
找到 config_path = "",修改成如下目录:
config_path = "/etc/containerd/certs.d"
启动 containerd、并设置开机自启动(三台节点都执行):
systemctl enable containerd --now
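上面几处修改也可以用 sed 批量完成,下面是一个参考写法(执行前建议先备份 config.toml,并检查替换结果是否符合预期):
cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml
grep -E 'SystemdCgroup|sandbox_image|config_path' /etc/containerd/config.toml   #检查修改结果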
2.1.1.15 安装初始化 k8s 需要的组件
在 master 和 node 上安装 kubeadm、kubelet、kubectl 组件,用于后期安装 k8s 使用:
[root@k8s-master-1.com ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@k8s-master-1.com ~]# systemctl enable kubelet
[root@k8s-master-2.com ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@k8s-master-2.com~]# systemctl enable kubelet
[root@k8s-node-1.com ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@k8s-node-1.com ~]# systemctl enable kubelet
2.1.2 通过 keepalived+nginx 实现 k8s apiserver 节点高可用
1、安装 nginx 主备:
在 k8s-master-1.com 和 k8s-master-2.com 上做 nginx 主备安装:
[root@k8s-master-1.com ~]# yum install nginx keepalived -y
[root@k8s-master-2.com ~]# yum install nginx keepalived -y
2、修改 nginx 配置文件。主备一样
[root@k8s-master-1.com ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
# 四层负载均衡,为两台 Master apiserver 组件提供负载均衡
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.1.63:6443;   # Master1 APISERVER IP:PORT
        server 192.168.1.64:6443;   # Master2 APISERVER IP:PORT
    }
    server {
        listen 16443;   # 由于 nginx 与 master 节点复用,这个监听端口不能是 6443,否则会冲突
        proxy_pass k8s-apiserver;
    }
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server {
        listen 80 default_server;
        server_name _;
        location / {
        }
    }
}
[root@k8s-master-2.com ~]# cat /etc/nginx/nginx.conf
#k8s-master-2.com 上的 /etc/nginx/nginx.conf 与 k8s-master-1.com 完全相同,按上面 k8s-master-1.com 的配置内容填写即可。
3、keepalived 配置
主 keepalived:
[root@k8s-master-1.com ~]# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33            # 修改为实际网卡名
    virtual_router_id 51       # VRRP 路由 ID 实例,每个实例是唯一的
    priority 100               # 优先级,备服务器设置 90
    advert_int 1               # 指定 VRRP 心跳包通告间隔时间,默认 1 秒
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # 虚拟 IP
    virtual_ipaddress {
        192.168.1.199/24
    }
    track_script {
        check_nginx
    }
}
#vrrp_script:指定检查 nginx 工作状态脚本(根据 nginx 状态判断是否故障转移)
#virtual_ipaddress:虚拟 IP(VIP)
[root@k8s-master-1.com ~]# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
#1、判断 Nginx 是否存活
counter=$(ps -C nginx --no-header | wc -l)
if [ $counter -eq 0 ]; then
    #2、如果不存活则尝试启动 Nginx
    service nginx start
    sleep 2
    #3、等待 2 秒后再次获取一次 Nginx 状态
    counter=$(ps -C nginx --no-header | wc -l)
    #4、再次进行判断,如 Nginx 还不存活则停止 Keepalived,让地址进行漂移
    if [ $counter -eq 0 ]; then
        service keepalived stop
    fi
fi
[root@k8s-master-1.com ~]# chmod +x /etc/keepalived/check_nginx.sh
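示例:在后面第 4 步启动 nginx 之后,可以手动执行一次该脚本,确认逻辑符合预期(nginx 正常运行时脚本不做任何操作,返回码为 0):
/etc/keepalived/check_nginx.sh ; echo $?
ps -C nginx --no-header | wc -l   #查看当前 nginx 进程数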
备 keepalived:
[root@k8s-master-2.com ~]# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51       # VRRP 路由 ID 实例,每个实例是唯一的
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.199/24
    }
    track_script {
        check_nginx
    }
}
[root@k8s-master-2.com ~]# cat /etc/keepalived/check_nginx.sh
#k8s-master-2.com 上的 check_nginx.sh 内容与 k8s-master-1.com 完全一致(同样注意使用 $(...) 命令替换),同样需要加可执行权限:
[root@k8s-master-2.com ~]# chmod +x /etc/keepalived/check_nginx.sh
#注:keepalived 根据脚本返回状态码(0 为工作正常,非 0 不正常)判断是否故障转移。
4、启动服务:
[root@k8s-master-1.com ~]# systemctl daemon-reload [root@k8s-master-1.com ~]# systemctl enable nginx keepalived [root@k8s-master-1.com ~]# yum install nginx-mod-stream -y [root@k8s-master-1.com ~]# systemctl start nginx
[root@k8s-master-1.com ~]# systemctl start keepalived
[root@k8s-master-2.com ~]# systemctl daemon-reload [root@k8s-master-2.com ~]# systemctl enable nginx keepalived [root@k8s-master-2.com ~]# yum install nginx-mod-stream -y [root@k8s-master-2.com ~]# systemctl start nginx
[root@k8s-master-2.com ~]# systemctl start keepalived
5、测试 vip 是否绑定成功
[root@k8s-master-1.com ~]# ip addr
1: lo:
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33:
link/ether 00:0c:29:79:9e:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.63/24 brd 192.168.1.255 scope global noprefixroute ens33
    valid_lft forever preferred_lft forever
inet 192.168.1.199/24 scope global secondary ens33 valid_lft forever preferred_lft forever
inet6 fe80::b6ef:8646:1cfc:3e0c/64 scope link noprefixroute valid_lft forever preferred_lft forever
6、测试 keepalived:
停掉 k8s-master-1.com 上的 keepalived。Vip 会漂移到 k8s-master-2.com [root@k8s-master-1.com ~]# service keepalived stop
[root@k8s-master-2.com ~]# ip addr
1: lo:
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33:
link/ether 00:0c:29:83:4d:9e brd ff:ff:ff:ff:ff:ff
inet 192.168.1.64/24 brd 192.168.1.255 scope global noprefixroute ens33
    valid_lft forever preferred_lft forever
inet 192.168.1.199/24 scope global secondary ens33 valid_lft forever preferred_lft forever
inet6 fe80::a5e0:c74e:d0f3:f5f2/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever
inet6 fe80::b6ef:8646:1cfc:3e0c/64 scope link noprefixroute valid_lft forever preferred_lft forever
inet6 fe80::91f:d383:3ce5:b3bf/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever
启动 k8s-master-1.com 上的 keepalived。Vip 又会漂移到 k8s-master-1.com [root@k8s-master-1.com ~]# service keepalived start
[root@k8s-master-1.com ~]# ip addr
1: lo:
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33:
link/ether 00:0c:29:79:9e:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.63/24 brd 192.168.1.255 scope global noprefixroute ens33
    valid_lft forever preferred_lft forever
inet 192.168.1.199/24 scope global secondary ens33 valid_lft forever preferred_lft forever
inet6 fe80::b6ef:8646:1cfc:3e0c/64 scope link noprefixroute valid_lft forever preferred_lft forever
2.1.3 初始化集群
#使用 kubeadm 初始化 k8s 集群
[root@k8s-master-1.com ~]# kubeadm config print init-defaults > kubeadm.yaml
根据我们自己的需求修改配置,比如修改 imageRepository 的值,kube-proxy 的模式为 ipvs,需要注意的是由于我们使用的 containerd 作为运行时,所以在初始化节点的时候需要指定 cgroupDriver 为 systemd
kubeadm.yaml 配置文件如下:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
#localAPIEndpoint:           #前面加注释
#  advertiseAddress:         #前面加注释
#  bindPort:                 #前面加注释
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock   #指定 containerd 容器运行时
  imagePullPolicy: IfNotPresent
  #name: node                #前面加注释
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   #指定阿里云镜像仓库
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
#新增加如下内容:
controlPlaneEndpoint: 192.168.1.199:16443
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16    #指定 pod 网段
  serviceSubnet: 10.96.0.0/12
scheduler: {}
#追加如下内容
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
#基于 kubeadm.yaml 初始化 k8s 集群
#三台 k8s 节点都需要导入离线镜像包 k8s_1.26.0.tar.gz:
[root@k8s-master-1.com ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@k8s-master-2.com ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@k8s-node-1.com ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@k8s-master-1.com ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
初始化成功后会输出初始化成功的提示以及 kubeadm join 命令,说明安装完成。
#配置 kubectl 的配置文件 config,相当于对 kubectl 进行授权,这样 kubectl 命令可以使用这个证书对 k8s 集群进行管理
[root@k8s-master-1.com ~]# mkdir -p $HOME/.kube
[root@k8s-master-1.com ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-1.com ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master-1.com ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-1.com   NotReady   control-plane   2m25s   v1.26.0
此时集群状态还是 NotReady 状态,因为网络组件没有启动。
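在等待安装网络插件期间,可以用下面的示例命令观察控制面组件的状态(coredns 在网络插件就绪之前处于 Pending 属于正常现象):
kubectl get pods -n kube-system -o wide
kubectl get nodes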
2.2 实战-master 节点加入集群
#把 k8s-master-1.com 节点的证书拷贝到 k8s-master-2.com 上在 k8s-master-2.com 上创建证书存放目录:
[root@k8s-master-2.com ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd &&mkdir -p ~/.kube/
#把 k8s-master-1.com 节点的证书拷贝到 k8s-master-2.com 上:
[root@k8s-master-1.com ~]# scp /etc/kubernetes/pki/ca.crt k8s-master-2.com:/etc/kubernetes/pki/
[root@k8s-master-1.com ~]# scp /etc/kubernetes/pki/ca.key k8s-master-2.com:/etc/kubernetes/pki/
[root@k8s-master-1.com ~]# scp /etc/kubernetes/pki/sa.key k8s-master-2.com:/etc/kubernetes/pki/
[root@k8s-master-1.com ~]# scp /etc/kubernetes/pki/sa.pub k8s-master-2.com:/etc/kubernetes/pki/
[root@k8s-master-1.com ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-master-2.com:/etc/kubernetes/pki/
[root@k8s-master-1.com ~]# scp /etc/kubernetes/pki/front-proxy-ca.key k8s-master-2.com:/etc/kubernetes/pki/
[root@k8s-master-1.com ~]# scp /etc/kubernetes/pki/etcd/ca.crt k8s-master-2.com:/etc/kubernetes/pki/etcd/
[root@k8s-master-1.com ~]# scp /etc/kubernetes/pki/etcd/ca.key k8s-master-2.com:/etc/kubernetes/pki/etcd/
#证书拷贝之后,先在 k8s-master-1.com 上生成加入集群的命令(大家复制自己环境输出的命令),再在 k8s-master-2.com 上执行,这样就可以把 k8s-master-2.com 加入到集群,成为控制节点:
[root@k8s-master-1.com ~]# kubeadm token create --print-join-command
[root@k8s-master-2.com ~]# kubeadm join 192.168.1.199:16443 --token elw403.d9fqrxqo7znw4ced --discovery-token-ca-cert-hash sha256:bc7fa5a077882a1c6d2eb6bca71acd4ffefcdd3de02afc971ab16e5ae417e98b --control-plane --ignore-preflight-errors=SystemVerification
在 k8s-master-1.com 上查看集群状况:
[root@k8s-master-1.com ~]# kubectl get nodes
| NAME | STATUS | ROLES | AGE | VERSION |
|---|---|---|---|---|
| k8s-master-1.com | NotReady | control-plane,master | 8m51s | v1.26.0 |
| k8s-master-2.com | NotReady | control-plane,master | 29s | v1.26.0 |
上面可以看到 k8s-master-2.com 已经加入到集群了
2.3 实战-node 节点加入集群
在 k8s-master-1.com 上查看加入节点的命令:
[root@k8s-master-1.com ~]# kubeadm token create --print-join-command
显示如下:
kubeadm join 192.168.1.199:16443 --token 2d2gm8.71bo8fca3e6fhiwh --discovery-token-ca-cert- hash sha256:bc7fa5a077882a1c6d2eb6bca71acd4ffefcdd3de02afc971ab16e5ae417e98b
把 k8s-node-1.com 加入k8s 集群:
[root@k8s-node-1.com ~]# kubeadm join 192.168.1.199:16443 --token 2d2gm8.71bo8fca3e6fhiwh --discovery-token-ca-cert-hash sha256:bc7fa5a077882a1c6d2eb6bca71acd4ffefcdd3de02afc971ab16e5ae417e98b --ignore-preflight-errors=SystemVerification
#在 k8s-master-1.com 上查看集群节点状况:
[root@k8s-master-1.com ~]# kubectl get nodes
| NAME | STATUS | ROLES | AGE | VERSION |
|---|---|---|---|---|
| k8s-master-1.com | NotReady | control-plane,master | 9m41s | v1.26.0 |
| k8s-master-2.com | NotReady | control-plane,master | 79s | v1.26.0 |
| k8s-node-1.com | NotReady | <none> | 8s | v1.26.0 |
可以看到 k8s-node-1.com 的 ROLES 角色为空,我们可以手工打上标签,这里不影响集群正常工作。
[root@k8s-master-1.com ~]# kubectl label node k8s-node-1.com node-role.kubernetes.io/worker=worker
2.4 安装 kubernetes 网络组件-Calico
上传 calico.yaml 到 k8s-master-1.com 中,calico.yaml 在课件,使用 yaml 文件安装 calico 网络插件 。
[root@k8s-master-1.com ~]# kubectl apply -f calico.yaml
注:在线下载配置文件地址是: https://docs.projectcalico.org/manifests/calico.yaml
拉取镜像需要一定时间,所以我们查看 pod 状态为 running 则安装成功。
[root@k8s-master-1.com ~]# kubectl get pod --all-namespaces
再次查看集群状态。
[root@k8s-master-1.com ~]# kubectl get nodes
| NAME | STATUS | ROLES | AGE | VERSION |
|---|---|---|---|---|
| k8s-master-1.com | Ready | control-plane,master | 11m | v1.26.0 |
| k8s-master-2.com | Ready | control-plane,master | 3m7s | v1.26.0 |
| k8s-node-1.com | Ready | worker | 116s | v1.26.0 |
上面可以看到节点的 STATUS 都变成 Ready 了,说明网络插件部署成功,集群工作正常。
2.5 测试在 k8s 创建 pod 是否可以正常访问网络和 coredns 解析是否正常
[root@k8s-master-1.com ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.18): 56 data bytes
64 bytes from 39.156.66.18: seq=0 ttl=127 time=39.3 ms #通过上面可以看到能访问网络
/ # nslookup kubernetes.default.svc.cluster.local Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
看到上面内容,说明 k8s 的 dns 解析正常
3 安装和配置 ceph 集群
3.1 初始化实验环境
机器配置:CentOS 7.6
网络模式:NAT
准备三台机器,每台机器需要两个硬盘(系统盘 + 数据盘),配置 4GiB/4vCPU/60G。
master1-admin 是管理节点:192.168.1.81
node1-monitor 是监控节点: 192.168.1.82
node2-osd 是对象存储节点: 192.168.1.83
3.1.1 配置静态 IP
把虚拟机或者物理机配置成静态 ip 地址,这样机器重新启动后 ip 地址也不会发生改变。以 master1-admin 主机为例,修改静态 IP:
修改/etc/sysconfig/network-scripts/ifcfg-ens33 文件,变成如下:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.1.81
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
#修改配置文件之后需要重启网络服务才能使配置生效,重启网络服务命令如下: service network restart
注:/etc/sysconfig/network-scripts/ifcfg-ens33 文件里的配置说明:
NAME=ens33 #网卡名字,跟 DEVICE 名字保持一致即可
DEVICE=ens33 #网卡设备名,大家 ip addr 可看到自己的这个网卡设备名,每个人的机器可能这个名字不一样,需要写自己的
BOOTPROTO=static #static 表示静态 ip 地址
ONBOOT=yes #开机自启动网络,必须是 yes
IPADDR=192.168.1.81 #ip 地址,需要跟自己电脑所在网段一致
NETMASK=255.255.255.0 #子网掩码,需要跟自己电脑所在网段一致
GATEWAY=192.168.1.1 #网关,在自己电脑打开 cmd,输入 ipconfig /all 可看到
DNS1=192.168.1.1 #DNS,在自己电脑打开 cmd,输入 ipconfig /all 可看到
3.1.2 配置主机名
在 192.168.1.81 上执行如下:
hostnamectl set-hostname master1-admin && bash
在 192.168.1.82 上执行如下:
hostnamectl set-hostname node1-monitor && bash
在 192.168.1.83 上执行如下:
hostnamectl set-hostname node2-osd && bash
3.1.3 配置 hosts 文件
修改 master1-admin、node1-monitor、node2-osd 机器的/etc/hosts 文件,增加如下三行:
192.168.1.81 master1-admin
192.168.1.82 node1-monitor
192.168.1.83 node2-osd
3.1.4 配置互信
生成 ssh 密钥对,并把本地的 ssh 公钥文件安装到远程主机对应的账户:
[root@master1-admin ~]# ssh-keygen -t rsa   #一路回车,不输入密码
[root@master1-admin ~]# ssh-copy-id master1-admin
[root@master1-admin ~]# ssh-copy-id node1-monitor
[root@master1-admin ~]# ssh-copy-id node2-osd
[root@node1-monitor ~]# ssh-keygen   #一路回车,不输入密码
[root@node1-monitor ~]# ssh-copy-id master1-admin
[root@node1-monitor ~]# ssh-copy-id node1-monitor
[root@node1-monitor ~]# ssh-copy-id node2-osd
[root@node2-osd ~]# ssh-keygen   #一路回车,不输入密码
[root@node2-osd ~]# ssh-copy-id master1-admin
[root@node2-osd ~]# ssh-copy-id node1-monitor
[root@node2-osd ~]# ssh-copy-id node2-osd
3.1.5 关闭防火墙
[root@master1-admin ~]# systemctl stop firewalld ; systemctl disable firewalld [root@node1-monitor ~]# systemctl stop firewalld ; systemctl disable firewalld [root@node2-osd ~]# systemctl stop firewalld ; systemctl disable firewalld
3.1.6 关闭 selinux
#临时关闭
setenforce 0
#永久关闭
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#注意:修改 selinux 配置文件之后,重启机器,selinux 才能永久生效
3.1.7 配置 Ceph 安装源
方法 1:离线安装,创建本地 yum 源
[root@master1-admin ~]# tar xf ceph-14-v2.tar.gz -C /opt/
[root@master1-admin ~]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=file:///opt/ceph-14
enabled=1
gpgcheck=0
复制离线 yum 仓库至 node1-monitor:
[root@master1-admin ~]# scp -r /opt/ceph-14/ root@node1-monitor:/opt/
[root@master1-admin ~]# scp /etc/yum.repos.d/ceph.repo root@node1-monitor:/etc/yum.repos.d/
复制离线 yum 仓库至 node2-osd:
[root@master1-admin ~]# scp -r /opt/ceph-14/ root@node2-osd:/opt/
[root@master1-admin ~]# scp /etc/yum.repos.d/ceph.repo root@node2-osd:/etc/yum.repos.d/
复制离线 yum 仓库至 k8s 控制节点 k8s-master-1.com:
[root@master1-admin ~]# scp -r /opt/ceph-14/ root@192.168.1.63:/opt/
[root@master1-admin ~]# scp /etc/yum.repos.d/ceph.repo 192.168.1.63:/etc/yum.repos.d/
复制离线 yum 仓库至 k8s 控制节点 k8s-master-2.com:
[root@master1-admin ~]# scp -r /opt/ceph-14/ root@192.168.1.64:/opt/
[root@master1-admin ~]# scp /etc/yum.repos.d/ceph.repo 192.168.1.64:/etc/yum.repos.d/
复制离线 yum 仓库至 k8s 工作节点 k8s-node-1.com:
[root@master1-admin ~]# scp -r /opt/ceph-14/ root@192.168.1.65:/opt/
[root@master1-admin ~]# scp /etc/yum.repos.d/ceph.repo 192.168.1.65:/etc/yum.repos.d/
3.1.8 配置机器时间跟网络时间同步
在 ceph 的每台机器上操作:
service ntpd stop
ntpdate cn.pool.ntp.org
crontab -e
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
service crond restart
3.1.9 安装基础软件包
在 ceph 的每台节点上都操作:
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet deltarpm
3.2 安装 ceph 集群
master1-admin 是管理节点 :192.168.1.81 node1-monitor 是监控节点: 192.168.1.82
node2-osd 是对象存储节点: 192.168.1.83
3.2.1 安装 ceph-deploy
#在 master1-admin 节点安装 ceph-deploy
[root@master1-admin ~]# yum install python-setuptools ceph-deploy -y
#在 master1-admin、node1-monitor 和 node2-osd 节点安装 ceph [root@master1-admin]# yum install ceph ceph-radosgw -y [root@node1-monitor ~]# yum install ceph ceph-radosgw -y [root@node2-osd ~]# yum install ceph ceph-radosgw -y
[root@master1-admin ~]# ceph --version
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
3.2.2 创建 monitor 节点
#创建一个目录,用于保存 ceph-deploy 生成的配置文件信息的
[root@master1-admin ~]# cd /etc/ceph
[root@master1-admin ceph]# ceph-deploy new master1-admin node1-monitor node2-osd [root@master1-admin ceph]# ls
#生成了如下配置文件
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
分别是 Ceph 配置文件、ceph-deploy 日志文件和 monitor keyring(keyring 是在增加 mon 的时候,mon 之间认证会用到的)。
备注:
ceph-deploy new 后面接的是要初始化成 monitor 的节点
3.2.3 安装 monitor 服务
1、修改 ceph 配置文件
#把 ceph.conf 配置文件里的默认副本数从 3 改成 2。把 osd_pool_default_size = 2
加入[global]段,这样只有 2 个 osd 也能达到 active+clean 状态:
[root@master1-admin]# vim /etc/ceph/ceph.conf [global]
fsid = af5cd413-1c53-4035-90c6-95368eef5c78 mon_initial_members = node1-monitor
mon_host = 192.168.1.81,192.168.1.82,192.168.1.83
auth_cluster_required = cephx auth_service_required = cephx auth_client_required = cephx filestore_xattr_use_omap = true osd_pool_default_size = 2
mon clock drift allowed = 0.500 mon clock drift warn backoff = 10
#mon clock drift allowed #监视器间允许的时钟漂移量默认值 0.05 #mon clock drift warn backoff #时钟偏移警告的退避指数。默认值 5
ceph 对每个 mon 之间的时间同步延时默认要求在 0.05s 之间,这个时间有的时候太短了。如果时钟同步有问题可能导致 monitor 监视器不同步,可以适当增加时间
2、安装 monitor、收集所有的密钥[root@master1-admin]# cd /etc/ceph
[root@master1-admin]# ceph-deploy mon create-initial [root@master1-admin]# ls *.keyring
ceph.bootstrap-mds.keyring ceph.bootstrap-mgr.keyring ceph.bootstrap-osd.keyring ceph.bootstrap-rgw.keyring ceph.client.admin.keyring ceph.mon.keyring
3.2.4 部署 osd 服务
[root@ master1-admin ceph]# cd /etc/ceph/
[root@ master1-admin ceph]# ceph-deploy osd create --data /dev/sdb master1-admin [root@ master1-admin ceph]# ceph-deploy osd create --data /dev/sdb node1-monitor [root@ master1-admin ceph]# ceph-deploy osd create --data /dev/sdb node2-osd
查看状态:
[root@ master1-admin ceph]# ceph-deploy osd list master1-admin node1-monitor node2-osd
要使用 Ceph 文件系统,你的Ceph 的存储集群里至少需要存在一个 Ceph 的元数据服务器(mds)。
3.2.5 创建 ceph 文件系统
创建 mds:
[root@ master1-admin ceph]# ceph-deploy mds create master1-admin node1-monitor node2-osd
查看 ceph 当前文件系统
[root@ master1-admin ceph]# ceph fs ls No filesystems enabled
一个 cephfs 至少要求两个 librados 存储池,一个为 data,一个为 metadata。当配置这两个存储池时,注意:
1. 为 metadata pool 设置较高级别的副本级别,因为 metadata 的损坏可能导致整个文件系统不可用;
2. 建议,metadata pool 使用低延时存储,比如 SSD,因为 metadata 会直接影响客户端的响应速度。
创建存储池
[root@ master1-admin ceph]# ceph osd pool create cephfs_data 128 pool 'cephfs_data' created
[root@ master1-admin ceph]# ceph osd pool create cephfs_metadata 128 pool 'cephfs_metadata' created
关于创建存储池命令:
ceph osd pool create {pool-name} {pg-num}
确定 pg_num 取值是强制性的,因为不能自动计算。下面是几个常用的值:
*少于 5 个 OSD 时可把 pg_num 设置为 128
*OSD 数量在 5 到 10 个时,可把 pg_num 设置为 512
*OSD 数量在 10 到 50 个时,可把 pg_num 设置为 4096
*OSD 数量大于 50 时,你得理解权衡方法、以及如何自己计算 pg_num 取值
*自己计算 pg_num 取值时可借助 pgcalc 工具
随着 OSD 数量的增加,正确的 pg_num 取值变得更加重要,因为它显著地影响着集群的行为、以及出错时的数据持久性(即灾难性事件导致数据丢失的概率)。
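下面给出一个按常见估算方法(仅供参考)的计算示例:假设集群有 9 个 OSD、副本数为 3、目标每个 OSD 约 100 个 PG,则 pg_num ≈ 9 × 100 / 3 = 300,再取相邻的 2 的幂,即 256 或 512;本实验只有 3 个 OSD,所以上面直接取了 128。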
创建文件系统
创建好存储池后,你就可以用 fs new 命令创建文件系统了
[root@ master1-admin ceph]# ceph fs new xuegod cephfs_metadata cephfs_data new fs with metadata pool 2 and data pool 1
其中:new 后的 fsname 可自定义
[root@master1-admin ceph]# ceph fs ls   #查看创建后的 cephfs
[root@master1-admin ceph]# ceph mds stat   #查看 mds 节点状态
xuegod:1 {0=master1-admin=up:active} 2 up:standby
active 是活跃的,另外 2 个处于热备份的状态
3.2.6 部署 mgr
安装 mgr 用于后面我们配置dashboard 监控,而且避免挂载 ceph 时可能会提示 warring 信息。
[root@ master1-admin ceph]# ceph-deploy mgr create master1-admin node1-monitor node2-osd
安装后查看集群状态正常
[root@ master1-admin ceph]# ceph -s
cluster 84469565-5d11-4aeb-88bd-204dc25b2d50 health HEALTH_OK
monmap e1: 1 mons at {node1-monitor=192.168.1.82:6789/0} election epoch 4, quorum 0 node1-monitor
osdmap e9: 1 osds: 1 up, 1 in
flags sortbitwise,require_jewel_osds
pgmap v107: 320 pgs, 2 pools, 16 bytes data, 3 objects
7371 MB used, 43803 MB / 51175 MB avail
320 active+clean #查看 ceph 守护进程
[root@master1-admin ~]# systemctl status ceph-osd.target [root@master1-admin ~]# systemctl status ceph-mon.target
#注意:如果 ceph -s 看到报错:
cluster 84469565-5d11-4aeb-88bd-204dc25b2d50 health HEALTH_WARN
too many PGs per OSD (320 > max 300)
互动 1:报错什么意思?
问题原是集群 osd 数量较少,测试过程中建立了大量的 pool,每个 pool 都要用一些 pg_num 和 pgs
解决办法如下:
修改 node1-monitor 节点 ceph 的配置文件 ceph.conf [root@node1-monitor ~]# vim /etc/ceph/ceph.conf 最后一行增加如下内容:
mon_pg_warn_max_per_osd = 1000
#重启 monitor 进程
[root@node1-monitor ~]# systemctl restart ceph-mon.target #在测试 ceph -s
[root@master1-admin ~]# ceph -s
cluster 84469565-5d11-4aeb-88bd-204dc25b2d50 health HEALTH_OK
monmap e1: 1 mons at {node1-monitor=192.168.1.82:6789/0} election epoch 4, quorum 0 node1-monitor
osdmap e9: 1 osds: 1 up, 1 in
flags sortbitwise,require_jewel_osds
pgmap v107: 320 pgs, 2 pools, 16 bytes data, 3 objects
7371 MB used, 43803 MB / 51175 MB avail
320 active+clean
3.3 测试 k8s 挂载 ceph rbd
[root@k8s-master-1.com ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-1.com Ready control-plane,master 9m41s v1.26.0 k8s-master-2.com Ready control-plane,master 79s v1.26.0 k8s-node-1.com Ready
# kubernetes 要想使用 ceph,需要在 k8s 的每个 node 节点安装 ceph-common,把 ceph 节点上的 ceph.repo 文件拷贝到 k8s 各个节点/etc/yum.repos.d/目录下,然后在 k8s 的各个节点 yum install ceph-common -y
#将 ceph 配置文件拷贝到 k8s 的控制节点和工作节点[root@k8s-master-1.com ~]# mkdir /etc/ceph [root@k8s-master-2.com ~]# mkdir /etc/ceph [root@k8s-node-1.com ~]# mkdir /etc/ceph
[root@k8s-master-1.com ~]# yum install ceph ceph-common -y [root@k8s-master-2.com~]# yum install ceph ceph-common -y [root@k8s-node-1.com ~]# yum install ceph ceph-common -y
[root@master1-admin ~]# scp /etc/ceph/* 192.168.1.63:/etc/ceph/
[root@master1-admin ~]# scp /etc/ceph/* 192.168.1.64:/etc/ceph/
[root@master1-admin ~]# scp /etc/ceph/* 192.168.1.65:/etc/ceph/
#创建 ceph rbd
[root@master1-admin ~]# ceph osd pool create k8srbd1 6
pool 'k8srbd1' created
[root@master1-admin ~]# rbd create rbda -s 1024 -p k8srbd1
[root@master1-admin ~]# rbd feature disable k8srbd1/rbda object-map fast-diff deep-flatten
备注:
rbd create rbda -s 1024 -p k8srbd1 #在 k8srbd1 这个 pool 中创建 rbd,大小是 1024M
rbd 关闭一些特性:
关闭 centOS7 内核不支持的选项:
object-map:是否记录组成 image 的数据对象存在状态位图,通过查表加速类似于导入、导出、克隆分离、已使用容量统计等操作、同时有助于减少 COW 机制带来的克隆 image 的 I/O 时延,依赖于 exclusive-lock 特性
fast-diff:用于计算快照间增量数据等操作加速,依赖于 object-map
deep-flatten:克隆分离时是否同时解除克隆 image 创建的快照与父 image 之间的关联关
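示例:可以用 rbd info 确认这些特性已经按预期关闭(features 字段以实际输出为准):
[root@master1-admin ~]# rbd info k8srbd1/rbda   #features 一行应不再包含 object-map、fast-diff、deep-flatten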
#创建 pod,挂载 ceph rbd
#把 nginx.tar.gz 上传到 k8s-node-1.com 上,手动解压
[root@k8s-node-1.com ~]# ctr -n=k8s.io images import nginx.tar.gz
[root@k8s-master-1.com ~]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testrbd
spec:
  containers:
  - image: nginx
    name: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: testrbd
      mountPath: /mnt
  volumes:
  - name: testrbd
    rbd:
      monitors:
      - '192.168.1.82:6789'
      - '192.168.1.81:6789'
      - '192.168.1.83:6789'
      pool: k8srbd1
      image: rbda
      fsType: xfs
      readOnly: false
      user: admin
      keyring: /etc/ceph/ceph.client.admin.keyring
#更新资源清单文件
[root@k8s-master-1.com ~]# kubectl apply -f pod.yaml
#查看 pod 是否创建成功
[root@k8s-master-1.com ~]# kubectl get pods -o wide
注意:k8srbd1 下的 rbda 被 pod 挂载了,那其他 pod 就不能再占用这个 k8srbd1 下的 rbda 了。
例:创建一个 pod-1.yaml
[root@k8s-master-1.com ~]# cat pod-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testrbd1
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: testrbd
      mountPath: /mnt
  volumes:
  - name: testrbd
    rbd:
      monitors:
      - '192.168.1.82:6789'
      pool: k8srbd1
      image: rbda
      fsType: xfs
      readOnly: false
      user: admin
      keyring: /etc/ceph/ceph.client.admin.keyring
[root@k8s-master-1.com ~]# kubectl apply -f pod-1.yaml pod/testrbd1 created
[root@k8s-master-1.com ~]# kubectl get pods
| NAME | READY | STATUS | RESTARTS | AGE |
|---|---|---|---|---|
| testrbd | 1/1 | Running | 0 | 51s |
| testrbd1 | 0/1 | Pending | 0 | 15s |
#查看 testrbd1 详细信息
[root@k8s-master-1.com ~]# kubectl describe pods testrbd1
Warning FailedScheduling 48s (x2 over 48s) default-scheduler 0/2 nodes are available: 1 node(s) had no available disk, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
上面 testrbd1 一直是 pending 状态,通过 warning 可以发现,是因为它使用的 pool: k8srbd1、image: rbda 已经被其他 pod 占用了。
3.4 基于 ceph rbd 生成 pv
1、创建 ceph-secret 这个 k8s secret 对象,这个 secret 对象用于 k8s volume 插件访问 ceph 集群。获取 client.admin 的 keyring 值,并用 base64 编码,在 master1-admin(ceph 管理节点)操作:
[root@master1-admin ~]# ceph auth get-key client.admin | base64
#显示如下:
QVFBWk0zeGdZdDlhQXhBQVZsS0poYzlQUlBianBGSWJVbDNBenc9PQ==
2. 创建 ceph 的 secret,在 k8s 的控制节点操作:
[root@k8s-master-1.com ~]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBWk0zeGdZdDlhQXhBQVZsS0poYzlQUlBianBGSWJVbDNBenc9PQ==
[root@k8s-master-1.com ~]# kubectl apply -f ceph-secret.yaml
3. 回到 ceph 管理节点创建 pool 池
[root@master1-admin ~]# ceph osd pool create k8stest 16 pool 'k8stest' created
[root@master1-admin ~]# rbd create rbda -s 1024 -p k8stest
[root@master1-admin ~]# rbd feature disable k8stest/rbda object-map fast-diff deep-flatten
4、创建 pv
[root@k8s-master-1.com ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  rbd:
    monitors:
    - 192.168.1.82:6789
    pool: k8stest
    image: rbda
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
[root@k8s-master-1.com ~]# kubectl apply -f pv.yaml
persistentvolume/ceph-pv created
[root@k8s-master-1.com ~]# kubectl get pv
5、创建 pvc
[root@k8s-master-1.com ~]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@k8s-master-1.com ~]# kubectl apply -f pvc.yaml
[root@k8s-master-1.com ~]# kubectl get pvc
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-pvc   Bound    ceph-pv   1Gi        RWO                           6s
6、测试挂载 pvc
[root@k8s-master-1.com ~]# cat pod-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2   # tells deployment to run 2 pods matching the template
  template:     # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/ceph-data"
          name: ceph-data
      volumes:
      - name: ceph-data
        persistentVolumeClaim:
          claimName: ceph-pvc
[root@k8s-master-1.com ~]# kubectl apply -f pod-2.yaml
[root@k8s-master-1.com ~]# kubectl get pods -l app=nginx
| NAME | READY | STATUS | RESTARTS | AGE |
|---|---|---|---|---|
| nginx-deployment-fc894c564-8tfhn | 1/1 | Running | 0 | 78s |
| nginx-deployment-fc894c564-l74km | 1/1 | Running | 0 | 78s |
[root@k8s-master-1.com ~]# cat pod-3.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1-deployment
spec:
  selector:
    matchLabels:
      appv1: nginxv1
  replicas: 2   # tells deployment to run 2 pods matching the template
  template:     # create pods using pod definition in this template
    metadata:
      labels:
        appv1: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/ceph-data"
          name: ceph-data
      volumes:
      - name: ceph-data
        persistentVolumeClaim:
          claimName: ceph-pvc
[root@k8s-master-1.com ~]# kubectl apply -f pod-3.yaml
[root@k8s-master-1.com ~]# kubectl get pods -l appv1=nginxv1
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-1-deployment-cd74b5dd4-jqvxj   1/1     Running   0          56s
nginx-1-deployment-cd74b5dd4-lrddc   1/1     Running   0          56s
通过上面实验可以发现,多个 pod 是可以共享挂载同一个 ReadWriteOnce 的 pvc 的(RWO 限制的是同一时间只能被一个节点挂载,调度到同一节点上的多个 pod 仍然可以共用)。
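可以进一步验证两组 pod 看到的是同一份数据(示例命令,pod 名称以自己环境里 kubectl get pods 的输出为准):
[root@k8s-master-1.com ~]# kubectl exec nginx-deployment-fc894c564-8tfhn -- sh -c 'echo hello > /ceph-data/test.txt'
[root@k8s-master-1.com ~]# kubectl exec nginx-1-deployment-cd74b5dd4-jqvxj -- cat /ceph-data/test.txt   #应输出 hello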
4 使用 harbor 搭建 Docker 私有仓库
部署 harbor 仓库的机器,内存至少要 4G,另外系统根分区的可用空间需要大于 20G,否则安装时会报空间不足,故本机配置 60G 硬盘空间。
harbor 介绍
Docker 容器应用的开发和运行离不开可靠的镜像管理,虽然 Docker 官方也提供了公共的镜像仓库,但是从安全和效率等方面考虑,部署我们私有环境内的 Registry 也是非常必要的。Harbor 是由 VMware 公司开源的企业级的Docker Registry 管理项目,它包括权限管理(RBAC)、LDAP、日志审核、管理界面、自我注册、镜像复制和中文支持等功能。
官网地址:https://github.com/goharbor/harbor
4.1 初始化实验环境-安装 docker
#配置主机名
[root@localhost ~]# hostnamectl set-hostname harbor && bash
#配置静态 IP
把虚拟机或者物理机配置成静态 ip 地址,这样机器重新启动后 ip 地址也不会发生改变。以 harbor 主机为例,修改静态 IP:
修改/etc/sysconfig/network-scripts/ifcfg-ens33 文件,变成如下:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.1.66
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
#修改配置文件之后需要重启网络服务才能使配置生效,重启网络服务命令如下: [root@harbor ~] #service network restart
#在 k8s-master-1.com、k8s-master-2.com、k8s-node-1.com、harbor 上配置 hosts 文件,让两台主机 hosts
文件保持一致
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.1.63 k8s-master-1.com
192.168.1.64 k8s-master-2.com
192.168.1.65 k8s-node-1.com
192.168.1.66 harbor
#关闭 firewalld 防火墙
[root@harbor ~]# systemctl stop firewalld ; systemctl disable firewalld
#关闭 iptables 防火墙
[root@harbor ~]# yum install iptables-services -y #安装 iptables
#禁用 iptables
[root@harbor ~]# service iptables stop && systemctl disable iptables
清空防火墙规则
[root@harbor ~]# iptables -F
#关闭 selinux
[root@harbor ~]# setenforce 0 #临时禁用#永久禁用
[root@harbor ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
注意:修改 selinux 配置文件之后,重启机器,selinux 才能永久生效
[root@harbor ~]# getenforce Disabled
#配置时间同步
[root@harbor ~]# ntpdate cn.pool.ntp.org #编写计划任务
crontab -e
* * * * * /usr/sbin/ntpdate cn.pool.ntp.org
重启 crond 服务使配置生效:
service crond restart
方法 1:在线安装 docker-ce , 配置国内 docker-ce 的 yum 源(阿里云)
[root@harbor ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
配置 docker-ce 的离线 yum 源:
安装基础软件包
[root@harbor ~]# yum install -y wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
安装 docker 环境依赖
[root@harbor ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
安装 docker-ce
[root@harbor ~]# yum install docker-ce -y
#启动 docker 服务
[root@harbor ~]# systemctl start docker && systemctl enable docker #查看 Docker 版本信息
[root@harbor ~]# docker version [root@harbor ~]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-04-20 10:07:23 CST; 9s ago
4.2 开启包转发功能和修改内核参数
内核参数修改:
[root@harbor ~]# modprobe br_netfilter
[root@harbor ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@harbor ~]# cat > /etc/sysctl.d/docker.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@harbor ~]# sysctl -p /etc/sysctl.d/docker.conf #重启 docker
[root@harbor ~]# systemctl restart docker
4.3 为 harbor 签发证书
[root@harbor ~]# mkdir /data/ssl -p
[root@harbor ~]# cd /data/ssl/
生成 ca 证书:
[root@harbor ssl]# openssl genrsa -out ca.key 3072 #生成一个 3072 位的 key,也就是私钥
[root@harbor ssl]# openssl req -new -x509 -days 3650 -key ca.key -out ca.pem
#生成一个数字证书 ca.pem,3650 表示证书的有效时间是 10 年,按提示填写即可,不需要填写的直接回车留空:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank
For some fields there will be a default value, If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:guangdong
Locality Name (eg, city) [Default City]:shenzhen
Organization Name (eg, company) [Default Company Ltd]:xuegod
Organizational Unit Name (eg, section) []:CA
Common Name (eg, your name or your server's hostname) []:harbor
Email Address []:mk@163.com
#生成域名的证书:
[root@harbor ssl]# openssl genrsa -out harbor.key 3072 #生成一个 3072 位的 key,也就是私钥
[root@harbor ssl]# openssl req -new -key harbor.key -out harbor.csr
#生成一个证书请求,签发证书时需要用到,按提示填写,不需要填写的保持为空:
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank
For some fields there will be a default value, If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:guangdong
Locality Name (eg, city) [Default City]:shenzhen
Organization Name (eg, company) [Default Company Ltd]:xuegod
Organizational Unit Name (eg, section) []:CA
Common Name (eg, your name or your server's hostname) []:harbor
Email Address []:mk@163.com
Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:
An optional company name []:
签发证书:
[root@harbor ssl]# openssl x509 -req -in harbor.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out harbor.pem -days 3650
查看证书是否有效:
openssl x509 -noout -text -in harbor.pem
显示如下,说明有效:
Certificate:
Data:
Version: 1 (0x0) Serial Number:
cd:21:3c:44:64:17:65:40
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=CH, ST=BJ, L=BJ, O=Default Company Ltd Validity
Not Before: Dec 26 09:29:19 2020 GMT
Not After : Dec 24 09:29:19 2030 GMT
Subject: C=CN, ST=BJ, L=BJ, O=xuegod Ltd, CN=harbor Subject Public Key Info:
Public Key Algorithm: rsaEncryption Public-Key: (3072 bit)
Modulus:
00:b0:60:c3:e6:35:70:11:c8:73:83:38:9a:7e:b8:
。。。
4.4 安装 harbor
创建安装目录
[root@harbor ssl]# mkdir /data/install -p
[root@harbor ssl]# cd /data/install/
安装 harbor
此时 /data/ssl 目录下有如下文件:
ca.key ca.pem ca.srl harbor.csr harbor.key harbor.pem
[root@harbor install]# cd /data/install/
#把 harbor 的离线包 harbor-offline-installer-v2.3.0-rc3.tgz 上传到这个目录,离线包在课件里提供了
下载 harbor 离线包的地址:
https://github.com/goharbor/harbor/releases/tag/
解压:
[root@harbor install]# tar zxvf harbor-offline-installer-v2.3.0-rc3.tgz
[root@harbor install]# cd harbor
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml [root@harbor harbor]# vim harbor.yml
修改配置文件:
hostname: harbor    #修改 hostname,跟上面签发的证书域名保持一致
#协议用 https,证书和私钥指向前面签发的文件:
certificate: /data/ssl/harbor.pem
private_key: /data/ssl/harbor.key
邮件和 ldap 不需要配置,在 harbor 的 web 界面可以配置;其他配置采用默认即可。
修改之后保存退出。
注:harbor 默认的账号密码:admin/Harbor12345
安装 docker-compose
方法 1:离线上传 docker-compose 到服务器上
下载二进制文件上传至 linux(课程资料已提供 docker-compose 二进制文件,可直接上传)
[root@harbor ~]# rz
[root@harbor ~]# mv docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
添加执行权限
[root@harbor ~]# chmod +x /usr/local/bin/docker-compose
注:docker-compose 项目是 Docker 官方的开源项目,负责实现对 Docker 容器集群的快速编排。docker-compose 的工程配置文件默认为 docker-compose.yml,docker-compose 运行目录下必须有一个 docker-compose.yml。docker-compose 可以管理多个 docker 实例。
安装 harbor 需要的离线镜像包 docker-harbor-2-3-0.tar.gz 在课件里,可上传到 harbor 机器,通过 docker load -i 导入:
[root@harbor install]# docker load -i docker-harbor-2-3-0.tar.gz [root@harbor install]# cd /data/install/harbor
[root@harbor harbor]# ./install.sh
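安装完成后,可以用下面的示例命令确认 harbor 的各个容器都处于运行状态:
[root@harbor harbor]# docker-compose ps          #各组件状态应为 Up (healthy)
[root@harbor harbor]# docker ps | grep goharbor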
在自己电脑修改 hosts 文件
在 hosts 文件添加如下一行,然后保存即可
192.168.1.66 harbor
扩展:
如何停掉 harbor:
[root@harbor harbor]# cd /data/install/harbor
[root@harbor harbor]# docker-compose stop
如何启动 harbor:
[root@harbor harbor]# cd /data/install/harbor
[root@harbor harbor]# docker-compose up -d
4.5 harbor 图形化界面使用说明
在浏览器输入:https://192.168.1.66(或 https://harbor)
接收风险并继续,出现 harbor 登录界面,说明访问正常。
账号:admin
密码:Harbor12345
输入账号密码登录后即可进入 harbor 管理界面。
所有基础镜像都会放在 library 里面,这是一个公开的镜像仓库。
新建项目->起个项目名字 test(把访问级别"公开"选中,这样项目才可以被公开使用)
4.6 测试使用 harbor 镜像仓库
#修改 k8s-master-1.com 的 docker 配置
[root@k8s-master-1.com ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com","https://rncxm540.mirror.aliyuncs.com","https://e9yneuy4.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.1.66"]
}
修改配置之后使配置生效:
[root@k8s-master-1.com ~]# systemctl daemon-reload && systemctl restart docker #查看 docker 是否启动成功
[root@k8s-master-1.com ~]# systemctl status docker #显示如下,说明启动成功:
Active: active (running) since Fri … ago 注意:
配置新增加了一行内容如下:
"insecure-registries":["192.168.1.66"],
上面增加的内容表示把 192.168.1.66 当作非安全(证书不受信任)的镜像仓库来访问,192.168.1.66 是安装 harbor 机器的 ip。
登录 harbor:
[root@k8s-master-1.com]# docker login 192.168.1.66
Username:admin Password: Harbor12345
输入账号密码之后看到如下,说明登录成功了:
Login Succeeded
#拉取 busybox 镜像
[root@k8s-master-1.com ~]# docker pull busybox:latest
#把 busybox 镜像打标签
[root@k8s-master-1.com ~]# docker tag busybox:latest 192.168.1.66/test/busybox:v1
#推送镜像,执行下面命令就会把 192.168.1.66/test/busybox:v1 上传到 harbor 里的 test 项目下
[root@k8s-master-1.com ~]# docker push 192.168.1.66/test/busybox:v1
#修改 k8s-master-2.com 的 docker 配置
[root@k8s-master-2.com ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com","https://rncxm540.mirror.aliyuncs.com","https://e9yneuy4.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.1.66"]
}
修改配置之后使配置生效:
[root@k8s-master-2.com ~]# systemctl daemon-reload && systemctl restart docker #查看 docker 是否启动成功
[root@k8s-master-2.com ~]# systemctl status docker #显示如下,说明启动成功:
Active: active (running) since Fri … ago 注意:
配置新增加了一行内容如下:
"insecure-registries":["192.168.1.66"],
上面增加的内容表示把 192.168.1.66 当作非安全(证书不受信任)的镜像仓库来访问,192.168.1.66 是安装 harbor 机器的 ip。
登录 harbor:
[root@k8s-master-2.com]# docker login 192.168.1.66
Username:admin Password: Harbor12345
输入账号密码之后看到如下,说明登录成功了:
Login Succeeded
#修改 k8s-node-1.com 的 docker 配置
[root@k8s-node-1.com ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com","https://rncxm540.mirror.aliyuncs.com","https://e9yneuy4.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.1.66"]
}
修改配置之后使配置生效:
[root@k8s-node-1.com ~]# systemctl daemon-reload && systemctl restart docker #查看 docker 是否启动成功
[root@k8s-node-1.com ~]# systemctl status docker #显示如下,说明启动成功:
Active: active (running) since Fri … ago 注意:
配置新增加了一行内容如下:
"insecure-registries":["192.168.1.66"],
上面增加的内容表示把 192.168.1.66 当作非安全(证书不受信任)的镜像仓库来访问,192.168.1.66 是安装 harbor 机器的 ip。
登录 harbor:
[root@k8s-node-1.com]# docker login 192.168.1.66
Username:admin Password: Harbor12345
输入账号密码之后看到如下,说明登录成功了:
Login Succeeded
4.7 从 harbor 仓库下载镜像
在 k8s-master-2.com 拉取镜像
[root@k8s-master-2.com ~]#docker pull 192.168.1.66/test/busybox:v1
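如果希望 k8s 里的 pod 直接从 harbor 的私有项目拉取镜像,可以参考下面的示例(secret 名称 harbor-secret 为本示例的假设,账号密码按实际环境填写):
[root@k8s-master-1.com ~]# kubectl create secret docker-registry harbor-secret --docker-server=192.168.1.66 --docker-username=admin --docker-password=Harbor12345
然后在 pod 的 spec 中引用该 secret(片段):
  imagePullSecrets:
  - name: harbor-secret
  containers:
  - name: busybox
    image: 192.168.1.66/test/busybox:v1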
5 MGR 配置 MySQL 高可用集群
5.1 Mysql 数据库主机基本环境
5.1.1 Mysql 环境准备
| 主机名 | IP | 系统/MySQL 版本 | 角色 |
|---|---|---|---|
| mysql_master.com | 192.168.1.71 | CentOS 7.6 / MySQL 5.7 | 主从 |
| mysql_slave_1.com | 192.168.1.72 | CentOS 7.6 / MySQL 5.7 | 主从 |
| mysql_slave_2.com | 192.168.1.73 | CentOS 7.6 / MySQL 5.7 | 主从 |

VIP:192.168.1.190
5.2 部署 MGR 一主多从集群
5.2.1 环境准备
数据库服务器规划
| IP 地址 | 主机名 | 数据库 | 端口号 | Server ID | 操作系统 |
|---|---|---|---|---|---|
| 192.168.1.71 | mysql_master.com | MySQL-5.7.25 | 3306 | 63 | CentOS 7.6 |
| 192.168.1.72 | mysql_slave_1.com | MySQL-5.7.25 | 3306 | 64 | CentOS 7.6 |
| 192.168.1.73 | mysql_slave_2.com | MySQL-5.7.25 | 3306 | 65 | CentOS 7.6 |
5.2.2 安装 mysql
配置主机名:
在 192.168.1.71 上执行:
hostnamectl set-hostname mysql_master.com && bash
在 192.168.1.72 上执行:
hostnamectl set-hostname mysql_slave_1.com && bash
在 192.168.1.73 上执行:
hostnamectl set-hostname mysql_slave_2.com && bash
配置 hosts 文件
在三台数据库服务器上都设置 ip 与主机名的对应关系:
[root@mysql_master.com ~]# vim /etc/hosts
192.168.1.71 mysql_master.com
192.168.1.72 mysql_slave_1.com
192.168.1.73 mysql_slave_2.com
所有节点安装
上传 mysql-5.7.tar.gz 到 Linux 主机上,并解压:
注:mysql-5.7.tar.gz 中包括了安装 mysql5.7 主要的软件包。 这样部署起来更方便[root@mysql_master.com ~]# scp mysql-5.7.tar.gz mysql_slave_2.com:/root/ [root@mysql_master.com ~]# scp mysql-5.7.tar.gz mysql_slave_1.com:/root/
安装 mysql
注:所有 mysql 节点全部安装,三个节点的安装和初始化步骤相同,只是各自生成的临时密码不同,请分别在各自机器的 /var/log/mysqld.log 中获取。
[root@mysql_master.com ~]# tar xvf mysql-5.7.tar.gz [root@mysql_master.com ~]# yum -y install ./mysql*.rpm
[root@mysql_master.com ~]# systemctl start mysqld #启动 MySQL 会生成临时密码。
在 MySQL 的配置文件/etc/my.cnf 中关闭密码强度审计插件,并重启 MySQl 服务。
[root@mysql_master.com ~]# vim /etc/my.cnf #修改 MySQL 的配置文件,在[mysqld]标签处末行添加以下项:
validate-password=OFF #不使用密码强度审计插件
[root@mysql_master.com ~]# systemctl restart mysqld #重启 MySQL 服务
[root@mysql_master.com ~]# grep 'password' /var/log/mysqld.log #获取临时密码。
2018-08-01T09:59:33.918961Z 1 [Note] A temporary password is generated for root@localhost: buL.UJp!T2Od #临时密码
[root@mysql_master.com ~]# mysql -u root -p'buL.UJp!T2Od' #使用临时密码登录 MySQl,注意临时密码要引号
mysql> set password for root@localhost = password('123456'); #修改 root 用户密码为 123456 mysql> use mysql;
mysql> update user set host = '%' where user = 'root'; mysql> flush privileges;
[root@mysql_slave_1.com ~]# tar xvf mysql-5.7.tar.gz
[root@mysql_slave_1.com ~]# yum -y install ./mysql*.rpm
[root@mysql_slave_1.com ~]# systemctl start mysqld    #启动 MySQL 会生成临时密码
在 MySQL 的配置文件/etc/my.cnf 中关闭密码强度审计插件,并重启 MySQL 服务。
[root@mysql_slave_1.com ~]# vim /etc/my.cnf    #修改 MySQL 的配置文件,在[mysqld]标签末行添加以下项:
validate-password=OFF    #不使用密码强度审计插件
[root@mysql_slave_1.com ~]# systemctl restart mysqld    #重启 MySQL 服务
[root@mysql_slave_1.com ~]# grep 'password' /var/log/mysqld.log    #获取临时密码
2018-08-01T09:59:33.918961Z 1 [Note] A temporary password is generated for root@localhost: buL.UJp!T2Od    #临时密码
[root@mysql_slave_1.com ~]# mysql -u root -p'buL.UJp!T2Od'    #使用临时密码登录 MySQL,注意临时密码要加引号
mysql> set password for root@localhost = password('123456');    #修改 root 用户密码为 123456
mysql> use mysql;
mysql> update user set host = '%' where user = 'root';
mysql> flush privileges;
[root@mysql_slave_2.com ~]# tar xvf mysql-5.7.tar.gz
[root@mysql_slave_2.com ~]# yum -y install ./mysql*.rpm
[root@mysql_slave_2.com ~]# systemctl start mysqld    #启动 MySQL 会生成临时密码
在 MySQL 的配置文件/etc/my.cnf 中关闭密码强度审计插件,并重启 MySQL 服务。
[root@mysql_slave_2.com ~]# vim /etc/my.cnf    #修改 MySQL 的配置文件,在[mysqld]标签末行添加以下项:
validate-password=OFF    #不使用密码强度审计插件
[root@mysql_slave_2.com ~]# systemctl restart mysqld    #重启 MySQL 服务
[root@mysql_slave_2.com ~]# grep 'password' /var/log/mysqld.log    #获取临时密码
2018-08-01T09:59:33.918961Z 1 [Note] A temporary password is generated for root@localhost: buL.UJp!T2Od    #临时密码
[root@mysql_slave_2.com ~]# mysql -u root -p'buL.UJp!T2Od'    #使用临时密码登录 MySQL,注意临时密码要加引号
mysql> set password for root@localhost = password('123456');    #修改 root 用户密码为 123456
mysql> use mysql;
mysql> update user set host = '%' where user = 'root';
mysql> flush privileges;
5.2.3 配置 mysql_master.com 主节点
group_name 可以通过uuidgen 生成
[root@mysql_master.com ~]# uuidgen ce9be252-2b71-11e6-b8f4-00212844f856
服务器 mysql_master.com 配置/etc/my.cnf
参考:官网配置 https://dev.mysql.com/doc/refman/5.7/en/group-replication-configuring-instances.html
[root@mysql_master.com ~]# vim /etc/my.cnf    #在 [mysqld] 配置组中增加以下内容:
[mysqld]
server_id = 63                               #服务 ID
gtid_mode = ON                               #开启全局事务
enforce_gtid_consistency = ON                #强制 GTID 的一致性
master_info_repository = TABLE               #将 master.info 元数据保存在系统表中
relay_log_info_repository = TABLE            #将 relay.info 元数据保存在系统表中
binlog_checksum = NONE                       #禁用二进制日志事件校验
log_slave_updates = ON                       #级联复制
log_bin = binlog                             #开启二进制日志记录
binlog_format = ROW                          #以行的格式记录
plugin_load_add='group_replication.so'       #加载组复制插件
transaction_write_set_extraction=XXHASH64    #使用 XXHASH64 哈希算法对事务写集合编码
group_replication_group_name="ce9be252-2b71-11e6-b8f4-00212844f856"    #加入的组名
group_replication_start_on_boot=off          #不自动启用组复制集群
group_replication_local_address="mysql_master.com:33063"               #以本机端口 33063 接受来自组中成员的传入连接
group_replication_group_seeds="mysql_master.com:33063,mysql_slave_1.com:33064,mysql_slave_2.com:33065"    #组中成员访问表
group_replication_bootstrap_group=off        #不启用引导组
#这些设置将服务器配置为使用唯一标识符编号 1,启用全局事务标识符,并将复制元数据存储在系统表而不是文件中。此外,它还指示服务器打开二进制日志记录,使用基于行的格式并禁用二进制日志事件校验和。
# plugin load add 将组复制插件添加到服务器启动时加载的插件列表中。在生产部署中,这比手动安装插件更可取。group_replication_group_name 的值必须是有效的 UUID。在二进制日志中为组复制事件设置 GTID 时,此 UUID 在内部使用。可以使用 SELECT UUID()生成 UUID。
将 group_replication_start_on_boot 变量配置为 off 会指示插件在服务器启动时不会自动启动操作。这在设置组复制时很重要,因为它确保您可以在手动启动插件之前配置服务器。配置成员后,您可以将组启动时的组复制设置为打开, 以便在服务器启动时自动启动组复制。
配置 group_replication_bootstrap_group 指示插件是否引导组。在这种情况下,即使 mysql_master.com 是组的第一个成员,我们在选项文件中将此变量设置为 off。相反,我们在实例运行时配置group_replication_bootstrap_group,以确保只有一个成员实际引导组。
group_replication_bootstrap_group 变量在任何时候都只能在属于某个组的一个服务器实例上启用,通常是在第一次引导该组时启用(或者整个组都关闭之后重新拉起时)。如果多次引导组,例如多个服务器实例都设置了此选项,则可能会造成人为的脑裂(split-brain)场景,即存在两个具有相同名称的不同组。因此在第一个服务器实例联机后,
始终将 group_replication_bootstrap_group 设置为 off。
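为便于理解上面关于 group_replication_bootstrap_group 的说明,下面给出首次引导组时在引导节点上执行的大致 SQL 顺序(示意写法,与后文 5.2.3 中的实际操作一致):
mysql> SET GLOBAL group_replication_bootstrap_group=ON;    -- 仅引导节点、仅首次引导时开启
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;   -- 引导完成后立即关闭,避免出现两个同名的组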
#什么是元数据
元数据(Metadata),又称中介数据、中继数据,为描述数据的数据(data about data),主要是描述数据属性
(property)的信息,用来支持如指示存储位置、历史数据、资源查找、文件记录等功能。元数据算是一种电子式目录,为了达到编制目录的目的,必须在描述并收藏数据的内容或特色,进而达成协助数据检索的目的。
#什么是级联复制
级联复制(cascade):是指从主场地复制过来的又从该场地再次复制到其他场地,即 A 场地把数据复制到 B 场地,B场地又把这些数据或其中部分数据再复制到其他场地。
重启 MySQL 服务:
[root@mysql_master.com ~]# systemctl restart mysqld
服务器 mysql_master.com 上建立复制账号:
[root@mysql_master.com ~]# mysql -u root -p123456
mysql> set SQL_LOG_BIN=0;    #临时关闭二进制日志记录
mysql> grant replication slave on *.* to repl@'192.168.1.%' identified by '123456';    #创建 repl 用户,后期用于其他节点同步数据
注:为什么创建 repl 用户时需要先关闭二进制日志功能,创建成功后再开启?
因为用于同步数据的用户名 repl 和密码必须在所有节点上存在,且用户名和密码必须一样。所以创建用户和密码时先关闭二进制日志功能,不产生二进制日志数据;创建成功后再开启。这样每台机器上都有 repl 用户信息,而且不会在 MGR 同步数据时产生冲突。
mysql> set SQL_LOG_BIN=1;    #重新开启二进制日志记录
mysql> change master to master_user='repl',master_password='123456' for channel 'group_replication_recovery';    #配置 group replication 的恢复通道
查看 MySQL 服务器 mysql_master.com 上安装的 group replication 插件,这个插件在 my.cnf 配置文件中由 plugin_load_add='group_replication.so' 指定:
mysql> show plugins;
如果没有安装成功,可以执行如下命令手动再次安装插件:
mysql> install PLUGIN group_replication SONAME 'group_replication.so';
启动服务器 mysql_master.com 上 MySQL 的 group replication:
设置 group_replication_bootstrap_group 为 ON,表示这台机器是集群中的 master,以后加入集群的服务器都是 slave;引导只能由一台服务器完成。
作为首个节点启动 mgr 集群:
mysql> set global group_replication_bootstrap_group=ON;
mysql> start group_replication;
mysql> set global group_replication_bootstrap_group=OFF;    #启动成功后,这台服务器已经是 master 了,引导器就没有用了,可以关闭
查看 mgr 的状态,查询表 performance_schema.replication_group_members:
mysql> select * from performance_schema.replication_group_members;
测试服务器 mysql_master.com 上的 MySQL:
mysql> create database test;    #如 test 库不存在,需要先创建
mysql> use test;
Database changed
mysql> create table t1 (id int primary key,name varchar(20));    #注意创建主键
Query OK, 0 rows affected (0.01 sec)
mysql> insert into t1 values (1,'man');
Query OK, 1 row affected (0.01 sec)
mysql> select * from t1;
+----+------+
| id | name |
+----+------+
|  1 | man  |
+----+------+
5.2.4 集群中添加 mysql_slave_1.com 主机
复制组添加新实例 mysql_slave_1.com,修改/etc/my.cnf 配置文件,方法和之前相同。
[mysqld]
server_id = 64    #注意每个节点的 server_id 必须不一样,与 5.2.1 的规划表保持一致
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
plugin_load_add='group_replication.so'
transaction_write_set_extraction=XXHASH64
group_replication_group_name="729fd07e-1d5d-4e20-846f-b2759d6f51fe"
group_replication_start_on_boot=off
group_replication_local_address="node1-monitor-mysql02:33064"
group_replication_group_seeds="master1-admin-mysql01:33063,node1-monitor-mysql02:33064,node2-osd-mysql03:33065"
group_replication_bootstrap_group=off
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
validate-password=OFF
重启 MySQL 服务:
[root@mysql_slave_1.com ~]# systemctl restart mysqld
用户授权:
[root@mysql_slave_1.com ~]# mysql -u root -p123456
mysql> set SQL_LOG_BIN=0;    #临时关闭二进制日志记录
mysql> grant replication slave on *.* to repl@'192.168.1.%' identified by '123456';
mysql> flush privileges;
mysql> set SQL_LOG_BIN=1;    #重新开启二进制日志记录
mysql> change master to master_user='repl',master_password='123456' for channel 'group_replication_recovery';    #配置 group replication 的恢复通道
把实例添加到之前的复制组
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON;
Query OK, 0 rows affected (0.00 sec)
mysql> start group_replication;
Query OK, 0 rows affected (6.65 sec)
在 mysql_master.com 上查看复制组状态:
mysql> select * from performance_schema.replication_group_members;
在新加的实例上查看数据库,发现 test 库和 t1 表已经同步。
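可以在 mysql_slave_1.com 上用如下示意命令确认节点状态和数据同步情况(输出以实际环境为准):
mysql> select member_host, member_state from performance_schema.replication_group_members;    #新节点应为 ONLINE
mysql> select * from test.t1;    #应能看到主节点上插入的数据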
5.2.5 集群中添加 mysql_slave_2.com 主机
[root@mysql_slave_2.com~]# vim /etc/my.cnf
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html
[mysqld]
server_id = 65    #注意每个节点的 server_id 必须不一样,与 5.2.1 的规划表保持一致
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
plugin_load_add='group_replication.so'
transaction_write_set_extraction=XXHASH64
group_replication_group_name="729fd07e-1d5d-4e20-846f-b2759d6f51fe"
group_replication_start_on_boot=off
group_replication_local_address="node2-osd-mysql03:33065"
group_replication_group_seeds="master1-admin-mysql01:33063,node1-monitor-mysql02:33064,node2-osd-mysql03:33065"
group_replication_bootstrap_group=off
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
validate-password=OFF
用户授权
[root@mysql_slave_2.com~]# mysql -u root -p123456
mysql> set SQL_LOG_BIN=0; #停掉日志记录
mysql> grant replication slave on *.* to repl@'192.168.1.%' identified by '123456';
mysql> flush privileges;
mysql> set SQL_LOG_BIN=1; #开启日志记录
mysql> change master to master_user='repl',master_password='123456' for channel 'group_replication_recovery'; #构建 group replication 集群
安装 group replication 插件
mysql> install PLUGIN group_replication SONAME 'group_replication.so';
Function 'group_replication' already exists
把实例添加到之前的复制组
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON;
Query OK, 0 rows affected (0.00 sec)
mysql> start group_replication;
Query OK, 0 rows affected (6.65 sec)
在 mysql_master.com 上查看复制组状态
以上单 master 节点的集群就搭建完毕!
5.2.6 测试 MGR 一主多从读写功能
在 mysql_master.com 上查看集群参数设置列表:
mysql> show variables like 'group_replication%';
查看 mysql_master.com 是否只读,发现 mysql_master.com 是 master,可读可写。
在另外两台上查看,发现 mysql_slave_1.com 和 mysql_slave_2.com 是只读的。
5.3 Multi-primary 多主模式实现多节点同时读写
由单主模式修改为多主模式的方法:
1、关闭 mysql_master.com 单主模式,在原来单主模式的主节点执行操作如下:
[root@mysql_master.com ~]# mysql -u root -p123456
mysql> stop GROUP_REPLICATION;
mysql> set global group_replication_single_primary_mode=OFF; #关闭单 master 模式
mysql> set global group_replication_enforce_update_everywhere_checks=ON; #设置多主模式下各个节点严格一致性检查
#执行多点写入了,多点写入会存在冲突检查,这耗损性能挺大的,官方建议采用网络分区功能,在程序端把相同的业务定位到同一节点,尽量减少冲突发生几率。
mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
2、mysql_slave_1.com,执行下面的操作即可
[root@mysql_slave_1.com ~]# mysql -u root -p123456
mysql> stop GROUP_REPLICATION;
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON; #即使,含有组中不存在的事务,也允许当前 server 加入组。
mysql> set global group_replication_single_primary_mode=OFF;
mysql> set global group_replication_enforce_update_everywhere_checks=ON;
mysql> start group_replication;
查看效果
mysql> show variables like '%read_only%';
read_only 为 off
3、mysql_slave_2.com,执行下面的操作即可
[root@mysql_slave_2.com~]# mysql -u root -p123456
mysql> stop GROUP_REPLICATION;
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON; #即使,含有组中不存在的事务,也允许当前 server 加入组。
mysql> set global group_replication_single_primary_mode=OFF;
mysql> set global group_replication_enforce_update_everywhere_checks=ON;
mysql> start group_replication;
查看效果
mysql> show variables like '%read_only%';
read_only 为 off
4、查看集群,3 个节都正常
5、测试 Multi-primary 多主模式实现多节点同时读写
mysql_master.com 上执行: mysql> insert into test.t1 values (2,'mk');
mysql_slave_1.com 上执行: mysql> insert into test.t1 values (3,'cd');
mysql_slave_2.com 上执行: mysql> insert into test.t1 values (4,'yum');
mysql> select * from test.t1;
+----+------+
| id | name |
+----+------+
| 1 | man |
| 2 | mk |
| 3 | cd |
| 4 | yum |
+----+------+
4 rows in set (0.00 sec)
6、测试 MGR 集群中一些节点坏了, 数据还可以正常读写
[root@mysql_master.com ~]# systemctl stop mysqld
[root@mysql_slave_2.com~]# systemctl stop mysqld
[root@mysql_slave_1.com ~]# mysql -u root -p123456
mysql> select * from performance_schema.replication_group_members;
mysql> insert into test.t1 values (5,'free');
mysql> select * from test.t1;
[root@mysql_slave_2.com~]# systemctl start mysqld
[root@mysql_slave_2.com~]# mysql -u root -p123456
mysql> select * from test.t1;
mysql> stop GROUP_REPLICATION;
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON;
mysql> set global group_replication_single_primary_mode=OFF;
mysql> set global group_replication_enforce_update_everywhere_checks=ON;
mysql> start group_replication;
mysql> select * from test.t1;
mysql> select * from performance_schema.replication_group_members;
7、把修好的 MGR 节点,重新加入到 MGR 集群中
[root@mysql_master.com ~]# systemctl start mysqld
[root@mysql_master.com ~]# mysql -u root -p123456
mysql> select * from test.t1;
mysql> stop GROUP_REPLICATION;
mysql> set global group_replication_allow_local_disjoint_gtids_join=ON;
mysql> set global group_replication_single_primary_mode=OFF;
mysql> set global group_replication_enforce_update_everywhere_checks=ON;
mysql> start group_replication;
mysql> select * from test.t1;
mysql> select * from performance_schema.replication_group_members;
总结:到此 MRG,多主同时读写高可用,已经完成。
5.4 基于 keepalived+nginx 实现 MGR MySQL 高可用
拓扑图参考前文图 4。
1、安装 nginx 主备:
在 mysql_master.com 和 mysql_slave_1.com 上做 nginx 主备安装
[root@mysql_master.com ~]# yum install epel-release -y
[root@mysql_master.com ~]# yum install nginx keepalived nginx-mod-stream -y
[root@mysql_slave_1.com ~]# yum install epel-release -y
[root@mysql_slave_1.com ~]# yum install nginx keepalived nginx-mod-stream -y
2、修改 nginx 配置文件。主备一样
[root@mysql_master.com ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/mysql-access.log main;
upstream mysql-server {
server 192.168.1.71:3306 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.1.72:3306 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.1.73:3306 weight=5 max_fails=3 fail_timeout=30s;
}
server {
listen 13306;
proxy_pass mysql-server;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
listen [::]:80;
server_name _;
location / {
}
}
}
[root@mysql_slave_1.com ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/mysql-access.log main;
upstream mysql-server {
server 192.168.1.71:3306 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.1.72:3306 weight=5 max_fails=3 fail_timeout=30s;
server 192.168.1.73:3306 weight=5 max_fails=3 fail_timeout=30s;
}
server {
listen 13306;
proxy_pass mysql-server;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
listen [::]:80;
server_name _;
location / {
}
}
}
3、配置 keepalived
主 keepalived:
[root@mysql_master.com ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 111111
}
virtual_ipaddress {
192.168.1.190
}
track_script {
check_nginx
}
}
[root@mysql_master.com~]# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
#1、判断Nginx是否存活
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
if [ $counter -eq 0 ]; then
#2、如果不存活则尝试启动Nginx
service nginx start
sleep 2
#3、等待2秒后再次获取一次Nginx状态
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
#4、再次进行判断,如Nginx还不存活则停止Keepalived,让地址进行漂移
if [ $counter -eq 0 ]; then
service keepalived stop
fi
fi
备 keepalived
[root@mysql_slave_1.com~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP1
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 111111
}
virtual_ipaddress {
192.168.1.190
}
track_script {
check_nginx
}
}
[root@mysql_slave_1.com ~]# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
#1、判断Nginx是否存活
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
if [ $counter -eq 0 ]; then
#2、如果不存活则尝试启动Nginx
service nginx start
sleep 2
#3、等待2秒后再次获取一次Nginx状态
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
#4、再次进行判断,如Nginx还不存活则停止Keepalived,让地址进行漂移
if [ $counter -eq 0 ]; then
service keepalived stop
fi
fi
注:keepalived 根据脚本返回状态码(0 为工作正常,非 0 不正常)判断是否故障转移。
4、启动服务:
[root@mysql_master.com~]# systemctl daemon-reload
[root@mysql_master.com ~]# systemctl enable nginx keepalived
[root@mysql_master.com ~]# systemctl start nginx
[root@mysql_master.com ~]# systemctl start keepalived
[root@mysql_slave_1.com~]# systemctl daemon-reload
[root@mysql_slave_1.com ~]# systemctl enable nginx keepalived
[root@mysql_slave_1.com ~]# systemctl start nginx
[root@mysql_slave_1.com ~]# systemctl start keepalived
5、测试通过 VIP 访问 mysql
输入密码:123456
可以访问 MySQL,模拟 mysql_master.com、mysql_slave_1.com、mysql_slave_2.com 机器任意两个 mysql 被停掉,均可访问
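下面给出一个示意性的验证与故障切换过程,假设客户端机器上装有 mysql 命令行客户端、VIP 为 192.168.1.190、nginx 代理端口为 13306,网卡名按实际环境调整:
# 在任意装有 mysql 客户端的机器上通过 VIP 访问 MySQL
mysql -u root -p123456 -h 192.168.1.190 -P 13306 -e "select @@hostname;"
# 在 mysql_master.com 上停止 keepalived,模拟主节点故障
[root@mysql_master.com ~]# systemctl stop keepalived
# 在 mysql_slave_1.com 上确认 VIP 已经漂移过来
[root@mysql_slave_1.com ~]# ip addr show ens33 | grep 192.168.1.190
# 再次通过 VIP 访问,确认仍然可以连上 MySQL
mysql -u root -p123456 -h 192.168.1.190 -P 13306 -e "select @@hostname;"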
6 安装 rancher
在 k8s_rancher.com 上操作
6.1 初始化实验环境
新创建一台虚拟机
环境说明(centos7.6):
| IP | 主机名 | 内存 | cpu |
|---|---|---|---|
| 192.168.1.51 | k8s_rancher.com | 6G | 4vCPU |
配置主机名:
在 192.168.1.51 上执行如下:
hostnamectl set-hostname k8s_rancher.com
#配置 hosts 文件:
k8s-node-1.com、k8s-master-1.com、k8s-master-2.com、k8s_rancher.com 主机hosts 文件要保持一致: 打开/etc/hosts 文件,新增加如下几行:
192.168.1.63 k8s-master-1.com
192.168.1.64 k8s-master-2.com
192.168.1.65 k8s-node-1.com
192.168.1.51 k8s_rancher.com
配置 rancher 到 k8s 主机互信
生成 ssh 密钥对:
[root@k8s_rancher.com ~]# ssh-keygen    #一路回车,不输入密码
#把本地的 ssh 公钥文件安装到远程主机对应的账户
[root@k8s_rancher.com ~]# ssh-copy-id k8s-master-1.com
[root@k8s_rancher.com ~]# ssh-copy-id k8s-master-2.com
[root@k8s_rancher.com ~]# ssh-copy-id k8s-node-1.com
[root@k8s_rancher.com ~]# ssh-copy-id k8s_rancher.com
关闭防火墙
[root@k8s_rancher.com ~]# systemctl stop firewalld ; systemctl disable firewalld
关闭 selinux
[root@k8s_rancher.com ~]# setenforce 0
[root@k8s_rancher.com ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
注意:修改 selinux 配置文件之后,重启机器,selinux 才能永久生效
内核参数修改:br_netfilter 模块用于将桥接流量转发至 iptables 链,br_netfilter 内核参数需要开启转发。
[root@k8s_rancher.com ~]# modprobe br_netfilter
[root@k8s_rancher.com ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@k8s_rancher.com ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@k8s_rancher.com ~]# sysctl -p /etc/sysctl.d/k8s.conf
在 k8s_rancher.com 上配置docker 镜像源:
选择一种安装 docker 的方式:1. 在线安装 或 2. 离线安装
1. 在线安装
安装 docker-ce 环境依赖
[root@k8s_rancher.com ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
配置 docker-ce 国内 yum 源(阿里云)
[root@k8s_rancher.com ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
在 k8s_rancher.com 上安装基础软件包:
[root@k8s_rancher.com ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet
我们已经配置了 docker 的 yum 源,接下来直接安装 docker-ce 服务。
安装 docker-ce
[root@k8s_rancher.com ~]# yum install docker-ce -y
[root@k8s_rancher.com ~]# systemctl start docker && systemctl enable docker.service
修改 docker 配置文件,配置镜像加速器
[root@k8s_rancher.com ~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com","https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s_rancher.com ~]# systemctl daemon-reload
[root@k8s_rancher.com ~]# systemctl restart docker
[root@k8s_rancher.com ~]# systemctl status docker
显示如下,说明 docker 安装成功了
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Active: active (running) since Wed 2021-03-17 12:39:06 CST
6.2 安装 Rancher
Rancher2.7.3 支持 k8s1.26.0,所以我们安装 rancher2.7.3 版本,
rancher-agent_2.7.3.tar.gz 和 rancher_2.7.3.tar.gz 这两个封装了镜像的离线文件在课件里:
把 rancher_2.7.3.tar.gz 上传到 k8s_rancher.com,把 rancher-agent_2.7.3.tar.gz 上传到 k8s-master-1.com、k8s-master-2.com 和 k8s-node-1.com 上,然后分别执行如下命令导入镜像:
[root@k8s_rancher.com ~]# docker load -i rancher_2.7.3.tar.gz
[root@k8s-node-1.com ~]# ctr -n=k8s.io images import rancher-agent_2.7.3.tar.gz
[root@k8s-master-1.com ~]# ctr -n=k8s.io images import rancher-agent_2.7.3.tar.gz
[root@k8s-master-2.com ~]# ctr -n=k8s.io images import rancher-agent_2.7.3.tar.gz
[root@k8s_rancher.com ~]#docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.7.3
注:unless-stopped 表示在容器退出时总是重启容器,但是不包括 Docker 守护进程启动时就已经停止了的容器。
验证 rancher 是否启动:
[root@k8s_rancher.com ~]# docker ps | grep rancher
显示如下,说明启动成功:
f9321083e186 rancher/rancher:v2.7.3 "entrypoint.sh" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp loving_johnson
6.3 登录 Rancher 平台
在浏览器访问 k8s_rancher.com 的 ip 地址:
选择高级
继续
(1)获取密码:
在 k8s_rancher.com 上,docker ps 查看正在运行的容器,显示如下:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cbaf7caf86d4 rancher/rancher:v2.7.3 "entrypoint.sh" 16 minutes ago Up 9 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp tender_jepsen
通过上面可以看到容器的 id 是 cbaf7caf86d4
[root@k8s_rancher.com ~]# docker logs cbaf7caf86d4 2>&1 | grep "Bootstrap Password:"
2022/04/05 04:56:09 [INFO] Bootstrap Password: bdsm8qhm8cmbqzcrzwwwggtpmkrbmjl989ts8stxt6w9dksw2275nc
通过上面可以看到获取到的密码是 bdsm8qhm8cmbqzcrzwwwggtpmkrbmjl989ts8stxt6w9dksw2275nc:
把获取到的密码 bdsm8qhm8cmbqzcrzwwwggtpmkrbmjl989ts8stxt6w9dksw2275nc 复制到 password 位置:
点击 Log in 登录,显示如下:
设置新的密码:
点击 Continue 之后,需要重新登录 Rancher,显示如下:
6.4 通过 Rancher 管理已存在的 k8s 集群
#把已经存在的 k8s 集群导入到 rancher 了
选择 Import Existing,出现如下:
选择 Generic,出现如下:
Cluster Name:xuegod
点击 Create
出现如下:
在 k8s 控制节点复制上图红色箭头标注的一串命令:
[root@k8s-master-1.com ~]# curl --insecure -sfL https://192.168.1.51/v3/import/dtf5mvkdqnp8797c5lxhlnjbzmncqhlxxvz2vqhx2w2g98pf5gbbp7_c-m-8f6nfqbt.yaml | kubectl apply -f -
验证 rancher-agent 是否部署成功:
[root@k8s-master-1.com ~]# kubectl get pods -n cattle-system
NAME                                    READY   STATUS    RESTARTS   AGE
cattle-cluster-agent-67df6b5cf8-lh6k5   1/1     Running   0          80s
cattle-cluster-agent-67df6b5cf8-pq2xl   1/1     Running   0          77s
看到 cattle-cluster-agent-67df6b5cf8-lh6k5 是 running,说明 rancher-agent 部署成功了,在 rancher ui 界面可以看到如下内容:
打开 rancher 主页面:https://192.168.1.51/dashboard/home 可以看到如下内容:
通过上图可以看到 xuegod 集群已经导入到 rancher 了,导入的 k8s 版本是 1.26.0
7 基于 MGR 高可用搭建世界知名 wordpress 博客程序
把 mgr 安装的 mysql 映射到 k8s 集群内部,在 k8s 控制节点执行:
[root@k8s-master-1.com ~]# kubectl apply -f mysql-svc.yaml
[root@k8s-master-1.com ~]# kubectl apply -f mysql-end.yaml
备注:mysql-svc.yaml 内容如下:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  type: ClusterIP
  ports:
  - port: 3306
mysql-end.yaml 内容如下:
apiVersion: v1
kind: Endpoints
metadata:
  name: wordpress-mysql
subsets:
- addresses:
  - ip: 192.168.1.190
  ports:
  - port: 13306
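应用上面两个 yaml 之后,可以用如下示意命令确认 Service 与 Endpoints 已经关联上(输出以实际环境为准):
[root@k8s-master-1.com ~]# kubectl get svc wordpress-mysql
[root@k8s-master-1.com ~]# kubectl get endpoints wordpress-mysql
# ENDPOINTS 一栏应显示 192.168.1.190:13306,说明集群内访问 wordpress-mysql:3306 会被转发到 MGR 的 VIP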
安装 wordpress 集群
[root@k8s-node-1.com ~]# ctr -n=k8s.io images import wordpress.tar.gz
[root@k8s-master-1.com ~]# kubectl create secret generic mysql-pass --from-literal=password=123456
[root@k8s-master-1.com ~]# kubectl apply -f wordpress.yaml
备注:wordpress.yaml 文件内容如下:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30090
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        hostPath:
          path: /datamysqlwordpress
          type: DirectoryOrCreate
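更新 wordpress.yaml 之后,可以用如下示意命令确认 secret 和 pod 是否就绪(输出以实际环境为准):
[root@k8s-master-1.com ~]# kubectl get secret mysql-pass
[root@k8s-master-1.com ~]# kubectl get pods -l app=wordpress -o wide
[root@k8s-master-1.com ~]# kubectl logs -l app=wordpress --tail=20    #如数据库连接失败,可从容器日志中排查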
查看 svc:
[root@k8s-master-1.com ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1
通过上面可以看到 wordpress 这个 service 在物理机映射的端口是 30090,在浏览器访问 k8s 任一节点的 ip:30090 端口即可访问到 wordpress 服务,初始界面如下:
8 实战-ingress-对外发布服务
8.1 Ingress 和 Ingress Controller 深度解读
互动:为什么要使用 k8s 原生的 Ingress controller 做七层负载均衡?
8.1.1 使用 Ingress Controller 代理 k8s 内部应用的流程
(1) 部署 Ingress controller,我们 ingress controller 使用的是 nginx
(2) 创建 Pod 应用,可以通过控制器创建 pod
(3) 创建 Service,用来分组 pod
(4) 创建 Ingress http,测试通过 http 访问应用
(5) 创建 Ingress https,测试通过 https 访问应用
客户端通过七层调度器访问后端 pod 的方式:
使用七层负载均衡调度器 ingress controller 时,当客户端访问 kubernetes 集群内部的应用时,数据包走向如下图流程所示:
8.1.2 安装 Nginx Ingress Controller
[root@k8s-node-1.com ~]# ctr -n=k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
[root@k8s-node-1.com ~]# ctr -n=k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.1
#更新 yaml 文件,下面需要的 yaml 文件在课件,可上传到 k8s-master-1.com 机器上:
[root@k8s-master-1.com ~]# mkdir ingress
[root@k8s-master-1.com ~]# cd ingress/
[root@k8s-master-1.com ingress]# kubectl apply -f deploy.yaml
[root@k8s-master-1.com ingress]# kubectl create clusterrolebinding clusterrolebinding-user-3 --clusterrole=cluster-admin --user=system:serviceaccount:ingress-nginx:ingress-nginx
[root@k8s-master-1.com ingress]# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-57zq2        0/1   Completed   0   4m44s
ingress-nginx-admission-patch-8nb79         0/1   Completed   1   4m44s
ingress-nginx-controller-7db874b8c7-lq5gd 1/1 Running 0 4m44s
8.2 实战:测试 Ingress HTTP 代理 k8s 内部站点
#把 tomcat-8-5.tar.gz 上传到 k8s-node-1.com 上,手动解压
[root@ k8s-node-1.com ~]# ctr -n=k8s.io images import tomcat-8-5.tar.gz
1.部署后端 tomcat 服务
[root@k8s-master-1.com ~]# cat ingress-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5.34-jre8-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009
#更新资源清单 yaml 文件:
[root@k8s-master-1.com ~]# kubectl apply -f ingress-demo.yaml
service/tomcat created
deployment.apps/tomcat-deploy created
#查看 pod 是否部署成功
[root@k8s-master-1.com ~]# kubectl get pods -l app=tomcat
| NAME | READY | STATUS | RESTARTS | AGE |
|---|---|---|---|---|
| tomcat-deploy-66b67fcf7b-9h9qp | 1/1 | Running | 0 | 32s |
| tomcat-deploy-66b67fcf7b-hxtkm | 1/1 | Running | 0 | 32s |
2、编写 ingress 规则
#编写 ingress 的配置清单
[root@k8s-master-1.com ~]# cat ingress-myapp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:                       #定义后端转发的规则
  - host: tomcat.lucky.com     #通过域名进行转发
    http:
      paths:
      - path: /                #配置访问路径,如果通过 url 进行转发,需要修改;空默认为访问的路径为"/"
        pathType: Prefix
        backend:               #配置后端服务
          service:
            name: tomcat
            port:
              number: 8080
#更新 yaml 文件:
[root@k8s-master-1.com ~]# kubectl apply -f ingress-myapp.yaml
#查看 ingress-myapp 的详细信息
[root@k8s-master-1.com ~]# kubectl describe ingress ingress-myapp
Name:             ingress-myapp
Namespace:        default
Address:
Default backend:  default-http-backend:80 (10.244.187.118:8080)
Rules:
  Host              Path  Backends
  ----              ----  --------
  tomcat.lucky.com
                    /     tomcat:8080 (10.244.209.172:8080,10.244.209.173:8080)
Annotations:        kubernetes.io/ingress.class: nginx
Events:
  Type    Reason  Age  From                      Message
  ----    ------  ---  ----                      -------
  Normal  CREATE  22s  nginx-ingress-controller  Ingress default/ingress-myapp
#修改电脑本地的 hosts 文件,增加如下一行,下面的 ip 是 k8s-node-1.com 节点 ip:
192.168.1.65 tomcat.lucky.com
浏览器访问 tomcat.lucky.com,出现如下:
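如果不方便修改浏览器所在机器的 hosts,也可以在 k8s-master-1.com 上用 curl 带 Host 头做验证(示意命令,假设 ingress-nginx controller 在 192.168.1.65 的 80 端口对外提供服务,具体端口以 deploy.yaml 的实际配置为准):
[root@k8s-master-1.com ~]# curl -I -H "Host: tomcat.lucky.com" http://192.168.1.65/
# 返回 HTTP/1.1 200 说明 Ingress 规则已经把请求转发到了后端 tomcat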
[root@k8s-master-1.com ~]# kubectl delete -f ingress-myapp.yaml
8.3 实战:测试 Ingress HTTPS 代理 tomcat
1、构建 TLS 站点
(1) 准备证书,在 k8s-master-1.com 节点操作
[root@k8s-master-1.com ~]# openssl genrsa -out tls.key 2048
[root@k8s-master-1.com ~]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=tomcat.lucky.com
(2) 生成 secret,在k8s-master-1.com 节点操作
[root@k8s-master-1.com ~]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key
(3) 查看 secret
[root@k8s-master-1.com ~]# kubectl get secret
显示如下:
tomcat-ingress-secret kubernetes.io/tls 2 56s
(4) 查看 tomcat-ingress-secret 详细信息
[root@k8s-master-1.com ~]# kubectl describe secret tomcat-ingress-secret
显示如下:
Name:         tomcat-ingress-secret
Namespace:    default
Labels:       <none>
Type:         kubernetes.io/tls
Data
====
tls.crt:  1294 bytes
tls.key:  1679 bytes
2、创建 Ingress
Ingress 规则可以参考官方:
https://kubernetes.io/zh/docs/concepts/services-networking/ingress/
[root@k8s-master-1.com ingress-tomcat]# kubectl explain ingress.spec.rules.http.paths.backend.service
[root@k8s-master-1.com ~]# cat ingress-tomcat-tls.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - tomcat.lucky.com
    secretName: tomcat-ingress-secret
  rules:
  - host: tomcat.lucky.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat
            port:
              number: 8080
更新 yaml 文件
[root@k8s-master-1.com ~]# kubectl apply -f ingress-tomcat-tls.yaml
浏览器访问 https://tomcat.lucky.com
[root@k8s-master-1.com ~]# kubectl delete -f ingress-demo.yaml
[root@k8s-master-1.com ~]# kubectl delete -f ingress-tomcat-tls.yaml
8.4 通过 Ingress 配置 wordpress 博客
创建 ingress 规则:
点击 Create
添加注解:
kubernetes.io/ingress.class: nginx
点击 Create
添加本地 hosts 解析。
C:\Windows\System32\drivers\etc
添加行:
192.168.1.64 xuegod.wordpress.com
浏览器访问:http://xuegod.wordpress.com/
上述地址可访问到博客
9 在 k8s 中部署 EFK 日志收集平台
9.1 EFK 日志处理流程
filebeat 从各个节点的 Docker 容器中提取日志信息
filebeat 将日志转发到 ElasticSearch 进行索引和保存
Kibana 负责分析和可视化日志信息
EFK+logstash+kafka 日志收集架构:
9.2 在 k8s 中安装 EFK 组件
我们先来配置启动一个可扩展的 Elasticsearch 集群,然后在 Kubernetes 集群中创建一个 Kibana 应用,最后通过
DaemonSet 来运行 Fluentd,以便它在每个 Kubernetes 工作节点上都可以运行一个 Pod
把 elasticsearch-7-12-1.tar.gz 和 fluentd-v1-9-1.tar.gz 和 kibana-7-12-1.tar.gz 上传到 k8s-master-1.com、k8s-master-2.com 和 k8s-node-1.com 机器上,手动解压
ctr -n=k8s.io images import elasticsearch-7-12-1.tar.gz
ctr -n=k8s.io images import kibana-7-12-1.tar.gz
ctr -n=k8s.io images import fluentd-v1-9-1.tar.gz
9.2.1 安装 elasticsearch 组件
在安装 Elasticsearch 集群之前,我们先创建一个名称空间,在这个名称空间下安装日志收集工具 elasticsearch、fluentd、kibana。我们创建一个 kube-logging 名称空间,将 EFK 组件安装到该名称空间中。
下面安装步骤均在 k8s 控制节点操作:
1.创建 kube-logging 名称空间
[root@k8s-master-1.com efk]# cat kube-logging.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
[root@k8s-master-1.com efk]# kubectl apply -f kube-logging.yaml
[root@k8s-master-1 efk]# cat ceph-secret-2.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-1
  namespace: kube-logging
type: "ceph.com/rbd"
data:
  key: QVFESkZrdGhQTmVJTHhBQWE0MlpyNEc2RW1JelpOYlVCNmtoaUE9PQ==
[root@k8s-master-1 efk]# kubectl apply -f ceph-secret-2.yaml
2.查看 kube-logging 名称空间是否创建成功
kubectl get namespaces | grep kube-logging
显示如下,说明创建成功
kube-logging   Active   1m
3.安装 elasticsearch 组件
#创建 headless service
[root@k8s-master-1.com efk]# cat elasticsearch_svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
[root@k8s-master-1.com efk]# kubectl apply -f elasticsearch_svc.yaml
查看 elasticsearch 的 service 是否创建成功
[root@k8s-master-1.com efk]# kubectl get services --namespace=kube-logging
| NAME | TYPE | CLUSTER-IP | EXTERNAL-IP PORT(S) |
|---|---|---|---|
| elasticsearch | ClusterIP | None |
#创建 storageclass
[root@k8s-master-1.com ~]# cat es_class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.1.82:6789
  adminId: admin
  adminSecretName: ceph-secret-1
  pool: k8stest1
  userId: admin
  userSecretName: ceph-secret-1
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
[root@k8s-master-1.com efk]# kubectl apply -f es_class.yaml [root@k8s-master-1.com efk]# cat elasticsearch-statefulset.yaml apiVersion: apps/v1
kind: StatefulSet metadata:
name: es-cluster namespace: kube-logging
spec:
serviceName: elasticsearch replicas: 3
selector:
matchLabels:
app: elasticsearch template:
metadata:
labels:
app: elasticsearch spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1 imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1000m requests:
cpu: 100m
ports:
- containerPort: 9200 name: rest
protocol: TCP
- containerPort: 9300 name: inter-node protocol: TCP
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data env:
- name: cluster.name value: k8s-logs
- name: node.name valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m" initContainers:
- name: fix-permissions image: busybox
imagePullPolicy: IfNotPresent
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"] securityContext:
privileged: true volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map image: busybox imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"] securityContext:
privileged: true
- name: increase-fd-ulimit image: busybox imagePullPolicy: IfNotPresent
command: ["sh", "-c", "ulimit -n 65536"] securityContext:
privileged: true volumeClaimTemplates:
- metadata:
name: data labels:
app: elasticsearch spec:
accessModes: [ "ReadWriteOnce" ] storageClassName: do-block-storage resources:
requests:
storage: 10Gi
[root@k8s-master-1.com efk]# kubectl apply -f elasticsearch-statefulset.yaml [root@k8s-master-1.com efk]# kubectl get pods -n kube-logging
NAME READY STATUS RESTARTS AGE
| es-cluster-0 | 1/1 | Running | 6 | 11h |
|---|---|---|---|---|
| es-cluster-1 | 1/1 | Running | 2 | 11h |
| es-cluster-2 | 1/1 | Running | 2 | 11h |
[root@k8s-master-1.com efk]# kubectl get svc -n kube-logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
elasticsearch ClusterIP None
pod 部署完成之后,可以通过 REST API 检查 elasticsearch 集群是否部署成功,使用下面的命令将本地端口 9200 转发到 Elasticsearch 节点(如 es-cluster-0)对应的端口:
[root@k8s-master-1.com efk]# kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging
然后,在另外的终端窗口中,执行如下请求,新开一个 k8s-master-1.com 终端:
curl http://localhost:9200/_cluster/state?pretty
第一次请求结果可能为空,之后有数据,输出如下:
{
"cluster_name" : "k8s-logs", "compressed_size_in_bytes" : 348, "cluster_uuid" : "QD06dK7CQgids-GQZooNVw", "version" : 3,
"state_uuid" : "mjNIWXAzQVuxNNOQ7xR-qg", "master_node" : "IdM5B7cUQWqFgIHXBp0JDg", "blocks" : { },
"nodes" : { "u7DoTpMmSCixOoictzHItA" : {
"name" : "es-cluster-1",
"ephemeral_id" : "ZlBflnXKRMC4RvEACHIVdg", "transport_address" : "10.244.8.2:9300", "attributes" : { }
},
"IdM5B7cUQWqFgIHXBp0JDg" : {
"name" : "es-cluster-0",
"ephemeral_id" : "JTk1FDdFQuWbSFAtBxdxAQ",
"transport_address" : "10.244.44.3:9300", "attributes" : { }
},
"R8E7xcSUSbGbgrhAdyAKmQ" : { "name" : "es-cluster-2",
"ephemeral_id" : "9wv6ke71Qqy9vk2LgJTqaA", "transport_address" : "10.244.40.4:9300", "attributes" : { }
}
},
...
看到上面的信息就表明我们名为 k8s-logs 的 Elasticsearch 集群成功创建了 3 个节点:es-cluster-0,es-cluster- 1,和 es-cluster-2,当前主节点是 es-cluster-0。
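在保持上面 kubectl port-forward 会话不中断的情况下,还可以用如下示意命令查看集群健康状态,status 为 green 或 yellow 即说明集群可用:
curl http://localhost:9200/_cluster/health?pretty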
9.2.2 安装 kibana 组件
[root@k8s-master-1.com efk]# cat kibana.yaml apiVersion: v1
kind: Service metadata:
name: kibana
namespace: kube-logging labels:
app: kibana spec:
ports:
- port: 5601 selector:
app: kibana
---
apiVersion: apps/v1 kind: Deployment metadata:
name: kibana
namespace: kube-logging labels:
app: kibana spec:
replicas: 1 selector:
matchLabels: app: kibana
template: metadata:
labels:
app: kibana spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.12.1 imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1000m requests:
cpu: 100m env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200 ports:
- containerPort: 5601
#上面我们定义了两个资源对象,一个 Service 和 Deployment,为了测试方便,我们将 Service 设置为了NodePort 类型,Kibana Pod 中配置都比较简单,唯一需要注意的是我们使用 ELASTICSEARCH_URL 这个环境变量来设置 Elasticsearch 集群的端点和端口,直接使用 Kubernetes DNS 即可,此端点对应服务名称为elasticsearch,由于是一个 headless service,所以该域将解析为 3 个 Elasticsearch Pod 的 IP 地址列表。
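如果想验证上面提到的 headless service 域名解析,可以用一个临时 pod 做 DNS 查询(示意命令,busybox:1.28 镜像仅为示例,需节点能获取到该镜像):
[root@k8s-master-1.com efk]# kubectl run dns-test -it --rm --image=busybox:1.28 --restart=Never -- nslookup elasticsearch.kube-logging.svc.cluster.local
# 正常情况下会解析出 3 个 es-cluster pod 的 IP 地址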
配置完成后,直接使用 kubectl 工具创建:
[root@k8s-master-1.com efk]# kubectl apply -f kibana.yaml
[root@k8s-master-1.com efk]# kubectl get pods -n kube-logging
NAME                      READY   STATUS    RESTARTS   AGE
es-cluster-0              1/1     Running   6          11h
es-cluster-1              1/1     Running   2          11h
es-cluster-2              1/1     Running   2          11h
kibana-84cf7f59c-vvm6q    1/1     Running   2          11h
[root@k8s-master-1.com efk]# kubectl get svc -n kube-logging
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
elasticsearch   ClusterIP   None
修改 service 的 type 类型为NodePort:
[root@k8s-master-1 efk]# kubectl edit svc kibana -n kube-logging
把 type: ClusterIP 变成 type: NodePort
保存退出之后
[root@k8s-master-1.com efk]# kubectl get svc -n kube-logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
elasticsearch ClusterIP None
在浏览器中打开 http://<k8s 集群任意节点 IP>:32462 即可,如果看到如下欢迎界面证明 Kibana 已经成功部署到了 Kubernetes 集群之中。
9.2.3 安装 fluentd 组件
我们使用 daemonset 控制器部署 fluentd 组件,这样可以保证集群中的每个节点都可以运行同样 fluentd 的 pod 副本,这样就可以收集 k8s 集群中每个节点的日志,在 k8s 集群中,容器应用程序的输入输出日志会重定向到 node 节点里的 json 文件中
,fluentd 可以 tail 和过滤以及把日志转换成指定的格式发送到 elasticsearch 集群中。除了容器日志,fluentd 也可以采集 kubelet、kube-proxy、docker 的日志。
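作为参考,可以先在任意 node 节点上确认容器日志确实落在宿主机目录中(示意命令,下面是 kubelet/docker 常见的默认日志路径):
[root@k8s-node-1.com ~]# ls /var/log/containers/ | head -5
[root@k8s-node-1.com ~]# ls /var/log/pods/ | head -5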
[root@k8s-master-1.com efk]# cat fluentd.yaml apiVersion: v1
kind: ServiceAccount metadata:
name: fluentd namespace: kube-logging labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole
metadata: name: fluentd labels:
app: fluentd rules:
- apiGroups:
- ""
resources:
- pods
- namespaces verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1 metadata:
name: fluentd roleRef:
kind: ClusterRole name: fluentd
apiGroup: rbac.authorization.k8s.io subjects:
- kind: ServiceAccount name: fluentd namespace: kube-logging
---
apiVersion: apps/v1 kind: DaemonSet metadata:
name: fluentd namespace: kube-logging labels:
app: fluentd spec:
selector:
matchLabels:
app: fluentd template:
metadata:
labels:
app: fluentd spec:
serviceAccount: fluentd serviceAccountName: fluentd tolerations:
- key: node-role.kubernetes.io/master effect: NoSchedule
containers:
- name: fluentd
image: fluentd:v1.9.1-debian-1.0 imagePullPolicy: IfNotPresent env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch.kube-logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable resources:
limits:
memory: 512Mi requests:
cpu: 100m memory: 200Mi
volumeMounts:
- name: varlog mountPath: /var/log
- name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true
terminationGracePeriodSeconds: 30 volumes:
- name: varlog hostPath:
path: /var/log
- name: varlibdockercontainers hostPath:
path: /var/lib/docker/containers
[root@k8s-master-1.com efk]# kubectl apply -f fluentd.yaml
[root@k8s-master-1.com efk]# kubectl get pods -n kube-logging
NAME                      READY   STATUS    RESTARTS   AGE
es-cluster-0              1/1     Running   6          11h
es-cluster-1              1/1     Running   2          11h
es-cluster-2              1/1     Running   2          11h
fluentd-m8rgp             1/1     Running   3          11h
fluentd-wbl4z             1/1     Running   0          11h
kibana-84cf7f59c-vvm6q    1/1     Running   2          11h
Fluentd 启动成功后,我们可以前往 Kibana 的 Dashboard 页面中,点击左侧的 Discover,可以看到如下配置页面:
在这里可以配置我们需要的 Elasticsearch 索引,前面 Fluentd 配置文件中我们采集的日志使用的是 logstash 格式,这里只需要在文本框中输入 logstash-*即可匹配到 Elasticsearch 集群中的所有日志数据,然后点击下一步,进入以下页面:
点击 next step,出现如下
选择@timestamp,创建索引
点击左侧的 discover,可看到如下:
https://www.elastic.co/guide/en/kibana/7.12/kuery-query.html
9.3 测试收集 pod 业务日志
[root@k8s-master-1.com efk]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    imagePullPolicy: IfNotPresent
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
[root@k8s-master-1.com efk]# kubectl apply -f pod.yaml
登录到 kibana 的控制面板,在 discover 处的搜索栏中输入 kubernetes.pod_name:counter,这将过滤出名为 counter 的 Pod 的日志数据,如下所示:
kubernetes.namespace_name :default
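也可以直接在控制节点上用 kubectl 查看 counter 这个 pod 的原始日志,与 kibana 中检索到的内容做对照(示意命令):
[root@k8s-master-1.com efk]# kubectl logs counter --tail=5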
10 在 k8s 中部署 Prometheus 监控告警系统
10.1 在 k8s 集群中安装 Prometheus server 服务
10.1.1 创建 sa 账号
#在 k8s 集群的控制节点操作,创建一个 sa 账号
[root@k8s-master-1.com~]# kubectl create ns monitor-sa
[root@k8s-master-1.com ~]# kubectl create serviceaccount monitor -n monitor-sa serviceaccount/monitor created
#把 sa 账号 monitor 通过 clusterrolebing 绑定到clusterrole 上
[root@k8s-master-1.com ~]# kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor-sa --clusterrole=cluster-admin --serviceaccount=monitor-sa:monitor
#注意:有的同学执行上面授权也会报错,那就需要下面的授权命令:
kubectl create clusterrolebinding monitor-clusterrolebinding-1 -n monitor-sa --clusterrole=cluster-admin --user=system:serviceaccount:monitor-sa:monitor
10.1.2 创建数据目录
#在 k8s-node-1.com 工作节点创建存储数据的目录:
[root@k8s-node-1.com ~]# mkdir /data
[root@k8s-node-1.com ~]# chmod 777 /data/
10.1.3 安装 node-exporter 组件
把课件里的 node-exporter.tar.gz 镜像压缩包上传到 k8s 的各个节点,手动解压:
ctr -n=k8s.io images import node-exporter.tar.gz
node-export.yaml 文件在课件,可自行上传到自己 k8s 的控制节点,内容如下:
[root@k8s-master-1.com ~]# cat node-export.yaml apiVersion: apps/v1
kind: DaemonSet metadata:
name: node-exporter namespace: monitor-sa labels:
name: node-exporter spec:
selector:
matchLabels:
name: node-exporter template:
metadata:
labels:
name: node-exporter spec:
hostPID: true hostIPC: true hostNetwork: true containers:
- name: node-exporter
image: prom/node-exporter:v0.16.0 ports:
- containerPort: 9100 resources:
requests: cpu: 0.15
securityContext: privileged: true
args:
- --path.procfs
- /host/proc
- --path.sysfs
- /host/sys
- --collector.filesystem.ignored-mount-points
- '"^/(sys|proc|dev|host|etc)($|/)"' volumeMounts:
- name: dev
mountPath: /host/dev
- name: proc
mountPath: /host/proc
- name: sys
mountPath: /host/sys
- name: rootfs mountPath: /rootfs
tolerations:
- key: "node-role.kubernetes.io/master" operator: "Exists"
effect: "NoSchedule" volumes:
- name: proc hostPath:
path: /proc
- name: dev hostPath:
path: /dev
- name: sys hostPath:
path: /sys
- name: rootfs hostPath:
path: /
#通过 kubectl apply 更新 node-exporter
[root@k8s-master-1.com ~]# kubectl apply -f node-export.yaml
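node-exporter 以 hostNetwork 方式监听每个节点的 9100 端口,部署完成后可以用如下示意命令验证(IP 以实际节点为准,这里以 192.168.1.63 为例):
[root@k8s-master-1.com ~]# kubectl get pods -n monitor-sa -o wide
[root@k8s-master-1.com ~]# curl -s http://192.168.1.63:9100/metrics | head -5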
10.1.4 安装 prometheus 服务
以下步骤均在 k8s 集群的控制节点操作:
创建一个 configmap 存储卷,用来存放 prometheus 配置信息
prometheus-cfg.yaml 文件在课件,可自行上传到自己 k8s 的控制节点,内容如下:
[root@k8s-master-1.com ~]# cat prometheus-cfg.yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-node-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
#通过 kubectl apply 更新 configmap
[root@k8s-master-1.com ~]# kubectl apply -f prometheus-cfg.yaml configmap/prometheus-config created
通过 deployment 部署 prometheus
安装 prometheus server 需要的镜像 prometheus_2.33.5.tar.gz 在课件,上传到 k8s 的工作节点 k8s-node-1.com 上,手动解压:
[root@k8s-node-1.com ~]# mkdir /data/ -p
[root@k8s-node-1.com ~]# chmod 777 /data/
[root@k8s-node-1.com ~]# ctr -n=k8s.io images import prometheus_2.33.5.tar.gz
prometheus-deploy.yaml 文件在课件,上传到自己的 k8s 的控制节点,内容如下:
[root@k8s-master-1.com ~]# cat prometheus-deploy.yaml
---
apiVersion: apps/v1 kind: Deployment metadata:
name: prometheus-server namespace: monitor-sa labels:
app: prometheus spec:
replicas: 1 selector:
matchLabels:
app: prometheus component: server
#matchExpressions:
#- {key: app, operator: In, values: [prometheus]} #- {key: component, operator: In, values: [server]}
template:
metadata:
labels:
app: prometheus component: server
annotations:
prometheus.io/scrape: 'false' spec:
nodeName: k8s-node-1.com serviceAccountName: monitor containers:
- name: prometheus
image: docker.io/prom/prometheus:v2.33.5 imagePullPolicy: IfNotPresent
command:
- prometheus
- --config.file=/etc/prometheus/prometheus.yml
- --storage.tsdb.path=/prometheus #旧数据存储目录
- --storage.tsdb.retention=720h #何时删除旧数据,默认为 15 天。
- --web.enable-lifecycle #开启热加载ports:
- containerPort: 9090
protocol: TCP volumeMounts:
- mountPath: /etc/prometheus/prometheus.yml name: prometheus-config
subPath: prometheus.yml
- mountPath: /prometheus/
name: prometheus-storage-volume volumes:
- name: prometheus-config configMap:
name: prometheus-config items:
- key: prometheus.yml path: prometheus.yml mode: 0644
- name: prometheus-storage-volume hostPath:
path: /data type: Directory
注意:在上面的 prometheus-deploy.yaml 文件有个 nodeName 字段,这个就是用来指定创建的这个 prometheus 的 pod 调度到哪个节点上,我们这里让 nodeName=k8s-node-1.com,也即是让 pod 调度到 k8s-node-1.com 节点上,因为 k8s-node-1.com 节点我们创建了数据目录/data,所以大家记住:你在 k8s 集群的哪个节点创建/data, 就让 pod 调度到哪个节点。
#通过 kubectl apply 更新 prometheus
[root@k8s-master-1.com ~]# kubectl apply -f prometheus-deploy.yaml deployment.apps/prometheus-server created
#查看 prometheus 是否部署成功
[root@k8s-master-1.com ~]# kubectl get pods -n monitor-sa
NAME READY STATUS RESTARTS AGE
| node-exporter-7cjhw | 1/1 | Running | 0 | 6m33s |
|---|---|---|---|---|
| node-exporter-8m2fp | 1/1 | Running | 0 | 6m33s |
| node-exporter-c6sdq | 1/1 | Running | 0 | 6m33s |
| prometheus-server-6fffccc6c9-bhbpz | 1/1 | Running | 0 | 26s |
给 prometheus pod 创建一个 service
prometheus-svc.yaml 文件在课件,可上传到 k8s 的控制节点,内容如下:
cat prometheus-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: prometheus
    component: server
#通过 kubectl apply 更新 service
[root@k8s-master-1.com ~]# kubectl apply -f prometheus-svc.yaml service/prometheus created
#查看 service 在物理机映射的端口
[root@k8s-master-1.com ~]# kubectl get svc -n monitor-sa
| NAME | TYPE | CLUSTER-IP | EXTERNAL-IP PORT(S) | AGE |
|---|---|---|---|---|
| prometheus | NodePort | 10.103.98.225 | 27s |
通过上面可以看到 service 在宿主机上映射的端口是 30009,这样我们访问 k8s 集群的控制节点的 ip:30009,就可以访问到 prometheus 的 web ui 界面了
#访问 prometheus web ui 界面火狐浏览器输入如下地址:
http://192.168.1.63:30009/graph
可看到如下页面:
热加载:
curl -X POST http://10.244.121.4:9090/-/reload
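如果不方便查询 pod IP,也可以通过前面创建的 NodePort service 触发热加载(示意命令,30009 为上文 service 在宿主机映射的端口):
curl -X POST http://192.168.1.63:30009/-/reload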
10.2 安装和配置可视化 UI 界面 Grafana
安装 Grafana 需要的镜像 grafana_8.4.5.tar.gz 在课件里,把镜像上传到 k8s 的各个工作节点,然后在各个节点手动解压:
[root@k8s-node-1.com ~]# ctr -n=k8s.io images import grafana_8.4.5.tar.gz
[root@k8s-node-1.com ~]# mkdir /var/lib/grafana/ -p
[root@k8s-node-1.com ~]# chmod 777 /var/lib/grafana/
grafana.yaml 文件在课件里,可上传到 k8s 的控制节点,内容如下:
[root@k8s-master-1.com ~]# cat grafana.yaml apiVersion: apps/v1
kind: Deployment metadata:
name: monitoring-grafana namespace: kube-system spec:
replicas: 1 selector:
matchLabels:
task: monitoring k8s-app: grafana
template:
metadata:
labels:
task: monitoring k8s-app: grafana
spec:
nodeName: k8s-node-1.com containers:
- name: grafana
image: grafana/grafana:8.4.5 imagePullPolicy: IfNotPresent ports:
- containerPort: 3000 protocol: TCP volumeMounts:
- mountPath: /etc/ssl/certs name: ca-certificates readOnly: true
- mountPath: /var name: grafana-storage
- mountPath: /var/lib/grafana/ name: lib
env:
- name: INFLUXDB_HOST
value: monitoring-influxdb
- name: GF_SERVER_HTTP_PORT
value: "3000"
# The following env variables are required to make Grafana accessible via # the kubernetes api-server proxy. On production clusters, we recommend
# removing these env variables, setup auth for grafana, and expose the grafana
---
# service using a LoadBalancer or a public IP.
- name: GF_AUTH_BASIC_ENABLED
value: "false"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Admin
- name: GF_SERVER_ROOT_URL
# If you're only using the API Server proxy, set this value instead:
# value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy value: /
volumes:
- name: ca-certificates hostPath:
path: /etc/ssl/certs
- name: grafana-storage emptyDir: {}
- name: lib hostPath:
path: /var/lib/grafana/ type: DirectoryOrCreate
apiVersion: v1 kind: Service metadata:
labels:
# For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
# If you are NOT using this as an addon, you should comment out this line. kubernetes.io/cluster-service: 'true'
kubernetes.io/name: monitoring-grafana name: monitoring-grafana
namespace: kube-system spec:
# In a production setup, we recommend accessing Grafana through an external Loadbalancer # or through a public IP.
# type: LoadBalancer
# You could also use NodePort to expose the service at a randomly-generated port # type: NodePort
ports:
- port: 80
targetPort: 3000 selector:
k8s-app: grafana
type: NodePort
#更新 yaml 文件
[root@k8s-master-1.com ~]# kubectl apply -f grafana.yaml deployment.apps/monitoring-grafana created service/monitoring-grafana created
#验证是否安装成功
[root@k8s-master-1.com ~]# kubectl get pods -n kube-system| grep monitor monitoring-grafana-675798bf47-4rp2b 1/1 Running 0
#查看 grafana 前端的 service
[root@k8s-master-1.com ~]# kubectl get svc -n kube-system | grep grafana monitoring-grafana NodePort 10.100.56.76
#登陆 grafana,在浏览器访问192.168.1.63:30989
可看到如下界面:
#配置 grafana 界面
开始配置 grafana 的 web 界面: 选择 Add your first data source
出现如下
选择 Prometheus,出现如下:
Name: Prometheus HTTP 处的 URL 如下:
http://prometheus.monitor-sa.svc:9090
配置好的整体页面如下:
点击右下角 Save & Test,出现如下 Data source is working,说明 prometheus 数据源成功的被 grafana 接入了:
导入监控模板,可在如下链接搜索
https://grafana.com/dashboards?dataSource=prometheus&search=kubernetes
可直接导入 node_exporter.json 监控模板,这个可以把 node 节点指标显示出来
node_exporter.json 在课件里,也可直接导入 docker_rev1.json,这个可以把容器资源指标显示出来, node_exporter.json 和 docker_rev1.json 都在课件里
怎么导入监控模板,按如下步骤
上面 Save & Test 测试没问题之后,就可以返回 Grafana 主页面
点击左侧+号下面的 Import
出现如下界面:
选择 Upload json file,出现如下
选择一个本地的 json 文件,我们选择的是上面让大家下载的 node_exporter.json 这个文件,选择之后出现如下:
注:箭头标注的地方 Name 后面的名字是 node_exporter.json 中定义的;
数据源处需要选择前面添加的 Prometheus,然后再点击 Import,就可以出现如下界面:
导入 docker_rev1.json 监控模板,步骤和上面导入 node_exporter.json 步骤一样,导入之后显示如下:
10.3 kube-state-metrics 组件解读
10.3.1 什么是 kube-state-metrics?
kube-state-metrics 通过监听 API Server 生成有关资源对象的状态指标,比如 Deployment、Node、Pod,需要注意的是 kube-state-metrics 只是简单的提供一个 metrics 数据,并不会存储这些指标数据,所以我们可以使用
Prometheus 来抓取这些数据然后存储,主要关注的是业务相关的一些元数据,比如 Deployment、Pod、副本状态等;调度了多少个 replicas?现在可用的有几个?多少个 Pod 是 running/stopped/terminated 状态?Pod 重启了多少次?我有多少 job 在运行中。
10.3.2 安装和配置 kube-state-metrics
创建 sa,并对 sa 授权
在 k8s 的控制节点生成一个 kube-state-metrics-rbac.yaml 文件,kube-state-metrics-rbac.yaml 文件在课件,大家自行下载到 k8s 的控制节点即可,内容如下:
[root@k8s-master-1.com ~]# cat kube-state-metrics-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount metadata:
name: kube-state-metrics namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole
metadata:
name: kube-state-metrics rules:
- apiGroups: [""]
resources: ["nodes", "pods", "services", "resourcequotas", "replicationcontrollers", "limitranges", "persistentvolumeclaims", "persistentvolumes", "namespaces", "endpoints"]
verbs: ["list", "watch"]
- apiGroups: ["extensions"]
resources: ["daemonsets", "deployments", "replicasets"] verbs: ["list", "watch"]
- apiGroups: ["apps"] resources: ["statefulsets"] verbs: ["list", "watch"]
- apiGroups: ["batch"] resources: ["cronjobs", "jobs"] verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
resources: ["horizontalpodautoscalers"] verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding
metadata:
name: kube-state-metrics roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-state-metrics subjects:
- kind: ServiceAccount name: kube-state-metrics namespace: kube-system
通过 kubectl apply 更新 yaml 文件
[root@k8s-master-1.com ~]# kubectl apply -f kube-state-metrics-rbac.yaml serviceaccount/kube-state-metrics created clusterrole.rbac.authorization.k8s.io/kube-state-metrics created clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
安装 kube-state-metrics 组件
安装 kube-state-metrics 组件需要的镜像在课件,可上传到 k8s 各个工作节点,手动解压:
[root@k8s-node-1.com ~]# ctr -n=k8s.io images import kube-state-metrics_1_9_0.tar.gz
在 k8s 的 k8s-master-1.com 节点生成一个 kube-state-metrics-deploy.yaml 文件,kube-state-metrics- deploy.yaml 在课件,可自行下载,内容如下:
[root@k8s-master-1.com ~]# cat kube-state-metrics-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: quay.io/coreos/kube-state-metrics:v1.9.0
        ports:
        - containerPort: 8080
通过 kubectl apply 更新 yaml 文件
[root@k8s-master-1.com ~]# kubectl apply -f kube-state-metrics-deploy.yaml deployment.apps/kube-state-metrics created
查看 kube-state-metrics 是否部署成功
[root@k8s-master-1.com ~]# kubectl get pods -n kube-system -l app=kube-state-metrics NAME READY STATUS RESTARTS AGE
kube-state-metrics-58d4957bc5-9thsw 1/1 Running 0 30s
创建 service
在 k8s 的控制节点生成一个 kube-state-metrics-svc.yaml 文件,kube-state-metrics-svc.yaml 文件在课件,可上传到 k8s 的控制节点,内容如下:
[root@k8s-master-1.com ~]# cat kube-state-metrics-svc.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app: kube-state-metrics
spec:
  ports:
  - name: kube-state-metrics
    port: 8080
    protocol: TCP
  selector:
    app: kube-state-metrics
通过 kubectl apply 更新 yaml
[root@k8s-master-1.com ~]# kubectl apply -f kube-state-metrics-svc.yaml service/kube-state-metrics created
查看 service 是否创建成功
[root@k8s-master-1.com ~]# kubectl get svc -n kube-system | grep kube-state-metrics
kube-state-metrics   ClusterIP   10.105.160.224
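service 创建好之后,可以在任意集群节点上访问 kube-state-metrics 暴露的指标做验证(示意命令,ClusterIP 以实际环境中查询到的为准):
[root@k8s-master-1.com ~]# curl -s http://10.105.160.224:8080/metrics | head -10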
在 grafana web 界面导入 Kubernetes Cluster (Prometheus)-1577674936972.json 和 Kubernetes cluster monitoring (via Prometheus) (k8s 1.16)-1577691996738.json,Kubernetes Cluster (Prometheus)- 1577674936972.json 和 Kubernetes cluster monitoring (via Prometheus) (k8s 1.16)-1577691996738.json 文件在课件
导入 Kubernetes Cluster (Prometheus)-1577674936972.json 之后出现如下页面
在 grafana web 界面导入 Kubernetes cluster monitoring (via Prometheus) (k8s 1.16)- 1577691996738.json,出现如下页面




















































































