Resource Plan

Purpose | Server spec | OS | Qty | Notes
K8s control-plane nodes | General compute, 4 vCPUs / 8 GB, high-IO 60 GB | CentOS 7.7 | 3 | Kubernetes control plane, 3 masters
Application servers | General compute, 16 vCPUs / 32 GB, high-IO 40 GB + 200 GB | CentOS 7.7 | 6 | K8s worker nodes (2 pods each)
Database servers | General compute, 16 vCPUs / 64 GB, high-IO 500 GB | CentOS 7.7 | 2 | 2-node cluster (MySQL Replication)
Storage server | General compute, 4 vCPUs / 8 GB, high-IO 2 TB | CentOS 7.7 | 1 |
Cache servers | General compute, 8 vCPUs / 16 GB, high-IO 300 GB | CentOS 7.7 | 6 | 3-node cluster (3 masters, 3 slaves; 1 master and 1 slave per node)
Search middleware servers | General compute, 4 vCPUs / 16 GB, high-IO 500 GB | CentOS 7.7 | 3 | 3-node cluster (Elasticsearch/Nacos)
Log/messaging middleware servers | General compute, 4 vCPUs / 16 GB, high-IO 500 GB | CentOS 7.7 | 3 | 3-node cluster (Kafka/Zookeeper)
Business messaging middleware servers | General compute, 4 vCPUs / 16 GB, high-IO 500 GB | CentOS 7.7 | 4 | 3-node cluster (NameServer/RocketMQ [master + slave])

Virtual Machine List

Hostname | IP | Data path | Spec | OS | Credentials | Notes
k8s-master1 | 192.168.66.21 | / | 4C/8G/100G | CentOS 7.9 | root/**** |
k8s-master2 | 192.168.66.22 | / | 4C/8G/100G | CentOS 7.9 | root/**** |
k8s-master3 | 192.168.66.23 | / | 4C/8G/100G | CentOS 7.9 | root/**** |
k8s-worker1 | 192.168.66.24 | / | 16C/32G/500G | CentOS 7.9 | root/**** |
k8s-worker2 | 192.168.66.25 | / | 16C/32G/500G | CentOS 7.9 | root/**** |
k8s-worker3 | 192.168.66.26 | / | 16C/32G/500G | CentOS 7.9 | root/**** |
k8s-worker4 | 192.168.66.27 | / | 16C/32G/500G | CentOS 7.9 | root/**** |
k8s-worker5 | 192.168.66.28 | / | 16C/32G/500G | CentOS 7.9 | root/**** |
k8s-mysql1 | 192.168.66.30 | / | 16C/32G/500G | CentOS 7.9 | root/**** |
k8s-mysql2 | 192.168.66.31 | / | 16C/32G/500G | CentOS 7.9 | root/**** |
k8s-redis1 | 192.168.66.32 | / | 4C/4G/100G | CentOS 7.9 | root/**** |
k8s-redis2 | 192.168.66.33 | / | 4C/4G/100G | CentOS 7.9 | root/**** |
k8s-redis3 | 192.168.66.34 | / | 4C/4G/100G | CentOS 7.9 | root/**** |
k8s-redis4 | 192.168.66.35 | / | 4C/4G/100G | CentOS 7.9 | root/**** |
k8s-redis5 | 192.168.66.36 | / | 4C/4G/100G | CentOS 7.9 | root/**** |
k8s-redis6 | 192.168.66.37 | / | 4C/4G/100G | CentOS 7.9 | root/**** |
k8s-es1 | 192.168.66.38 | / | 4C/16G/200G | CentOS 7.9 | root/**** | Elasticsearch/Nacos
k8s-es2 | 192.168.66.39 | / | 4C/16G/200G | CentOS 7.9 | root/**** | Elasticsearch/Nacos
k8s-es3 | 192.168.66.40 | / | 4C/16G/200G | CentOS 7.9 | root/**** | Elasticsearch/Nacos
k8s-kafka1 | 192.168.66.41 | / | 4C/16G/200G | CentOS 7.9 | root/**** | Kafka/Zookeeper
k8s-kafka2 | 192.168.66.42 | / | 4C/16G/200G | CentOS 7.9 | root/**** | Kafka/Zookeeper
k8s-kafka3 | 192.168.66.43 | / | 4C/16G/200G | CentOS 7.9 | root/**** | Kafka/Zookeeper
k8s-rocketmq1 | 192.168.66.44 | / | 4C/24G/200G | CentOS 7.9 | root/**** | NameServer/RocketMQ master
k8s-rocketmq2 | 192.168.66.45 | / | 4C/24G/200G | CentOS 7.9 | root/**** | NameServer/RocketMQ master
k8s-rocketmq3 | 192.168.66.46 | / | 4C/24G/200G | CentOS 7.9 | root/**** | NameServer/RocketMQ slave
k8s-minio1 | 192.168.66.47 | / | 4C/8G/100G+2048G | CentOS 7.9 | root/**** | MinIO

Component Versions

MySQL community-server-8.0.30
Redis 6.2.6
Zookeeper 3.7.1
Kafka 2.12-2.8.1
RocketMQ 4.9.2
ElasticSearch 7.17.2
Kibana 7.17.2
Logstash 7.17.2
Nacos 2.0.3
MinIO RELEASE.2021-06-14T01-29-23Z
Kubernetes 1.21.1

VM Template

Provision a fresh VM with 4 cores, 8 GB RAM and a 100 GB disk, install 64-bit CentOS 7.9, partition /home at 10 GB and / at 82 GB, and leave everything else at the defaults.

Change the firewall configuration

# Check the current firewalld configuration
grep 'AllowZoneDrifting=yes' /etc/firewalld/firewalld.conf

# If the command above prints AllowZoneDrifting=yes, change it to no
sed -i 's/AllowZoneDrifting=yes/AllowZoneDrifting=no/' /etc/firewalld/firewalld.conf

Permanently disable swap

sed -i 's/^[^#].*swap.*/#&/' /etc/fstab
swapoff -a

Permanently disable SELinux

sed -i "s/^SELINUX=.*$/SELINUX=disabled/" /etc/selinux/config
setenforce 0
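
You can confirm the change took effect; getenforce should print Permissive immediately after setenforce 0 and Disabled after the next reboot:

# Check the runtime and the configured SELinux state
getenforce
grep '^SELINUX=' /etc/selinux/config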

Resource limits

cat >> /etc/security/limits.conf << EOF
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
EOF
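
The limits are applied at login via PAM, so open a new session and confirm:

# Both should print 65536 in a fresh login session
ulimit -n
ulimit -u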

Deploy Kubernetes

Adjust partitions

Run as needed: only nodes that were cloned from the 100 GB template and had their disk enlarged need this adjustment.

fdisk /dev/vda

    n
    p
    default (press Enter)
    default (press Enter)
    default (press Enter)
    t
    default (press Enter)
    8e
    p
    w
# Re-read the partition table so the kernel picks up the new partition
partprobe /dev/vda
# Create a physical volume on the new partition
pvcreate /dev/vda3
# The volume group is "centos" by default; confirm with vgdisplay
vgextend centos /dev/vda3

# Extend the logical volume; the default is /dev/mapper/centos-root, confirm with df -h
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root

# Verify the result
df -hT

Machine initialization (all machines)

Set the hostname of every node according to the allocation above (k8s-master1~3, k8s-worker1~5). For example:

hostnamectl set-hostname k8s-master1

Time synchronization

The systems were installed in English; change the default time zone to Asia/Shanghai (GMT+8).

# Check the current time zone
timedatectl status
# Change the time zone
timedatectl set-timezone Asia/Shanghai

# Install the sync tool and synchronize once
yum -y install ntp
ntpdate ntp1.aliyun.com

Upgrade the kernel

[Run on every K8s node]
The stock 3.10.x kernel shipped with CentOS 7.x has known bugs that make Docker and Kubernetes unstable, so upgrade it as follows:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
grep initrd16 /boot/grub2/grub.cfg
grub2-set-default 0

reboot
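
After the reboot you can confirm the node is running the new kernel; the exact version depends on what kernel-lt the elrepo repository ships at install time (a 5.4.x release at the time of writing):

# Should no longer print 3.10.x
uname -r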

Open the firewall

K8s node IPs: 192.168.66.21~192.168.66.28

The podSubnet and serviceSubnet values come from the kubeadm-config ConfigMap and can only be looked up after Kubernetes has been installed:

# kubectl get cm kubeadm-config -n kube-system -oyaml|grep Subnet
      podSubnet: 100.64.0.0/10
      serviceSubnet: 10.96.0.0/12

Putting it all together:

# K8s nodes
firewall-cmd --add-source=192.168.66.21 --zone=trusted --permanent
firewall-cmd --add-source=192.168.66.22 --zone=trusted --permanent
firewall-cmd --add-source=192.168.66.23 --zone=trusted --permanent
firewall-cmd --add-source=192.168.66.24 --zone=trusted --permanent
firewall-cmd --add-source=192.168.66.25 --zone=trusted --permanent
firewall-cmd --add-source=192.168.66.26 --zone=trusted --permanent
firewall-cmd --add-source=192.168.66.27 --zone=trusted --permanent
firewall-cmd --add-source=192.168.66.28 --zone=trusted --permanent

# podSubnet and serviceSubnet
firewall-cmd --add-source=100.64.0.0/10 --zone=trusted --permanent
firewall-cmd --add-source=10.96.0.0/12 --zone=trusted --permanent

firewall-cmd --reload

# Check the settings
firewall-cmd --list-all --zone=trusted

Download the installation packages

mkdir -p /lingyun
cd /lingyun
curl -o sealos https://api.flyrise.cn/pai/k8s/sealos
chmod +x sealos && cp sealos /usr/bin
curl -o kube1.21.1.tar.gz https://api.flyrise.cn/pai/k8s/kube1.21.1.tar.gz

Run the installation

The cluster is deployed with sealos as a multi-master (3 masters), multi-node cluster. sealos needs root access, so first set the same root password on every node.
sealos installs the Kubernetes cluster, taking care of the systemd configuration, disabling swap and the firewall, and so on, and then loads the images the cluster needs. Since Kubernetes announced in its changelog that Docker would be deprecated as a container runtime after 1.20, containerd has become the leading candidate for the container runtime.

cd /lingyun
sealos init \
    --user root \
    --passwd '****' \
    --master 192.168.66.21:22 \
    --master 192.168.66.22:22 \
    --master 192.168.66.23:22 \
    --node 192.168.66.24:22 \
    --node 192.168.66.25:22 \
    --node 192.168.66.26:22 \
    --node 192.168.66.27:22 \
    --node 192.168.66.28:22 \
    --pkg-url /lingyun/kube1.21.1.tar.gz \
    --version v1.21.1

# Output on a successful installation
07:45:46 [INFO] [print.go:39] sealos install success.
07:45:46 [INFO] [init.go:96]
      ___           ___           ___           ___       ___           ___
     /\  \         /\  \         /\  \         /\__\     /\  \         /\  \
    /::\  \       /::\  \       /::\  \       /:/  /    /::\  \       /::\  \
   /:/\ \  \     /:/\:\  \     /:/\:\  \     /:/  /    /:/\:\  \     /:/\ \  \
  _\:\~\ \  \   /::\~\:\  \   /::\~\:\  \   /:/  /    /:/  \:\  \   _\:\~\ \  \
 /\ \:\ \ \__\ /:/\:\ \:\__\ /:/\:\ \:\__\ /:/__/    /:/__/ \:\__\ /\ \:\ \ \__\
 \:\ \:\ \/__/ \:\~\:\ \/__/ \/__\:\/:/  / \:\  \    \:\  \ /:/  / \:\ \:\ \/__/
  \:\ \:\__\    \:\ \:\__\        \::/  /   \:\  \    \:\  /:/  /   \:\ \:\__\
   \:\/:/  /     \:\ \/__/        /:/  /     \:\  \    \:\/:/  /     \:\/:/  /
    \::/  /       \:\__\         /:/  /       \:\__\    \::/  /       \::/  /
     \/__/         \/__/         \/__/         \/__/     \/__/         \/__/

                  官方文档:sealyun.com
                  项目地址:github.com/fanux/sealos
                  QQ群   :98488045
                  常见问题:sealyun.com/faq
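
As a quick sanity check after sealos finishes, you can verify from a master that all eight nodes registered and the system pods are healthy:

# All nodes should be Ready and report v1.21.1
kubectl get nodes -o wide
# calico, kube-proxy, coredns, etc. should all be Running
kubectl get pods -n kube-system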

kubectl command auto-completion

# Run on the master nodes as needed
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
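
The source commands above only affect the current shell; to make completion persistent you can append them to the shell profile, for example:

# Persist kubectl completion for future sessions
echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc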

Install ingress-nginx

ingress-nginx-controller.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # changed from Local to Cluster
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      port: 80
      protocol: TCP
      # pin the NodePort
      nodePort: 30000
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      # pin the NodePort
      nodePort: 30001
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
# Changed from Deployment to a DaemonSet so that every node runs an ingress-nginx-controller pod replica
kind: DaemonSet
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          #image: k8s.gcr.io/ingress-nginx/controller:v1.0.0@sha256:0851b34f69f69352bf168e6ccf30e1e20714a264ab1ecd1933e4d8c0fc3215c6
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.0.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          #image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
          image: registry.aliyuncs.com/google_containers/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          #image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068
          image: registry.aliyuncs.com/google_containers/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

Deploy the ingress controller

kubectl apply -f ingress-nginx-controller.yaml
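
Before the curl test below, you can wait for the controller pods (one per worker node, since the manifest uses a DaemonSet) to become Ready and confirm the fixed NodePorts are exposed:

# One ingress-nginx-controller pod per node, all Running and Ready
kubectl get pods -n ingress-nginx -o wide
# The ingress-nginx-controller Service should map 80:30000 and 443:30001
kubectl get svc -n ingress-nginx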

Verify from a terminal that the ingress is reachable

curl -L 192.168.66.21:30000
curl -L 192.168.66.24:30000

If the output looks like the following, the controller is working:

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

Install Kuboard

Installation

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
# Alternatively, use the command below; the only difference is that it distributes the Kuboard images from the Huawei Cloud registry instead of Docker Hub
# kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml

Adjust the image pull policy
Note: wait until the pods have been scheduled onto their nodes before patching; check with kubectl get po -n kuboard

# These two can also be set directly in the Kuboard YAML
kubectl patch deployment kuboard-v3  -n kuboard --patch '{"spec": {"template": {"spec": {"containers": [{"name": "kuboard","imagePullPolicy":"IfNotPresent"}]}}}}'
kubectl patch daemonset kuboard-etcd -n kuboard --patch '{"spec": {"template": {"spec": {"containers": [{"name": "etcd","imagePullPolicy":"IfNotPresent"}]}}}}'

# Run these two only after the corresponding pods have appeared
kubectl patch deployment kuboard-agent    -n kuboard --patch '{"spec": {"template": {"spec": {"containers": [{"name": "kuboard-agent","imagePullPolicy":"IfNotPresent"}]}}}}'
kubectl patch deployment kuboard-agent-2  -n kuboard --patch '{"spec": {"template": {"spec": {"containers": [{"name": "kuboard-agent","imagePullPolicy":"IfNotPresent"}]}}}}'

Access Kuboard

  • Open http://<node IP>:30080 in a browser
  • Log in with the initial credentials (admin/Kuboard123), then change the password to: ****
  • Because v3.1.2.1 enables swagger, an extra 404 port has to be added to the Service and a dedicated route for /swagger/index.html configured when Kuboard is exposed externally (this project is accessed over VPN, so this is not configured for now)

Browser compatibility

  • Use a browser such as Chrome / FireFox / Safari / Edge

  • IE and IE-based browsers are not supported

    Uninstall

  • Uninstall Kuboard v3

    kubectl delete -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
  • Clean up leftover data

    Run on the master nodes and on every node labeled k8s.kuboard.cn/role=etcd

    rm -rf /usr/share/kuboard

Install MySQL

Environment

Role | IP | Port
MySQL1 (master) | 192.168.66.30 | 3306
MySQL2 (slave) | 192.168.66.31 | 3306
  • Adjust partitions

Run as needed: only nodes cloned from the 100 GB template with an enlarged disk need this adjustment.

fdisk /dev/vda

    n
    p
    default (press Enter)
    default (press Enter)
    default (press Enter)
    t
    default (press Enter)
    8e
    p
    w
# Re-read the partition table so the kernel picks up the new partition
partprobe /dev/vda
# Create a physical volume on the new partition
pvcreate /dev/vda3
# The volume group is "centos" by default; confirm with vgdisplay
vgextend centos /dev/vda3

# Extend the logical volume; the default is /dev/mapper/centos-root, confirm with df -h
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root

# Verify the result
df -hT
  • Rename the machines

Rename both machines according to the list above. For example:

hostnamectl set-hostname k8s-mysql1
hostnamectl set-hostname k8s-mysql2
  • Open the firewall port

# Open the port
firewall-cmd --permanent --zone=public  --add-port=3306/tcp

# Reload the configuration
firewall-cmd --reload
  • Check whether MySQL or MariaDB is already installed
rpm -qa | grep -i mysql
rpm -qa | grep -i mariadb

If found, remove it:

# yum remove <package> resolves dependencies and removes them together
yum remove -y mariadb-libs-5.5.68-1.el7.x86_64
  • Synchronize time

The systems were installed in English; change the default time zone to Asia/Shanghai (GMT+8).

# Check the current time zone
timedatectl status
# Change the time zone
timedatectl set-timezone Asia/Shanghai

# Install the sync tool and synchronize once
yum -y install ntp
ntpdate ntp1.aliyun.com

Installation steps

(1) Download the MySQL 8.0 repo package

cd /tmp
curl -sSL -o mysql80-community-release-el7-7.noarch.rpm https://repo.mysql.com//mysql80-community-release-el7-7.noarch.rpm

(2) Install the MySQL repo

rpm -ivh mysql80-community-release-el7-7.noarch.rpm

(3) Install MySQL

List the available MySQL versions

yum list mysql-community-server --showduplicates | sort -r

Install the specified version

yum install -y mysql-community-server-8.0.30

This may fail with a message saying that the GPG key for the "MySQL 8.0 Community Server" repo is installed but does not apply to this package, asking you to check the repo's public key URL.

Add --nogpgcheck to work around the key problem:

yum install -y mysql-community-server-8.0.30 --nogpgcheck

Installation result

Installed:
  mysql-community-server.x86_64 0:8.0.30-1.el7

Dependency Installed:
  mysql-community-client.x86_64 0:8.0.31-1.el7
  mysql-community-client-plugins.x86_64 0:8.0.31-1.el7
  mysql-community-common.x86_64 0:8.0.30-1.el7
  mysql-community-icu-data-files.x86_64 0:8.0.30-1.el7
  mysql-community-libs.x86_64 0:8.0.31-1.el7
  net-tools.x86_64 0:2.0-0.25.20131004git.el7
  perl.x86_64 4:5.16.3-299.el7_9
  perl-Carp.noarch 0:1.26-244.el7
  perl-Encode.x86_64 0:2.51-7.el7
  perl-Exporter.noarch 0:5.68-3.el7
  perl-File-Path.noarch 0:2.09-2.el7
  perl-File-Temp.noarch 0:0.23.01-3.el7
  perl-Filter.x86_64 0:1.49-3.el7
  perl-Getopt-Long.noarch 0:2.40-3.el7
  perl-HTTP-Tiny.noarch 0:0.033-3.el7
  perl-PathTools.x86_64 0:3.40-5.el7
  perl-Pod-Escapes.noarch 1:1.04-299.el7_9
  perl-Pod-Perldoc.noarch 0:3.20-4.el7
  perl-Pod-Simple.noarch 1:3.28-4.el7
  perl-Pod-Usage.noarch 0:1.63-3.el7
  perl-Scalar-List-Utils.x86_64 0:1.27-248.el7
  perl-Socket.x86_64 0:2.010-5.el7
  perl-Storable.x86_64 0:2.45-3.el7
  perl-Text-ParseWords.noarch 0:3.29-4.el7
  perl-Time-HiRes.x86_64 4:1.9725-3.el7
  perl-Time-Local.noarch 0:1.2300-2.el7
  perl-constant.noarch 0:1.27-2.el7
  perl-libs.x86_64 4:5.16.3-299.el7_9
  perl-macros.x86_64 4:5.16.3-299.el7_9
  perl-parent.noarch 1:0.225-244.el7
  perl-podlators.noarch 0:2.5.1-3.el7
  perl-threads.x86_64 0:1.87-4.el7
  perl-threads-shared.x86_64 0:1.43-6.el7

(4) Start MySQL

Create the data directory (pick a volume with enough free space):

mkdir -p /data/mysql
chown -R mysql:mysql /var/lib/mysql
chown -R mysql:mysql /data/mysql

Configuration file /etc/my.cnf

Change this setting

# Data directory
#datadir=/var/lib/mysql
datadir=/data/mysql

Add these settings

# Basic settings
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
default-authentication-plugin=mysql_native_password
explicit_defaults_for_timestamp=true
lower_case_table_names=1
skip-character-set-client-handshake
max-allowed-packet=1073741824

# Tuning (for 16-32 GB of RAM)
# Index (key) buffer size
key_buffer_size = 1024M
# In-memory temporary table size
tmp_table_size = 2048M
# InnoDB buffer pool size
innodb_buffer_pool_size = 4096M
# InnoDB log buffer size
innodb_log_buffer_size = 8M
# per connection: sort buffer size
sort_buffer_size = 4096K
# per connection: read buffer size
read_buffer_size = 4096K
# per connection: random read buffer size
read_rnd_buffer_size = 2048K
# per connection: join buffer size
join_buffer_size = 8192K
# per connection: thread stack size
thread_stack = 512K
# Thread cache size
thread_cache_size = 256
# Table open cache (do not exceed 2048)
table_open_cache = 2048
# Maximum connections
max_connections = 500

Other settings

# Defaults shown below; adjust as needed
sql_mode = 'ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'

Common sql_mode values

① ONLY_FULL_GROUP_BY
For GROUP BY aggregation, a query is rejected if the SELECT list refers to a column that does not appear in the GROUP BY clause.
② NO_AUTO_VALUE_ON_ZERO
Affects inserts into auto-increment columns. By default, inserting 0 or NULL generates the next auto-increment value; enable this if you actually need to store 0 in an auto-increment column.
③ STRICT_TRANS_TABLES
If a value cannot be inserted into a transactional table, abort the current statement; non-transactional tables are not restricted.
④ NO_ZERO_IN_DATE
Do not allow zero day or month parts in dates.
⑤ NO_ZERO_DATE
Do not allow the zero date; inserting it raises an error instead of a warning.
⑥ ERROR_FOR_DIVISION_BY_ZERO
During INSERT or UPDATE, division by zero raises an error instead of a warning. Without this mode, MySQL returns NULL on division by zero.
⑦ NO_AUTO_CREATE_USER
Prevent GRANT from creating users with empty passwords.
⑧ NO_ENGINE_SUBSTITUTION
If the requested storage engine is disabled or not compiled in, raise an error. Without this mode, the default engine is substituted and a warning is issued.
⑨ PIPES_AS_CONCAT
Treat "||" as the string concatenation operator rather than logical OR, matching Oracle and behaving like the CONCAT function.
⑩ ANSI_QUOTES
Double quotes can no longer be used to quote string literals, because they are interpreted as identifier quotes.

Start the service

systemctl start mysqld
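
You can confirm the server came up and is using the new data directory:

# The service should be active (running)
systemctl status mysqld
# mysqld should be listening on 3306
ss -lntp | grep 3306
# The data directory should now be populated
ls /data/mysql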

(5) Look up the temporary initial password

cat /var/log/mysqld.log | grep password

2022-12-26T09:49:09.105846Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: 9Ueq?S_p:kR9

(6) Change the root password

mysql -uroot -p
# log in with the temporary password from above

# Change the root password and allow remote access
alter user root@localhost identified by '****';
RENAME USER 'root'@'localhost' TO 'root'@'%';
FLUSH PRIVILEGES;

# Add a new user
use mysql
CREATE USER 'tsjt'@'%' IDENTIFIED BY '****';
GRANT ALL PRIVILEGES ON *.* TO 'tsjt'@'%';
flush privileges;

# Verify
select user,host from user;

Master/slave replication

Replication based on binary log file position

1. Master configuration

1.1 Enable the binary log and set server-id in /etc/my.cnf
[mysqld]
## server_id must be unique within the network
server_id=30
## Database(s) to exclude from replication
binlog-ignore-db=mysql
## Enable the binary log
log-bin=mysql-bin
## Binary log cache size (per transaction)
binlog_cache_size=1M
## Binary log format (mixed, statement, row)
binlog_format=mixed
## Binary log expiry in days; 0 (the default) means never purge automatically
expire_logs_days=7
## Skip the listed replication errors so that replication on the slave does not stop.
## e.g. 1062 is a duplicate-key error, 1032 means master and slave data are inconsistent
slave_skip_errors=1062
1.2 Restart the service
systemctl restart mysqld
1.3 Log in to MySQL and create an account the slave can replicate with
mysql -uroot -p

use mysql;
create user 'rootslave'@'192.168.66.31' identified with mysql_native_password by '****';
grant replication slave on *.* to 'rootslave'@'192.168.66.31';
flush privileges;
1.4 Check the current binary log file name and position
show master status;

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      157 |              | mysql            |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
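
Before configuring the slave, you can optionally verify from 192.168.66.31 that the replication account is reachable over the network (the password is the one set in step 1.3):

# Run on 192.168.66.31; a successful login confirms the grant and the network path
mysql -h 192.168.66.30 -urootslave -p -e "SHOW GRANTS;"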

2. Slave configuration

2.1 Set server-id in /etc/my.cnf
[mysqld]
## server_id must be unique within the network
server-id=31
## Database(s) to exclude from replication
binlog-ignore-db=mysql
## Enable the binary log as well, in case this slave later serves as the master of another instance
log-bin=mysql-slave1-bin
## Binary log cache size (per transaction)
binlog_cache_size=1M
## Binary log format (mixed, statement, row)
binlog_format=mixed
## Binary log expiry in days; 0 (the default) means never purge automatically
expire_logs_days=7
## Skip the listed replication errors so that replication on the slave does not stop.
## e.g. 1062 is a duplicate-key error, 1032 means master and slave data are inconsistent
slave_skip_errors=1062
## relay_log names the relay log
relay_log=mysql-relay-bin
## log_slave_updates makes the slave write replicated events to its own binary log
log_slave_updates=1
## Make the slave read-only (does not apply to users with the SUPER privilege)
read_only=1
2.2 Restart the service
systemctl restart mysqld
2.3 Point the slave at the master

The values below MUST match what was configured on the master and what SHOW MASTER STATUS returned there (master_log_file and master_log_pos)!

mysql -uroot -p

change master to
  master_host='192.168.66.30',
  master_user='rootslave',
  master_password='****',
  master_log_file='mysql-bin.000001',
  master_log_pos=157;
2.4 Start replication
start slave;
2.5 Check replication status
show slave status\G

*************************** 1. row ***************************
               Slave_IO_State: Connecting to source
                  Master_Host: 10.62.17.120
                  Master_User: rootslave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 851
               Relay_Log_File: worker1-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Replication is working when both of these show Yes:

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

3. Replication test

Create a database and a table on the master and insert some rows; if the same objects show up on the slave, replication is working.

3.1 Create a database on the master, check on the slave
# master
mysql> CREATE DATABASE `test` COLLATE 'utf8mb4_general_ci';

Query OK, 1 row affected (0.10 sec)

# slave
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+
5 rows in set (0.00 sec)
3.2 Create a table on the master, check on the slave
# master
mysql> use test
Database changed
mysql> CREATE TABLE `user` (
     `id` INT NULL,
     `name` varchar(20) NULL,
     `age` INT NULL
     )
     COLLATE='utf8mb4_general_ci'
     ;
Query OK, 0 rows affected (0.60 sec)

# slave
mysql> use test
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| user           |
+----------------+
1 row in set (0.00 sec)
3.3 Insert rows on the master, check on the slave
# master
mysql> insert into user values(1, 'Joe', 18);
Query OK, 1 row affected (0.05 sec)

mysql> insert into user values(2, 'Lucy', 18);
Query OK, 1 row affected (0.06 sec)

# slave
mysql> select * from user;
+------+------+------+
| id   | name | age  |
+------+------+------+
|    1 | Joe  |   18 |
|    2 | Lucy |   18 |
+------+------+------+
2 rows in set (0.00 sec)
3.4 Drop the database on the master, check on the slave
# master
mysql> drop database test;
Query OK, 1 row affected (0.00 sec)


# slave
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

Install the Redis Cluster

Role | IP | Ports
Redis1 (master) | 192.168.66.32 | service 6379, cluster bus 16379
Redis2 (slave) | 192.168.66.33 | service 6379, cluster bus 16379
Redis3 (master) | 192.168.66.34 | service 6379, cluster bus 16379
Redis4 (slave) | 192.168.66.35 | service 6379, cluster bus 16379
Redis5 (master) | 192.168.66.36 | service 6379, cluster bus 16379
Redis6 (slave) | 192.168.66.37 | service 6379, cluster bus 16379

Rename the machines

Rename all six machines according to the list above. For example:

hostnamectl set-hostname k8s-redis1
hostnamectl set-hostname k8s-redis2
hostnamectl set-hostname k8s-redis3
hostnamectl set-hostname k8s-redis4
hostnamectl set-hostname k8s-redis5
hostnamectl set-hostname k8s-redis6

Download, build, and install

mkdir -p /data/redis/6379/{conf,data}
cd /data/redis
# Download the source and install the build dependencies
curl -o redis-6.2.6.tar.gz -C - http://download.redis.io/releases/redis-6.2.6.tar.gz
yum -y install gcc-c++

tar -xzvf redis-6.2.6.tar.gz
cd redis-6.2.6
# Build and install
make && make install PREFIX=/data/redis/6379

# Remove the source files
rm -rf /data/redis/redis-6.2.6*

Prepare the configuration

cat > /data/redis/6379/conf/redis.conf << EOF
# Redis port
port 6379

# Bind to a reachable address (must be the host's own IP)
bind $(ip addr | awk '/^[0-9]+: / {}; /inet.*global/ {print gensub(/(.*)\/(.*)/, "\\1", "g", $2)}')

# Disable protected mode
protected-mode no
# Run Redis as a daemon
daemonize yes
# Enable cluster mode
cluster-enabled yes
# Cluster node state file; delete it if you need to recreate the cluster
cluster-config-file nodes.conf
# Node timeout
cluster-node-timeout 15000
# Enable AOF persistence
appendonly yes
# fsync the AOF once per second
appendfsync everysec
# Whether to fsync while the AOF is being rewritten
no-appendfsync-on-rewrite no
# Rewrite the AOF again once it grows 100% beyond its size after the last rewrite
auto-aof-rewrite-percentage 100
# Minimum AOF size before a rewrite is triggered (default 64mb)
auto-aof-rewrite-min-size 64mb
# Maximum memory; once exceeded, Redis starts evicting keys
maxmemory 4GB
# Eviction policy: LRU among keys that have an expiry set
maxmemory-policy volatile-lru
# Storage paths
dir /data/redis/6379/data
pidfile /data/redis/6379/redis.pid
logfile /data/redis/6379/redis.log

# Passwords
requirepass <密码>
masterauth <密码>

EOF

Start the service on every node

cat > /usr/lib/systemd/system/redis6379.service << END
[Unit]
Description=Redis Cluster
After=network.target

[Service]
Type=forking
ExecStart=/data/redis/6379/bin/redis-server /data/redis/6379/conf/redis.conf
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/data/redis/6379/bin/redis-cli -h $(ip addr | awk '/^[0-9]+: / {}; /inet.*global/ {print gensub(/(.*)\/(.*)/, "\\1", "g", $2)}') -p 6379 -a <密码> shutdown
PrivateTmp=true

[Install]
WantedBy=multi-user.target
END

# Start the service
systemctl daemon-reload
systemctl enable --now redis6379
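
On each node you can confirm the instance is up and answering; replace <密码> with the password configured above (the example derives the node's own IP with hostname -I, since Redis is bound to the host IP rather than 127.0.0.1):

# The service should be active (running)
systemctl status redis6379
# Should reply PONG
/data/redis/6379/bin/redis-cli -h $(hostname -I | awk '{print $1}') -p 6379 -a <密码> ping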

Open firewall ports

firewall-cmd --zone=public --permanent --add-port=6379/tcp
firewall-cmd --zone=public --permanent --add-port=16379/tcp
firewall-cmd --reload

Create the cluster

# Run on any one node. --cluster-replicas 1 means one replica per master: the first three addresses become masters, the last three become slaves, and redis-cli pairs slaves with masters automatically
/data/redis/6379/bin/redis-cli --cluster create \
   192.168.66.32:6379 192.168.66.34:6379 192.168.66.36:6379 \
   192.168.66.37:6379 192.168.66.33:6379 192.168.66.35:6379 \
   --cluster-replicas 1 -a DxO6M9Diu7rQmYEP

# When prompted, type yes
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.66.33:6379 to 192.168.66.32:6379
Adding replica 192.168.66.35:6379 to 192.168.66.34:6379
Adding replica 192.168.66.37:6379 to 192.168.66.36:6379
M: e899793f697793bc10ce0fee16cd70d39894d8be 192.168.66.32:6379
   slots:[0-5460] (5461 slots) master
M: 62b85b99ac67f3331a9a40b60b842bbcc10bc559 192.168.66.34:6379
   slots:[5461-10922] (5462 slots) master
M: 697970880c079dc7f2c765cffa925e79a9af78f0 192.168.66.36:6379
   slots:[10923-16383] (5461 slots) master
S: a86e54f499d35bd8fc4e4ee1dafa6faa87dd0065 192.168.66.37:6379
   replicates 697970880c079dc7f2c765cffa925e79a9af78f0
S: 7f3440a30841aa7385d4f772438537c123e421d4 192.168.66.33:6379
   replicates e899793f697793bc10ce0fee16cd70d39894d8be
S: 325b3b102f822861f68081a85b6694636b842712 192.168.66.35:6379
   replicates 62b85b99ac67f3331a9a40b60b842bbcc10bc559
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join

>>> Performing Cluster Check (using node 192.168.66.32:6379)
M: e899793f697793bc10ce0fee16cd70d39894d8be 192.168.66.32:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: a86e54f499d35bd8fc4e4ee1dafa6faa87dd0065 192.168.66.37:6379
   slots: (0 slots) slave
   replicates 697970880c079dc7f2c765cffa925e79a9af78f0
M: 62b85b99ac67f3331a9a40b60b842bbcc10bc559 192.168.66.34:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 697970880c079dc7f2c765cffa925e79a9af78f0 192.168.66.36:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 325b3b102f822861f68081a85b6694636b842712 192.168.66.35:6379
   slots: (0 slots) slave
   replicates 62b85b99ac67f3331a9a40b60b842bbcc10bc559
S: 7f3440a30841aa7385d4f772438537c123e421d4 192.168.66.33:6379
   slots: (0 slots) slave
   replicates e899793f697793bc10ce0fee16cd70d39894d8be
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Log in and check the cluster

/data/redis/6379/bin/redis-cli -h 192.168.66.32 -p 6379 -c -a <密码> --raw

192.168.66.32:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:72
cluster_stats_messages_pong_sent:79
cluster_stats_messages_sent:151
cluster_stats_messages_ping_received:74
cluster_stats_messages_pong_received:72
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:151

192.168.66.32:6379> cluster nodes
0225d4cee963a8caab52612fa2adc4a266ec64c3 192.168.66.36:6379@16379 slave 47cd62837c9e1cf78deb7dd2d8d8dcb1935e64b3 0 1671185594739 1 connected
47cd62837c9e1cf78deb7dd2d8d8dcb1935e64b3 192.168.66.32:6379@16379 myself,master - 0 1671185594000 1 connected 0-5460
1440fcd7e5da94f6ddfe868297da7fb2fe761eb1 192.168.66.37:6379@16379 slave f19cf8f0d13a7ecca85162a119b4117fba12a9b6 0 1671185595743 2 connected
f19cf8f0d13a7ecca85162a119b4117fba12a9b6 192.168.66.33:6379@16379 master - 0 1671185595000 2 connected 5461-10922
91371c75c9390ca1061cbf2fc17b6e6b9306c767 192.168.66.35:6379@16379 slave 4343873263817d427dcac084554fdb0147edcbff 0 1671185595000 3 connected
4343873263817d427dcac084554fdb0147edcbff 192.168.66.34:6379@16379 master - 0 1671185596746 3 connected 10923-16383
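
As a quick functional test you can write and read a key through any node; with -c the client transparently follows MOVED redirections to whichever master owns the slot:

# Write and read back a test key (replace <密码> with the cluster password)
/data/redis/6379/bin/redis-cli -h 192.168.66.32 -p 6379 -c -a <密码> set demo-key hello
/data/redis/6379/bin/redis-cli -h 192.168.66.34 -p 6379 -c -a <密码> get demo-key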

Install Elasticsearch

Role | IP | Ports
Elasticsearch1 | 192.168.66.38 | HTTP 9200, TCP transport 9300
Elasticsearch2 | 192.168.66.39 | HTTP 9200, TCP transport 9300
Elasticsearch3 | 192.168.66.40 | HTTP 9200, TCP transport 9300

Preparation

  • Adjust partitions

Run as needed: only nodes cloned from the 100 GB template with an enlarged disk need this adjustment.

fdisk /dev/vda

    n
    p
    default (press Enter)
    default (press Enter)
    default (press Enter)
    t
    default (press Enter)
    8e
    p
    w
# Re-read the partition table so the kernel picks up the new partition
partprobe /dev/vda
# Create a physical volume on the new partition
pvcreate /dev/vda3
# The volume group is "centos" by default; confirm with vgdisplay
vgextend centos /dev/vda3

# Extend the logical volume; the default is /dev/mapper/centos-root, confirm with df -h
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root

# Verify the result
df -hT
  • Rename the machines

Rename all three machines according to the list above. For example:

hostnamectl set-hostname k8s-es1
hostnamectl set-hostname k8s-es2
hostnamectl set-hostname k8s-es3
  • Synchronize time

The systems were installed in English; change the default time zone to Asia/Shanghai (GMT+8).

# Check the current time zone
timedatectl status
# Change the time zone
timedatectl set-timezone Asia/Shanghai

# Install the sync tool and synchronize once
yum -y install ntp
ntpdate ntp1.aliyun.com
  • Increase the virtual memory map count & max open files
# Edit the system file
vi /etc/sysctl.conf

# Lines to add
vm.max_map_count=262144
fs.file-max=655360

# Apply the changes
sysctl -p
  • Create a dedicated ES user
# Create the group (-r: system account)
groupadd -r es

# Create the user (-M: no home directory, -r: system account, -g: primary group)
useradd -M -r -g es es
  • Open ports
firewall-cmd --zone=public --permanent --add-port=9200/tcp --add-port=9300/tcp
firewall-cmd --reload
  • Upload the installation packages

First copy the required packages to /tmp:

scp 192.168.66.21:/lingyun/amazon-corretto-8.352.08.1-linux-x64.tar.gz /tmp/
scp 192.168.66.21:/lingyun/elasticsearch-7.17.2-linux-x86_64.tar.gz /tmp/

Install JDK 8

cd /tmp
tar -xzvf amazon-corretto-8.352.08.1-linux-x64.tar.gz
rm -f amazon-corretto-8.352.08.1-linux-x64.tar.gz
mv amazon-corretto-8.352.08.1-linux-x64 /usr/local/amazon-corretto-8.352.08.1

# Configure environment variables
vi /etc/profile

# Append to the end of the file
JAVA_HOME=/usr/local/amazon-corretto-8.352.08.1
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin

export PATH JAVA_HOME CLASSPATH

# Reload the profile
source /etc/profile

# Check 1
which java
# Expected output
/usr/local/amazon-corretto-8.352.08.1/bin/java

# Check 2
java -version
# Expected output
openjdk version "1.8.0_352"
OpenJDK Runtime Environment Corretto-8.352.08.1 (build 1.8.0_352-b08)
OpenJDK 64-Bit Server VM Corretto-8.352.08.1 (build 25.352-b08, mixed mode)

Install Elasticsearch

Three Elasticsearch nodes are deployed in total; unless a step names a specific machine, run it on every Elasticsearch node.

cd /tmp
tar -zvxf elasticsearch-7.17.2-linux-x86_64.tar.gz
rm -f elasticsearch-7.17.2-linux-x86_64.tar.gz
mv elasticsearch-7.17.2 /usr/local/

# Create the ES data directories
mkdir -p /data/es/{data,logs}

# Change directory ownership
chown -R es:es /usr/local/elasticsearch-7.17.2
chown -R es:es /data/es
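
Optional: the bundled JDK sizes the heap automatically, but with 16 GB of RAM you may want to pin it explicitly (conventionally about half of physical memory). A minimal sketch, assuming the stock config layout of the 7.17.2 tarball; the 8 GB figure is an assumption, adjust it to your workload:

# Pin the Elasticsearch heap to 8 GB via a jvm.options.d drop-in
cat > /usr/local/elasticsearch-7.17.2/config/jvm.options.d/heap.options << EOF
-Xms8g
-Xmx8g
EOF
chown es:es /usr/local/elasticsearch-7.17.2/config/jvm.options.d/heap.options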

Node configuration

  • Master node configuration (192.168.66.38)
cat >> /usr/local/elasticsearch-7.17.2/config/elasticsearch.yml << EOF
cluster.name: es
node.name: $(hostname)
path.data: /data/es/data
path.logs: /data/es/logs
network.host: $(ip addr | awk '/^[0-9]+: / {}; /inet.*global/ {print gensub(/(.*)\/(.*)/, "\\1", "g", $2)}')
http.port: 9200
transport.tcp.port: 9300
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.66.38:9300","192.168.66.39:9300","192.168.66.40:9300"]
cluster.initial_master_nodes: $(hostname)
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"

EOF

# Verify; make sure node.name, network.host and cluster.initial_master_nodes picked up the right hostname and IP
cat /usr/local/elasticsearch-7.17.2/config/elasticsearch.yml
  • Remaining node configuration (192.168.66.39, 192.168.66.40)
cat >> /usr/local/elasticsearch-7.17.2/config/elasticsearch.yml << EOF
cluster.name: es
node.name: $(hostname)
path.data: /data/es/data
path.logs: /data/es/logs
network.host: $(ip addr | awk '/^[0-9]+: / {}; /inet.*global/ {print gensub(/(.*)\/(.*)/, "\\1", "g", $2)}')
http.port: 9200
transport.tcp.port: 9300
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.66.38:9300","192.168.66.39:9300","192.168.66.40:9300"]
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"

EOF

# Verify; make sure node.name and network.host picked up the right hostname and IP
cat /usr/local/elasticsearch-7.17.2/config/elasticsearch.yml
  • Configuration reference
Setting | Description
cluster.name | Cluster name
node.name | Node name
path.data | Data directory
path.logs | Log directory
network.host | Node host/IP
http.port | HTTP port
transport.tcp.port | TCP transport port
node.master | Whether the node is eligible to be master
node.data | Whether the node stores data
discovery.zen.ping.unicast.hosts | Initial list of master-eligible nodes probed when a node starts
discovery.zen.minimum_master_nodes | Minimum number of master-eligible nodes
http.cors.enabled | Whether cross-origin requests are allowed
http.cors.allow-origin | Allowed cross-origin sources
cluster.initial_master_nodes | Initial master nodes, required when bootstrapping a new cluster

Configure the service

cat > /usr/lib/systemd/system/elasticsearch.service << EOF
[Unit]
Description=Elasticsearch
After=network.target

[Service]
User=es
LimitNOFILE=100000
LimitNPROC=100000
ExecStart=/usr/local/elasticsearch-7.17.2/bin/elasticsearch

[Install]
WantedBy=multi-user.target

EOF

# Reload systemd units
systemctl daemon-reload
# Enable at boot and start now
systemctl enable --now elasticsearch

# Check the service status
systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-12-18 11:45:39 CST; 1min 19s ago
 Main PID: 37153 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─37153 /usr/local/elasticsearch-7.17.2/jdk/bin/java -Xshare:auto -Des.networka...
           └─37392 /usr/local/elasticsearch-7.17.2/modules/x-pack-ml/platform/linux-x86_64...

Dec 18 11:45:53 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:53,670][INFO ][o.e.x.i.Ind...
Dec 18 11:45:53 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:53,927][INFO ][o.e.c.m.M...0]
Dec 18 11:45:53 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:53,929][INFO ][o.e.c.r.a...s]
Dec 18 11:45:54 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:54,371][INFO ][o.e.c.r.a...).
Dec 18 11:45:54 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:54,670][INFO ][o.e.i.g.D...z]
Dec 18 11:45:54 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:54,894][INFO ][o.e.i.g.D...b]
Dec 18 11:45:58 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:58,634][INFO ][o.e.i.g.D...z]
Dec 18 11:45:59 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:59,338][INFO ][o.e.i.g.D...z]
Dec 18 11:45:59 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:59,438][INFO ][o.e.i.g.D...b]
Dec 18 11:45:59 k8s-es1 elasticsearch[37153]: [2022-12-18T11:45:59,644][INFO ][o.e.i.g.D...b]
Hint: Some lines were ellipsized, use -l to show in full.

Verify the service

curl http://192.168.66.38:9200
curl http://192.168.66.39:9200
curl http://192.168.66.40:9200

# A response like the following means the node is healthy

{
  "name" : "k8s-es1",
  "cluster_name" : "es",
  "cluster_uuid" : "iDKK5NAkSeSJsg1Oupk7XA",
  "version" : {
    "number" : "7.17.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "de7261de50d90919ae53b0eff9413fd7e5307301",
    "build_date" : "2022-03-28T15:12:21.446567561Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
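
You can also confirm that the three nodes actually joined one cluster:

# Expect "status" : "green" and "number_of_nodes" : 3
curl http://192.168.66.38:9200/_cluster/health?pretty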

Install the Zookeeper and Kafka Cluster

Role | IP | Ports
zookeeper1 | 192.168.66.41 | client 2181, peer 2888, leader election 3888
zookeeper2 | 192.168.66.42 | client 2181, peer 2888, leader election 3888
zookeeper3 | 192.168.66.43 | client 2181, peer 2888, leader election 3888
kafka1 | 192.168.66.41 | broker 9092
kafka2 | 192.168.66.42 | broker 9092
kafka3 | 192.168.66.43 | broker 9092

Preparation

  • Adjust partitions

Run as needed: only nodes cloned from the 100 GB template with an enlarged disk need this adjustment.

fdisk /dev/vda

    n
    p
    default (press Enter)
    default (press Enter)
    default (press Enter)
    t
    default (press Enter)
    8e
    p
    w
# Re-read the partition table so the kernel picks up the new partition
partprobe /dev/vda
# Create a physical volume on the new partition
pvcreate /dev/vda3
# The volume group is "centos" by default; confirm with vgdisplay
vgextend centos /dev/vda3

# Extend the logical volume; the default is /dev/mapper/centos-root, confirm with df -h
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root

# Verify the result
df -hT
  • Rename the machines

Rename all three machines according to the list above. For example:

hostnamectl set-hostname k8s-kafka1
hostnamectl set-hostname k8s-kafka2
hostnamectl set-hostname k8s-kafka3
  • Synchronize time

The systems were installed in English; change the default time zone to Asia/Shanghai (GMT+8).

# Check the current time zone
timedatectl status
# Change the time zone
timedatectl set-timezone Asia/Shanghai

# Install the sync tool and synchronize once
yum -y install ntp
ntpdate ntp1.aliyun.com
  • Open ports
firewall-cmd --permanent --zone=public --add-port=2181/tcp --add-port=2888/tcp --add-port=3888/tcp
firewall-cmd --permanent --zone=public --add-port=9092/tcp
firewall-cmd --reload
  • Upload the installation packages

First copy the required packages to /tmp:

scp 192.168.66.21:/lingyun/amazon-corretto-8.352.08.1-linux-x64.tar.gz /tmp/
scp 192.168.66.21:/lingyun/apache-zookeeper-3.7.1-bin.tar.gz /tmp/
scp 192.168.66.21:/lingyun/kafka_2.12-2.8.1.tgz /tmp/
  • Create directories
mkdir -p /data/zookeeper/{data,logs} /data/kafka/logs

Install JDK 8

cd /tmp
tar -xzvf amazon-corretto-8.352.08.1-linux-x64.tar.gz
rm -f amazon-corretto-8.352.08.1-linux-x64.tar.gz
mv amazon-corretto-8.352.08.1-linux-x64 /usr/local/amazon-corretto-8.352.08.1

# Configure environment variables
vi /etc/profile

# Append to the end of the file
JAVA_HOME=/usr/local/amazon-corretto-8.352.08.1
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin

export PATH JAVA_HOME CLASSPATH

# Reload the profile
source /etc/profile

# Check 1
which java
# Expected output
/usr/local/amazon-corretto-8.352.08.1/bin/java

# Check 2
java -version
# Expected output
openjdk version "1.8.0_352"
OpenJDK Runtime Environment Corretto-8.352.08.1 (build 1.8.0_352-b08)
OpenJDK 64-Bit Server VM Corretto-8.352.08.1 (build 25.352-b08, mixed mode)

Install Zookeeper

cd /tmp
tar -xzvf apache-zookeeper-3.7.1-bin.tar.gz
rm -f apache-zookeeper-3.7.1-bin.tar.gz
mv apache-zookeeper-3.7.1-bin /usr/local/zookeeper

Edit the configuration

cat > /usr/local/zookeeper/conf/zoo.cfg << EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.1=192.168.66.41:2888:3888
server.2=192.168.66.42:2888:3888
server.3=192.168.66.43:2888:3888
EOF

# 192.168.66.41
echo "1" > /data/zookeeper/data/myid
# 192.168.66.42
echo "2" > /data/zookeeper/data/myid
# 192.168.66.43
echo "3" > /data/zookeeper/data/myid

# Add the systemd service
cat > /usr/lib/systemd/system/zookeeper.service << EOF
[Unit]
Description=Zookeeper
After=network.target

[Service]
Type=forking
User=root
Group=root
Environment=ZOO_LOG_DIR=/data/zookeeper/logs
Environment=JAVA_HOME=$JAVA_HOME
WorkingDirectory=/usr/local/zookeeper/bin
ExecStart=/usr/local/zookeeper/bin/zkServer.sh start
ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable --now zookeeper
systemctl status zookeeper
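
Once all three nodes are up, you can confirm the ensemble elected a leader; one node should report Mode: leader and the other two Mode: follower:

# Run on each node
/usr/local/zookeeper/bin/zkServer.sh status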

Install Kafka

cd /tmp
tar -zvxf kafka_2.12-2.8.1.tgz
rm -f kafka_2.12-2.8.1.tgz
mv kafka_2.12-2.8.1 /usr/local/kafka

Edit the configuration

Common settings (all nodes)
sed -i 's+log.dirs=/tmp/kafka-logs+log.dirs=/data/kafka/logs+g' /usr/local/kafka/config/server.properties
sed -i 's+zookeeper.connect=localhost:2181+zookeeper.connect=192.168.66.41:2181,192.168.66.42:2181,192.168.66.43:2181+g' /usr/local/kafka/config/server.properties
Per-node settings
# k8s-Kafka1 192.168.66.41
sed -i 's+broker.id=0+broker.id=0+g' /usr/local/kafka/config/server.properties
sed -i 's+#listeners=PLAINTEXT://:9092+listeners=PLAINTEXT://192.168.66.41:9092+g' /usr/local/kafka/config/server.properties

# k8s-Kafka2 192.168.66.42
sed -i 's+broker.id=0+broker.id=1+g' /usr/local/kafka/config/server.properties
sed -i 's+#listeners=PLAINTEXT://:9092+listeners=PLAINTEXT://192.168.66.42:9092+g' /usr/local/kafka/config/server.properties

# k8s-Kafka3 192.168.66.43
sed -i 's+broker.id=0+broker.id=2+g' /usr/local/kafka/config/server.properties
sed -i 's+#listeners=PLAINTEXT://:9092+listeners=PLAINTEXT://192.168.66.43:9092+g' /usr/local/kafka/config/server.properties

Configure the service and start it

cat > /usr/lib/systemd/system/kafka.service << EOF
[Unit]
Description=Apache Kafka server
After=network.target  zookeeper.service

[Service]
Type=simple
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$JAVA_HOME/bin"
User=root
Group=root
ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

# Load the unit configuration and start the service
systemctl daemon-reload
systemctl enable --now kafka
systemctl status kafka

Test

# On 192.168.66.41: create a topic
sh /usr/local/kafka/bin/kafka-topics.sh --create --bootstrap-server 192.168.66.41:9092 --replication-factor 3 --partitions 1 --topic test-ken-io

# On 192.168.66.41: list topics (against each broker)
sh /usr/local/kafka/bin/kafka-topics.sh --list --bootstrap-server 192.168.66.41:9092
sh /usr/local/kafka/bin/kafka-topics.sh --list --bootstrap-server 192.168.66.42:9092
sh /usr/local/kafka/bin/kafka-topics.sh --list --bootstrap-server 192.168.66.43:9092

# On 192.168.66.41: produce messages
sh /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.66.41:9092  --topic test-ken-io
# On 192.168.66.42: consume messages
sh /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.66.42:9092 --topic test-ken-io --from-beginning

# On 192.168.66.41: delete the topic
sh /usr/local/kafka/bin/kafka-topics.sh --delete --topic test-ken-io --bootstrap-server 192.168.66.41:9092

安装RocketMQ

三主三从交错方案

应用名 IP地址 端口
rocketmq-nameserver1 192.168.66.44 服务端口:9876
rocketmq-nameserver2 192.168.66.45 服务端口:9876
rocketmq-nameserver3 192.168.66.46 服务端口:9876
rocketmq-broker1-主 192.168.66.44 监听端口:10909,10911,10912
rocketmq-broker2-主 192.168.66.45 监听端口:10909,10911,10912
rocketmq-broker3-主 192.168.66.46 监听端口:10909,10911,10912
rocketmq-broker1-从 192.168.66.45 监听端口:10919,10921,10922
rocketmq-broker2-从 192.168.66.46 监听端口:10919,10921,10922
rocketmq-broker3-从 192.168.66.44 监听端口:10919,10921,10922

准备工作

  • 调整分区

按需执行,由100G模板克隆出来并加大硬盘的节点则需要调整

fdisk /dev/vda

    n
    p
    默认(回车)
    默认(回车)
    默认(回车)
    t
    默认(回车)
    8e
    p
    w
# 更新Linux内核中的硬盘分区表数据
partprobe /dev/vda
# 创建物理卷
pvcreate /dev/vda3
# 默认为centos,可通过vgdisplay查看确认
vgextend centos /dev/vda3

# 扩展逻辑卷的大小, 默认为 /dev/mapper/centos-root,可能通过df -h命令查看确认
lvextend -l +100%FREE /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root

# 查看是否成功
df -hT
  • 修改机器名

根据上面列表修改每台机器名称,三台节点(1~3)分别在对应机器上执行:

hostnamectl set-hostname k8s-rocketmq1
hostnamectl set-hostname k8s-rocketmq2
hostnamectl set-hostname k8s-rocketmq3
  • 同步时间

目前安装的是英文系统,默认时区改为 上海时区(GMT +8)

# 查看当前时区
timedatectl status
# 修改时区
timedatectl set-timezone Asia/Shanghai

# 安装同步工具并进行同步
yum -y install ntp
ntpdate ntp1.aliyun.com
  • 开放端口
firewall-cmd --permanent --zone=public --add-port=9876/tcp
firewall-cmd --permanent --zone=public --add-port=10909/tcp --add-port=10911/tcp --add-port=10912/tcp
firewall-cmd --permanent --zone=public --add-port=10919/tcp --add-port=10921/tcp --add-port=10922/tcp
firewall-cmd --reload
  • 上传安装包

先将相关安装包上传到 /tmp 目录

scp 192.168.66.21:/lingyun/amazon-corretto-8.352.08.1-linux-x64.tar.gz /tmp/
scp 192.168.66.21:/lingyun/rocketmq-all-4.9.2-bin-release.tar.gz /tmp/
  • 创建目录
mkdir -p /data/rocketmq/{master,slave}/{store,conf}

安装JDK8

cd /tmp
tar -xzvf amazon-corretto-8.352.08.1-linux-x64.tar.gz
rm -f amazon-corretto-8.352.08.1-linux-x64.tar.gz
mv amazon-corretto-8.352.08.1-linux-x64 /usr/local/amazon-corretto-8.352.08.1

# 配置环境变量
vi /etc/profile

# 文件最后增加配置内容
JAVA_HOME=/usr/local/amazon-corretto-8.352.08.1
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin

export PATH JAVA_HOME CLASSPATH

# 加载生效配置
source /etc/profile

# 建立软链接,不然nameserver启动不了
ln -s /usr/local/amazon-corretto-8.352.08.1/bin/java /bin

# 验证1
which java
# 结果
/usr/local/amazon-corretto-8.352.08.1/bin/java

# 验证2
java -version
# 结果
openjdk version "1.8.0_352"
OpenJDK Runtime Environment Corretto-8.352.08.1 (build 1.8.0_352-b08)
OpenJDK 64-Bit Server VM Corretto-8.352.08.1 (build 25.352-b08, mixed mode)

安装主服务

cd /tmp
tar -xzvf rocketmq-all-4.9.2-bin-release.tar.gz
rm -f rocketmq-all-4.9.2-bin-release.tar.gz
mv rocketmq-4.9.2 /usr/local/

修改配置

# 三台节点
cat > /data/rocketmq/master/conf/broker.properties << EOF
#集群名称
brokerClusterName=RocketMQCluster
#broker名称(根据节点修改)
brokerName=broker-01
#0 表示Master,>0 表示Slave
brokerId=0
#broker角色 ASYNC_MASTER为异步主节点,SYNC_MASTER为同步主节点,SLAVE为从节点
brokerRole=ASYNC_MASTER
#刷新数据到磁盘的方式:ASYNC_FLUSH -异步刷盘, SYNC_FLUSH -同步刷盘
flushDiskType=ASYNC_FLUSH
##Broker 对外服务的监听端口
listenPort=10911
#nameserver地址,分号分割
namesrvAddr=192.168.66.44:9876;192.168.66.45:9876;192.168.66.46:9876
#在发送消息时,自动创建服务器不存在的topic,默认创建的队列数
defaultTopicQueueNums=4
#是否允许 Broker 自动创建Topic,建议线下开启,线上关闭
autoCreateTopicEnable=true
#是否允许 Broker 自动创建订阅组,建议线下开启,线上关闭
autoCreateSubscriptionGroup=true
#默认不配置brokerIP1和brokerIP2时,都会根据当前网卡选择一个IP使用,当你的机器有多块网卡时,很有可能会有问题。
brokerIP1=$(ip addr | awk '/^[0-9]+: / {}; /inet.*global/ {print gensub(/(.*)\/(.*)/, "\\1", "g", $2)}')


#存储路径
storePathRootDir=/data/rocketmq/master/store
#commitLog 存储路径
storePathCommitLog=/data/rocketmq/master/store/commitlog
#消费队列存储路径
storePathConsumerQueue=/data/rocketmq/master/store/consumequeue
#消息索引存储路径
storePathIndex=/data/rocketmq/master/store/index
#checkpoint 文件存储路径
storeCheckpoint=/data/rocketmq/master/store/checkpoint
#abort 文件存储路径
abortFile=/data/rocketmq/master/store/abort
#删除文件时间点,默认凌晨 4点
deleteWhen=4
#文件保留时间(小时),默认 48,此处设为 120
fileReservedTime=120
#commitLog每个文件的大小默认1G
mapedFileSizeCommitLog=1073741824
#ConsumeQueue每个文件默认存30W条,根据业务情况调整
mapedFileSizeConsumeQueue=300000
#destroyMapedFileIntervalForcibly=120000
#redeleteHangedFileInterval=120000
#检测物理文件磁盘空间
#diskMaxUsedSpaceRatio=88
#发送信息线程池线程个数
sendMessageThreadPoolNums=16
#拉取信息线程池线程个数
#pullMessageThreadPoolNums=128
#发送消息是否使用可重入锁
#useReentrantLockWhenPutMessage=true
#ACL权限,需配合plain_acl.yml使用, true - 启用, false - 停用
aclEnable=false
EOF

# 192.168.66.45(MQ2) brokerName=broker-01 改为 brokerName=broker-02
sed -i 's+brokerName=broker-01+brokerName=broker-02+g' /data/rocketmq/master/conf/broker.properties

# 192.168.66.46(MQ3) brokerName=broker-01 改为 brokerName=broker-03
sed -i 's+brokerName=broker-01+brokerName=broker-03+g' /data/rocketmq/master/conf/broker.properties

创建服务并启动

# namesrv服务
cat > /lib/systemd/system/rocketmq-nameserver.service << EOF
[Unit]
Description=rocketmq-nameserver
After=network.target

[Service]
User=root
Type=simple
ExecStart=/bin/sh /usr/local/rocketmq-4.9.2/bin/mqnamesrv
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s QUIT \$MAINPID
Restart=always
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

# master-broker服务
cat > /data/rocketmq/master/conf/env << EOF
JAVA_OPT_EXT=-Duser.home=/data/rocketmq/master
EOF

cat > /lib/systemd/system/rocketmq-master-broker.service << EOF
[Unit]
Description=rocketmq-broker
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=/data/rocketmq/master/conf/env
ExecStart=/usr/local/rocketmq-4.9.2/bin/mqbroker -c /data/rocketmq/master/conf/broker.properties
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s QUIT \$MAINPID
Restart=always
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

# 重置配置并启动服务,设置开机启动
systemctl daemon-reload
systemctl enable --now rocketmq-nameserver
systemctl enable --now rocketmq-master-broker

# 查看状态
systemctl status rocketmq-nameserver
systemctl status rocketmq-master-broker

部署从节点

修改配置

# 三台节点
cat > /data/rocketmq/slave/conf/broker.properties << EOF
#集群名称
brokerClusterName=RocketMQCluster
#broker名称(根据节点修改)
brokerName=broker-01
#0 表示Master,>0 表示Slave
brokerId=1
#broker角色 ASYNC_MASTER为异步主节点,SYNC_MASTER为同步主节点,SLAVE为从节点
brokerRole=SLAVE
#刷新数据到磁盘的方式:ASYNC_FLUSH -异步刷盘, SYNC_FLUSH -同步刷盘
flushDiskType=ASYNC_FLUSH
##Broker 对外服务的监听端口
listenPort=10921
#nameserver地址,分号分割
namesrvAddr=192.168.66.44:9876;192.168.66.45:9876;192.168.66.46:9876
#在发送消息时,自动创建服务器不存在的topic,默认创建的队列数
defaultTopicQueueNums=4
#是否允许 Broker 自动创建Topic,建议线下开启,线上关闭
autoCreateTopicEnable=true
#是否允许 Broker 自动创建订阅组,建议线下开启,线上关闭
autoCreateSubscriptionGroup=true
#默认不配置brokerIP1和brokerIP2时,都会根据当前网卡选择一个IP使用,当你的机器有多块网卡时,很有可能会有问题。
brokerIP1=$(ip addr | awk '/^[0-9]+: / {}; /inet.*global/ {print gensub(/(.*)\/(.*)/, "\\1", "g", $2)}')


#存储路径
storePathRootDir=/data/rocketmq/slave/store
#commitLog 存储路径
storePathCommitLog=/data/rocketmq/slave/store/commitlog
#消费队列存储路径
storePathConsumerQueue=/data/rocketmq/slave/store/consumequeue
#消息索引存储路径
storePathIndex=/data/rocketmq/slave/store/index
#checkpoint 文件存储路径
storeCheckpoint=/data/rocketmq/slave/store/checkpoint
#abort 文件存储路径
abortFile=/data/rocketmq/slave/store/abort
#删除文件时间点,默认凌晨 4点
deleteWhen=4
#文件保留时间(小时),默认 48,此处设为 120
fileReservedTime=120
#commitLog每个文件的大小默认1G
mapedFileSizeCommitLog=1073741824
#ConsumeQueue每个文件默认存30W条,根据业务情况调整
mapedFileSizeConsumeQueue=300000
#destroyMapedFileIntervalForcibly=120000
#redeleteHangedFileInterval=120000
#检测物理文件磁盘空间
#diskMaxUsedSpaceRatio=88
#发送信息线程池线程个数
sendMessageThreadPoolNums=16
#拉取信息线程池线程个数
#pullMessageThreadPoolNums=128
#发送消息是否使用可重入锁
#useReentrantLockWhenPutMessage=true
#ACL权限,需配合plain_acl.yml使用, true - 启用, false - 停用
aclEnable=false
EOF

# 按照上面的三主三从交错方案:192.168.66.44(MQ1) 部署 broker-03 的从节点,brokerName=broker-01 改为 brokerName=broker-03
sed -i 's+brokerName=broker-01+brokerName=broker-03+g' /data/rocketmq/slave/conf/broker.properties

# 192.168.66.45(MQ2) 部署 broker-01 的从节点,保持 brokerName=broker-01,无需修改

# 192.168.66.46(MQ3) 部署 broker-02 的从节点,brokerName=broker-01 改为 brokerName=broker-02
sed -i 's+brokerName=broker-01+brokerName=broker-02+g' /data/rocketmq/slave/conf/broker.properties

创建服务并启动

cat > /data/rocketmq/slave/conf/env << EOF
JAVA_OPT_EXT=-Duser.home=/data/rocketmq/slave
EOF

cat > /lib/systemd/system/rocketmq-slave-broker.service << EOF
[Unit]
Description=rocketmq-broker
After=network.target

[Service]
User=root
Type=simple
EnvironmentFile=/data/rocketmq/slave/conf/env
ExecStart=/usr/local/rocketmq-4.9.2/bin/mqbroker -c /data/rocketmq/slave/conf/broker.properties
ExecStop=/bin/kill -s QUIT \$MAINPID
ExecReload=/bin/kill -s HUP \$MAINPID
Restart=always
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

# 重置配置并启动服务,设置开机启动
systemctl daemon-reload
systemctl enable --now rocketmq-slave-broker

# 查看状态
systemctl status rocketmq-slave-broker
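
主从 broker 均启动后,可在任一节点通过 mqadmin 验证集群拓扑是否为预期的三主三从(示例命令,nameserver 地址可任选其一):

sh /usr/local/rocketmq-4.9.2/bin/mqadmin clusterList -n 192.168.66.44:9876
# 预期输出中 broker-01/02/03 各有 BID=0(主)和 BID=1(从)两条记录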

安装MinIO

准备工作

  • 挂载数据盘
fdisk /dev/vdb
    n
    p
    默认(回车)
    默认(回车)
    默认(回车)
    p
    w
# 重读分区表
partprobe /dev/vdb
# 数据盘直接格式化为xfs后挂载,不使用LVM,因此无需修改分区类型或创建物理卷
# 格式化为xfs格式
mkfs.xfs -f /dev/vdb1

#手动挂载
mkdir /data
mount /dev/vdb1 /data

#开机挂载
vi /etc/fstab
/dev/vdb1               /data                   xfs     defaults        0 0

# 查看是否成功
df -hT
  • 修改机器名

根据上面列表修改每台机器名称, 示例如下:

hostnamectl set-hostname k8s-minio1
  • 同步时间

目前安装的是英文系统,默认时区改为 上海时区(GMT +8)

# 查看当前时区
timedatectl status
# 修改时区
timedatectl set-timezone Asia/Shanghai

# 安装同步工具并进行同步
yum -y install ntp
ntpdate ntp1.aliyun.com
  • 开放端口
firewall-cmd --permanent --zone=public  --add-port=9000/tcp
firewall-cmd --reload

安装服务

下载安装包

若目标机器无法访问外网,可先在本地一台同样系统、可联网的环境执行以下命令下载,再将 minio 二进制上传到目标机器的相同路径;若可直接联网,则在本机执行即可

mkdir -p /data/minio/bin
cd /data/minio/bin
curl -o minio -C - https://dl.min.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2021-06-14T01-29-23Z
chmod +x minio

服务启动

创建环境变量配置文件

mkdir -p /data/minio/data

cat <<EOT >> /data/minio/bin/env
# Volume to be used for MinIO server.
MINIO_VOLUMES="/data/minio/data"
# Use if you want to run MinIO on a custom port.
# MINIO_OPTS="--address :9199 --console-address :9001"
MINIO_OPTS=""
# Root user for the server.
MINIO_ROOT_USER=admin
# Root secret for the server.
MINIO_ROOT_PASSWORD=<密码>

# set this for MinIO to reload entries with 'mc admin service restart'
MINIO_CONFIG_ENV_FILE=/data/minio/bin/env
EOT

创建服务文件

vi /etc/systemd/system/minio.service

具体内容

[Unit]
Description=MinIO
Documentation=https://docs.min.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/data/minio/bin/minio

[Service]
WorkingDirectory=/data/minio/

User=minio
Group=minio
ProtectProc=invisible

EnvironmentFile=/data/minio/bin/env
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /data/minio/bin/env\"; exit 1; fi"
ExecStart=/data/minio/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID

# Let systemd restart this service always
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=1048576

# Specifies the maximum number of threads this process can create
TasksMax=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target

# Built for ${project.name}-${project.version} (${project.name})

创建用户

# 创建用户组:-r创建一个系统账户
groupadd -r minio

# 创建用户 -M不创建用户的主目录  -r创建一个系统账户  -g新账户主组的名称或 ID
useradd -M -r -g minio minio

chown -R minio:minio /data/minio

管理操作

systemctl daemon-reload
# 启用服务
systemctl enable minio.service
# 启动服务
systemctl start minio.service
# 重启服务
systemctl restart minio.service
# 停止服务
systemctl stop minio.service

# 查看状态
systemctl status minio.service
journalctl -xe -f -u minio.service
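
服务启动后,可通过 MinIO 自带的健康检查接口做一次快速验证(示例,9000 为默认服务端口):

# 返回 HTTP 200 即表示服务存活
curl -I http://127.0.0.1:9000/minio/health/live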

安装Nacos

此章节将介绍如何在K8s集群中安装 Nacos 三节点集群服务。

Nacos数据库初始化脚本

/*
 * Copyright 1999-2018 Alibaba Group Holding Ltd.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */


CREATE database if NOT EXISTS `nacos_config` default character set utf8mb4 collate utf8mb4_unicode_ci;
use `nacos_config`;

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info   */
/******************************************/
CREATE TABLE `config_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) DEFAULT NULL,
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  `c_desc` varchar(256) DEFAULT NULL,
  `c_use` varchar(64) DEFAULT NULL,
  `effect` varchar(64) DEFAULT NULL,
  `type` varchar(64) DEFAULT NULL,
  `c_schema` text,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_aggr   */
/******************************************/
CREATE TABLE `config_info_aggr` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) NOT NULL COMMENT 'group_id',
  `datum_id` varchar(255) NOT NULL COMMENT 'datum_id',
  `content` longtext NOT NULL COMMENT '内容',
  `gmt_modified` datetime NOT NULL COMMENT '修改时间',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段';


/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_beta   */
/******************************************/
CREATE TABLE `config_info_beta` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_info_tag   */
/******************************************/
CREATE TABLE `config_info_tag` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `tag_id` varchar(128) NOT NULL COMMENT 'tag_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = config_tags_relation   */
/******************************************/
CREATE TABLE `config_tags_relation` (
  `id` bigint(20) NOT NULL COMMENT 'id',
  `tag_name` varchar(128) NOT NULL COMMENT 'tag_name',
  `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `nid` bigint(20) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`nid`),
  UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = group_capacity   */
/******************************************/
CREATE TABLE `group_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,0表示使用默认值',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_group_id` (`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表';

/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = his_config_info   */
/******************************************/
CREATE TABLE `his_config_info` (
  `id` bigint(64) unsigned NOT NULL,
  `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `data_id` varchar(255) NOT NULL,
  `group_id` varchar(128) NOT NULL,
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL,
  `md5` varchar(32) DEFAULT NULL,
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `src_user` text,
  `src_ip` varchar(50) DEFAULT NULL,
  `op_type` char(10) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`nid`),
  KEY `idx_gmt_create` (`gmt_create`),
  KEY `idx_gmt_modified` (`gmt_modified`),
  KEY `idx_did` (`data_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造';


/******************************************/
/*   数据库全名 = nacos_config   */
/*   表名称 = tenant_capacity   */
/******************************************/
CREATE TABLE `tenant_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表';


CREATE TABLE `tenant_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kp` varchar(128) NOT NULL COMMENT 'kp',
  `tenant_id` varchar(128) default '' COMMENT 'tenant_id',
  `tenant_name` varchar(128) default '' COMMENT 'tenant_name',
  `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
  `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
  `gmt_create` bigint(20) NOT NULL COMMENT '创建时间',
  `gmt_modified` bigint(20) NOT NULL COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';

CREATE TABLE `users` (
    `username` varchar(50) NOT NULL PRIMARY KEY,
    `password` varchar(500) NOT NULL,
    `enabled` boolean NOT NULL
);

CREATE TABLE `roles` (
    `username` varchar(50) NOT NULL,
    `role` varchar(50) NOT NULL,
    UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE
);

CREATE TABLE `permissions` (
    `role` varchar(50) NOT NULL,
    `resource` varchar(255) NOT NULL,
    `action` varchar(8) NOT NULL,
    UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE
);

-- @Lingyun2022
INSERT INTO users (username, password, enabled) VALUES ('admin', '$2a$10$AKzKKCKvhx213G2Ui5HekerlClgQ4NbKNNMZPNZrmUYrHOw3TJOze', TRUE);

INSERT INTO roles (username, role) VALUES ('admin', 'ROLE_ADMIN');
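
以上脚本需在部署 Nacos 前导入数据库服务器(192.168.66.30)。以下为导入示例,假设脚本已保存为 /tmp/nacos-mysql.sql,且执行机器已安装 mysql 客户端:

mysql -h192.168.66.30 -P3306 -uroot -p < /tmp/nacos-mysql.sql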

K8s中部署Nacos集群

apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: pai-cloud
  labels:
    app: nacos
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    - port: 7848
      name: old-raft
      targetPort: 7848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    k8s.kuboard.cn/layer: cloud
  name: nacos
  namespace: pai-cloud
spec:
  serviceName: nacos
  replicas: 3
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: k8snacos
          imagePullPolicy: IfNotPresent
          image: nacos/nacos-server:2.0.3
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client
              protocol: TCP
            - containerPort: 9848
              name: client-rpc
              protocol: TCP
            - containerPort: 9849
              name: raft-rpc
              protocol: TCP
            - containerPort: 7848
              name: old-raft
              protocol: TCP
          env:
            - name: NACOS_REPLICAS
              value: '3'
            - name: MYSQL_SERVICE_HOST
              value: 192.168.66.30
            - name: MYSQL_SERVICE_DB_NAME
              value: nacos_config
            - name: MYSQL_SERVICE_PORT
              value: '3306'
            - name: MYSQL_SERVICE_USER
              value: '<账号>'
            - name: MYSQL_SERVICE_PASSWORD
              value: '<密码>'
            - name: MODE
              value: cluster
            - name: NACOS_SERVER_PORT
              value: '8848'
            - name: PREFER_HOST_MODE
              value: hostname
            - name: NACOS_SERVERS
              value: >-
                nacos-0.nacos.pai-cloud.svc.cluster.local:8848
                nacos-1.nacos.pai-cloud.svc.cluster.local:8848
                nacos-2.nacos.pai-cloud.svc.cluster.local:8848
  selector:
    matchLabels:
      app: nacos
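
以下为在 master 节点应用上述清单并检查 Nacos 集群状态的示例命令,假设清单已保存为 nacos.yaml、命名空间 pai-cloud 已存在:

kubectl apply -f nacos.yaml
# 等待 3 个副本全部 Running
kubectl -n pai-cloud get pods -l app=nacos -o wide
kubectl -n pai-cloud logs nacos-0 --tail=20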

访问并创建生产命名空间

通过Kuboard的服务页面来访问,并创建命名空间
账号:admin
密码:@Lingyun2022 (登录成功后修改)

增加生产环境命名空间 prod

项目 内容
命名空间ID(不填则自动生成) prod
命名空间名 prod
描述 生产环境

安装Kibana

此章节将介绍如何在K8s集群中安装kibana 服务。

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s.kuboard.cn/layer: cloud
  namespace: pai-cloud
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: kibana:7.17.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5601
              protocol: TCP
          env: 
            - name: "ELASTICSEARCH_HOSTS"
              value: >-
                 ["http://192.168.66.38:9200","http://192.168.66.39:9200","http://192.168.66.40:9200"]
            - name: "I18N_LOCALE"
              value: "zh-CN"

---
kind: Service
apiVersion: v1
metadata:
  name: kibana
  namespace: pai-cloud
spec:
  type: NodePort
  selector:
    app: kibana
  ports:
    - port: 5601
      targetPort: 5601
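
以下为应用 Kibana 清单并查询访问端口的示例命令,假设清单已保存为 kibana.yaml:

kubectl apply -f kibana.yaml
kubectl -n pai-cloud get svc kibana
# 浏览器访问 http://<任一节点IP>:<NodePort> 即可打开 Kibana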

安装Logstash

logstash-logs

此章节将介绍如何在K8s集群中安装 logstash-logs服务。

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-logs-conf
  namespace: pai-cloud
data:
  logstash.conf: |
    input {
      tcp {
        mode => "server"
        host => "0.0.0.0"
        port => 5044
        codec => json_lines
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.66.38:9200","192.168.66.39:9200","192.168.66.40:9200"]
        index => "spark-%{+YYYY.MM}"
      }
    }

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s.kuboard.cn/layer: cloud
  name: logstash-logs
  namespace: pai-cloud
spec:
  replicas: 1
  selector:
    matchLabels:
      name: logstash-logs
  template:
    metadata:
      labels:
        name: logstash-logs
    spec:
      containers:
      - name: logstash-logs
        image: logstash:7.17.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5044
          protocol: TCP
        env: 
          - name: "xpack.monitoring.elasticsearch.hosts"
            value: >-
               ["http://192.168.66.38:9200","http://192.168.66.39:9200","http://192.168.66.40:9200"]
          - name: "xpack.monitoring.enabled"
            value: "true"
          - name: "TZ"
            value: "Asia/Shanghai"
        volumeMounts:
        - name: logstash-logs-conf
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          subPath: logstash.conf
      volumes:
      - name: logstash-logs-conf
        configMap:
          name: logstash-logs-conf

---
kind: Service
apiVersion: v1
metadata:
  name: logstash-logs
  namespace: pai-cloud
  labels:
    name: logstash-logs
spec:
  type: ClusterIP
  ports:
  - name: logstash-logs
    port: 5044
    targetPort: 5044
  selector:
    name: logstash-logs
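
以下为应用 Logstash 清单并确认服务就绪的示例命令,假设清单已保存为 logstash-logs.yaml:

kubectl apply -f logstash-logs.yaml
kubectl -n pai-cloud get pods -l name=logstash-logs
# endpoints 中出现 Pod IP:5044 即表示服务可用
kubectl -n pai-cloud get endpoints logstash-logs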

安装 RocketMQ-Dashboard

此章节将介绍如何在K8s集群中安装 rocketmq-dashboard服务。

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: rocketmq-dashboard-configmap
  namespace: pai-cloud
data:
  java-opts: |-
    -Drocketmq.namesrv.addr=192.168.66.44:9876;192.168.66.45:9876;192.168.66.46:9876
    -Drocketmq.config.loginRequired=true
  users.properties: |-
    # Define Admin
    admin=Mx1yk9arH7YIJ8mh,1

    # Define Users
    #user1=@fe123

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rocketmq-dashboard
  namespace: pai-cloud
  labels:
    k8s.kuboard.cn/layer: monitor
spec:
  replicas: 1
  selector:
    matchLabels:
      name: rocketmq-dashboard
  template:
    metadata:
      labels:
        name: rocketmq-dashboard
    spec:
      containers:
        - name: rocketmq-dashboard
          image: dev.flyrise.cn:8082/library/rocketmq-dashboard:1.0.0
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 8080
            protocol: TCP
          resources:
            limits: 
              cpu: "1000m"
              memory: "1Gi"
          env:
            - name: JAVA_OPTS
              valueFrom:
                configMapKeyRef:
                  key: java-opts
                  name: rocketmq-dashboard-configmap
          volumeMounts:
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime 
            - mountPath: /tmp/rocketmq-console/data/users.properties
              name: users
              subPath: users.properties
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: File
        - name: users
          configMap:
            name: rocketmq-dashboard-configmap
---
kind: Service
apiVersion: v1
metadata:
  name: rocketmq-dashboard
  namespace: pai-cloud
  labels:
    name: rocketmq-dashboard
spec:
  type: NodePort
  ports:
  - name: rocketmq-dashboard
    port: 8080
    targetPort: 8080
  selector:
    name: rocketmq-dashboard
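
以下为应用 RocketMQ-Dashboard 清单并查询访问端口的示例命令,假设清单已保存为 rocketmq-dashboard.yaml:

kubectl apply -f rocketmq-dashboard.yaml
kubectl -n pai-cloud get svc rocketmq-dashboard
# 浏览器访问 http://<任一节点IP>:<NodePort>,使用上面 users.properties 中定义的账号登录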

安装业务组件

配置coredns(可选)

由于目前没有正式域名,暂时通过coredns来解决前端启动域名解析失败问题

    hosts {
       127.0.0.1    spark-test.example.com
       fallthrough
    }
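
上述 hosts 片段需加入 kube-system 命名空间下 coredns ConfigMap 的 Corefile(通常为 .:53 的 server 块内)。以下为操作示例;若 Corefile 启用了 reload 插件会自动生效,否则可滚动重启 coredns(假设其以名为 coredns 的 Deployment 部署):

kubectl -n kube-system edit configmap coredns
kubectl -n kube-system rollout restart deployment coredns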

创建数据库

使用root账号连接数据库 192.168.66.30 并执行 database.sql (安装包提供)

新增Nacos配置

通过Kuboard 打开Nacos管理控制台,在prod命名空间新建并配置 pai-gateway-prod.yaml、pai-share-config-prod.yaml、share-tenant-config.yaml (安装包提供)

导入工作负载

在master1上执行kubectl apply -f components.yaml (安装包提供)

配置IP代理入口

---
apiVersion: v1
data:
  default.conf: |-
    upstream ingress {
        server 192.168.66.24:30000;
        server 192.168.66.25:30000;
        server 192.168.66.26:30000;
        server 192.168.66.27:30000;
        server 192.168.66.28:30000;
    }

    server {
        listen       80;
        listen  [::]:80;
        server_name  localhost;

        #access_log  /var/log/nginx/host.access.log  main;

        location / {
           proxy_pass http://ingress;

           #Proxy Settings
           proxy_redirect off;
           proxy_set_header Host sp.zjts.com;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
           proxy_max_temp_file_size 0;
           proxy_connect_timeout 90;
           proxy_send_timeout 90;
           proxy_read_timeout 90;
           proxy_buffer_size 4k;
           proxy_buffers 4 32k;
           proxy_busy_buffers_size 64k;
           proxy_temp_file_write_size 64k;

           #support websocket
           proxy_http_version 1.1;
           proxy_set_header Upgrade $http_upgrade;
           proxy_set_header Connection "upgrade";
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
kind: ConfigMap
metadata:
  name: nginx-conf
  namespace: default

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s.kuboard.cn/name: nginx-proxy
  name: nginx-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: nginx-proxy
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: nginx-proxy
    spec:
      containers:
        - image: 'nginx:alpine'
          imagePullPolicy: IfNotPresent
          name: nginx-proxy
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/default.conf
              name: volume-conf
              subPath: default.conf
      nodeName: k8s-worker3
      restartPolicy: Always
      hostNetwork: true
      volumes:
        - configMap:
            defaultMode: 420
            name: nginx-conf
          name: volume-conf
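
以下为应用上述清单并验证代理入口的示例命令,假设清单已保存为 nginx-proxy.yaml,且 k8s-worker3 的节点 IP 为 192.168.66.26(以实际环境为准):

kubectl apply -f nginx-proxy.yaml
kubectl get pods -l k8s.kuboard.cn/name=nginx-proxy -o wide
# hostNetwork 模式下直接访问该节点的 80 端口
curl -I http://192.168.66.26/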

同步数据

  • 通过代理IP登录运营中心(sys/Sys@1234,首次登录要求改密码)

  • 云边一体 > 边端管理(若提示“认证失败,无法访问系统资源”,可忽略)> 更新套件数据

  • 数据同步成功后,重启未成功的后端服务

    • pai-business
    • pai-contract
    • pai-electronic-contract
    • pai-enterprise
    • pai-finance
    • pai-park-property
    • pai-portal
    • pai-service-center

安装NFS

安装服务端(文件存储服务器)

yum install -y rpcbind nfs-utils
mkdir -p /data/nfs
echo "/data/nfs 192.168.66.0/24(insecure,rw,sync,no_root_squash)" > /etc/exports
systemctl enable --now rpcbind
systemctl enable --now nfs-server
exportfs -r
exportfs

firewall-cmd --permanent --add-port=111/tcp
firewall-cmd --permanent --add-port=111/udp
firewall-cmd --permanent --add-port=2049/tcp
firewall-cmd --permanent --add-port=20048/tcp
firewall-cmd --permanent --add-port=20048/udp
firewall-cmd --reload
firewall-cmd --list-ports

安装客户端(K8s集群节点)

yum install -y nfs-utils

#测试
showmount -e 192.168.66.47
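
客户端可再做一次临时挂载测试,确认 NFS 读写正常(示例):

mount -t nfs 192.168.66.47:/data/nfs /mnt
df -h /mnt
umount /mnt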

MySQL备份

---
apiVersion: v1
data:
  backup.sh: >-
    #!/usr/bin/env bash

    #保存数量

    BACKUP_NUMBER=30


    #当前日期

    DATE=$(date +%Y-%m-%d)


    BACKUP_ROOT=/data


    if [ -n "$BACKUP_PATH" ] ; then
      BACKUP_ROOT=$BACKUP_PATH
    fi  



    BACKUP_FILEDIR=$BACKUP_ROOT/$DATE




    #如果文件夹不存在则创建

    if [ ! -d $BACKUP_FILEDIR ];

    then
        mkdir -p $BACKUP_FILEDIR;
    fi


    #查询所有数据库

    DATABASES=$(mysql -h$DATABASE_HOST -u$DATABASE_USER -p"$DATABASE_PASSWORD"
    -P$DATABASE_PORT -e "show databases" | grep -Ev
    "Database|sys|information_schema|performance_schema|mysql")

    #循环数据库进行备份

    for db in $DATABASES

    do
      echo
      if [[ "${db}" =~ "+" ]] || [[ "${db}" =~ "|" ]];then
        echo "jump over ${db}"
      else
        echo ----------$BACKUP_FILEDIR/${db}_$DATE.sql.gz $(date) BEGIN----------
        mysqldump -h$DATABASE_HOST -u$DATABASE_USER -p"$DATABASE_PASSWORD" -P$DATABASE_PORT --default-character-set=utf8 -q --lock-all-tables --flush-logs -E -R --triggers -B ${db} | gzip > $BACKUP_FILEDIR/${db}_$DATE.sql.gz
        echo ${db}
        echo ----------$BACKUP_FILEDIR/${db}_$DATE.sql.gz $(date) COMPLETE----------
        echo
      fi
    done



    #找出需要删除的备份

    delfile=`ls -l -crt $BACKUP_ROOT | awk '{print $9 }' | grep "-" | head -1`

    #判断现在的备份数量是否大于$BACKUP_NUMBER

    count=`ls -l -crt $BACKUP_ROOT | awk '{print $9 }' | grep "-" | wc -l`

    if [ $count -gt $BACKUP_NUMBER ]

    then
      #删除最早生成的备份,只保留BACKUP_NUMBER数量的备份
      rm -rf $BACKUP_ROOT/$delfile
      #写删除文件日志
      echo "delete $delfile" >> $BACKUP_ROOT/log.txt
    fi  
kind: ConfigMap
metadata:
  name: pai-database-backup-conf
  namespace: pai-cloud

---
apiVersion: batch/v1
kind: CronJob
metadata:
  annotations:
    k8s.kuboard.cn/displayName: 数据库定时备份
  labels:
    k8s.kuboard.cn/name: pai-database-backup
  name: pai-database-backup
  namespace: pai-cloud
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
      labels:
        k8s.kuboard.cn/name: pai-database-backup
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
            - command:
                - bash
                - /home/backup.sh
              env:
                - name: DATABASE_HOST
                  value: 192.168.66.31
                - name: DATABASE_USER
                  value: root
                - name: DATABASE_PORT
                  value: '3306'
                - name: DATABASE_PASSWORD
                  value: 'L7iTW#******'
              image: 'mysql:8.0.25'
              imagePullPolicy: IfNotPresent
              name: pai-database-backup
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                - mountPath: /data
                  name: backup
                - mountPath: /home/backup.sh
                  name: sh
                  subPath: backup.sh
                - mountPath: /etc/localtime
                  name: host-time
          dnsPolicy: ClusterFirst
          nodeName: k8s-master1
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          volumes:
            - name: backup
              nfs:
                path: /data/nfs/mysql
                server: 192.168.66.47
            - configMap:
                defaultMode: 511
                name: pai-database-backup-conf
              name: sh
            - hostPath:
                path: /etc/localtime
                type: ''
              name: host-time
  schedule: '0 1 * * *'
  successfulJobsHistoryLimit: 1
  suspend: false
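
定时任务创建后,可手动触发一次 Job,验证备份脚本与 NFS 挂载是否正常(示例命令,Job 名称可自定):

kubectl -n pai-cloud create job pai-database-backup-manual --from=cronjob/pai-database-backup
kubectl -n pai-cloud get jobs
# 备份文件应出现在 NFS 服务器的 /data/nfs/mysql/<日期>/ 目录下
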
文档更新时间: 2024-04-25 16:35   作者:姚连洲