
etcd cannot pass the health check #2464

Open
zcl8515991 opened this issue Nov 23, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@zcl8515991

What version of KubeKey has the issue?

v3.1.0-alpha.5

What is your OS environment?

CentOS 7

KubeKey config file

hosts:
  - {name: k8s-master1, address: 192.168.1.73, internalAddress: 192.168.1.73, user: root, password: "rootroot"}
  - {name: k8s-master2, address: 192.168.1.110, internalAddress: 192.168.1.110, user: root, password: "rootroot"}
  - {name: k8s-master3, address: 192.168.1.74, internalAddress: 192.168.1.74, user: root, password: "rootroot"}
  - {name: k8s-node1, address: 192.168.1.75, internalAddress: 192.168.1.75, user: root, password: "rootroot"}
#  - {name: k8s-node2, address: 192.168.1.111, internalAddress: 192.168.1.111, user: root, password: "rootroot"}
  - {name: k8s-node3, address: 192.168.1.112, internalAddress: 192.168.1.112, user: root, password: "rootroot"}
  roleGroups:
    etcd:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    control-plane:
    - k8s-master1
 #   - k8s-master2
    - k8s-master3
    worker:
    - k8s-node1
  #  - k8s-node2
    - k8s-node3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22x
    clusterName: cluster.local
    autoRenewCerts: true
    # containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 192.168.119.0/24
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

A clear and concise description of what happened.

I deployed three masters and three nodes on virtual machines. Two of the node servers broke down; after reinstalling them, running ./kk add nodes k8s-node3 -f config-sample.yaml fails with the error below, and I just can't get past it.

Relevant log output

[certs] Using existing admin-k8s-master2 certificate and key on disk
[certs] Using existing member-k8s-master2 certificate and key on disk
22:49:11 CST success: [LocalHost]
22:49:11 CST [CertsModule] Synchronize certs file
22:49:31 CST success: [k8s-master3]
22:49:31 CST success: [k8s-master1]
22:49:31 CST success: [k8s-master2]
22:49:31 CST [CertsModule] Synchronize certs file to master
22:49:31 CST skipped: [k8s-master3]
22:49:31 CST skipped: [k8s-master1]
22:49:31 CST [InstallETCDBinaryModule] Install etcd using binary
22:50:01 CST success: [k8s-master1]
22:50:01 CST success: [k8s-master3]
22:50:01 CST success: [k8s-master2]
22:50:01 CST [InstallETCDBinaryModule] Generate etcd service
22:50:02 CST success: [k8s-master1]
22:50:02 CST success: [k8s-master3]
22:50:02 CST success: [k8s-master2]
22:50:02 CST [InstallETCDBinaryModule] Generate access address
22:50:02 CST skipped: [k8s-master2]
22:50:02 CST skipped: [k8s-master3]
22:50:02 CST success: [k8s-master1]
22:50:02 CST [ETCDConfigureModule] Health check on exist etcd
22:50:02 CST message: [k8s-master3]
etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master3.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master3-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.73:2379,https://192.168.1.74:2379,https://192.168.1.110:2379 cluster-health | grep -q 'cluster is healthy'" 
: Process exited with status 1
22:50:02 CST retry: [k8s-master3]
22:50:02 CST message: [k8s-master1]
etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.73:2379,https://192.168.1.74:2379,https://192.168.1.110:2379 cluster-health | grep -q 'cluster is healthy'" 
: Process exited with status 1
22:50:02 CST retry: [k8s-master1]
22:50:03 CST message: [k8s-master2]
etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master2.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master2-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.73:2379,https://192.168.1.74:2379,https://192.168.1.110:2379 cluster-health | grep -q 'cluster is healthy'" 
: Process exited with status 1
22:50:03 CST retry: [k8s-master2]
22:50:07 CST message: [k8s-master1]
etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.73:2379,https://192.168.1.74:2379,https://192.168.1.110:2379 cluster-health | grep -q 'cluster is healthy'" 
: Process exited with status 1
22:50:07 CST retry: [k8s-master1]
22:50:07 CST message: [k8s-master3]
etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master3.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master3-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.73:2379,https://192.168.1.74:2379,https://192.168.1.110:2379 cluster-health | grep -q 'cluster is healthy'" 
: Process exited with status 1
22:50:07 CST retry: [k8s-master3]
22:50:08 CST message: [k8s-master2]
etcd health check failed: Failed to exec command: sudo -E /bin/bash -c "export ETCDCTL_API=2;export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master2.pem';export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master2-key.pem';export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem';/usr/local/bin/etcdctl --endpoints=https://192.168.1.73:2379,https://192.168.1.74:2379,https://192.168.1.110:2379 cluster-health | grep -q 'cluster is healthy'" 
: Process exited with status 1
22:50:08 CST retry: [k8s-master2]
^Z
[2]+  Stopped                 ./kk add nodes k8s-node3 -f config-sample.yaml
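
Note that the failing health check pipes etcdctl into grep -q 'cluster is healthy', so the actual etcdctl error never reaches the log. A minimal way to surface it (a sketch reusing the exact cert paths and endpoints from the failing command above; run as root on k8s-master1, adjusting the cert filenames for other masters) is to rerun the check without the grep:

export ETCDCTL_API=2
export ETCDCTL_CERT_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1.pem'
export ETCDCTL_KEY_FILE='/etc/ssl/etcd/ssl/admin-k8s-master1-key.pem'
export ETCDCTL_CA_FILE='/etc/ssl/etcd/ssl/ca.pem'
/usr/local/bin/etcdctl --endpoints=https://192.168.1.73:2379,https://192.168.1.74:2379,https://192.168.1.110:2379 cluster-health

The output should show the concrete member or TLS error that makes the 'cluster is healthy' match fail.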

Additional information

No response

zcl8515991 added the bug label on Nov 23, 2024
@redscholar
Collaborator

You need to look at the etcd logs to see what error it is actually reporting.
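
Since the [InstallETCDBinaryModule] step above generated an etcd systemd service, the logs can presumably be read on each etcd node with journalctl (a sketch; the unit name is assumed to be etcd):

systemctl status etcd
journalctl -u etcd --no-pager -n 200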
