NFS installation fails when installing with the kk command #2454

Open
pkyit opened this issue Nov 10, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@pkyit

pkyit commented Nov 10, 2024

What version of KubeKey has the issue?

kk version: &version.Info{Major:"3", Minor:"1", GitVersion:"v3.1.7", GitCommit:"da475c670813fc8a4dd3b1312aaa36e96ff01a1f", GitTreeState:"clean", BuildDate:"2024-10-30T09:41:20Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

What is your OS environment?

Alibaba Anolis OS 8.9

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master1, address: 10.10.10.201, internalAddress: 10.10.10.201, user: root, password: "123456789"}
  - {name: k8s-master2, address: 10.10.10.202, internalAddress: 10.10.10.202, user: root, password: "123456789"}
  - {name: k8s-master3, address: 10.10.10.203, internalAddress: 10.10.10.203, user: root, password: "123456789"}
  - {name: k8s-node1, address: 10.10.10.211, internalAddress: 10.10.10.211, user: root, password: "123456789"}
  - {name: k8s-node2, address: 10.10.10.212, internalAddress: 10.10.10.212, user: root, password: "123456789"}
  - {name: k8s-node3, address: 10.10.10.213, internalAddress: 10.10.10.213, user: root, password: "123456789"}
  - {name: k8s-node4, address: 10.10.10.214, internalAddress: 10.10.10.214, user: root, password: "123456789"}
  - {name: k8s-node5, address: 10.10.10.215, internalAddress: 10.10.10.215, user: root, password: "123456789"}
  roleGroups:
    etcd:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    control-plane: 
    - k8s-master1
    - k8s-master2
    - k8s-master3
    worker:
    - k8s-node1
    - k8s-node2
    - k8s-node3
    - k8s-node4
    - k8s-node5
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.27.16
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 192.168.0.0/16
    kubeServiceCIDR: 172.20.0.0/16
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: 
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: /root/nfs.yaml
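
Note that the chart is configured through the values file rather than through raw Kubernetes manifests. As a minimal sketch, a values file for this chart might look like the following (the field names are assumptions based on the upstream nfs-client-provisioner chart; verify them with `helm show values` before relying on them):

# Hypothetical /root/nfs.yaml values file for the nfs-client-provisioner chart.
# Field names are assumptions from the upstream chart, not confirmed here.
nfs:
  server: 10.10.10.100      # NFS server address
  path: /home/k8s-nfs       # directory exported by the NFS server
storageClass:
  name: nfs-storage
  defaultClass: true
  archiveOnDelete: true     # keep an archive of PV contents on delete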

A clear and concise description of what happened.

nfs-client
22:39:01 CST [AddonModule] Install addon nfs-client
Release "nfs-client" does not exist. Installing it now.
22:40:36 CST message: [LocalHost]
looks like "https://charts.kubesphere.io/main" is not a valid chart repository or cannot be reached: Get "https://charts.kubesphere.io/main/index.yaml": read tcp 10.10.10.201:47604->104.21.80.188:443: read: connection reset by peer
22:40:36 CST failed: [LocalHost]
22:40:36 CST message: [LocalHost]
Pipeline[InstallAddonsPipeline] execute failed: Module[AddonModule] exec failed:
failed: [LocalHost] [InstallAddon] exec failed after 1 retries: looks like "https://charts.kubesphere.io/main" is not a valid chart repository or cannot be reached: Get "https://charts.kubesphere.io/main/index.yaml": read tcp 10.10.10.201:47604->104.21.80.188:443: read: connection reset by peer
22:40:36 CST failed: [LocalHost]
error: Pipeline[CreateClusterPipeline] execute failed: Module[AddonsModule] exec failed:
failed: [LocalHost] [InstallAddons] exec failed after 1 retries: Pipeline[InstallAddonsPipeline] execute failed: Module[AddonModule] exec failed:
failed: [LocalHost] [InstallAddon] exec failed after 1 retries: looks like "https://charts.kubesphere.io/main" is not a valid chart repository or cannot be reached: Get "https://charts.kubesphere.io/main/index.yaml": read tcp 10.10.10.201:47604->104.21.80.188:443: read: connection reset by peer
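
Since the failure is a network error reaching https://charts.kubesphere.io, one possible workaround is to fetch the chart out-of-band on a machine that can reach the repository and point the addon at the local copy. A minimal sketch, assuming KubeKey's chart source accepts a local `path` and that the chart has already been copied to the node (the local directory is hypothetical):

addons:
- name: nfs-client
  namespace: kube-system
  sources:
    chart:
      name: nfs-client-provisioner
      # hypothetical local copy, fetched beforehand (e.g. with helm pull)
      # from a host that can reach https://charts.kubesphere.io/main
      path: /root/charts/nfs-client-provisioner
      valuesFile: /root/nfs.yaml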

Relevant log output

No response

Additional information

The NFS configuration file is as follows.

First, a StorageClass is created:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" ## whether to archive the PV contents when the PV is deleted


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              ## set to your own NFS server address
              value: 10.10.10.100
            - name: NFS_PATH
              ## the directory exported by the NFS server
              value: /home/k8s-nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.100
            path: /home/k8s-nfs

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
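
Once the provisioner is deployed, dynamic provisioning can be verified with a small test claim against the storage class above (a minimal sketch; the claim name is illustrative). If the claim binds, the provisioner is working:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim   # illustrative name
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi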
@pkyit pkyit added the bug Something isn't working label Nov 10, 2024
@redscholar
Collaborator

From the error, https://charts.kubesphere.io cannot be reached. Access to that address from within China has indeed been unstable in the past.
