
On worker node node-3: "Failed to connect to 127.0.0.1:6443" (connection refused); the nginx proxy container is not running.
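The "connection refused" on 127.0.0.1:6443 means nothing on node-3 is accepting connections on that port. In this kind of setup the kubelet talks to a local nginx proxy (the `nginx-proxy-node-3` static pod visible in the log below) that forwards to the real kube-apiservers, so if that container is down every watch against `https://127.0.0.1:6443` fails. A quick diagnostic sketch — the exact paths and tool availability (`ss`, `crictl`, `/etc/nginx/nginx.conf`) are assumptions based on the log, not confirmed by the question:

```shell
# Diagnostic sketch for the refused connection to 127.0.0.1:6443 on node-3.
# Assumes a local nginx proxy static pod fronts the kube-apiservers, as the
# nginx-proxy-node-3 pod and the /etc/nginx host-path volume in the log suggest.

# 1. Is anything actually listening on the port the kubelet dials?
ss -lntp 2>/dev/null | grep ':6443' || echo "nothing is listening on 6443"

# 2. Is the nginx-proxy container running under containerd?
crictl ps -a 2>/dev/null | grep nginx-proxy || echo "nginx-proxy container not running"

# 3. Check the upstream apiserver addresses the proxy is configured with
#    (mounted into the static pod from the host's /etc/nginx).
grep -A 3 upstream /etc/nginx/nginx.conf 2>/dev/null || true
```

If step 1 shows no listener and step 2 shows the container exited, the nginx config or the container runtime is the place to look; restarting the kubelet alone (step 1 below) only re-creates the static pod, it does not fix a broken upstream config.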

1. Restart the node services: systemctl restart kubelet kube-proxy

1.1 Follow the kubelet log with journalctl -f -u kubelet:

-- Logs begin at 二 2021-11-30 15:21:42 CST. --
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.030191    3958 feature_gate.go:243] feature gates: &{map[]}
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.052412    3958 feature_gate.go:243] feature gates: &{map[]}
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.052580    3958 feature_gate.go:243] feature gates: &{map[]}
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.068363    3958 mount_linux.go:202] Detected OS with systemd
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.068586    3958 server.go:416] Version: v1.20.2
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.068681    3958 feature_gate.go:243] feature gates: &{map[]}
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.068785    3958 feature_gate.go:243] feature gates: &{map[]}
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.119429    3958 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for "client-ca-bundle::/etc/kubernetes/ssl/ca.pem"
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.119746    3958 manager.go:165] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
5月 11 12:10:42 node-3 kubelet[3958]: I0511 12:10:42.120278    3958 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/ssl/ca.pem
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.121891    3958 fs.go:127] Filesystem UUIDs: map[1114fe9e-2309-4580-b183-d778e6d97397:/dev/vda1]
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.121937    3958 fs.go:128] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:19 fsType:tmpfs blockSize:0} /dev/vda1:{mountpoint:/ major:253 minor:1 fsType:ext4 blockSize:0} /run:{mountpoint:/run major:0 minor:20 fsType:tmpfs blockSize:0} /run/user/0:{mountpoint:/run/user/0 major:0 minor:37 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:21 fsType:tmpfs blockSize:0}]
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.122258    3958 nvidia.go:61] NVIDIA setup failed: no NVIDIA devices found
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.123686    3958 manager.go:213] Machine: {Timestamp:2022-05-11 12:10:47.123560204 +0800 CST m=+5.245507838 NumCores:1 NumPhysicalCores:1 NumSockets:1 CpuFrequency:2500012 MemoryCapacity:1927217152 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:20190711105006363114529432776998 SystemUUID:15094915-A1B0-4450-92CC-6FDE6C2CD4FA BootID:65c3e5d8-67ec-4ad8-83e4-b8a5723c2fea Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:19 Capacity:963608576 Type:vfs Inodes:235256 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:20 Capacity:963608576 Type:vfs Inodes:235256 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:21 Capacity:963608576 Type:vfs Inodes:235256 HasInodes:true} {Device:/dev/vda1 DeviceMajor:253 DeviceMinor:1 Capacity:42135011328 Type:vfs Inodes:2621440 HasInodes:true} {Device:/run/user/0 DeviceMajor:0 DeviceMinor:37 Capacity:192724992 Type:vfs Inodes:235256 HasInodes:true}] DiskMap:map[253:0:{Name:vda Major:253 Minor:0 Size:42949672960 Scheduler:mq-deadline}] NetworkDevices:[{Name:eth0 MacAddress:00:16:3e:2e:44:8e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:2146951168 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:1048576 Type:Unified Level:2}] SocketID:0}] Caches:[{Size:34603008 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.123820    3958 manager_no_libpfm.go:28] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.127733    3958 manager.go:229] Version: {KernelVersion:3.10.0-957.21.3.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.127860    3958 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128216    3958 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128235    3958 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:512 scale:6} d:{Dec:<nil>} s:512M Format:DecimalSI}] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128341    3958 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128351    3958 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128357    3958 container_manager_linux.go:315] Creating device plugin manager: true
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128369    3958 manager.go:133] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128494    3958 remote_runtime.go:62] parsed scheme: ""
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128511    3958 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128547    3958 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128566    3958 clientconn.go:948] ClientConn switching balancer to "pick_first"
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128625    3958 remote_image.go:50] parsed scheme: ""
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128632    3958 remote_image.go:50] scheme "" not registered, fallback to default scheme
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128641    3958 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128647    3958 clientconn.go:948] ClientConn switching balancer to "pick_first"
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128679    3958 server.go:1117] Using root directory: /var/lib/kubelet
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128701    3958 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128731    3958 file.go:68] Watching path "/etc/kubernetes/manifests"
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.128745    3958 kubelet.go:273] Watching apiserver
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.133302    3958 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.133333    3958 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:438
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.133762    3958 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000c3e990, {CONNECTING <nil>}
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.134009    3958 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000c3ec30, {CONNECTING <nil>}
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.148120    3958 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.149131    3958 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.150972    3958 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000c3e990, {READY <nil>}
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.151880    3958 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000c3ec30, {READY <nil>}
5月 11 12:10:47 node-3 kubelet[3958]: E0511 12:10:47.152991    3958 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:47 node-3 kubelet[3958]: E0511 12:10:47.153103    3958 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:47 node-3 kubelet[3958]: E0511 12:10:47.155059    3958 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:47 node-3 kubelet[3958]: I0511 12:10:47.155588    3958 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.3, apiVersion: v1alpha2
5月 11 12:10:48 node-3 kubelet[3958]: E0511 12:10:48.045695    3958 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:48 node-3 kubelet[3958]: E0511 12:10:48.171310    3958 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:48 node-3 kubelet[3958]: E0511 12:10:48.420922    3958 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:50 node-3 kubelet[3958]: E0511 12:10:50.886545    3958 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:51 node-3 kubelet[3958]: E0511 12:10:51.179362    3958 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:51 node-3 kubelet[3958]: E0511 12:10:51.292137    3958 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.170203    3958 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
5月 11 12:10:53 node-3 kubelet[3958]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.170407    3958 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.200.0.0/16
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172064    3958 kubelet_network.go:77] Setting Pod CIDR:  -> 10.200.0.0/16
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172273    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/aws-ebs"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172285    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/gce-pd"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172294    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/cinder"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172302    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/azure-disk"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172309    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/azure-file"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172319    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/vsphere-volume"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172328    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/empty-dir"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172337    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/git-repo"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172349    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/host-path"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172359    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/nfs"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172370    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/secret"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172379    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/iscsi"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172387    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/glusterfs"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172397    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/rbd"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172406    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/quobyte"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172416    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/cephfs"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172425    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/downward-api"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172434    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/fc"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172442    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/flocker"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172452    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/configmap"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172462    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/projected"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172487    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/portworx-volume"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172496    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/scaleio"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172512    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/local-volume"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172527    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/storageos"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172559    3958 plugins.go:635] Loaded volume plugin "kubernetes.io/csi"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.172715    3958 server.go:1176] Started kubelet
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.179968    3958 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.184165    3958 server.go:148] Starting to listen on 172.17.149.93:10250
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.184992    3958 server.go:410] Adding debug handlers to kubelet server.
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.188648    3958 volume_manager.go:269] The desired_state_of_world populator starts
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.188661    3958 volume_manager.go:271] Starting Kubelet Volume Manager
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.191555    3958 desired_state_of_world_populator.go:142] Desired state populator starts to run
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.193510    3958 reflector.go:219] Starting reflector *v1.CSIDriver (0s) from k8s.io/client-go/informers/factory.go:134
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.209930    3958 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.210011    3958 status_manager.go:158] Starting to sync pod status with apiserver
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.210045    3958 kubelet.go:1802] Starting kubelet main sync loop.
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.210086    3958 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.212496    3958 reflector.go:219] Starting reflector *v1.RuntimeClass (0s) from k8s.io/client-go/informers/factory.go:134
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.216118    3958 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"node-3.16edf193473fb96e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-3", UID:"node-3", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"node-3"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc096ebb34a4af76e, ext:11294632632, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc096ebb34a4af76e, ext:11294632632, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://127.0.0.1:6443/api/v1/namespaces/default/events": dial tcp 127.0.0.1:6443: connect: connection refused'(may retry after sleeping)
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.216275    3958 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csinodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.216500    3958 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.216565    3958 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node-3?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.216630    3958 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.216881    3958 client.go:86] parsed scheme: "unix"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.216891    3958 client.go:86] scheme "unix" not registered, fallback to default scheme
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.216912    3958 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.216920    3958 clientconn.go:948] ClientConn switching balancer to "pick_first"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.217074    3958 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0009e3d80, {CONNECTING <nil>}
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.217171    3958 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://127.0.0.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.222734    3958 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.222758    3958 kubelet.go:1274] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.222838    3958 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0009e3d80, {READY <nil>}
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.224464    3958 factory.go:137] Registering containerd factory
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.224567    3958 factory.go:55] Registering systemd factory
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.224704    3958 factory.go:101] Registering Raw factory
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.224825    3958 manager.go:1203] Started watching for new ooms in manager
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.225408    3958 manager.go:301] Starting recovery of all containers
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.236839    3958 manager.go:306] Recovery completed
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.295214    3958 kubelet_node_status.go:339] Setting node annotation to enable volume controller attach/detach
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.295401    3958 setters.go:86] Using node IP: "172.17.149.93"
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.295831    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.310303    3958 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.318957    3958 kubelet_node_status.go:339] Setting node annotation to enable volume controller attach/detach
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.319066    3958 setters.go:86] Using node IP: "172.17.149.93"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.319617    3958 kubelet_node_status.go:531] Recording NodeHasSufficientMemory event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.319637    3958 kubelet_node_status.go:531] Recording NodeHasNoDiskPressure event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.319645    3958 kubelet_node_status.go:531] Recording NodeHasSufficientPID event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.319689    3958 kubelet_node_status.go:71] Attempting to register node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.322353    3958 kubelet_node_status.go:531] Recording NodeHasSufficientMemory event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.322372    3958 kubelet_node_status.go:531] Recording NodeHasNoDiskPressure event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.322380    3958 kubelet_node_status.go:531] Recording NodeHasSufficientPID event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.322542    3958 kubelet_node_status.go:93] Unable to register node "node-3" with API server: Post "https://127.0.0.1:6443/api/v1/nodes": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.324120    3958 cpu_manager.go:193] [cpumanager] starting with none policy
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.324139    3958 cpu_manager.go:194] [cpumanager] reconciling every 10s
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.324170    3958 state_mem.go:36] [cpumanager] initializing new in-memory state store
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.324482    3958 state_mem.go:88] [cpumanager] updated default cpuset: ""
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.324493    3958 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.324522    3958 state_checkpoint.go:136] [cpumanager] state checkpoint: restored state from checkpoint
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.324529    3958 state_checkpoint.go:137] [cpumanager] state checkpoint: defaultCPUSet:
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.324538    3958 policy_none.go:43] [cpumanager] none policy: Start
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.327418    3958 manager.go:236] Starting Device Plugin manager
5月 11 12:10:53 node-3 kubelet[3958]: W0511 12:10:53.327450    3958 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.327586    3958 manager.go:278] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.327647    3958 plugin_watcher.go:52] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.327714    3958 plugin_manager.go:112] The desired_state_of_world populator (plugin watcher) starts
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.327720    3958 plugin_manager.go:114] Starting Kubelet Plugin Manager
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.335184    3958 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "node-3" not found
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.395954    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.417062    3958 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node-3?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.496122    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.510472    3958 kubelet.go:1888] SyncLoop (ADD, "file"): "nginx-proxy-node-3_kube-system(6b0063abcca88c78b1a1acbd0bd48914)"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.510577    3958 topology_manager.go:187] [topologymanager] Topology Admit Handler
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.510648    3958 kubelet_node_status.go:339] Setting node annotation to enable volume controller attach/detach
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.510760    3958 setters.go:86] Using node IP: "172.17.149.93"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.513572    3958 kubelet_node_status.go:531] Recording NodeHasSufficientMemory event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.513606    3958 kubelet_node_status.go:531] Recording NodeHasNoDiskPressure event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.513615    3958 kubelet_node_status.go:531] Recording NodeHasSufficientPID event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.513908    3958 kubelet_node_status.go:339] Setting node annotation to enable volume controller attach/detach
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.514032    3958 setters.go:86] Using node IP: "172.17.149.93"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.516573    3958 kubelet_node_status.go:531] Recording NodeHasSufficientMemory event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.516597    3958 kubelet_node_status.go:531] Recording NodeHasNoDiskPressure event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.516609    3958 kubelet_node_status.go:531] Recording NodeHasSufficientPID event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: W0511 12:10:53.517158    3958 status_manager.go:550] Failed to get status for pod "nginx-proxy-node-3_kube-system(6b0063abcca88c78b1a1acbd0bd48914)": Get "https://127.0.0.1:6443/api/v1/namespaces/kube-system/pods/nginx-proxy-node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.522654    3958 kubelet_node_status.go:339] Setting node annotation to enable volume controller attach/detach
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.522796    3958 setters.go:86] Using node IP: "172.17.149.93"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.524307    3958 kubelet_node_status.go:531] Recording NodeHasSufficientMemory event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.524327    3958 kubelet_node_status.go:531] Recording NodeHasNoDiskPressure event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.524336    3958 kubelet_node_status.go:531] Recording NodeHasSufficientPID event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.524360    3958 kubelet_node_status.go:71] Attempting to register node node-3
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.524664    3958 kubelet_node_status.go:93] Unable to register node "node-3" with API server: Post "https://127.0.0.1:6443/api/v1/nodes": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.595499    3958 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-nginx" (UniqueName: "kubernetes.io/host-path/6b0063abcca88c78b1a1acbd0bd48914-etc-nginx") pod "nginx-proxy-node-3" (UID: "6b0063abcca88c78b1a1acbd0bd48914")
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.596602    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.695718    3958 reconciler.go:269] operationExecutor.MountVolume started for volume "etc-nginx" (UniqueName: "kubernetes.io/host-path/6b0063abcca88c78b1a1acbd0bd48914-etc-nginx") pod "nginx-proxy-node-3" (UID: "6b0063abcca88c78b1a1acbd0bd48914")
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.695804    3958 operation_generator.go:672] MountVolume.SetUp succeeded for volume "etc-nginx" (UniqueName: "kubernetes.io/host-path/6b0063abcca88c78b1a1acbd0bd48914-etc-nginx") pod "nginx-proxy-node-3" (UID: "6b0063abcca88c78b1a1acbd0bd48914")
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.697142    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.797270    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.817038    3958 kuberuntime_manager.go:439] No sandbox for pod "nginx-proxy-node-3_kube-system(6b0063abcca88c78b1a1acbd0bd48914)" can be found. Need to start a new one
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.817687    3958 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node-3?timeout=10s": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.897403    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.924794    3958 kubelet_node_status.go:339] Setting node annotation to enable volume controller attach/detach
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.924986    3958 setters.go:86] Using node IP: "172.17.149.93"
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.926929    3958 kubelet_node_status.go:531] Recording NodeHasSufficientMemory event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.926972    3958 kubelet_node_status.go:531] Recording NodeHasNoDiskPressure event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.926981    3958 kubelet_node_status.go:531] Recording NodeHasSufficientPID event message for node node-3
5月 11 12:10:53 node-3 kubelet[3958]: I0511 12:10:53.927011    3958 kubelet_node_status.go:71] Attempting to register node node-3
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.927305    3958 kubelet_node_status.go:93] Unable to register node "node-3" with API server: Post "https://127.0.0.1:6443/api/v1/nodes": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:10:53 node-3 kubelet[3958]: E0511 12:10:53.997553    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:54 node-3 kubelet[3958]: E0511 12:10:54.097684    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:54 node-3 kubelet[3958]: E0511 12:10:54.197818    3958 kubelet.go:2243] node "node-3" not found
5月 11 12:10:54 node-3 kubelet[3958]: I0511 12:10:54.216788    3958 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csinodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused

1.2 journalctl -f -u kube-proxy:

-- Logs begin at 二 2021-11-30 15:21:42 CST. --
5月 11 12:13:16 node-3 systemd[1]: Started Kubernetes Kube Proxy.
5月 11 12:13:17 node-3 kube-proxy[4027]: E0511 12:13:17.008732    4027 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:13:18 node-3 kube-proxy[4027]: E0511 12:13:18.095225    4027 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:13:20 node-3 kube-proxy[4027]: E0511 12:13:20.168963    4027 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:13:24 node-3 kube-proxy[4027]: E0511 12:13:24.652021    4027 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:13:29 node-3 systemd[1]: Stopping Kubernetes Kube Proxy...
5月 11 12:13:29 node-3 systemd[1]: Stopped Kubernetes Kube Proxy.
5月 11 12:13:29 node-3 systemd[1]: Started Kubernetes Kube Proxy.
5月 11 12:13:29 node-3 kube-proxy[4079]: E0511 12:13:29.281090    4079 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:13:30 node-3 kube-proxy[4079]: E0511 12:13:30.314163    4079 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:13:32 node-3 kube-proxy[4079]: E0511 12:13:32.470559    4079 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:13:37 node-3 kube-proxy[4079]: E0511 12:13:37.183627    4079 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused
5月 11 12:13:46 node-3 kube-proxy[4079]: E0511 12:13:46.010695    4079 node.go:161] Failed to retrieve node info: Get "https://127.0.0.1:6443/api/v1/nodes/node-3": dial tcp 127.0.0.1:6443: connect: connection refused

1.3 netstat -tnpl:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      4078/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      4079/kube-proxy     
tcp        0      0 172.17.149.93:10250     0.0.0.0:*               LISTEN      4078/kubelet        
tcp        0      0 127.0.0.1:42602         0.0.0.0:*               LISTEN      1476/containerd     
tcp        0      0 172.17.149.93:2379      0.0.0.0:*               LISTEN      1318/etcd           
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      1318/etcd           
tcp        0      0 172.17.149.93:2380      0.0.0.0:*               LISTEN      1318/etcd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      22301/sshd          
tcp6       0      0 :::10256                :::*                    LISTEN      4079/kube-proxy  
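
The netstat output is the key clue: nothing at all is listening on port 6443. In this style of deployment (kubespray-like), each worker node runs a local nginx-proxy static pod that listens on 127.0.0.1:6443 and forwards to the real kube-apiservers, so with that container down every component dialing https://127.0.0.1:6443 gets "connection refused". A quick check might look like this (the container name and proxy arrangement are assumptions about this setup):

```shell
# Is anything serving 127.0.0.1:6443 locally? (nothing, per the netstat above)
ss -tnlp 2>/dev/null | grep ':6443' || echo "nothing listening on 6443"

# Is the local nginx-proxy container running? (container name is an
# assumption for a kubespray-style worker node)
docker ps 2>/dev/null | grep nginx-proxy || echo "nginx-proxy container not running"
```

If both checks come up empty, the problem is not the API server itself but the local proxy pod that should be fronting it.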


3 Answers

qq_imba的bug_zBIZ29 2023-01-03 19:04:14

Hi, how did you end up solving this problem?

OP 小小太阳秦 2022-05-12 09:50:59

The root cause was that the pause image could not be pulled. Pulling it manually as described in the instructor's notes and then restarting kubelet fixed it.
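
The accepted fix can be done by hand along these lines. The mirror registry and tag below are assumptions (a common choice for hosts that cannot reach k8s.gcr.io); the target name must match whatever the kubelet expects, e.g. its --pod-infra-container-image setting:

```shell
# Assumed mirror and tag -- adjust to the tag your kubelet asks for
# (check --pod-infra-container-image in the kubelet unit/config).
PAUSE_MIRROR="registry.aliyuncs.com/google_containers/pause:3.2"
PAUSE_TARGET="k8s.gcr.io/pause:3.2"

# Pull from the reachable mirror, retag to the name the kubelet uses,
# then restart kubelet so it retries creating the pod sandbox.
docker pull "$PAUSE_MIRROR" &&
  docker tag "$PAUSE_MIRROR" "$PAUSE_TARGET" &&
  systemctl restart kubelet ||
  echo "pull/tag/restart failed -- check registry reachability"
```

After the restart, `docker ps` should show the pause and nginx-proxy containers, and the 127.0.0.1:6443 listener should reappear in netstat.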


刘果国 2022-05-12 09:33:36

Check the static-pod manifest files ( /etc/kubernetes/manifest ).
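
To follow up on this suggestion, one can verify that the kubelet's static-pod directory actually contains the nginx-proxy manifest. The directory name below is taken from the answer as-is (on many installs it is spelled differently); the authoritative path is whatever the kubelet's staticPodPath / --pod-manifest-path points at:

```shell
# List the static-pod manifests the answer refers to.
ls -l /etc/kubernetes/manifest 2>/dev/null || echo "manifest dir not found"

# Find where the kubelet is actually told to look for static pods.
grep -rE "staticPodPath|pod-manifest-path" \
  /etc/kubernetes /var/lib/kubelet 2>/dev/null || true
```

If the directory the kubelet watches is empty or missing the nginx-proxy manifest, the proxy pod will never be created and 127.0.0.1:6443 stays closed.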
