6. Deploying the Kubernetes worker node fails

2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759863 13531 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759889 13531 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:438
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.760203 13531 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000cd3260, {CONNECTING <nil>}
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.760366 13531 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000cd3400, {CONNECTING <nil>}
2月 07 16:24:14 node-3 kubelet[13531]: E0207 16:24:14.762863 13531 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.762889 13531 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
2月 07 16:24:14 node-3 kubelet[13531]: E0207 16:24:14.766110 13531 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.766312 13531 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.766472 13531 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000cd3260, {READY <nil>}
2月 07 16:24:14 node-3 kubelet[13531]: E0207 16:24:14.766615 13531 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused

Teacher, could you please take a look when you have a moment? I've been stuck on this for a long time!
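For reference, the repeated "dial tcp 127.0.0.1:6443: connect: connection refused" means nothing on the node is listening on local port 6443, which is the address this kubelet is configured to use for the apiserver. A minimal check, assuming stock CentOS 7 tooling, might look like:

    # Is anything listening on the local apiserver endpoint?
    ss -lntp | grep 6443

    # If a listener exists, does it answer over TLS?
    curl -k https://127.0.0.1:6443/healthz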


1 answer

Asker weixin_慕勒5178051 2022-02-07 16:48:13

[root@node-3 ~]# journalctl -f -u kubelet

-- Logs begin at 二 2021-11-30 15:21:35 CST. --
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.709871   13531 feature_gate.go:243] feature gates: &{map[]}
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.711733   13531 feature_gate.go:243] feature gates: &{map[]}
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.711819   13531 feature_gate.go:243] feature gates: &{map[]}
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.716596   13531 mount_linux.go:202] Detected OS with systemd
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.716779   13531 server.go:416] Version: v1.20.2
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.716832   13531 feature_gate.go:243] feature gates: &{map[]}
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.716928   13531 feature_gate.go:243] feature gates: &{map[]}
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.753459   13531 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for "client-ca-bundle::/etc/kubernetes/ssl/ca.pem"
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.753681   13531 manager.go:165] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
2月 07 16:24:09 node-3 kubelet[13531]: I0207 16:24:09.754691   13531 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/ssl/ca.pem
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.756450   13531 fs.go:127] Filesystem UUIDs: map[b98386f1-e6a8-44e3-9ce1-a50e59d9a170:/dev/vda1]
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.756480   13531 fs.go:128] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:19 fsType:tmpfs blockSize:0} /dev/vda1:{mountpoint:/ major:253 minor:1 fsType:ext4 blockSize:0} /run:{mountpoint:/run major:0 minor:20 fsType:tmpfs blockSize:0} /run/user/0:{mountpoint:/run/user/0 major:0 minor:37 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:21 fsType:tmpfs blockSize:0}]
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.756750   13531 nvidia.go:61] NVIDIA setup failed: no NVIDIA devices found
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.758287   13531 manager.go:213] Machine: {Timestamp:2022-02-07 16:24:14.758126403 +0800 CST m=+5.106233768 NumCores:2 NumPhysicalCores:1 NumSockets:1 CpuFrequency:2499998 MemoryCapacity:1819570176 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:20181129113200424400422638950048 SystemUUID:88466AA5-2A47-4D02-BC8A-2DD9EECFFBA0 BootID:c22c1661-ece6-436b-bec2-266e9a4eae31 Filesystems:[{Device:/dev/vda1 DeviceMajor:253 DeviceMinor:1 Capacity:42140479488 Type:vfs Inodes:2621440 HasInodes:true} {Device:/run/user/0 DeviceMajor:0 DeviceMinor:37 Capacity:181960704 Type:vfs Inodes:222115 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:19 Capacity:909783040 Type:vfs Inodes:222115 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:20 Capacity:909783040 Type:vfs Inodes:222115 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:21 Capacity:909783040 Type:vfs Inodes:222115 HasInodes:true}] DiskMap:map[253:0:{Name:vda Major:253 Minor:0 Size:42949672960 Scheduler:mq-deadline}] NetworkDevices:[{Name:eth0 MacAddress:00:16:3e:12:8e:41 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:2038743040 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:1048576 Type:Unified Level:2}] SocketID:0}] Caches:[{Size:37486592 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.758428   13531 manager_no_libpfm.go:28] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.758613   13531 manager.go:229] Version: {KernelVersion:3.10.0-862.14.4.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.758714   13531 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759078   13531 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759094   13531 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:512 scale:6} d:{Dec:<nil>} s:512M Format:DecimalSI}] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759183   13531 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759191   13531 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759196   13531 container_manager_linux.go:315] Creating device plugin manager: true
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759206   13531 manager.go:133] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759307   13531 remote_runtime.go:62] parsed scheme: ""
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759317   13531 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759342   13531 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759353   13531 clientconn.go:948] ClientConn switching balancer to "pick_first"
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759397   13531 remote_image.go:50] parsed scheme: ""
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759401   13531 remote_image.go:50] scheme "" not registered, fallback to default scheme
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759409   13531 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759414   13531 clientconn.go:948] ClientConn switching balancer to "pick_first"
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759434   13531 server.go:1117] Using root directory: /var/lib/kubelet
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759451   13531 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759473   13531 file.go:68] Watching path "/etc/kubernetes/manifests"
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759505   13531 kubelet.go:273] Watching apiserver
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759863   13531 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.759889   13531 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:438
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.760203   13531 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000cd3260, {CONNECTING <nil>}
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.760366   13531 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000cd3400, {CONNECTING <nil>}
2月 07 16:24:14 node-3 kubelet[13531]: E0207 16:24:14.762863   13531 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
2月 07 16:24:14 node-3 kubelet[13531]: I0207 16:24:14.762889   13531 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
2月 07 16:24:14 node-3 kubelet[13531]: E0207 16:24:14.766110   13531 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dnode-3&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
2月 07 16:24:


Here is the complete log!

  • The error is that the apiserver proxy cannot be reached. The proxy is a static pod brought up by the kubelet, and its manifest lives in this directory: /etc/kubernetes/manifests. Troubleshoot along those lines (a sketch is included below these replies).
    2022-02-08 10:12:23
  • 亦生云 replying to 刘果国 #2
    Which of the commands you mentioned automatically brings up the nginx-proxy pod?
    2022-08-27 20:13:49
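A minimal sketch of that troubleshooting path, assuming the proxy static pod is the nginx-proxy mentioned in the reply above and that the runtime is containerd (the kubelet log dials /var/run/containerd/containerd.sock). Note that no separate command pulls the pod up: the kubelet itself watches the manifest directory ("Adding pod path: /etc/kubernetes/manifests" in the log) and starts whatever pods are defined there.

    # Is there a manifest for the proxy in the static pod directory?
    ls -l /etc/kubernetes/manifests/

    # Is the proxy container actually running under containerd?
    crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep -i proxy

    # If it is missing or crash-looping, look for related kubelet messages:
    journalctl -u kubelet | grep -iE 'manifest|static|proxy'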