What is port 10250? Unable to run logs or exec commands against one node

Running kubectl logs or kubectl exec against any Pod on this node fails with a connection error to port 10250:

kubectl logs -n kube-system coredns-84646c885d-v9wxl
Error from server: Get "https://k-node-1:10250/containerLogs/kube-system/coredns-84646c885d-v9wxl/coredns": dial tcp 127.0.1.1:10250: connect: connection refused
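
(Two quick checks are worth doing here. "connection refused" means nothing is listening on 10250, and the dial target 127.0.1.1 suggests the hostname k-node-1 resolves to a loopback address, most likely via /etc/hosts. A sketch, run on the control-plane node, since it is the apiserver that dials the kubelet:)

getent hosts k-node-1              # does k-node-1 resolve to 127.0.1.1?
curl -k https://k-node-1:10250/    # a healthy kubelet typically answers HTTP 401/403, not "connection refused"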

I suspect a network problem. Here is some information:

kubectl get pod -n kube-system
NAME                                       READY   STATUS        RESTARTS   AGE
calico-kube-controllers-7f4f5bf95d-ndvmk   1/1     Running       2          25d
calico-node-84xgx                          0/1     Running       0          26d
calico-node-lpqht                          1/1     Running       22         26d
coredns-84646c885d-4jj58                   0/1     Pending       0          15h
coredns-84646c885d-hz79w                   1/1     Running       0          26d
coredns-84646c885d-v9wxl                   0/1     Terminating   0          16h
nginx-proxy-k-node-2                       1/1     Running       0          26d
nodelocaldns-cxw59                         1/1     Running       0          26d
nodelocaldns-skt86                         1/1     Running       0          26d
kubectl describe pod -n kube-system coredns-84646c885d-v9wxl
Name:                      coredns-84646c885d-v9wxl
Namespace:                 kube-system
Priority:                  2000000000
Priority Class Name:       system-cluster-critical
Node:                      k-node-1/
Labels:                    k8s-app=kube-dns
                           pod-template-hash=84646c885d
Annotations:               seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:                    Terminating (lasts 15h)
Termination Grace Period:  30s
...
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  15h (x6 over 15h)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.
  Normal   Scheduled         15h                default-scheduler  Successfully assigned kube-system/coredns-84646c885d-v9wxl to k-node-1

The kubelet process exists:

ps -ef | grep kubelet
root      6842     1  6 02:30 ?        00:00:00 /usr/local/bin/kubelet --config=/etc/kubernetes/kubelet-config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --image-pull-progress-deadline=2m --kubeconfig=/etc/kubernetes/kubeconfig --network-plugin=cni --node-ip=192.168.211.11 --register-node=true --v=2

But the port is not listening (on node 2 I can see this port being listened on by kubelet); the command below returns nothing:

ss -ntlp | grep 10250

The healthzPort 10248 from the kubelet config is not listening either:

ss -ntlp | grep 10248

I tried force-deleting the Terminating CoreDNS Pod, but the recreated Pod ends up in the same state shown above.
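
(A force delete typically looks like this; the exact command isn't shown in the thread:)

kubectl delete pod coredns-84646c885d-v9wxl -n kube-system --grace-period=0 --force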

New discovery: my kubelet keeps restarting. It runs for a few seconds and then restarts again.

Before this happened, I had changed the Linux time and timezone to Beijing time, and then it started behaving like this…

How should I go about troubleshooting this?
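
(A reasonable first step, assuming kubelet runs as a systemd unit named kubelet: check the unit state and pull the logs from the most recent start attempts.)

systemctl status kubelet                  # "activating (auto-restart)" confirms a crash loop and shows the last exit code
journalctl -u kubelet -n 200 --no-pager   # the most recent 200 log lines, covering the failed starts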

Here is my startup log:

/usr/local/bin/kubelet --config=/etc/kubernetes/kubelet-config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --image-pull-progress-deadline=2m --kubeconfig=/etc/kubernetes/kubeconfig --network-plugin=cni --node-ip=192.168.211.11 --register-node=true --v=2
I0709 03:49:22.080261   11484 flags.go:59] FLAG: --add-dir-header="false"
I0709 03:49:22.080327   11484 flags.go:59] FLAG: --address="0.0.0.0"
I0709 03:49:22.080333   11484 flags.go:59] FLAG: --allowed-unsafe-sysctls="[]"
I0709 03:49:22.080341   11484 flags.go:59] FLAG: --alsologtostderr="false"
I0709 03:49:22.080345   11484 flags.go:59] FLAG: --anonymous-auth="true"
I0709 03:49:22.080351   11484 flags.go:59] FLAG: --application-metrics-count-limit="100"
I0709 03:49:22.080355   11484 flags.go:59] FLAG: --authentication-token-webhook="false"
I0709 03:49:22.080359   11484 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
I0709 03:49:22.080365   11484 flags.go:59] FLAG: --authorization-mode="AlwaysAllow"
I0709 03:49:22.080370   11484 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
I0709 03:49:22.080374   11484 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
I0709 03:49:22.080378   11484 flags.go:59] FLAG: --azure-container-registry-config=""
I0709 03:49:22.080381   11484 flags.go:59] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I0709 03:49:22.080387   11484 flags.go:59] FLAG: --bootstrap-kubeconfig=""
I0709 03:49:22.080390   11484 flags.go:59] FLAG: --cert-dir="/var/lib/kubelet/pki"
I0709 03:49:22.080395   11484 flags.go:59] FLAG: --cgroup-driver="cgroupfs"
I0709 03:49:22.080399   11484 flags.go:59] FLAG: --cgroup-root=""
I0709 03:49:22.080403   11484 flags.go:59] FLAG: --cgroups-per-qos="true"
I0709 03:49:22.080408   11484 flags.go:59] FLAG: --chaos-chance="0"
I0709 03:49:22.080413   11484 flags.go:59] FLAG: --client-ca-file=""
I0709 03:49:22.080417   11484 flags.go:59] FLAG: --cloud-config=""
I0709 03:49:22.080421   11484 flags.go:59] FLAG: --cloud-provider=""
I0709 03:49:22.080425   11484 flags.go:59] FLAG: --cluster-dns="[]"
I0709 03:49:22.080433   11484 flags.go:59] FLAG: --cluster-domain=""
I0709 03:49:22.080437   11484 flags.go:59] FLAG: --cni-bin-dir="/opt/cni/bin"
I0709 03:49:22.080441   11484 flags.go:59] FLAG: --cni-cache-dir="/var/lib/cni/cache"
I0709 03:49:22.080445   11484 flags.go:59] FLAG: --cni-conf-dir="/etc/cni/net.d"
I0709 03:49:22.080449   11484 flags.go:59] FLAG: --config="/etc/kubernetes/kubelet-config.yaml"
I0709 03:49:22.080454   11484 flags.go:59] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
I0709 03:49:22.080459   11484 flags.go:59] FLAG: --container-log-max-files="5"
I0709 03:49:22.080464   11484 flags.go:59] FLAG: --container-log-max-size="10Mi"
I0709 03:49:22.080468   11484 flags.go:59] FLAG: --container-runtime="remote"
I0709 03:49:22.080474   11484 flags.go:59] FLAG: --container-runtime-endpoint="unix:///var/run/containerd/containerd.sock"
I0709 03:49:22.080479   11484 flags.go:59] FLAG: --containerd="/run/containerd/containerd.sock"
I0709 03:49:22.080484   11484 flags.go:59] FLAG: --containerd-namespace="k8s.io"
I0709 03:49:22.080488   11484 flags.go:59] FLAG: --contention-profiling="false"
I0709 03:49:22.080492   11484 flags.go:59] FLAG: --cpu-cfs-quota="true"
I0709 03:49:22.080496   11484 flags.go:59] FLAG: --cpu-cfs-quota-period="100ms"
I0709 03:49:22.080501   11484 flags.go:59] FLAG: --cpu-manager-policy="none"
I0709 03:49:22.080505   11484 flags.go:59] FLAG: --cpu-manager-reconcile-period="10s"
I0709 03:49:22.080509   11484 flags.go:59] FLAG: --docker="unix:///var/run/docker.sock"
I0709 03:49:22.080513   11484 flags.go:59] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
I0709 03:49:22.080518   11484 flags.go:59] FLAG: --docker-env-metadata-whitelist=""
I0709 03:49:22.080522   11484 flags.go:59] FLAG: --docker-only="false"
I0709 03:49:22.080525   11484 flags.go:59] FLAG: --docker-root="/var/lib/docker"
I0709 03:49:22.080530   11484 flags.go:59] FLAG: --docker-tls="false"
I0709 03:49:22.080534   11484 flags.go:59] FLAG: --docker-tls-ca="ca.pem"
I0709 03:49:22.080538   11484 flags.go:59] FLAG: --docker-tls-cert="cert.pem"
I0709 03:49:22.080541   11484 flags.go:59] FLAG: --docker-tls-key="key.pem"
I0709 03:49:22.080546   11484 flags.go:59] FLAG: --dynamic-config-dir=""
I0709 03:49:22.080553   11484 flags.go:59] FLAG: --enable-cadvisor-json-endpoints="false"
I0709 03:49:22.080557   11484 flags.go:59] FLAG: --enable-controller-attach-detach="true"
I0709 03:49:22.080561   11484 flags.go:59] FLAG: --enable-debugging-handlers="true"
I0709 03:49:22.080565   11484 flags.go:59] FLAG: --enable-load-reader="false"
I0709 03:49:22.080568   11484 flags.go:59] FLAG: --enable-server="true"
I0709 03:49:22.080573   11484 flags.go:59] FLAG: --enforce-node-allocatable="[pods]"
I0709 03:49:22.080578   11484 flags.go:59] FLAG: --event-burst="10"
I0709 03:49:22.080582   11484 flags.go:59] FLAG: --event-qps="5"
I0709 03:49:22.080586   11484 flags.go:59] FLAG: --event-storage-age-limit="default=0"
I0709 03:49:22.080590   11484 flags.go:59] FLAG: --event-storage-event-limit="default=0"
I0709 03:49:22.080594   11484 flags.go:59] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
I0709 03:49:22.080607   11484 flags.go:59] FLAG: --eviction-max-pod-grace-period="0"
I0709 03:49:22.080611   11484 flags.go:59] FLAG: --eviction-minimum-reclaim=""
I0709 03:49:22.080616   11484 flags.go:59] FLAG: --eviction-pressure-transition-period="5m0s"
I0709 03:49:22.080620   11484 flags.go:59] FLAG: --eviction-soft=""
I0709 03:49:22.080625   11484 flags.go:59] FLAG: --eviction-soft-grace-period=""
I0709 03:49:22.080629   11484 flags.go:59] FLAG: --exit-on-lock-contention="false"
I0709 03:49:22.080633   11484 flags.go:59] FLAG: --experimental-allocatable-ignore-eviction="false"
I0709 03:49:22.080637   11484 flags.go:59] FLAG: --experimental-bootstrap-kubeconfig=""
I0709 03:49:22.080641   11484 flags.go:59] FLAG: --experimental-check-node-capabilities-before-mount="false"
I0709 03:49:22.080647   11484 flags.go:59] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
I0709 03:49:22.080651   11484 flags.go:59] FLAG: --experimental-kernel-memcg-notification="false"
I0709 03:49:22.080655   11484 flags.go:59] FLAG: --experimental-logging-sanitization="false"
I0709 03:49:22.080659   11484 flags.go:59] FLAG: --experimental-mounter-path=""
I0709 03:49:22.080663   11484 flags.go:59] FLAG: --fail-swap-on="true"
I0709 03:49:22.080667   11484 flags.go:59] FLAG: --feature-gates=""
I0709 03:49:22.080672   11484 flags.go:59] FLAG: --file-check-frequency="20s"
I0709 03:49:22.080676   11484 flags.go:59] FLAG: --global-housekeeping-interval="1m0s"
I0709 03:49:22.080680   11484 flags.go:59] FLAG: --hairpin-mode="promiscuous-bridge"
I0709 03:49:22.080684   11484 flags.go:59] FLAG: --healthz-bind-address="127.0.0.1"
I0709 03:49:22.084397   11484 flags.go:59] FLAG: --healthz-port="10248"
I0709 03:49:22.084415   11484 flags.go:59] FLAG: --help="false"
I0709 03:49:22.084421   11484 flags.go:59] FLAG: --hostname-override=""
I0709 03:49:22.084425   11484 flags.go:59] FLAG: --housekeeping-interval="10s"
I0709 03:49:22.084431   11484 flags.go:59] FLAG: --http-check-frequency="20s"
I0709 03:49:22.084436   11484 flags.go:59] FLAG: --image-credential-provider-bin-dir=""
I0709 03:49:22.084440   11484 flags.go:59] FLAG: --image-credential-provider-config=""
I0709 03:49:22.084444   11484 flags.go:59] FLAG: --image-gc-high-threshold="85"
I0709 03:49:22.084448   11484 flags.go:59] FLAG: --image-gc-low-threshold="80"
I0709 03:49:22.084452   11484 flags.go:59] FLAG: --image-pull-progress-deadline="2m0s"
I0709 03:49:22.084457   11484 flags.go:59] FLAG: --image-service-endpoint=""
I0709 03:49:22.084460   11484 flags.go:59] FLAG: --iptables-drop-bit="15"
I0709 03:49:22.084465   11484 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I0709 03:49:22.084469   11484 flags.go:59] FLAG: --keep-terminated-pod-volumes="false"
I0709 03:49:22.084473   11484 flags.go:59] FLAG: --kernel-memcg-notification="false"
I0709 03:49:22.084478   11484 flags.go:59] FLAG: --kube-api-burst="10"
I0709 03:49:22.084482   11484 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0709 03:49:22.084487   11484 flags.go:59] FLAG: --kube-api-qps="5"
I0709 03:49:22.084491   11484 flags.go:59] FLAG: --kube-reserved=""
I0709 03:49:22.084497   11484 flags.go:59] FLAG: --kube-reserved-cgroup=""
I0709 03:49:22.084501   11484 flags.go:59] FLAG: --kubeconfig="/etc/kubernetes/kubeconfig"
I0709 03:49:22.084506   11484 flags.go:59] FLAG: --kubelet-cgroups=""
I0709 03:49:22.084510   11484 flags.go:59] FLAG: --lock-file=""
I0709 03:49:22.084514   11484 flags.go:59] FLAG: --log-backtrace-at=":0"
I0709 03:49:22.084521   11484 flags.go:59] FLAG: --log-cadvisor-usage="false"
I0709 03:49:22.084525   11484 flags.go:59] FLAG: --log-dir=""
I0709 03:49:22.084529   11484 flags.go:59] FLAG: --log-file=""
I0709 03:49:22.084533   11484 flags.go:59] FLAG: --log-file-max-size="1800"
I0709 03:49:22.084537   11484 flags.go:59] FLAG: --log-flush-frequency="5s"
I0709 03:49:22.084545   11484 flags.go:59] FLAG: --logging-format="text"
I0709 03:49:22.084550   11484 flags.go:59] FLAG: --logtostderr="true"
I0709 03:49:22.084554   11484 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I0709 03:49:22.084560   11484 flags.go:59] FLAG: --make-iptables-util-chains="true"
I0709 03:49:22.084564   11484 flags.go:59] FLAG: --manifest-url=""
I0709 03:49:22.084568   11484 flags.go:59] FLAG: --manifest-url-header=""
I0709 03:49:22.084677   11484 flags.go:59] FLAG: --master-service-namespace="default"
I0709 03:49:22.084684   11484 flags.go:59] FLAG: --max-open-files="1000000"
I0709 03:49:22.084691   11484 flags.go:59] FLAG: --max-pods="110"
I0709 03:49:22.084695   11484 flags.go:59] FLAG: --maximum-dead-containers="-1"
I0709 03:49:22.084700   11484 flags.go:59] FLAG: --maximum-dead-containers-per-container="1"
I0709 03:49:22.084704   11484 flags.go:59] FLAG: --minimum-container-ttl-duration="0s"
I0709 03:49:22.084708   11484 flags.go:59] FLAG: --minimum-image-ttl-duration="2m0s"
I0709 03:49:22.084713   11484 flags.go:59] FLAG: --network-plugin="cni"
I0709 03:49:22.084717   11484 flags.go:59] FLAG: --network-plugin-mtu="0"
I0709 03:49:22.084721   11484 flags.go:59] FLAG: --node-ip="192.168.211.11"
I0709 03:49:22.084725   11484 flags.go:59] FLAG: --node-labels=""
I0709 03:49:22.084730   11484 flags.go:59] FLAG: --node-status-max-images="50"
I0709 03:49:22.084734   11484 flags.go:59] FLAG: --node-status-update-frequency="10s"
I0709 03:49:22.084739   11484 flags.go:59] FLAG: --non-masquerade-cidr="10.0.0.0/8"
I0709 03:49:22.084743   11484 flags.go:59] FLAG: --one-output="false"
I0709 03:49:22.084747   11484 flags.go:59] FLAG: --oom-score-adj="-999"
I0709 03:49:22.084751   11484 flags.go:59] FLAG: --pod-cidr=""
I0709 03:49:22.084755   11484 flags.go:59] FLAG: --pod-infra-container-image="k8s.gcr.io/pause:3.2"
I0709 03:49:22.084759   11484 flags.go:59] FLAG: --pod-manifest-path=""
I0709 03:49:22.086099   11484 flags.go:59] FLAG: --pod-max-pids="-1"
I0709 03:49:22.086219   11484 flags.go:59] FLAG: --pods-per-core="0"
I0709 03:49:22.086226   11484 flags.go:59] FLAG: --port="10250"
I0709 03:49:22.086231   11484 flags.go:59] FLAG: --protect-kernel-defaults="false"
I0709 03:49:22.086236   11484 flags.go:59] FLAG: --provider-id=""
I0709 03:49:22.086240   11484 flags.go:59] FLAG: --qos-reserved=""
I0709 03:49:22.086247   11484 flags.go:59] FLAG: --read-only-port="10255"
I0709 03:49:22.086252   11484 flags.go:59] FLAG: --really-crash-for-testing="false"
I0709 03:49:22.086256   11484 flags.go:59] FLAG: --redirect-container-streaming="false"
I0709 03:49:22.086260   11484 flags.go:59] FLAG: --register-node="true"
I0709 03:49:22.086264   11484 flags.go:59] FLAG: --register-schedulable="true"
I0709 03:49:22.086268   11484 flags.go:59] FLAG: --register-with-taints=""
I0709 03:49:22.086283   11484 flags.go:59] FLAG: --registry-burst="10"
I0709 03:49:22.086287   11484 flags.go:59] FLAG: --registry-qps="5"
I0709 03:49:22.086295   11484 flags.go:59] FLAG: --reserved-cpus=""
I0709 03:49:22.086380   11484 flags.go:59] FLAG: --resolv-conf="/etc/resolv.conf"
I0709 03:49:22.086394   11484 flags.go:59] FLAG: --root-dir="/var/lib/kubelet"
I0709 03:49:22.086401   11484 flags.go:59] FLAG: --rotate-certificates="false"
I0709 03:49:22.086407   11484 flags.go:59] FLAG: --rotate-server-certificates="false"
I0709 03:49:22.086412   11484 flags.go:59] FLAG: --runonce="false"
I0709 03:49:22.086418   11484 flags.go:59] FLAG: --runtime-cgroups=""
I0709 03:49:22.086423   11484 flags.go:59] FLAG: --runtime-request-timeout="2m0s"
I0709 03:49:22.086430   11484 flags.go:59] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
I0709 03:49:22.086436   11484 flags.go:59] FLAG: --serialize-image-pulls="true"
I0709 03:49:22.086442   11484 flags.go:59] FLAG: --skip-headers="false"
I0709 03:49:22.086450   11484 flags.go:59] FLAG: --skip-log-headers="false"
I0709 03:49:22.086456   11484 flags.go:59] FLAG: --stderrthreshold="2"
I0709 03:49:22.086465   11484 flags.go:59] FLAG: --storage-driver-buffer-duration="1m0s"
I0709 03:49:22.086471   11484 flags.go:59] FLAG: --storage-driver-db="cadvisor"
I0709 03:49:22.086476   11484 flags.go:59] FLAG: --storage-driver-host="localhost:8086"
I0709 03:49:22.086480   11484 flags.go:59] FLAG: --storage-driver-password="root"
I0709 03:49:22.086484   11484 flags.go:59] FLAG: --storage-driver-secure="false"
I0709 03:49:22.086488   11484 flags.go:59] FLAG: --storage-driver-table="stats"
I0709 03:49:22.086492   11484 flags.go:59] FLAG: --storage-driver-user="root"
I0709 03:49:22.086496   11484 flags.go:59] FLAG: --streaming-connection-idle-timeout="4h0m0s"
I0709 03:49:22.086501   11484 flags.go:59] FLAG: --sync-frequency="1m0s"
I0709 03:49:22.086505   11484 flags.go:59] FLAG: --system-cgroups=""
I0709 03:49:22.086509   11484 flags.go:59] FLAG: --system-reserved=""
I0709 03:49:22.086514   11484 flags.go:59] FLAG: --system-reserved-cgroup=""
I0709 03:49:22.086518   11484 flags.go:59] FLAG: --tls-cert-file=""
I0709 03:49:22.086522   11484 flags.go:59] FLAG: --tls-cipher-suites="[]"
I0709 03:49:22.086532   11484 flags.go:59] FLAG: --tls-min-version=""
I0709 03:49:22.086536   11484 flags.go:59] FLAG: --tls-private-key-file=""
I0709 03:49:22.086539   11484 flags.go:59] FLAG: --topology-manager-policy="none"
I0709 03:49:22.086543   11484 flags.go:59] FLAG: --topology-manager-scope="container"
I0709 03:49:22.086548   11484 flags.go:59] FLAG: --v="2"
I0709 03:49:22.086552   11484 flags.go:59] FLAG: --version="false"
I0709 03:49:22.086563   11484 flags.go:59] FLAG: --vmodule=""
I0709 03:49:22.086568   11484 flags.go:59] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
I0709 03:49:22.086573   11484 flags.go:59] FLAG: --volume-stats-agg-period="1m0s"
I0709 03:49:22.086627   11484 feature_gate.go:243] feature gates: &{map[]}
I0709 03:49:22.089107   11484 feature_gate.go:243] feature gates: &{map[]}
I0709 03:49:22.089228   11484 feature_gate.go:243] feature gates: &{map[]}
I0709 03:49:22.102575   11484 mount_linux.go:202] Detected OS with systemd
I0709 03:49:22.102728   11484 server.go:416] Version: v1.20.2
I0709 03:49:22.102780   11484 feature_gate.go:243] feature gates: &{map[]}
I0709 03:49:22.102843   11484 feature_gate.go:243] feature gates: &{map[]}
I0709 03:49:22.148280   11484 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for "client-ca-bundle::/etc/kubernetes/ssl/ca.pem"
I0709 03:49:22.148939   11484 manager.go:165] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/user.slice"
I0709 03:49:22.149478   11484 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/ssl/ca.pem
I0709 03:49:27.154891   11484 fs.go:127] Filesystem UUIDs: map[1c419d6c-5064-4a2b-953c-05b2c67edb15:/dev/sda1]
I0709 03:49:27.154925   11484 fs.go:128] Filesystem partitions: map[/dev/sda1:{mountpoint:/ major:8 minor:1 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:18 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:19 fsType:tmpfs blockSize:0} /run/user/0:{mountpoint:/run/user/0 major:0 minor:38 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:20 fsType:tmpfs blockSize:0}]
I0709 03:49:27.155149   11484 nvidia.go:61] NVIDIA setup failed: no NVIDIA devices found
I0709 03:49:27.156488   11484 manager.go:213] Machine: {Timestamp:2021-07-09 03:49:27.156326277 +0800 CST m=+5.171046720 NumCores:1 NumPhysicalCores:1 NumSockets:1 CpuFrequency:2208000 MemoryCapacity:1927241728 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:2048 NumPages:0}] MachineID:17796892b4409a4a8e63a6471ed11910 SystemUUID:17796892-B440-9A4A-8E63-A6471ED11910 BootID:2434e3a7-8d05-4622-9bb4-af6ed161449c Filesystems:[{Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:42927656960 Type:vfs Inodes:20971008 HasInodes:true} {Device:/run/user/0 DeviceMajor:0 DeviceMinor:38 Capacity:192724992 Type:vfs Inodes:235259 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:18 Capacity:963620864 Type:vfs Inodes:235259 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:19 Capacity:963620864 Type:vfs Inodes:235259 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:20 Capacity:963620864 Type:vfs Inodes:235259 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:42949672960 Scheduler:noop}] NetworkDevices:[{Name:dummy0 MacAddress:12:f5:3b:b1:4e:c1 Speed:0 Mtu:1500} {Name:eth0 MacAddress:52:54:00:4d:77:d3 Speed:1000 Mtu:1500} {Name:eth1 MacAddress:08:00:27:ad:0b:da Speed:1000 Mtu:1500} {Name:kube-ipvs0 MacAddress:4e:f5:79:e5:d6:85 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:1927241728 HugePages:[{PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}] SocketID:0}] Caches:[{Size:9437184 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0709 03:49:27.156625   11484 manager_no_libpfm.go:28] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
I0709 03:49:27.156793   11484 manager.go:229] Version: {KernelVersion:3.10.0-1160.25.1.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:}
I0709 03:49:27.156884   11484 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
F0709 03:49:27.157055   11484 server.go:269] failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename				Type		Size	Used	Priority /swapfile                               file		2097148	0	-2]
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc0000fc780, 0x11e, 0x265)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x70d14c0, 0xc000000003, 0x0, 0x0, 0xc0001ef960, 0x6f3c396, 0x9, 0x10d, 0x411b00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x70d14c0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc000b48280, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:732 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:714
k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatal(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1482
k8s.io/kubernetes/cmd/kubelet/app.NewKubeletCommand.func1(0xc00049d8c0, 0xc00004e0b0, 0x9, 0x9)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:269 +0x845
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00049d8c0, 0xc00004e0b0, 0x9, 0x9, 0xc00049d8c0, 0xc00004e0b0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00049d8c0, 0x168fe89f75e6bfb5, 0x70d1080, 0x409b25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:41 +0xe5

goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x70d14c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:417 +0xdf

goroutine 98 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalContext.func1(0xc000897620)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:48 +0x36
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.SetupSignalContext
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/signal.go:47 +0xf3

goroutine 97 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:147 +0x9d
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
	/usr/local/go/src/os/signal/signal.go:150 +0x45

goroutine 79 [select]:
k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000098eb0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57

goroutine 94 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4a77d40, 0x4f11e00, 0xc0005ba5a0, 0x1, 0xc00009a0c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a77d40, 0x12a05f200, 0x0, 0xc00001df01, 0xc00009a0c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a77d40, 0x12a05f200)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a

goroutine 99 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000a60fc0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 100 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a61140)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 101 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run(0xc000a611a0, 0x1, 0xc000dc7f20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:181 +0x313
created by k8s.io/kubernetes/cmd/kubelet/app.BuildAuthn.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/auth.go:99 +0x65

goroutine 106 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc000e85d50, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc000e85d40)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a60fc0, 0x0, 0x0, 0x3cde400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).processNextWorkItem(0xc000a611a0, 0x203000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x66
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).runWorker(0xc000a611a0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:185 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000b1b020)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b1b020, 0x4f11e00, 0xc0007f5800, 0x4a77401, 0xc000dc7f20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b1b020, 0x3b9aca00, 0x0, 0x1, 0xc000dc7f20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000b1b020, 0x3b9aca00, 0xc000dc7f20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:171 +0x28b

goroutine 107 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000bda860, 0xc000b1b030, 0xc00009baa0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc000b1b030, 0xc000dc7f20, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc000b1b030, 0xc000dc7f20, 0x0, 0x48c0933)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:174 +0x2f9

goroutine 108 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc000dc7f20, 0xc000b1b050, 0x4f86f80, 0xc000414280)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c

goroutine 109 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00009bb60, 0xdf8475800, 0x0, 0xc00009bb00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c

goroutine 116 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc000415480)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:219 +0x66
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewBroadcaster
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:73 +0xf7

goroutine 117 [runnable]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x4f22a00, 0xc000802db0, 0xc000bf6320)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299 +0x6e

goroutine 118 [runnable]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x4f22a00, 0xc000802f60, 0xc000802f30)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299 +0x6e
[root@k-node-1 ~]# 

It turned out that after I rebooted the machine, swap got turned back on…
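
(To keep swap off across reboots, something like the following should work; the swap file path /swapfile is taken from the fatal log line above:)

swapoff -a                                # disable swap immediately
sed -i '/\/swapfile/ s/^/#/' /etc/fstab   # comment out the swap entry so it stays off after reboot

(Alternatively, kubelet can be told to tolerate swap by setting failSwapOn: false in the KubeletConfiguration file, here /etc/kubernetes/kubelet-config.yaml, but disabling swap is the usual choice.)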


1 Answer

If kubelet keeps restarting, it means it is failing to start. You need to look at the detailed startup logs.
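
(One thing that trips people up, and likely what happened here: kubelet writes klog-formatted lines where fatal entries start with F, but journald typically records a service's stderr at its default priority, so journalctl -p err filters them out. A sketch for finding them anyway:)

journalctl -u kubelet --no-pager | grep -E ' [EF][0-9]{4} '   # klog error/fatal lines look like "F0709 03:49:27..."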

  • Asker qq_慕丝0528892 #1
    How do I look at them? I can't see any error logs with either journalctl -p err or journalctl -u kubelet.
    2021-07-09 11:14:03
  • Asker qq_慕丝0528892 #2
    Solved it. So annoying: the log line wasn't at error level, which is why I never spotted it. The cause was that swap got turned back on after I rebooted the machine.
    2021-07-09 11:36:43