We are running a small Kubernetes cluster with 4 nodes. After the master node hung, we had to restart it. On the master node, the kube-proxy and weave-net pods are now stuck in the ContainerCreating state, while everything is running fine on all the other nodes:
kubectl get pod -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE
coredns-78fcdf6894-67wkh 1/1 Running 0 12h 10.42.0.40 i101v182
coredns-78fcdf6894-8xwkq 1/1 Running 3 122d 10.44.0.65 i101v181
etcd-i101v180.intra.graz.at 1/1 Running 2 11h 10.1.101.180 i101v180.intra.graz.at
heapster-74db55987-sthnp 1/1 Running 1 40d 10.44.0.60 i101v181
kube-apiserver-i101v180.intra.graz.at 1/1 Running 2 11h 10.1.101.180 i101v180.intra.graz.at
kube-controller-manager-i101v180.intra.graz.at 1/1 Running 2 11h 10.1.101.180 i101v180.intra.graz.at
kube-proxy-6dqzf 0/1 ContainerCreating 0 1h 10.1.101.180 i101v180.intra.graz.at
kube-proxy-nvghr 1/1 Running 9 187d 10.1.101.182 i101v182
kube-proxy-spchz 1/1 Running 8 187d 10.1.101.181 i101v181
kube-proxy-xhg77 1/1 Running 7 187d 10.1.101.183 i101v183
kube-scheduler-i101v180.intra.graz.at 1/1 Running 2 11h 10.1.101.180 i101v180.intra.graz.at
kubernetes-dashboard-767dc7d4d-ws79h 1/1 Running 0 12h 10.42.0.41 i101v182
metrics-server-696868464d-b8njh 1/1 Running 0 12h 10.42.0.42 i101v182
monitoring-influxdb-848b9b66f6-gwbf5 1/1 Running 1 46d 10.44.0.61 i101v181
weave-net-465np 2/2 Running 3 64d 10.1.101.182 i101v182
weave-net-5mgdp 0/2 ContainerCreating 0 1h 10.1.101.180 i101v180.intra.graz.at
weave-net-chsqv 2/2 Running 2 40d 10.1.101.183 i101v183
weave-net-mcn77 2/2 Running 18 187d 10.1.101.181 i101v181
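For reference, the stuck pods can be pulled out of a listing like the one above with a quick awk filter (a throwaway sketch over the pasted output, not something run against the cluster; normally you would pipe `kubectl get pod -n kube-system` straight into awk):

```shell
# Print name and status of every pod whose STATUS column is not "Running".
# A heredoc with a few of the lines above stands in for the live output.
awk 'NR > 1 && $3 != "Running" { print $1, $3 }' <<'EOF'
NAME READY STATUS RESTARTS AGE IP NODE
kube-proxy-6dqzf 0/1 ContainerCreating 0 1h 10.1.101.180 i101v180.intra.graz.at
kube-proxy-nvghr 1/1 Running 9 187d 10.1.101.182 i101v182
weave-net-5mgdp 0/2 ContainerCreating 0 1h 10.1.101.180 i101v180.intra.graz.at
EOF
```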
There are no errors in the logs, and no events that would point to a cause:
kubectl describe pod kube-proxy-6dqzf -n kube-system
Name: kube-proxy-6dqzf
Namespace: kube-system
Node: i101v180.intra.graz.at/10.1.101.180
Start Time: Wed, 16 Jan 2019 14:32:51 +0100
Labels: controller-revision-hash=1151982146
k8s-app=kube-proxy
pod-template-generation=1
Annotations: <none>
Status: Pending
IP: 10.1.101.180
Controlled By: DaemonSet/kube-proxy
Containers:
kube-proxy:
Container ID:
Image: k8s.gcr.io/kube-proxy-amd64:v1.11.0
Image ID:
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/kube-proxy
--config=/var/lib/kube-proxy/config.conf
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/var/lib/kube-proxy from kube-proxy (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-9t8cp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-proxy:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-proxy
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
kube-proxy-token-9t8cp:
Type: Secret (a volume populated by a Secret)
SecretName: kube-proxy-token-9t8cp
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/arch=amd64
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events: <none>
kubectl describe pod weave-net-5mgdp -n kube-system
Name: weave-net-5mgdp
Namespace: kube-system
Node: i101v180.intra.graz.at/10.1.101.180
Start Time: Wed, 16 Jan 2019 14:21:57 +0100
Labels: controller-revision-hash=2963469815
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Pending
IP: 10.1.101.180
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID:
Image: weaveworks/weave-kube:2.3.0
Image ID:
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-dcjzm (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID:
Image: weaveworks/weave-npc:2.3.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-dcjzm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType:
weave-net-token-dcjzm:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-dcjzm
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events: <none>
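Since "Events: &lt;none&gt;" can also just mean the events have already expired (the default event TTL is one hour), the kubelet log on the master itself is probably the more reliable place to look. A sketch of the node-level checks I could run next, assuming a systemd-managed kubelet and a Docker runtime (each command is guarded with `|| true` since not all of them may apply on every setup):

```shell
# Run these on the master node (i101v180) itself.

# Events expire after 1h by default; ask the event store directly first.
kubectl get events -n kube-system \
  --field-selector involvedObject.name=kube-proxy-6dqzf || true

# CNI and pod-sandbox failures are logged by the kubelet, not as pod events.
journalctl -u kubelet --since "2 hours ago" | grep -iE 'cni|network|failed' || true

# Check whether the runtime is creating (or repeatedly recreating) the containers.
docker ps -a | grep -E 'kube-proxy|weave' || true

# A hard reboot can leave the CNI config or binaries missing.
ls -l /etc/cni/net.d/ /opt/cni/bin/ || true

# A full disk also blocks container creation without a clear event.
df -h /var/lib/docker /var/lib/kubelet || true
```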
Any ideas what could be causing this?