Dear Fabien and Fabio,

The Puppet configuration is now fixed and Kubernetes 1.9.1 is running on all nodes.
However, I have now switched to the internal Kubernetes DNS, and on the CC-IN2P3 cluster something seems to be killing it behind my back after a few seconds (see logs below).
FYI, I have successfully tested the same setup on 10 CentOS 7 boxes in OpenStack.

Would you have any idea what might be killing the kube-dns process on the CC-IN2P3 cluster?

Cheers,

Fabrice

# Logs for the process inside kubedns pod:
root@ccosvms0070:~/admin# kubectl logs kube-dns-6f4fd4bdf-rb4x8 -c kubedns --namespace kube-system
I0201 13:29:23.613716       1 dns.go:48] version: 1.14.6-3-gc36cb11
I0201 13:29:23.615050       1 server.go:69] Using configuration read from directory: /kube-dns-config with period 10s
I0201 13:29:23.615128       1 server.go:112] FLAG: --alsologtostderr="false"
I0201 13:29:23.615144       1 server.go:112] FLAG: --config-dir="/kube-dns-config"
I0201 13:29:23.615153       1 server.go:112] FLAG: --config-map=""
I0201 13:29:23.615158       1 server.go:112] FLAG: --config-map-namespace="kube-system"
I0201 13:29:23.615165       1 server.go:112] FLAG: --config-period="10s"
I0201 13:29:23.615173       1 server.go:112] FLAG: --dns-bind-address="0.0.0.0"
I0201 13:29:23.615180       1 server.go:112] FLAG: --dns-port="10053"
I0201 13:29:23.615189       1 server.go:112] FLAG: --domain="cluster.local."
I0201 13:29:23.615198       1 server.go:112] FLAG: --federations=""
I0201 13:29:23.615204       1 server.go:112] FLAG: --healthz-port="8081"
I0201 13:29:23.615209       1 server.go:112] FLAG: --initial-sync-timeout="1m0s"
I0201 13:29:23.615213       1 server.go:112] FLAG: --kube-master-url=""
I0201 13:29:23.615221       1 server.go:112] FLAG: --kubecfg-file=""
I0201 13:29:23.615227       1 server.go:112] FLAG: --log-backtrace-at=":0"
I0201 13:29:23.615237       1 server.go:112] FLAG: --log-dir=""
I0201 13:29:23.615244       1 server.go:112] FLAG: --log-flush-frequency="5s"
I0201 13:29:23.615251       1 server.go:112] FLAG: --logtostderr="true"
I0201 13:29:23.615256       1 server.go:112] FLAG: --nameservers=""
I0201 13:29:23.615263       1 server.go:112] FLAG: --stderrthreshold="2"
I0201 13:29:23.615268       1 server.go:112] FLAG: --v="2"
I0201 13:29:23.615274       1 server.go:112] FLAG: --version="false"
I0201 13:29:23.615283       1 server.go:112] FLAG: --vmodule=""
I0201 13:29:23.615368       1 server.go:194] Starting SkyDNS server (0.0.0.0:10053)
I0201 13:29:23.615753       1 server.go:213] Skydns metrics enabled (/metrics:10055)
I0201 13:29:23.615781       1 dns.go:146] Starting endpointsController
I0201 13:29:23.615791       1 dns.go:149] Starting serviceController
I0201 13:29:23.615880       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0201 13:29:23.615898       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0201 13:29:24.116020       1 dns.go:170] Initialized services and endpoints from apiserver
I0201 13:29:24.116075       1 server.go:128] Setting up Healthz Handler (/readiness)
I0201 13:29:24.116108       1 server.go:133] Setting up cache handler (/cache)
I0201 13:29:24.116121       1 server.go:119] Status HTTP port 8081
I0201 13:31:02.019718       1 server.go:153] Ignoring signal terminated (can only be terminated by SIGKILL)


# The kube-dns pod has crashed 39 times in 1 hour
root@ccosvms0070:~/admin# kubectl get pods --namespace kube-system          
NAME                                 READY     STATUS             RESTARTS   AGE
etcd-ccqservkm1                      1/1       Running            0          1h
kube-apiserver-ccqservkm1            1/1       Running            0          1h
kube-controller-manager-ccqservkm1   1/1       Running            0          1h
kube-dns-6f4fd4bdf-rb4x8             0/3       CrashLoopBackOff   39         1h
kube-proxy-28ssp                     1/1       Running            0          1h
kube-proxy-4kb9z                     1/1       Running            0          1h
kube-proxy-67vk2                     1/1       Running            0          1h
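
# The "Ignoring signal terminated" log line suggests the container receives a
# SIGTERM from outside, typically from the kubelet (e.g. a failing liveness
# probe) or from the OOM killer. A few standard commands that may help narrow
# this down -- pod name copied from the listing above, and container names
# assume the stock kube-dns 1.14.x deployment (kubedns, dnsmasq, sidecar):

# Show last state (OOMKilled? Error?), exit codes, and probe-related events
kubectl describe pod kube-dns-6f4fd4bdf-rb4x8 --namespace kube-system

# Recent kube-system events, e.g. "Liveness probe failed" or "Killing container"
kubectl get events --namespace kube-system --sort-by=.metadata.creationTimestamp

# Logs of the previously crashed container instances (per container)
kubectl logs kube-dns-6f4fd4bdf-rb4x8 -c dnsmasq --namespace kube-system --previous
kubectl logs kube-dns-6f4fd4bdf-rb4x8 -c sidecar --namespace kube-system --previous

# On the node hosting the pod: check the kernel log for OOM kills
dmesg | grep -i 'killed process'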
