
I'm trying to start a Kubernetes cluster, but I need Kubernetes to use a different URL to pull its images. AFAIK, this is only possible through a configuration file.

Since I'm not familiar with configuration files, I started with a simple one:

apiVersion: kubeadm.k8s.io/v1alpha2
imageRepository: my.internal.repo:8082
kind: MasterConfiguration
kubernetesVersion: v1.11.3
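
A quick way to sanity-check such a config before running init, assuming kubeadm v1.11 or newer where the config images subcommands exist, is to list the images kubeadm resolves from it; every line should carry the my.internal.repo:8082 prefix:

kubeadm config images list --config file.yaml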

Then I ran the kubeadm init --config file.yaml command. After a while it failed with the following error:

[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I1015 12:05:54.066140   27275 kernel_validator.go:81] Validating kernel version
I1015 12:05:54.066324   27275 kernel_validator.go:96] Validating kernel config
        [WARNING Hostname]: hostname "kube-master-0" could not be reached
        [WARNING Hostname]: hostname "kube-master-0" lookup kube-master-0 on 10.11.12.246:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.5.189]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master-0 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master-0 localhost] and IPs [10.10.5.189 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
                Unfortunately, an error has occurred:
                        timed out waiting for the condition
                This error is likely caused by:
                        - The kubelet is not running
                        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
                        - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                                - my.internal.repo:8082/kube-apiserver-amd64:v1.11.3
                                - my.internal.repo:8082/kube-controller-manager-amd64:v1.11.3
                                - my.internal.repo:8082/kube-scheduler-amd64:v1.11.3
                                - my.internal.repo:8082/etcd-amd64:3.2.18
                                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                                  are downloaded locally and cached.
                If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                        - 'systemctl status kubelet'
                        - 'journalctl -xeu kubelet'
                Additionally, a control plane component may have crashed or exited when started by the container runtime.
                To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
                Here is one example how you may list all Kubernetes containers running in docker:
                        - 'docker ps -a | grep kube | grep -v pause'
                        Once you have found the failing container, you can inspect its logs with:
                        - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

I checked the kubelet status with systemctl status kubelet, and it is running.

I tried to force-pull the images like so, and it succeeded:

docker pull my.internal.repo:8082/kube-apiserver-amd64:v1.11.3

However, docker ps -a shows no containers.

journalctl -xeu kubelet shows a lot of connection-refused errors and requests to k8s.io. I'm having a hard time understanding the root error.

Any ideas?

Thanks in advance!

Edit 1: I tried opening the ports manually, but nothing changed.

[centos@kube-master-0 ~]$ sudo firewall-cmd --zone=public --list-ports
6443/tcp 5000/tcp 2379-2380/tcp 10250-10252/tcp

I also changed the kube version from 1.11.3 to 1.12.1, but nothing changed.

Edit 2: I realized that the kubelet is trying to pull from the k8s.io repo, which means I only switched kubeadm to the internal repository; I need to do the same for the kubelet.

Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.108764   24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to...on refused
Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.110539   24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v...on refused
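
One generic way to separate actual image-pull failures from the apiserver connection-refused noise above (a plain journal filter, not specific to this setup):

journalctl -u kubelet --no-pager | grep -iE 'pull|image' | tail -n 20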

Any ideas?

  • Answer #1

    That solved half of the problem. The final fix was to edit the kubelet init file, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, and set the --pod_infra_container_image parameter so that it references the pause container image pulled through the internal repository.

    The reason for this is that the kubelet cannot otherwise reference the new image tag.
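
    A minimal sketch of that change, assuming the dashed spelling of the same flag, a pause:3.1 tag (check kubeadm config images list for the exact tag your version expects), and an RPM-based install where the 10-kubeadm.conf drop-in also sources /etc/sysconfig/kubelet:

    # /etc/sysconfig/kubelet -- read by the 10-kubeadm.conf drop-in via EnvironmentFile.
    # Point the kubelet's sandbox ("pause") image at the internal registry instead of the default.
    KUBELET_EXTRA_ARGS=--pod-infra-container-image=my.internal.repo:8082/pause:3.1

    Then reload and restart the kubelet so the flag takes effect:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet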

  • Answer #2

    I'm posting my comment as an answer because of the unusable formatting of text in comments:

    What happens if you try to download the images before initializing the cluster? E.g.:

    master-config.yaml:

    apiVersion: kubeadm.k8s.io/v1alpha2
    kind: MasterConfiguration
    kubernetesVersion: v1.11.3

    Command:

    kubeadm config images pull --config="/root/master-config.yaml"

    Output:

    my.internal.repo:8082/pause:[version]

    P.S.: add imageRepository: my.internal.repo:8082 before trying it.
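
    To confirm the pull actually went through the internal registry, the locally cached images can be listed afterwards; each control-plane image and the pause image should show up with the my.internal.repo:8082 prefix:

    docker images | grep 'my.internal.repo:8082'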
