KubeFlow 1.2.0 Image Caching (continuously updated)


KubeFlow 1.2.0 uses a large number of images hosted in several Docker registries, some of which are not reachable. This article describes how to cache them so that KubeFlow deployment is much faster, and includes a script that automatically extracts the image names.

1. Prepare the configuration file

wget https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_k8s_istio.v1.2.0.yaml

Change the manifest source in it to point to a local file (download the archive first):

  repos:
  - name: manifests
    uri: file:/home/supermap/openthings/kubeflow/v1.2.0.tar.gz
  version: v1.2-branch

After the change, kfctl_k8s_istio.v1.2.0.yaml looks like this:

apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  clusterName: kubernetes
  creationTimestamp: null
  name: kubeflow
  namespace: kubeflow
spec:
  applications:
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: namespaces/base
    name: namespaces
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: application/v3
    name: application
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes/application/istio-1-3-1-stack
    name: istio-stack
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes/application/cluster-local-gateway-1-3-1
    name: cluster-local-gateway
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: istio/istio/base
    name: istio
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes/application/cert-manager-crds
    name: cert-manager-crds
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes/application/cert-manager-kube-system-resources
    name: cert-manager-kube-system-resources
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes/application/cert-manager
    name: cert-manager
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes/application/add-anonymous-user-filter
    name: add-anonymous-user-filter
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: metacontroller/base
    name: metacontroller
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: admission-webhook/bootstrap/overlays/application
    name: bootstrap
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes/application/spark-operator
    name: spark-operator
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes
    name: kubeflow-apps
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: knative/installs/generic
    name: knative
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: kfserving/installs/generic
    name: kfserving
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: stacks/kubernetes/application/spartakus
    name: spartakus
  repos:
  - name: manifests
    uri: file:/home/supermap/openthings/kubeflow/v1.2.0.tar.gz
  version: v1.2-branch
status:
  reposCache:
  - localPath: '".cache/manifests/manifests-1.2.0"'
    name: manifests

Download the manifest archive it references:

wget https://github.com/kubeflow/manifests/archive/v1.2.0.tar.gz

Then install it into the Kubernetes cluster:

kfctl apply -V -f kfctl_k8s_istio.v1.2.0.yaml

After the installation, kubectl get pod -n kubeflow will show that many pods fail to start, mainly because their image pulls fail.
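To see which workloads are affected, the pods that are stuck pulling images can be listed first. A minimal sketch using standard kubectl (the pod name is a placeholder):

#list pods that are not Running, then check why a given pod is stuck (ImagePullBackOff / ErrImagePull)
kubectl get pods -n kubeflow --field-selector=status.phase!=Running
kubectl describe pod <pod-name> -n kubeflow | grep -A5 Events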

2. Create the image caching scripts

Next we create scripts that cache the images locally. The cells below were run in Jupyter Lab.

  • Extract the images used by deployments:
%%bash
# Extract the images used by deployments
depl=$(kubectl get deployment -n kubeflow -o name)
echo "#deployment " > imgs_list.txt
for d in ${depl}
do
    # keep only the "Image:" lines (skips "Image ID:"), one per container
    for image_name in $(kubectl describe $d -n kubeflow | grep "Image:" | awk '{print $2}')
    do
        echo "docker pull ${image_name}" >> imgs_list.txt
        echo "docker tag ${image_name}" >> imgs_list.txt   # target name is filled in by hand in section 3
    done
done
  • Extract the images used by replicasets:
%%bash
# Extract the images used by replicasets
depl=$(kubectl get replicaset -n kubeflow -o name)
echo "#replicaset " >> imgs_list.txt
for d in ${depl}
do
    for image_name in $(kubectl describe $d -n kubeflow | grep "Image:" | awk '{print $2}')
    do
        echo "docker pull ${image_name}" >> imgs_list.txt
        echo "docker tag ${image_name}" >> imgs_list.txt
    done
done
  • Extract the images used by statefulsets:
%%bash
# Extract the images used by statefulsets
depl=$(kubectl get statefulset -n kubeflow -o name)
echo "#statefulset " >> imgs_list.txt
for d in ${depl}
do
    for image_name in $(kubectl describe $d -n kubeflow | grep "Image:" | awk '{print $2}')
    do
        echo "docker pull ${image_name}" >> imgs_list.txt
        echo "docker tag ${image_name}" >> imgs_list.txt
    done
done

After running the cells above, all image names required by Kubeflow have been written to imgs_list.txt.
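If you prefer not to parse kubectl describe output, the same list can also be collected straight from the pod specs with a jsonpath one-liner (standard kubectl; add .spec.initContainers[*].image as well if init containers matter to you):

#all container images referenced in the kubeflow namespace, deduplicated
kubectl get pods -n kubeflow -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n' | sort -u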

3. Pull the image list and push it to Aliyun

Edit the generated script so that it pulls the images and then pushes them to an Aliyun registry; the other nodes can then pull them directly from Aliyun.

Note: change the registry name in the scripts below to your own; some registries require logging in first.

  • Pull to local

The script below must be executed on a machine that can reach gcr.io.

#deployment 
docker pull gcr.io/kubeflow-images-public/admission-webhook:vmaster-ge5452b6f
docker pull argoproj/argoui:v2.3.0
docker pull gcr.io/ml-pipeline/cache-deployer:1.0.4
docker pull gcr.io/ml-pipeline/cache-server:1.0.4
docker pull gcr.io/kubeflow-images-public/centraldashboard:vmaster-g8097cfeb
docker pull gcr.io/kubeflow-images-public/jupyter-web-app:vmaster-g845af298
docker pull docker.io/kubeflowkatib/katib-controller:v1beta1-a96ff59
docker pull docker.io/kubeflowkatib/katib-db-manager:v1beta1-a96ff59
docker pull mysql:8
docker pull docker.io/kubeflowkatib/katib-ui:v1beta1-a96ff59
docker pull python:3.7
docker pull mysql:8.0.3
docker pull gcr.io/ml-pipeline/envoy:metadata-grpc
docker pull gcr.io/tfx-oss-public/ml_metadata_store_server:v0.21.1
docker pull gcr.io/ml-pipeline/metadata-writer:1.0.4
docker pull gcr.io/ml-pipeline/minio:RELEASE.2019-08-14T20-37-41Z-license-compliance
docker pull gcr.io/ml-pipeline/api-server:1.0.4
docker pull gcr.io/ml-pipeline/persistenceagent:1.0.4
docker pull gcr.io/ml-pipeline/scheduledworkflow:1.0.4
docker pull gcr.io/ml-pipeline/frontend:1.0.4
docker pull gcr.io/ml-pipeline/viewer-crd-controller:1.0.4
docker pull gcr.io/ml-pipeline/visualization-server:1.0.4
docker pull mpioperator/mpi-operator:latest
docker pull kubeflow/mxnet-operator:v1.0.0-20200625
docker pull gcr.io/ml-pipeline/mysql:5.6
docker pull gcr.io/kubeflow-images-public/notebook-controller:vmaster-g6eb007d0
docker pull gcr.io/kubeflow-images-public/profile-controller:vmaster-ga49f658f
docker pull gcr.io/kubeflow-images-public/pytorch-operator:vmaster-g518f9c76
docker pull docker.io/seldonio/seldon-core-operator:1.4.0
docker pull gcr.io/spark-operator/spark-operator:v1beta2-1.1.0-2.4.5
docker pull gcr.io/google_containers/spartakus-amd64:v1.1.0
docker pull gcr.io/kubeflow-images-public/tf_operator:vmaster-gda226016
docker pull argoproj/workflow-controller:v2.3.0

#===============================================
docker pull gcr.io/kubeflow-images-public/kfam:vmaster-g9f3bfd00
#docker save gcr.io/kubeflow-images-public/kfam:vmaster-g9f3bfd00 -o ./kfam.tar

docker pull gcr.io/kfserving/kfserving-controller:v0.4.1
#docker save gcr.io/kfserving/kfserving-controller:v0.4.1 -o ./kfserving-controller.tar
#===============================================

#statefulset 
docker pull gcr.io/kubeflow-images-public/ingress-setup:latest
docker pull gcr.io/kubeflow-images-public/kubernetes-sigs/application:1.0-beta
docker pull gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0
docker pull metacontroller/metacontroller:v0.3.0
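Instead of maintaining this listing by hand, the pull commands generated in section 2 can be executed directly. A minimal sketch, assuming imgs_list.txt from section 2 is in the current directory:

#run only the generated "docker pull" lines; the incomplete "docker tag" lines are skipped
grep '^docker pull' imgs_list.txt | sh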
  • Push to Aliyun

MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings/kfimgs1.2-

####### deployment ########################
docker tag gcr.io/kubeflow-images-public/admission-webhook:vmaster-ge5452b6f ${MY_REGISTRY}admission-webhook
docker push ${MY_REGISTRY}admission-webhook

docker tag argoproj/argoui:v2.3.0 ${MY_REGISTRY}argoui
docker push ${MY_REGISTRY}argoui 

docker tag gcr.io/ml-pipeline/cache-deployer:1.0.4 ${MY_REGISTRY}cache-deployer
docker push ${MY_REGISTRY}cache-deployer

docker tag gcr.io/ml-pipeline/cache-server:1.0.4 ${MY_REGISTRY}cache-server
docker push ${MY_REGISTRY}cache-server

docker tag gcr.io/kubeflow-images-public/centraldashboard:vmaster-g8097cfeb ${MY_REGISTRY}centraldashboard
docker push ${MY_REGISTRY}centraldashboard

docker tag gcr.io/kubeflow-images-public/jupyter-web-app:vmaster-g845af298 ${MY_REGISTRY}jupyter-web-app
docker push ${MY_REGISTRY}jupyter-web-app

docker tag docker.io/kubeflowkatib/katib-controller:v1beta1-a96ff59 ${MY_REGISTRY}katib-controller
docker push ${MY_REGISTRY}katib-controller

docker tag docker.io/kubeflowkatib/katib-db-manager:v1beta1-a96ff59 ${MY_REGISTRY}katib-db-manager
docker push ${MY_REGISTRY}katib-db-manager

docker tag mysql:8 ${MY_REGISTRY}mysql8
docker push ${MY_REGISTRY}mysql8

docker tag docker.io/kubeflowkatib/katib-ui:v1beta1-a96ff59 ${MY_REGISTRY}katib-ui
docker push ${MY_REGISTRY}katib-ui

docker tag python:3.7 ${MY_REGISTRY}python3.7
docker push ${MY_REGISTRY}python3.7

docker tag mysql:8.0.3 ${MY_REGISTRY}mysql8.0.3
docker push ${MY_REGISTRY}mysql8.0.3

docker tag gcr.io/ml-pipeline/envoy:metadata-grpc ${MY_REGISTRY}envoy
docker push ${MY_REGISTRY}envoy

docker tag gcr.io/tfx-oss-public/ml_metadata_store_server:v0.21.1 ${MY_REGISTRY}ml_metadata_store_server
docker push ${MY_REGISTRY}ml_metadata_store_server

docker tag gcr.io/ml-pipeline/metadata-writer:1.0.4 ${MY_REGISTRY}metadata-writer
docker push ${MY_REGISTRY}metadata-writer

docker tag gcr.io/ml-pipeline/minio:RELEASE.2019-08-14T20-37-41Z-license-compliance ${MY_REGISTRY}minio
docker push ${MY_REGISTRY}minio

docker tag gcr.io/ml-pipeline/api-server:1.0.4 ${MY_REGISTRY}api-server
docker push ${MY_REGISTRY}api-server

docker tag gcr.io/ml-pipeline/persistenceagent:1.0.4 ${MY_REGISTRY}persistenceagent
docker push ${MY_REGISTRY}persistenceagent

docker tag gcr.io/ml-pipeline/scheduledworkflow:1.0.4 ${MY_REGISTRY}scheduledworkflow
docker push ${MY_REGISTRY}scheduledworkflow

docker tag gcr.io/ml-pipeline/frontend:1.0.4 ${MY_REGISTRY}frontend
docker push ${MY_REGISTRY}frontend

docker tag gcr.io/ml-pipeline/viewer-crd-controller:1.0.4 ${MY_REGISTRY}viewer-crd-controller
docker push ${MY_REGISTRY}viewer-crd-controller

docker tag gcr.io/ml-pipeline/visualization-server:1.0.4 ${MY_REGISTRY}visualization-server
docker push ${MY_REGISTRY}visualization-server

docker tag mpioperator/mpi-operator:latest ${MY_REGISTRY}mpi-operator
docker push ${MY_REGISTRY}mpi-operator

docker tag kubeflow/mxnet-operator:v1.0.0-20200625 ${MY_REGISTRY}mxnet-operator
docker push ${MY_REGISTRY}mxnet-operator

docker tag gcr.io/ml-pipeline/mysql:5.6 ${MY_REGISTRY}mysql5.6
docker push ${MY_REGISTRY}mysql5.6

docker tag gcr.io/kubeflow-images-public/notebook-controller:vmaster-g6eb007d0 ${MY_REGISTRY}notebook-controller
docker push ${MY_REGISTRY}notebook-controller

docker tag gcr.io/kubeflow-images-public/profile-controller:vmaster-ga49f658f ${MY_REGISTRY}profile-controller
docker push ${MY_REGISTRY}profile-controller

docker tag gcr.io/kubeflow-images-public/pytorch-operator:vmaster-g518f9c76 ${MY_REGISTRY}pytorch-operator
docker push ${MY_REGISTRY}pytorch-operator

docker tag docker.io/seldonio/seldon-core-operator:1.4.0 ${MY_REGISTRY}seldon-core-operator
docker push ${MY_REGISTRY}seldon-core-operator

docker tag gcr.io/spark-operator/spark-operator:v1beta2-1.1.0-2.4.5 ${MY_REGISTRY}spark-operator
docker push ${MY_REGISTRY}spark-operator

docker tag gcr.io/google_containers/spartakus-amd64:v1.1.0 ${MY_REGISTRY}spartakus-amd64
docker push ${MY_REGISTRY}spartakus-amd64

docker tag gcr.io/kubeflow-images-public/tf_operator:vmaster-gda226016 ${MY_REGISTRY}tf_operator
docker push ${MY_REGISTRY}tf_operator

docker tag argoproj/workflow-controller:v2.3.0 ${MY_REGISTRY}workflow-controller
docker push ${MY_REGISTRY}workflow-controller

#===============================================
docker tag gcr.io/kubeflow-images-public/kfam:vmaster-g9f3bfd00 ${MY_REGISTRY}kfam
docker push ${MY_REGISTRY}kfam

docker tag gcr.io/kfserving/kfserving-controller:v0.4.1 ${MY_REGISTRY}kfserving-controller
docker push ${MY_REGISTRY}kfserving-controller
#===============================================

#statefulset 
docker tag gcr.io/kubeflow-images-public/ingress-setup:latest ${MY_REGISTRY}ingress-setup
docker push ${MY_REGISTRY}ingress-setup

docker tag gcr.io/kubeflow-images-public/kubernetes-sigs/application:1.0-beta ${MY_REGISTRY}application
docker push ${MY_REGISTRY}application

docker tag gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0 ${MY_REGISTRY}kube-rbac-proxy
docker push ${MY_REGISTRY}kube-rbac-proxy

docker tag metacontroller/metacontroller:v0.3.0 ${MY_REGISTRY}metacontroller
docker push ${MY_REGISTRY}metacontroller
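The retag-and-push block above can also be driven by a loop. A minimal sketch, assuming a hypothetical file src_images.txt that holds one original image reference per line; note that images differing only in tag (such as the three mysql variants here) would collide under this naming and still need hand-picked suffixes:

MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings/kfimgs1.2-

while read -r img; do
    [ -z "$img" ] && continue                 #skip blank lines
    case "$img" in \#*) continue ;; esac      #skip comment lines
    name="${img##*/}"                         #drop the registry/repository path
    name="${name%%:*}"                        #drop the tag, e.g. admission-webhook
    docker tag  "$img" "${MY_REGISTRY}${name}"
    docker push "${MY_REGISTRY}${name}"
done < src_images.txt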

4. Pull from Aliyun to the local nodes

Pull the images from Aliyun to the local node, then tag them back to their original names.

Note: change the registry name to your own.

MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings/kfimgs1.2-

####### deployment ########################
docker pull ${MY_REGISTRY}admission-webhook
docker tag ${MY_REGISTRY}admission-webhook gcr.io/kubeflow-images-public/admission-webhook:vmaster-ge5452b6f 

docker pull ${MY_REGISTRY}argoui 
docker tag ${MY_REGISTRY}argoui argoproj/argoui:v2.3.0

docker pull ${MY_REGISTRY}cache-deployer
docker tag ${MY_REGISTRY}cache-deployer gcr.io/ml-pipeline/cache-deployer:1.0.4

docker pull ${MY_REGISTRY}cache-server
docker tag ${MY_REGISTRY}cache-server gcr.io/ml-pipeline/cache-server:1.0.4

docker pull ${MY_REGISTRY}centraldashboard
docker tag ${MY_REGISTRY}centraldashboard gcr.io/kubeflow-images-public/centraldashboard:vmaster-g8097cfeb

docker pull ${MY_REGISTRY}jupyter-web-app
docker tag ${MY_REGISTRY}jupyter-web-app gcr.io/kubeflow-images-public/jupyter-web-app:vmaster-g845af298

###======
docker pull ${MY_REGISTRY}katib-controller
docker tag ${MY_REGISTRY}katib-controller docker.io/kubeflowkatib/katib-controller:v1beta1-a96ff59

docker pull ${MY_REGISTRY}katib-db-manager
docker tag ${MY_REGISTRY}katib-db-manager docker.io/kubeflowkatib/katib-db-manager:v1beta1-a96ff59

docker pull ${MY_REGISTRY}mysql8
docker tag ${MY_REGISTRY}mysql8 mysql:8

docker pull ${MY_REGISTRY}katib-ui
docker tag ${MY_REGISTRY}katib-ui docker.io/kubeflowkatib/katib-ui:v1beta1-a96ff59

docker pull ${MY_REGISTRY}python3.7
docker tag ${MY_REGISTRY}python3.7 python:3.7

docker pull ${MY_REGISTRY}mysql8.0.3
docker tag ${MY_REGISTRY}mysql8.0.3 mysql:8.0.3

docker pull ${MY_REGISTRY}envoy
docker tag ${MY_REGISTRY}envoy gcr.io/ml-pipeline/envoy:metadata-grpc

###
docker pull ${MY_REGISTRY}ml_metadata_store_server
docker tag ${MY_REGISTRY}ml_metadata_store_server gcr.io/tfx-oss-public/ml_metadata_store_server:v0.21.1 

docker pull ${MY_REGISTRY}metadata-writer
docker tag ${MY_REGISTRY}metadata-writer gcr.io/ml-pipeline/metadata-writer:1.0.4

docker pull ${MY_REGISTRY}minio
docker tag ${MY_REGISTRY}minio gcr.io/ml-pipeline/minio:RELEASE.2019-08-14T20-37-41Z-license-compliance

docker pull ${MY_REGISTRY}api-server
docker tag ${MY_REGISTRY}api-server gcr.io/ml-pipeline/api-server:1.0.4

docker pull ${MY_REGISTRY}persistenceagent
docker tag ${MY_REGISTRY}persistenceagent gcr.io/ml-pipeline/persistenceagent:1.0.4
##
docker pull ${MY_REGISTRY}scheduledworkflow
docker tag ${MY_REGISTRY}scheduledworkflow gcr.io/ml-pipeline/scheduledworkflow:1.0.4

docker pull ${MY_REGISTRY}frontend
docker tag ${MY_REGISTRY}frontend gcr.io/ml-pipeline/frontend:1.0.4

docker pull ${MY_REGISTRY}viewer-crd-controller
docker tag ${MY_REGISTRY}viewer-crd-controller gcr.io/ml-pipeline/viewer-crd-controller:1.0.4

docker pull ${MY_REGISTRY}visualization-server
docker tag ${MY_REGISTRY}visualization-server gcr.io/ml-pipeline/visualization-server:1.0.4

docker pull ${MY_REGISTRY}mpi-operator
docker tag ${MY_REGISTRY}mpi-operator mpioperator/mpi-operator:latest

docker pull ${MY_REGISTRY}mxnet-operator
docker tag ${MY_REGISTRY}mxnet-operator kubeflow/mxnet-operator:v1.0.0-20200625

docker pull ${MY_REGISTRY}mysql5.6
docker tag ${MY_REGISTRY}mysql5.6 gcr.io/ml-pipeline/mysql:5.6

##
docker pull ${MY_REGISTRY}notebook-controller
docker tag ${MY_REGISTRY}notebook-controller gcr.io/kubeflow-images-public/notebook-controller:vmaster-g6eb007d0

docker pull ${MY_REGISTRY}profile-controller
docker tag ${MY_REGISTRY}profile-controller gcr.io/kubeflow-images-public/profile-controller:vmaster-ga49f658f

docker pull ${MY_REGISTRY}pytorch-operator
docker tag ${MY_REGISTRY}pytorch-operator gcr.io/kubeflow-images-public/pytorch-operator:vmaster-g518f9c76

docker pull ${MY_REGISTRY}seldon-core-operator
docker tag ${MY_REGISTRY}seldon-core-operator docker.io/seldonio/seldon-core-operator:1.4.0

docker pull ${MY_REGISTRY}spark-operator
docker tag ${MY_REGISTRY}spark-operator gcr.io/spark-operator/spark-operator:v1beta2-1.1.0-2.4.5

docker pull ${MY_REGISTRY}spartakus-amd64
docker tag ${MY_REGISTRY}spartakus-amd64 gcr.io/google_containers/spartakus-amd64:v1.1.0

docker pull ${MY_REGISTRY}tf_operator
docker tag ${MY_REGISTRY}tf_operator gcr.io/kubeflow-images-public/tf_operator:vmaster-gda226016

docker pull ${MY_REGISTRY}workflow-controller
docker tag ${MY_REGISTRY}workflow-controller argoproj/workflow-controller:v2.3.0

#===============================================
docker pull ${MY_REGISTRY}kfam
docker tag ${MY_REGISTRY}kfam gcr.io/kubeflow-images-public/kfam:vmaster-g9f3bfd00 

docker pull ${MY_REGISTRY}kfserving-controller
docker tag ${MY_REGISTRY}kfserving-controller gcr.io/kfserving/kfserving-controller:v0.4.1
#===============================================

#statefulset 
docker pull ${MY_REGISTRY}ingress-setup
docker tag ${MY_REGISTRY}ingress-setup gcr.io/kubeflow-images-public/ingress-setup:latest

docker pull ${MY_REGISTRY}application
docker tag ${MY_REGISTRY}application gcr.io/kubeflow-images-public/kubernetes-sigs/application:1.0-beta

docker pull ${MY_REGISTRY}kube-rbac-proxy
docker tag ${MY_REGISTRY}kube-rbac-proxy gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0

docker pull ${MY_REGISTRY}metacontroller
docker tag ${MY_REGISTRY}metacontroller metacontroller/metacontroller:v0.3.0
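Because the short names drop the original registry path and tag, the reverse direction needs an explicit mapping. A minimal sketch, assuming a hypothetical two-column file image_map.txt with the original reference and the Aliyun suffix on each line (e.g. "gcr.io/ml-pipeline/mysql:5.6 mysql5.6"):

MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings/kfimgs1.2-

#column 1: original image reference, column 2: suffix used in the Aliyun repository
while read -r orig suffix; do
    [ -z "$orig" ] && continue
    docker pull "${MY_REGISTRY}${suffix}"
    docker tag  "${MY_REGISTRY}${suffix}" "$orig"
done < image_map.txt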

5. Save to tar files

Save all of the images as tar files so they can be copied to each Kubernetes node and loaded there.

The script is as follows:

####### deployment ########################
docker save gcr.io/kubeflow-images-public/admission-webhook:vmaster-ge5452b6f -o ./kfimgs1.2/admission-webhook.tar
docker save argoproj/argoui:v2.3.0 -o ./kfimgs1.2/argoui.tar
docker save gcr.io/ml-pipeline/cache-deployer:1.0.4 -o ./kfimgs1.2/cache-deployer.tar
docker save gcr.io/ml-pipeline/cache-server:1.0.4 -o ./kfimgs1.2/cache-server.tar
docker save gcr.io/kubeflow-images-public/centraldashboard:vmaster-g8097cfeb -o ./kfimgs1.2/centraldashboard.tar

docker save gcr.io/kubeflow-images-public/jupyter-web-app:vmaster-g845af298 -o ./kfimgs1.2/jupyter-web-app.tar
docker save docker.io/kubeflowkatib/katib-controller:v1beta1-a96ff59 -o ./kfimgs1.2/katib-controller.tar
docker save docker.io/kubeflowkatib/katib-db-manager:v1beta1-a96ff59 -o ./kfimgs1.2/katib-db-manager.tar

docker save mysql:8 -o ./kfimgs1.2/mysql8.tar
docker save docker.io/kubeflowkatib/katib-ui:v1beta1-a96ff59 -o ./kfimgs1.2/katib-ui.tar
docker save python:3.7 -o ./kfimgs1.2/python3.7.tar
docker save mysql:8.0.3 -o ./kfimgs1.2/mysql8.0.3.tar

docker save gcr.io/ml-pipeline/envoy:metadata-grpc -o ./kfimgs1.2/envoy.tar
docker save gcr.io/tfx-oss-public/ml_metadata_store_server:v0.21.1 -o ./kfimgs1.2/ml_metadata_store_server.tar
docker save gcr.io/ml-pipeline/metadata-writer:1.0.4 -o ./kfimgs1.2/metadata-writer.tar
docker save gcr.io/ml-pipeline/minio:RELEASE.2019-08-14T20-37-41Z-license-compliance -o ./kfimgs1.2/minio.tar
docker save gcr.io/ml-pipeline/api-server:1.0.4 -o ./kfimgs1.2/api-server.tar
docker save gcr.io/ml-pipeline/persistenceagent:1.0.4 -o ./kfimgs1.2/persistenceagent.tar
docker save gcr.io/ml-pipeline/scheduledworkflow:1.0.4 -o ./kfimgs1.2/scheduledworkflow.tar
docker save gcr.io/ml-pipeline/frontend:1.0.4 -o ./kfimgs1.2/frontend.tar
docker save gcr.io/ml-pipeline/viewer-crd-controller:1.0.4 -o ./kfimgs1.2/viewer-crd-controller.tar

docker save gcr.io/ml-pipeline/visualization-server:1.0.4 -o ./kfimgs1.2/visualization-server.tar
docker save mpioperator/mpi-operator:latest -o ./kfimgs1.2/mpi-operator.tar
docker save kubeflow/mxnet-operator:v1.0.0-20200625 -o ./kfimgs1.2/mxnet-operator.tar
docker save gcr.io/ml-pipeline/mysql:5.6 -o ./kfimgs1.2/mysql5.6.tar
docker save gcr.io/kubeflow-images-public/notebook-controller:vmaster-g6eb007d0 -o ./kfimgs1.2/notebook-controller.tar

docker save gcr.io/kubeflow-images-public/profile-controller:vmaster-ga49f658f -o ./kfimgs1.2/profile-controller.tar
docker save gcr.io/kubeflow-images-public/pytorch-operator:vmaster-g518f9c76 -o ./kfimgs1.2/pytorch-operator.tar
docker save docker.io/seldonio/seldon-core-operator:1.4.0 -o ./kfimgs1.2/seldon-core-operator.tar
docker save gcr.io/spark-operator/spark-operator:v1beta2-1.1.0-2.4.5 -o ./kfimgs1.2/spark-operator.tar

docker save gcr.io/google_containers/spartakus-amd64:v1.1.0 -o ./kfimgs1.2/spartakus-amd64.tar
docker save gcr.io/kubeflow-images-public/tf_operator:vmaster-gda226016 -o ./kfimgs1.2/tf_operator.tar
docker save argoproj/workflow-controller:v2.3.0 -o ./kfimgs1.2/workflow-controller.tar

#===============================================
docker save gcr.io/kubeflow-images-public/kfam:vmaster-g9f3bfd00 -o ./kfimgs1.2/kfam.tar
docker save gcr.io/kfserving/kfserving-controller:v0.4.1 -o ./kfimgs1.2/kfserving-controller.tar
#===============================================

#statefulset 
docker save gcr.io/kubeflow-images-public/ingress-setup:latest -o ./kfimgs1.2/ingress-setup.tar
docker save gcr.io/kubeflow-images-public/kubernetes-sigs/application:1.0-beta -o ./kfimgs1.2/application.tar
docker save gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0 -o ./kfimgs1.2/kube-rbac-proxy.tar
docker save metacontroller/metacontroller:v0.3.0 -o ./kfimgs1.2/metacontroller.tar
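The save step can be driven by the same image list. A minimal sketch, again assuming the hypothetical src_images.txt from section 3 (tar file names follow the same short-name convention, so images that differ only by tag would need distinct names):

mkdir -p ./kfimgs1.2

while read -r img; do
    [ -z "$img" ] && continue
    name="${img##*/}"
    name="${name%%:*}"
    docker save "$img" -o "./kfimgs1.2/${name}.tar"
done < src_images.txt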

6. Load into Docker images

Load the images from the *.tar files into Docker. On microk8s, where Kubeflow is installed with juju (and the runtime is containerd), this method is not needed; if that installation fails, the cause is usually a problem reaching the juju servers.

####### deployment ########################
docker load -i ./kfimgs1.2/admission-webhook.tar
docker load -i ./kfimgs1.2/argoui.tar
docker load -i ./kfimgs1.2/cache-deployer.tar
docker load -i ./kfimgs1.2/cache-server.tar
docker load -i ./kfimgs1.2/centraldashboard.tar
docker load -i ./kfimgs1.2/jupyter-web-app.tar
docker load -i ./kfimgs1.2/katib-controller.tar
docker load -i ./kfimgs1.2/katib-db-manager.tar
docker load -i ./kfimgs1.2/mysql8.tar
docker load -i ./kfimgs1.2/katib-ui.tar
docker load -i ./kfimgs1.2/python3.7.tar
docker load -i ./kfimgs1.2/mysql8.0.3.tar
docker load -i ./kfimgs1.2/envoy.tar
docker load -i ./kfimgs1.2/ml_metadata_store_server.tar
docker load -i ./kfimgs1.2/metadata-writer.tar
docker load -i ./kfimgs1.2/minio.tar
docker load -i ./kfimgs1.2/api-server.tar
docker load -i ./kfimgs1.2/persistenceagent.tar
docker load -i ./kfimgs1.2/scheduledworkflow.tar
docker load -i ./kfimgs1.2/frontend.tar
docker load -i ./kfimgs1.2/viewer-crd-controller.tar
docker load -i ./kfimgs1.2/visualization-server.tar
docker load -i ./kfimgs1.2/mpi-operator.tar
docker load -i ./kfimgs1.2/mxnet-operator.tar
docker load -i ./kfimgs1.2/mysql5.6.tar
docker load -i ./kfimgs1.2/notebook-controller.tar
docker load -i ./kfimgs1.2/profile-controller.tar
docker load -i ./kfimgs1.2/pytorch-operator.tar
docker load -i ./kfimgs1.2/seldon-core-operator.tar
docker load -i ./kfimgs1.2/spark-operator.tar
docker load -i ./kfimgs1.2/spartakus-amd64.tar
docker load -i ./kfimgs1.2/tf_operator.tar
docker load -i ./kfimgs1.2/workflow-controller.tar

#==================================================
docker load -i ./kfimgs1.2/kfam.tar
docker load -i ./kfimgs1.2/kfserving-controller.tar
#==================================================

#statefulset 
docker load -i ./kfimgs1.2/ingress-setup.tar
docker load -i ./kfimgs1.2/application.tar
docker load -i ./kfimgs1.2/kube-rbac-proxy.tar
docker load -i ./kfimgs1.2/metacontroller.tar
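Rather than listing every archive, all tar files in the directory can be loaded in one loop; a minimal sketch:

for f in ./kfimgs1.2/*.tar; do
    docker load -i "$f"
    #on microk8s (containerd runtime) the rough equivalent should be: microk8s.ctr images import "$f"
done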

Check whether the images are now present:

#with docker
docker images

#or with microk8s
microk8s.ctr images list

After a while, the Kubernetes pods will be re-created automatically and gradually come up running normally.
