Using kubeadm to Create a Cluster

kubeadm is a toolkit that helps you bootstrap a best-practice Kubernetes cluster in an easy, reasonably secure and extensible way. It also supports managing Bootstrap Tokens for you and upgrading/downgrading clusters.

kubeadm aims to set up a minimum viable cluster that passes the Kubernetes Conformance tests; installing add-ons beyond those needed for a functional cluster is out of scope.

By design, it does not install a networking solution for you, which means you have to install a third-party CNI-compliant networking solution yourself using kubectl apply.

kubeadm expects the user to bring a machine to execute on; the type doesn’t matter: it can be a Linux laptop, virtual machine, physical/cloud server or Raspberry Pi. This makes kubeadm well suited to integrate with provisioning systems of different kinds (e.g. Terraform, Ansible, etc.).

kubeadm is designed to be a simple way for new users to start trying Kubernetes out, possibly for the first time, a way for existing users to test their applications and stitch together a cluster easily, and also a building block for other ecosystem and/or installer tools with a larger scope.

You can install kubeadm very easily on operating systems that support installing deb or rpm packages. The responsible SIG for kubeadm, SIG Cluster Lifecycle, provides these packages pre-built for you, but you may also build them from source for other OSes.

kubeadm Maturity

Area                        Maturity Level
Command line UX             beta
Implementation              beta
Config file API             alpha
Self-hosting                alpha
kubeadm alpha subcommands   alpha
CoreDNS                     alpha
DynamicKubeletConfig        alpha

kubeadm’s overall feature state is Beta and is expected to graduate to General Availability (GA) during 2018. Some sub-features, like self-hosting or the configuration file API, are still under active development. The implementation of creating the cluster may change slightly as the tool evolves, but the overall implementation should be pretty stable. Any commands under kubeadm alpha are, by definition, supported only on an alpha level.

Support timeframes

Kubernetes releases are generally supported for nine months, and during that period a patch release may be issued from the release branch if a severe bug or security issue is found. Here are the latest Kubernetes releases and their support timeframes, which also apply to kubeadm.

Kubernetes version   Release month    End-of-life month
v1.6.x               March 2017       December 2017
v1.7.x               June 2017        March 2018
v1.8.x               September 2017   June 2018
v1.9.x               December 2017    September 2018

Before you begin

  1. One or more machines running a deb/rpm-compatible OS, e.g. Ubuntu or CentOS
  2. 2 GB or more of RAM per machine (any less will leave little room for your apps)
  3. 2 CPUs or more on the master
  4. Full network connectivity between all machines in the cluster (public or private network is fine)

Objectives

  • Install a secure Kubernetes cluster on your machines
  • Install a Pod network on the cluster so that your Pods can talk to each other

Instructions

(1/4) Installing kubeadm on your hosts

See Installing kubeadm.

Note: If you already have kubeadm installed, you should do an apt-get update && apt-get upgrade or yum update to get the latest version of kubeadm.
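For example (a minimal sketch; the exact commands depend on your distribution and on how you pinned the packages):

# deb-based systems (Debian/Ubuntu):
apt-get update && apt-get upgrade -y
# rpm-based systems (CentOS/RHEL):
yum update -y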

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do. This crashloop is expected and normal, please proceed with the next step and the kubelet will start running normally.

(2/4) Initializing your master

The master is the machine where the control plane components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with).

To initialize the master, pick one of the machines you previously installed kubeadm on, and run:

kubeadm init

Notes:

  • Please refer to the kubeadm reference guide if you want to read more about the flags kubeadm init provides. You can also specify a configuration file instead of using flags (a hedged example follows these notes).
  • You need to choose a Pod Network Plugin in the next step. Depending on which third-party provider you choose, you might have to set --pod-network-cidr to something provider-specific. The provider sections below contain a notice about which flags on kubeadm init are required.
  • Unless otherwise specified, kubeadm uses the default gateway’s network interface to advertise the master’s IP. If you want to use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, e.g. --apiserver-advertise-address=fd00::101
  • If you would like to customise control plane components, including optional IPv6 assignment to the liveness probe for control plane components and the etcd server, you can do so by providing extra args to each one, as documented here.
  • kubeadm init will first run a series of prechecks to ensure that the machine is ready to run Kubernetes. It will expose warnings and exit on errors. It will then download and install the cluster database and control plane components. This may take several minutes.
  • You can’t run kubeadm init twice without tearing down the cluster in between (unless you’re upgrading from v1.6 to v1.7), see Tear Down.
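As a hedged illustration of those notes, the following shows a flag-based invocation and the alpha configuration-file alternative. The CIDR, the address, and the v1alpha1 field names are assumptions for illustration and may differ between kubeadm versions:

# Flag-based invocation (values are illustrative):
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.138.0.4

# Configuration-file alternative (alpha API; field names assumed from v1alpha1):
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.138.0.4
networking:
  podSubnet: 192.168.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml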

The output should look like:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

To make kubectl work for your non-root user, you might want to run these commands (which are also part of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you could run this:

export KUBECONFIG=/etc/kubernetes/admin.conf

Make a record of the kubeadm join command that kubeadm init outputs. You will need this in a moment.

The token is used for mutual authentication between the master and the joining nodes. The token included here is secret; keep it safe, as anyone with this token can add authenticated nodes to your cluster. These tokens can be listed, created and deleted with the kubeadm token command. See the reference guide for more details.
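For example (a minimal sketch; see the reference guide for the full flag set):

# List existing bootstrap tokens:
kubeadm token list
# Create a new token with a limited lifetime (the --ttl flag is optional):
kubeadm token create --ttl 24h
# Delete a token that is no longer needed:
kubeadm token delete <token>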

(3/4) Installing a pod network

You MUST install a pod network add-on so that your pods can communicate with each other.

The network must be deployed before any applications. Also, kube-dns, an internal helper service, will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).

Several projects provide Kubernetes pod networks using CNI, some of which also support Network Policy. See the add-ons page for a complete list of available network add-ons. IPv6 support was added in CNI v0.6.0; CNI bridge and local-ipam are the only supported IPv6 network plugins in Kubernetes 1.9.

Note: kubeadm sets up a more secure cluster by default and enforces use of RBAC. Please make sure that the network manifest of choice supports RBAC.

You can install a pod network add-on with the following command:

kubectl apply -f <add-on.yaml>

NOTE: You can install only one pod network per cluster.

Installation instructions for each third-party Pod Network Provider follow; choose the one that fits your cluster.

Calico

Refer to the Calico documentation for a kubeadm quickstart, a kubeadm installation guide, and other resources.

Note:

  • In order for Network Policy to work correctly, you need to pass --pod-network-cidr=192.168.0.0/16 to kubeadm init.
  • Calico works on amd64 only.
kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

Canal

The official Canal set-up guide is here.

Note:

  • For Canal to work correctly, --pod-network-cidr=10.244.0.0/16 has to be passed to kubeadm init.
  • Canal works on amd64 only.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/canal.yaml

flannel

Note:

  • For flannel to work correctly, --pod-network-cidr=10.244.0.0/16 has to be passed to kubeadm init.
  • flannel works on amd64, arm, arm64 and ppc64le, but for it to work on a platform other than amd64 you have to manually download the manifest and replace amd64 occurrences with your chosen platform.
  • Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work; for more information please see here (a persistence sketch follows this list).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
  • For more information about flannel, please see here.
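To make that bridge-nf-call-iptables setting survive reboots, one common approach (assuming your distribution reads /etc/sysctl.d) is:

# Persist the setting across reboots:
echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
sysctl --system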

Kube-router

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work; for more information please see here.

Kube-router relies on kube-controller-manager to allocate pod CIDR for the nodes. Therefore, use kubeadm init with the --pod-network-cidr flag.

Kube-router provides pod networking, network policy, and high-performing IP Virtual Server(IPVS)/Linux Virtual Server(LVS) based service proxy.

For information on setting up Kubernetes cluster with Kube-router using kubeadm, please see official setup guide.

Romana

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work; for more information please see here.

The official Romana set-up guide is here.

Note: Romana works on amd64 only.

kubectl apply -f https://raw.githubusercontent.com/romana/romana/master/containerize/specs/romana-kubeadm.yml

Weave Net

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work; for more information please see here.

The official Weave Net set-up guide is here.

Note: Weave Net works on amd64, arm, arm64 and ppc64le without any extra action required. Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address if they don’t know their PodIP.

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

Once a pod network has been installed, you can confirm that it is working by checking that the kube-dns pod is Running in the output of kubectl get pods --all-namespaces. Once the kube-dns pod is up and running, you can continue by joining your nodes.
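For example (the k8s-app=kube-dns label selector is an assumption based on the conventional kube-dns labels):

# Check all pods; look for kube-dns in the Running state:
kubectl get pods --all-namespaces
# Or narrow the output to kube-dns via its conventional label:
kubectl get pods -n kube-system -l k8s-app=kube-dns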

If your network is not working or kube-dns is not in the Running state, check out our troubleshooting docs.

Master Isolation

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

With output looking something like:

node "test-01" untainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
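To verify the taint was removed, a quick check (a sketch, not the only way):

# List the taints recorded on each node; node-role.kubernetes.io/master should be gone:
kubectl describe nodes | grep -i taint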

(4/4) Joining your nodes

The nodes are where your workloads (containers, pods, etc.) run. To add new nodes to your cluster, do the following for each machine:

  • SSH to the machine
  • Become root (e.g. sudo su -)
  • Run the command that was output by kubeadm init. For example:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

Note: To specify an IPv6 tuple for <master-ip>:<master-port>, the IPv6 address must be enclosed in square brackets, for example: [fd00::101]:2073.

The output should look something like:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.138.0.4:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.138.0.4:6443"
[discovery] Requesting info from "https://10.138.0.4:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.138.0.4:6443"
[discovery] Successfully established connection with API Server "10.138.0.4:6443"
[bootstrap] Detected server version: v1.8.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

A few seconds later, you should notice this node in the output from kubectl get nodes when run on the master.

(Optional) Controlling your cluster from machines other than the master

In order to get kubectl on some other computer (e.g. your laptop) to talk to your cluster, you need to copy the administrator kubeconfig file from your master to your workstation like this:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes

Note:

  • The example above assumes SSH access is enabled for root. If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.
  • The admin.conf file gives the user superuser privileges over the cluster. This file should be used sparingly. For normal users, it’s recommended to generate a unique credential to which you whitelist privileges. You can do this with the kubeadm alpha phase kubeconfig user --client-name <CN> command. That command will print out a KubeConfig file to STDOUT which you should save to a file and distribute to your user. After that, whitelist privileges by using kubectl create (cluster)rolebinding, as sketched below.
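A hedged sketch of that flow; the client name "developer" and the built-in "view" ClusterRole are illustrative choices:

# Generate a kubeconfig for a new user and save it to a file:
kubeadm alpha phase kubeconfig user --client-name developer > developer.conf
# Grant that user read-only access cluster-wide via the stock "view" ClusterRole:
kubectl create clusterrolebinding developer-view --clusterrole=view --user=developer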

(Optional) Proxying API Server to localhost

If you want to connect to the API Server from outside the cluster you can use kubectl proxy:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy

You can now access the API Server locally at http://localhost:8001/api/v1
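For example, from the machine running the proxy:

# Query the API through the local proxy:
curl http://localhost:8001/api/v1/nodes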

Tear down

To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.

Talking to the master with the appropriate credentials, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then, on the node being removed, reset all kubeadm installed state:

kubeadm reset

If you wish to start over, simply run kubeadm init or kubeadm join with the appropriate arguments.

More options and information are available in the kubeadm reset reference documentation.

Upgrading a kubeadm cluster

Instructions for upgrading kubeadm clusters are available in the version-specific upgrade guides.

Explore other add-ons

See the list of add-ons to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.

Version skew policy

The kubeadm CLI tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1). kubeadm CLI vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).

Because we can’t see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.

Example: kubeadm v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to v1.8.

Please also check our installation guide for more information on the version skew between kubelets and the control plane.

kubeadm works on multiple platforms

kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x following the multi-platform proposal.

Only some of the network providers offer solutions for all platforms. Please consult the list of network providers above or the documentation from each provider to figure out whether the provider supports your chosen platform.

Limitations

Please note: kubeadm is a work in progress and these limitations will be addressed in due course.

  1. The cluster created here has a single master, with a single etcd database running on it. This means that if the master fails, your cluster may lose data and may need to be recreated from scratch. Adding HA support (multiple etcd servers, multiple API servers, etc) to kubeadm is still a work-in-progress.

    Workaround: regularly back up etcd. The etcd data directory configured by kubeadm is at /var/lib/etcd on the master.
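A minimal backup sketch, assuming etcdctl with the v3 API is available on the master and etcd listens on its default local endpoint without TLS (both assumptions; adjust the endpoint and TLS flags to your setup):

# Take an online snapshot with the v3 API:
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 snapshot save /var/backups/etcd-$(date +%F).db
# Or archive the data directory itself (only safe while etcd is stopped):
tar czf /var/backups/etcd-data-$(date +%F).tar.gz /var/lib/etcd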

Troubleshooting

If you are running into difficulties with kubeadm, please consult our troubleshooting docs.

Reposted from: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
