### Network: installing flannel
yum install flannel -y
#### Service unit
/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} -etcd-prefix=${FLANNEL_ETCD_PREFIX} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
WantedBy=docker.service
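The ExecStartPost step runs mk-docker-opts.sh, which converts the subnet lease flanneld acquires into Docker daemon flags and writes them to /run/flannel/docker; the Docker unit can then source that file via EnvironmentFile. The generated file looks like the following (the addresses are illustrative, assuming the 10.30.0.0/16 network configured below):
DOCKER_NETWORK_OPTIONS=" --bip=10.30.34.1/24 --ip-masq=true --mtu=1450"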
#### Configuration file
cat /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
##### Notes
FLANNEL_ETCD_ENDPOINTS: the addresses of the etcd cluster.
FLANNEL_ETCD_PREFIX: the etcd key prefix that flannel queries; it holds the Docker IP address range configuration.
##### If the host has multiple NICs (for example in a Vagrant environment), add the outbound interface to FLANNEL_OPTIONS, e.g. -iface=eth2.
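For example (eth2 is illustrative; substitute the interface that carries inter-node traffic):
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem -iface=eth2"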
#### Create the network configuration in etcd
##### Run this only once, on a master node
etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem mkdir /kube-centos/network
etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem mk /kube-centos/network/config '{"Network":"10.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
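With the network key in place, flanneld can be started on every node. A minimal sequence, assuming the unit file above is installed:
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker   # pick up DOCKER_NETWORK_OPTIONS from /run/flannel/docker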
###### Verify flannel
/usr/local/bin/etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem get /kube-centos/network/config
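Once flanneld is running, each node's subnet lease appears under the subnets key, the node writes its lease to /run/flannel/subnet.env, and the vxlan backend creates a flannel.1 interface:
/usr/local/bin/etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem ls /kube-centos/network/subnets
cat /run/flannel/subnet.env
ip -d link show flannel.1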
###### Delete the Docker network configuration from etcd
/usr/local/bin/etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem rm /kube-centos/network/config
### Calico installation
Add the following settings to the kubelet service unit file. Note that the DNS address configured for kubelet is a cluster IP, i.e. a Service IP rather than a Pod IP, and it must fall within the range given by --service-cluster-ip-range.
--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin
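With these flags, kubelet loads whatever CNI configuration it finds under /etc/cni/net.d; when calico.yaml is applied below, Calico's install-cni container writes that file itself. To confirm after deployment (the exact file name depends on the Calico version):
ls /etc/cni/net.d
cat /etc/cni/net.d/10-calico.conflist   # name varies by Calico version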
On Kubernetes, apply the rbac.yaml authorization manifest first, then apply calico.yaml to deploy Calico.
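The deployment step amounts to the following (assuming both manifests are in the current directory):
kubectl apply -f rbac.yaml
kubectl apply -f calico.yaml
kubectl -n kube-system get pods -o wide   # wait for the calico-node pods to become Ready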
Calico treats each host's network stack as a router and every container as an endpoint attached to that router. The routers run a standard routing protocol, BGP, between themselves and learn how the topology should forward traffic. Calico is therefore a pure layer-3 solution: each host's layer 3 provides connectivity between containers across hosts.
Calico does not use an overlay network such as flannel or the libnetwork overlay driver. It is a pure layer-3 approach that uses virtual routing instead of virtual switching: every virtual router propagates reachability information (routes) to the rest of the data center over BGP.
On each compute node, Calico uses the Linux kernel to implement an efficient vRouter for data forwarding, and each vRouter propagates the routes of the workloads running on it to the whole Calico network over BGP. Small deployments can peer directly in a full mesh; large ones can use designated BGP route reflectors.
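Once Calico is running, the BGP sessions and learned routes can be inspected on a node (assuming calicoctl is installed there):
calicoctl node status   # shows the BGP peer state for this node
ip route | grep bird    # routes learned over BGP are tagged "proto bird"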
calico# vim calico.yaml
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.3.1.15:2379,https://10.3.1.16:2379,https://10.3.1.17:2379"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"     # just uncomment these three lines
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  etcd-key: (cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n')   # paste the command output here
  etcd-cert: (cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n')      # paste the command output here
  etcd-ca: (cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n')          # paste the command output here
# If etcd does not have TLS enabled, set these to null.
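On a machine that holds the certificates, the three values can also be produced directly (base64 -w0 is the GNU coreutils equivalent of piping through tr -d '\n'):
base64 -w0 /etc/kubernetes/ssl/etcd-key.pem
base64 -w0 /etc/kubernetes/ssl/etcd.pem
base64 -w0 /etc/kubernetes/ssl/ca.pem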
The parameters above must be modified. The file also contains a parameter that sets the Pod network address range; adjust it to your environment:
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
The main parameters of the calico-node service:
CALICO_IPV4POOL_CIDR: the Calico IPAM IP address pool; Pod IPs are allocated from this pool.
CALICO_IPV4POOL_IPIP: whether to enable IPIP mode. When enabled, Calico creates a virtual tunnel interface named tunl0 on each node.
FELIX_LOGSEVERITYSCREEN: the log level.
FELIX_IPV6SUPPORT: whether to enable IPv6.
An IP pool can operate in one of two modes: BGP or IPIP. To use IPIP mode, set CALICO_IPV4POOL_IPIP="always"; to use BGP mode instead, set it to "off".
IPIP builds a tunnel between the nodes' routes to connect the two networks. When IPIP mode is enabled, Calico creates a virtual network interface named "tunl0" on each node.
To use BGP mode:
- name: CALICO_IPV4POOL_IPIP   # disable IPIP mode
  value: "never"
- name: FELIX_IPINIPENABLED    # disable IPIP in Felix
  value: "false"
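Which mode a pool actually uses can be checked after deployment (calicoctl syntax varies across Calico versions; this assumes a v3 client):
calicoctl get ippool -o wide   # shows the IPIP setting per address pool
ip link show tunl0             # the tunl0 interface exists only when IPIP is enabled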
Recommendations from the official docs:
For production clusters with at most 50 nodes:
typha_service_name: "none"
replicas: 0
For 100-200 nodes:
In the ConfigMap named calico-config, locate the typha_service_name, delete the none value, and replace it with calico-typha.
Modify the replica count in the Deployment named calico-typha to the desired number of replicas.
typha_service_name: "calico-typha"
replicas: 3
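The replica count can also be adjusted on a running cluster, for example:
kubectl -n kube-system scale deployment calico-typha --replicas=3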
For every additional 200 nodes:
We recommend at least one replica for every 200 nodes and no more than 20 replicas. In production, we recommend a minimum of three replicas to reduce the impact of rolling upgrades and failures.
Warning: If you set typha_service_name without increasing the replica count from its default of 0, Felix will try to connect to Typha, find no Typha instances to connect to, and fail to start.
Original post: https://blog.51cto.com/phospherus/2445748