I tried a few of the more popular SDN options and found flannel the easiest to work with, so here is a quick write-up.
The environment is VirtualBox with three machines; two of them, genesis and exodus, appear in the configs below.
The VM details:
[root@localhost yum.repos.d]# uname -mars
Linux localhost.localdomain 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost yum.repos.d]# cat /etc/*-release
CentOS Linux release 7.3.1611 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.3.1611 (Core)
CentOS Linux release 7.3.1611 (Core)
[root@localhost yum.repos.d]# docker version
Client:
Version: 1.12.5
API version: 1.24
Go version: go1.6.4
Git commit: 7392c3b
Built: Fri Dec 16 02:23:59 2016
OS/Arch: linux/amd64
Pick any two of the machines, run a container on each, and run ifconfig inside:
[root@localhost ~]# docker run -it busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1016 (1016.0 B) TX bytes:508 (508.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
The addresses turn out to be identical. Plain bridge mode gives no cross-host connectivity, and host mode is not something you'd want to rely on.
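The reason is that each host runs its own independent docker0 bridge with the same default subnet, so the address spaces simply overlap. You can confirm this on both hosts (illustrative output; your bridge address may differ):
[root@localhost ~]# ip -4 addr show docker0 | grep inet
    inet 172.17.0.1/16 scope global docker0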
First, yum install -y etcd flannel; if that goes through without problems, all the better.
etcd 3.x supports a --config-file option; if you need it, install from source instead (requires Go 1.6+).
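If you do want an etcd 3.x build with --config-file, a minimal from-source sketch looks like this (assumes git and Go 1.6+ are already installed; the repo ships a build script that drops binaries under bin/):
git clone https://github.com/coreos/etcd.git
cd etcd
./build
./bin/etcd --version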
Let's start with etcd, which is, in short, a "distributed key value store".
There are three ways to bootstrap an etcd cluster: static, etcd discovery, and DNS discovery.
DNS discovery is mainly a matter of SRV records; I'm not setting up a DNS service here, so only the static and etcd discovery approaches are covered below.
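For reference, DNS discovery amounts to publishing SRV records and pointing etcd at the domain with --discovery-srv instead of --initial-cluster; example.com and the host names below are placeholders:
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 genesis.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 exodus.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 genesis.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 exodus.example.com.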
Options can be given on the command line at startup or written to a config file; the default config file is /etc/etcd/etcd.conf.
The configuration for genesis:
ETCD_NAME=genesis
ETCD_DATA_DIR="/var/lib/etcd/genesis"
ETCD_LISTEN_PEER_URLS="http://192.168.99.103:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.103:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.103:2379"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etct-fantasy"
ETCD_INITIAL_CLUSTER="exodus=http://192.168.99.105:2380,genesis=http://192.168.99.103:2380"
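The same settings can also be passed as command-line flags; every ETCD_* variable maps to the flag of the same name. For genesis the equivalent invocation would be:
etcd --name genesis \
  --data-dir /var/lib/etcd/genesis \
  --listen-peer-urls http://192.168.99.103:2380 \
  --listen-client-urls http://192.168.99.103:2379,http://127.0.0.1:2379 \
  --initial-advertise-peer-urls http://192.168.99.103:2380 \
  --advertise-client-urls http://192.168.99.103:2379 \
  --initial-cluster-state new \
  --initial-cluster-token etct-fantasy \
  --initial-cluster exodus=http://192.168.99.105:2380,genesis=http://192.168.99.103:2380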
The configuration for exodus:
ETCD_NAME=exodus
ETCD_DATA_DIR="/var/lib/etcd/exodus"
ETCD_LISTEN_PEER_URLS="http://192.168.99.105:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.105:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.105:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.105:2379"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etct-fantasy"
ETCD_INITIAL_CLUSTER="exodus=http://192.168.99.105:2380,genesis=http://192.168.99.103:2380"
How you start etcd is a matter of taste; if you plan to start it with systemctl, be aware that what's in /usr/lib/systemd/system/etcd.service may not do what you expect.
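A quick way to see what the packaged unit really runs, and to override it with a drop-in instead of editing the file under /usr/lib, is sketched below (the ExecStart here assumes the unit loads /etc/etcd/etcd.conf via EnvironmentFile, so compare with what systemctl cat prints first):
systemctl cat etcd
mkdir -p /etc/systemd/system/etcd.service.d
cat > /etc/systemd/system/etcd.service.d/override.conf <<'EOF'
[Service]
# drop the packaged ExecStart and let etcd pick up the ETCD_* variables itself
ExecStart=
ExecStart=/usr/bin/etcd
EOF
systemctl daemon-reload
systemctl restart etcd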
After starting, check the cluster health and, while you're at it, see which members are there:
[root@localhost etcd]# etcdctl cluster-health
member 7a4f27f78a05e755 is healthy: got healthy result from http://192.168.99.103:2379
failed to check the health of member 8e8718b335c6c9a2 on http://192.168.99.105:2379: Get http://192.168.99.105:2379/health: dial tcp 192.168.99.105:2379: i/o timeout
member 8e8718b335c6c9a2 is unreachable: [http://192.168.99.105:2379] are all unreachable
cluster is healthy
提示"member unreachable",看来是被exodus的防火墙拦住了,我们先粗暴一点。
[root@localhost etcd]# systemctl stop firewalld
[root@localhost etcd]# etcdctl cluster-health
member 7a4f27f78a05e755 is healthy: got healthy result from http://192.168.99.103:2379
member 8e8718b335c6c9a2 is healthy: got healthy result from http://192.168.99.105:2379
cluster is healthy
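A less brutal alternative is to keep firewalld running and just open the etcd ports on each node (2379 for clients, 2380 for peers):
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --reload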
Of course, this static setup assumes you already know the details of every node.
In practice the members may not be known in advance, so we want etcd to discover them itself.
etcd provides a public discovery service, discovery.etcd.io; we use it to generate a discovery token and then create the corresponding directory on genesis:
[root@localhost etcd]# curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/6321c0706046c91f2b2598206ffa3272
[root@localhost etcd]# etcdctl set /discovery/6321c0706046c91f2b2598206ffa3272/_config/size 3
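As a sanity check, confirm the size key is where exodus will look for it:
[root@localhost etcd]# etcdctl get /discovery/6321c0706046c91f2b2598206ffa3272/_config/size
3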
Modify exodus's configuration, replacing the static cluster settings with discovery:
ETCD_NAME=exodus
ETCD_DATA_DIR="/var/lib/etcd/exodus"
ETCD_LISTEN_PEER_URLS="http://192.168.99.105:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.105:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.105:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.105:2379"
ETCD_DISCOVERY=http://192.168.99.103:2379/v2/keys/discovery/6321c0706046c91f2b2598206ffa3272
If after startup it keeps printing the following error (see: raft election):
rafthttp: the clock difference against peer ?????? is too high [??????s > 1s]
the simple fix is NTP:
[root@localhost etcd]# yum install ntp -y
[root@localhost etcd]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@localhost etcd]# systemctl start ntpd
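Once ntpd is running on both nodes, you can check that the clocks are actually converging:
[root@localhost etcd]# ntpq -p
[root@localhost etcd]# timedatectl status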
Set a key in etcd for flannel to use:
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
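The value is plain JSON, so the backend can be chosen in the same key if you don't want to rely on the default; vxlan is shown here as an example (it matches the 1450 MTU that shows up later):
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'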
When starting flannel with systemctl start flanneld, you may hit this error:
network.go:53] Failed to retrieve network config: 100: Key not found (/coreos.net) [9]
Check /etc/sysconfig/flanneld: FLANNEL_ETCD_PREFIX is most likely /atomic.io/network; change it to /coreos.com/network.
Alternatively, the prefix can be passed with -etcd-prefix.
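For reference, after the edit /etc/sysconfig/flanneld should look roughly like this (variable names differ slightly between flannel package versions, so treat it as a sketch; 192.168.99.103 is the genesis etcd from above):
FLANNEL_ETCD_ENDPOINTS="http://192.168.99.103:2379"
FLANNEL_ETCD_PREFIX="/coreos.com/network"
#FLANNEL_OPTIONS=""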
Once it starts successfully, look at the subnets:
[root@localhost etcd]# etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/10.1.90.0-24
/coreos.com/network/subnets/10.1.30.0-24
/coreos.com/network/subnets/10.1.18.0-24
After flannel starts successfully it generates /run/flannel/docker, with content like this:
DOCKER_OPT_BIP="--bip=10.1.30.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.1.30.1/24 --ip-masq=true --mtu=1450 "
Start docker with those options:
[root@localhost etcd]# source /run/flannel/docker
[root@localhost etcd]# docker daemon ${DOCKER_NETWORK_OPTIONS} >> /dev/null 2>&1 &
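Launching the daemon by hand like this is fine for a test; a more durable variant is a systemd drop-in that sources the flannel-generated file (a sketch that assumes the unit's daemon binary is /usr/bin/dockerd, so check your docker.service first):
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/flannel.conf <<'EOF'
[Service]
# pull in the bip/mtu/ip-masq options that flannel computed
EnvironmentFile=-/run/flannel/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
EOF
systemctl daemon-reload
systemctl restart docker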
Where does /run/flannel/docker come from? See flanneld's two startup options, -subnet-dir and -subnet-file.
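My understanding of the CentOS packaging (an assumption; verify with systemctl cat flanneld) is that flanneld first writes /run/flannel/subnet.env and a post-start helper then turns it into the docker options file, roughly:
# illustrative: convert flannel's subnet.env into docker daemon options
/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker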
On genesis, start a container to see the effect:
[root@localhost etcd]# docker run -it busybox
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:0A:01:5A:02
inet addr:10.1.90.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::42:aff:fe01:5a02/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:508 (508.0 B) TX bytes:508 (508.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Do the same on exodus, then ping 10.1.90.2 from exodus; it gets through.
Also ping from within each container to verify cross-host connectivity.
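For the container-to-container check, something like the following from inside the busybox container on exodus, 10.1.90.2 being the genesis container seen above:
/ # ping -c 3 10.1.90.2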
Original post: http://www.cnblogs.com/kavlez/p/flannel-install-guide.html