Tags: multicast, cli, etl, external, agent, points, images, dockerd, three modes
"Networking part 1" covered communication between networks on the same host. With the default bridge network, containers can only talk to other containers on the same host; they cannot reach containers on other hosts. The host network uses the host's network stack directly, with no performance penalty, and is typically used for local debugging; most of a container's performance loss comes from networking.
Once the Docker daemon stops, all containers exit. On startup, dockerd creates the docker0 virtual bridge and assigns it 172.17.0.1/16 by default; this can be changed by adding "bip": "192.168.1.1/24" to /etc/docker/daemon.json. If containers on different hosts were simply bridged together, containers started on host1 and host2 with the default network would receive identical IPs, causing address conflicts. Hence the need for a cross-host networking solution.
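For reference, a minimal /etc/docker/daemon.json that overrides the docker0 bridge address might look like this (the address is just an example):

```json
{
    "bip": "192.168.1.1/24"
}
```

Note that this file must be removed or cleaned up later once flannel takes over docker0's addressing.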
Cross-host container communication options:
Docker's built-in custom networks: macvlan, overlay (rarely used in practice; cumbersome to operate)
SDN: software-defined networking
CoreOS:flannel
calico
weave
OVS: Open vSwitch (used by OpenStack)
flannel is an overlay-based solution, written in Go.
etcd is a directory-style key/value store, structured like a single-rooted inverted tree. It is a distributed key-value store that gives a cluster a reliable place to keep data: it handles leader election cleanly during network partitions and tolerates node failures, including failure of the leader.
Experiment environment: two hosts, host1 (hostname docker0, 192.168.175.20) and host2 (hostname docker1, 192.168.175.30).
etcd installation and configuration
Download etcd: https://github.com/etcd-io/etcd/releases
etcd guide: https://coreos.com/etcd/docs/latest/dev-guide/interacting_v3.html
This walkthrough installs from the binary release.
Start etcd: etcd --name etcd --data-dir /var/lib/etcd --advertise-client-urls http://192.168.175.20:2379,http://127.0.0.1:2379 --listen-client-urls http://192.168.175.20:2379,http://127.0.0.1:2379 &
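Before moving on, it is worth confirming etcd is serving. A quick check against the live endpoint (these are etcdctl v2 commands, matching the `set` command used later):

```shell
# Check cluster health and list the keyspace root (etcdctl v2 API)
etcdctl --endpoints http://127.0.0.1:2379 cluster-health
etcdctl --endpoints http://127.0.0.1:2379 ls /
```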
flannel installation, startup, and configuration:
Download flannel: https://github.com/coreos/flannel/releases
Guide: https://coreos.com/flannel/docs/latest/flannel-config.html
systemd unit file location: /etc/systemd/system/ or /usr/lib/systemd/system/
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service
[Service]
User=root
ExecStartPost=/root/flannel/mk-docker-opts.sh
ExecStart=/root/flannel/flanneld --etcd-endpoints=http://192.168.175.20:2379 --iface=192.168.175.20 --ip-masq=true --etcd-prefix=/coreos.com/network
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
systemctl daemon-reload && systemctl start flanneld.service
At this point, check the network interfaces:
[root@docker0 flannel]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:e7:83:5b:05 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens36: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.175.20 netmask 255.255.255.0 broadcast 192.168.175.255
inet6 fe80::20c:29ff:fe00:dae prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:00:0d:ae txqueuelen 1000 (Ethernet)
RX packets 3401 bytes 299192 (292.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2867 bytes 386631 (377.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 10.0.9.0 netmask 255.255.0.0 destination 10.0.9.0
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 1879 bytes 114128 (111.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1879 bytes 114128 (111.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flanneld has not yet taken over docker0. Running mk-docker-opts.sh generates a set of variables in /run/docker_opts.env, which is then used to modify the docker startup unit:
[root@docker0 flannel]# cat /run/docker_opts.env
DOCKER_OPT_BIP="--bip=10.0.9.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_OPTS=" --bip=10.0.9.1/24 --ip-masq=false --mtu=1472"
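The --mtu=1472 above reflects the udp backend's encapsulation overhead: flannel wraps each packet in an outer IP header (20 bytes) plus a UDP header (8 bytes), so the tunnel MTU is the physical interface's 1500-byte MTU minus 28:

```shell
# flannel udp backend: tunnel MTU = physical MTU - outer IP header - UDP header
phys_mtu=1500
ip_header=20
udp_header=8
echo $((phys_mtu - ip_header - udp_header))   # prints 1472
```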
Add to docker.service:
EnvironmentFile=/run/docker_opts.env
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
This passes the flannel-generated options to dockerd. Any docker0 configuration (such as "bip") left in /etc/docker/daemon.json must be removed, since dockerd rejects a setting specified both as a flag and in the configuration file. After restarting docker, docker0 takes its IP from the flannel-assigned subnet:
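Instead of editing docker.service in place, the same change can be made with a systemd drop-in, which survives package upgrades (the drop-in filename is just an example):

```ini
# /etc/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=/run/docker_opts.env
# An empty ExecStart= clears the distribution's original command before overriding it
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
```

Then apply it with systemctl daemon-reload && systemctl restart docker.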
[root@docker0 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1472
inet 10.0.9.1 netmask 255.255.255.0 broadcast 10.0.9.255
inet6 fe80::42:e7ff:fe83:5b05 prefixlen 64 scopeid 0x20<link>
ether 02:42:e7:83:5b:05 txqueuelen 0 (Ethernet)
RX packets 10 bytes 616 (616.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18 bytes 1404 (1.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@docker0 ~]# etcdctl --endpoints http://127.0.0.1:2379 set /coreos.com/network/config '{"Network": "10.0.0.0/16","SubnetLen": 24,"SubnetMin": "10.0.1.0","SubnetMax": "10.0.20.0","Backend": {"Type": "vxlan"}}'
{"Network": "10.0.0.0/16","SubnetLen": 24,"SubnetMin": "10.0.1.0","SubnetMax": "10.0.20.0","Backend": {"Type": "vxlan"}}
This JSON config must exist in etcd before flanneld starts; flanneld reads it from the path given by --etcd-prefix. Note that the interface listings in this walkthrough show a flannel0 device with MTU 1472, which corresponds to flannel's default udp backend; with the vxlan backend the device appears as flannel.1 instead.
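SubnetLen 24 together with SubnetMin 10.0.1.0 and SubnetMax 10.0.20.0 leaves 20 possible /24 leases; the two hosts in this walkthrough were assigned 10.0.9.0/24 and 10.0.12.0/24 from this range. Enumerating the range makes the allocation pool concrete:

```shell
# List every /24 subnet flannel may lease between SubnetMin and SubnetMax
for i in $(seq 1 20); do
    echo "10.0.$i.0/24"
done
```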
Now start a container on host1:
[root@docker0 ~]# docker run -it alpine:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue state UP
link/ether 02:42:0a:00:09:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.9.2/24 brd 10.0.9.255 scope global eth0
valid_lft forever preferred_lft forever
host2 (docker1) uses the same flanneld unit file, with --iface changed to its own address:
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service
[Service]
User=root
ExecStartPost=/root/flannel/mk-docker-opts.sh
# --iface set to host2's own IP
ExecStart=/root/flannel/flanneld --etcd-endpoints=http://192.168.175.20:2379 --iface=192.168.175.30 --ip-masq=true --etcd-prefix=/coreos.com/network
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
After starting flanneld and restarting docker on host2, `ip a` there shows the leased subnet:
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:f7:c7:51 brd ff:ff:ff:ff:ff:ff
inet 192.168.175.30/24 brd 192.168.175.255 scope global ens36
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef7:c751/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP
link/ether 02:42:b8:20:b7:2c brd ff:ff:ff:ff:ff:ff
inet 10.0.12.1/24 brd 10.0.12.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:b8ff:fe20:b72c/64 scope link
valid_lft forever preferred_lft forever
5: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet 10.0.12.0/16 scope global flannel0
valid_lft forever preferred_lft forever
7: veth31f85ea@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue master docker0 state UP
link/ether 8a:8d:92:78:de:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::888d:92ff:fe78:de08/64 scope link
valid_lft forever preferred_lft forever
On host1 (docker0), run a container and ping host2's container:
[root@docker0 ~]# docker run -it alpine:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue state UP
link/ether 02:42:0a:00:09:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.9.2/24 brd 10.0.9.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 10.0.12.2
PING 10.0.12.2 (10.0.12.2): 56 data bytes
64 bytes from 10.0.12.2: seq=0 ttl=60 time=25.784 ms
64 bytes from 10.0.12.2: seq=1 ttl=60 time=1.435 ms
64 bytes from 10.0.12.2: seq=2 ttl=60 time=1.580 ms
64 bytes from 10.0.12.2: seq=3 ttl=60 time=1.955 ms
--- 10.0.12.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 1.435/7.688/25.784 ms
host2 (docker1) runs an alpine container and pings back:
[root@docker1 ~]# docker run -it alpine:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1472 qdisc noqueue state UP
link/ether 02:42:0a:00:0c:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.12.2/24 brd 10.0.12.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping -w2 -c3 10.0.9.2
PING 10.0.9.2 (10.0.9.2): 56 data bytes
64 bytes from 10.0.9.2: seq=0 ttl=60 time=9.082 ms
64 bytes from 10.0.9.2: seq=1 ttl=60 time=1.724 ms
The host itself can also reach a container on the other host (here host1 pinging host2's container):
[root@docker0 ~]# ping 10.0.12.2
PING 10.0.12.2 (10.0.12.2) 56(84) bytes of data.
64 bytes from 10.0.12.2: icmp_seq=1 ttl=61 time=0.609 ms
64 bytes from 10.0.12.2: icmp_seq=2 ttl=61 time=2.16 ms
64 bytes from 10.0.12.2: icmp_seq=3 ttl=61 time=1.53 ms
64 bytes from 10.0.12.2: icmp_seq=4 ttl=61 time=2.63 ms
References:
https://coreos.com/etcd/docs/latest/
https://coreos.com/flannel/docs/latest/
Original post: http://blog.51cto.com/12580678/2330194