Two ways to deploy a Kubernetes cluster (daemon-based and containerized), plus GlusterFS as persistent storage
Tags: kubernetes docker flannel etcd glusterfs
ClusterIP: accessed via a virtual IP (VIP) inside the cluster.
NodePort: you have to set up your own load balancer.
LoadBalancer: only available with certain cloud providers and Google Container Engine.
https://www.nginx.com/blog/load-balancing-kubernetes-services-nginx-plus/
port: the Service's own port (for clients inside the cluster).
targetPort: the port on the Pods.
nodePort: a port on the host (also a service port, but for clients outside the cluster).
hostPort: equivalent to -p in docker run (a port on the local host).
Access flow:
client → nodePort → random local port on the node → ClusterIP:port → targetPort → containerPort
There are two service-discovery mechanisms:
Environment variables: supported by default.
DNS: SkyDNS has to be installed as an add-on.
Note:
The environment-variable approach has a limitation: the Pod must be created after the Service. DNS has no such restriction.
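For example, with the httpdsvc-nodeport Service defined later in this article, any Pod created after the Service sees variables of the form {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT (service name upper-cased, dashes turned into underscores). A quick check, assuming a running pod named my-httpd-xxxxx (the pod name here is illustrative):
kubectl exec my-httpd-xxxxx -- printenv | grep HTTPDSVC_NODEPORT
# expect something like:
# HTTPDSVC_NODEPORT_SERVICE_HOST=10.254.x.x   (the Service's cluster IP)
# HTTPDSVC_NODEPORT_SERVICE_PORT=80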
Kubernetes (ppc64le):
0. Disable the firewalld service:
a. systemctl disable firewalld
b. systemctl stop firewalld
1. Install Advance Toolchain 9.0 on all nodes:
# yum install -y advance-toolchain-at9.0-runtime \
advance-toolchain-at9.0-devel \
advance-toolchain-at9.0-perf \
advance-toolchain-at9.0-mcore-libs
# echo 'export PATH=/opt/at9.0/bin:/opt/at9.0/sbin:$PATH' >> /etc/profile.d/at9.sh
# source /etc/profile.d/at9.sh
# /opt/at9.0/sbin/ldconfig
2. Master node:
a. # git clone https://github.com/Pensu/pause.git
# cd pause
# make
b. yum install kubernetes-client kubernetes-master etcd
c. Configure /etc/kubernetes/config:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://172.16.9.158:8080"
d. Configure /etc/kubernetes/apiserver:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
#KUBE_API_PORT="--port=8080"
# KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://172.16.9.158:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
e. Configure /etc/etcd/etcd.conf:
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
f. Start the services:
# for SERVICES in kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
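A quick sanity check (not part of the original steps): once the three master components are up, both of these should answer without errors on the master:
# kubectl -s http://172.16.9.158:8080 get componentstatuses
# curl http://172.16.9.158:8080/version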
3. Configure the minion:
a. yum install docker-io kubernetes-client kubernetes-node
b. Configure /etc/kubernetes/kubelet:
KUBELET_ADDRESS="--address=0.0.0.0"
# KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=172.16.9.158"
KUBELET_API_SERVER="--api-servers=http://172.16.9.158:8080"
KUBELET_ARGS="--pod-infra-container-image=docker.repo:5000/kube/pause:0.8.0"
c. Start the services:
for SERVICES in kube-proxy kubelet flanneld docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
4. Application:
a. Write a ReplicationController:
cat > httprc.yml <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-httpd
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: ppc64le/httpd
        ports:
        - containerPort: 80
EOF
b. Write a Service:
cat > httpsvc-nodeport.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: httpdsvc-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080
  selector:
    app: httpd
EOF
c. Create the ReplicationController and the Service:
kubectl create -f httprc.yml
kubectl create -f httpsvc-nodeport.yml
d. Check the application (an external test follows below):
kubectl get pod
kubectl get rc
kubectl get svc
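If everything is up, the httpd pods should be reachable through the NodePort from outside the cluster; a quick test (substitute any node's IP for 172.16.9.158; the stock httpd image answers with its default page):
# curl http://172.16.9.158:30080/
# <html><body><h1>It works!</h1></body></html>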
5. Installing and configuring flannel:
http://blog.shippable.com/kubernetes-cluster-with-flannel-overlay-network
a. yum -y install flannel (on the minion nodes)
b. Set the overlay subnet (on the master node); either command works, as long as the etcd endpoint matches your cluster:
etcdctl set /atomic.io/network/config '{"Network":"10.10.0.0/16", "Backend":{"Type":"vxlan"}}'
curl http://172.16.0.204:4001/v2/keys/atomic.io/network/config -XPUT -d value='{"Network":"10.10.0.0/16", "Backend":{"Type":"vxlan"}}'
c. Configure /etc/sysconfig/flanneld:
FLANNEL_ETCD="http://172.16.9.158:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
#FLANNEL_OPTIONS=""
d. service flanneld start
e. Create a systemd drop-in for docker (flannel.conf) with the following content (create the drop-in directory first if it does not exist: mkdir -p /usr/lib/systemd/system/docker.service.d):
cat > /usr/lib/systemd/system/docker.service.d/flannel.conf <<'EOF'
[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=
ExecStart=/usr/bin/docker daemon $DOCKER_NETWORK_OPTIONS -H fd://
EOF
Note: the whole point of this drop-in is to introduce the $DOCKER_NETWORK_OPTIONS variable. The docker-io package installed by default (1.9.1) ships only the docker file under /etc/sysconfig/; unlike other RPM builds it has no docker-network, docker-storage, etc. files (which docker.service would normally pull in via EnvironmentFile), so the variable is never defined by its startup script and has to be added here. The variable itself is produced by flannel (specifically by mk-docker-opts.sh). Also note the quoted heredoc delimiter ('EOF'): without it the shell would expand $DOCKER_NETWORK_OPTIONS while writing the file.
f. systemctl daemon-reload; service docker restart
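To confirm flannel and docker are wired together (a quick check, not in the original text): flanneld writes its lease to /run/flannel/subnet.env, and mk-docker-opts.sh derives /run/flannel/docker from it, so after the restart docker0 must sit inside the flannel subnet:
# cat /run/flannel/subnet.env
# cat /run/flannel/docker
# ip addr show flannel.1
# ip addr show docker0
If the docker0 address does not fall inside FLANNEL_SUBNET, docker was started before flanneld.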
Services that need their ports opened in the (public) firewall (a firewall-cmd sketch follows the list):
kube-apiserver: 8080 by default
kube-proxy: whatever nodePort values are in use
etcd: 2379 by default
flanneld: the default backend encapsulates packets in UDP (port 8285); the vxlan backend (port 8472) is recommended instead. One published benchmark found vxlan better in both compatibility and performance: https://blog.tingyun.com/web/article/detail/406
glusterfs: glusterd (24007), glusterfsd (49152), rpcbind (111)
cAdvisor: 4194
skydns: 53
kubelet: 10250
kube2sky: 8081
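If firewalld is kept enabled instead of being disabled in step 0, the list above translates to roughly the following (a sketch; adjust to your actual nodePort range and flannel backend):
# firewall-cmd --permanent --add-port=8080/tcp
# firewall-cmd --permanent --add-port=2379/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=8472/udp    # vxlan backend; use 8285/udp for the UDP backend
# firewall-cmd --permanent --add-port=30080/tcp   # the nodePort used in the examples
# firewall-cmd --reload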
skydns and kube2sky:
Start skydns:
skydns -addr=0.0.0.0:53 -machines=http://172.16.17.199:2379 -domain=cluster.local. &
Start kube2sky:
kube2sky --domain=cluster.local --etcd-server=http://172.16.17.199:2379 --kube-master-url=http://172.16.17.199:8080 &
Configure kubelet (on the nodes):
Point kubelet at the cluster domain and the in-cluster DNS (skydns) address.
Append the following two parameters to KUBELET_ARGS in the kubelet config file (/etc/kubernetes/kubelet):
KUBELET_ARGS="--cluster-dns=172.16.17.199 --cluster-domain=cluster.local"
Restart kubelet for the change to take effect.
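Once skydns and kube2sky are running, service records take the form {service}.{namespace}.svc.{domain}. A quick lookup against the skydns address used above (assuming the httpdsvc-nodeport Service from earlier, in the default namespace) should return the Service's cluster IP:
# dig @172.16.17.199 httpdsvc-nodeport.default.svc.cluster.local +short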
Startup scripts (systemd units):
a. skydns:
cat > /usr/lib/systemd/system/skydns.service <<EOF
[Unit]
Description=SkyDNS service
After=etcd.service
[Service]
ExecStart=/usr/local/bin/skydns \
-addr=0.0.0.0:53 \
-domain=cluster.local. \
-machines=http://172.16.17.199:2379
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
b. kube2sky:
cat > /usr/lib/systemd/system/kube2sky.service <<EOF
[Unit]
Description=Kube2sky service
After=etcd.service skydns.service kube-apiserver.service
[Service]
ExecStart=/usr/local/bin/kube2sky \
--domain=cluster.local \
--etcd-server=http://172.16.17.199:2379 \
--kube-master-url=http://172.16.17.199:8080
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
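With the two unit files in place, reload systemd and bring the services up (standard systemd housekeeping, not spelled out in the original):
# systemctl daemon-reload
# systemctl enable skydns kube2sky
# systemctl start skydns kube2sky
# systemctl status skydns kube2sky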
Managing node labels
Add a label with kubectl label nodes {nodename} {key=value}, e.g. (a nodeSelector example follows):
kubectl label nodes 10.126.72.31 points=test
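A label like this is typically consumed through a nodeSelector in the pod template; a minimal sketch of the relevant pod-spec fragment (the points=test label matches the example above):
    spec:
      containers:
      - name: httpd
        image: ppc64le/httpd
      nodeSelector:
        points: test
With this in the RC or Deployment pod spec, the scheduler only places the pods on nodes carrying points=test.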
Persistent storage for Kubernetes with GlusterFS:
On the minion nodes:
1. Change /etc/kubernetes/config to the following:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"    (only needed when GlusterFS itself runs as a Kubernetes pod)
KUBE_MASTER="--master=http://172.16.9.158:8080"
2. Change /etc/kubernetes/kubelet to the following:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=172.16.9.158"
KUBELET_API_SERVER="--api-servers=http://172.16.9.158:8080"
KUBELET_ARGS="--pod-infra-container-image=docker.repo:5000/kube/pause:0.8.0 \
--register-node=true \
--host-network-sources=*"    (this last flag is only needed when GlusterFS itself runs as a Kubernetes pod)
3. Configure the GlusterFS yum repo (the heredoc delimiter is quoted so that $releasever and $basearch are written literally):
cat > glusterfs.repo <<'EOF'
# Place this file in your /etc/yum.repos.d/ directory
[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/rsa.pub
[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/rsa.pub
[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/rsa.pub
EOF
4. Install the GlusterFS server packages:
yum -y install glusterfs-server
5. Install the GlusterFS client packages:
yum -y install glusterfs glusterfs-fuse
6. Start the glusterd service on the GlusterFS servers:
service glusterd start
7. Configure the hosts files:
a. On node gluster1, append to /etc/hosts:
cat >> /etc/hosts <<EOF
127.0.0.1 gluster1
172.16.12.10 gluster2
EOF
b. On node gluster2, append to /etc/hosts:
cat >> /etc/hosts <<EOF
172.16.12.9 gluster1
127.0.0.1 gluster2
EOF
8. Form the GlusterFS cluster:
a. gluster peer probe gluster2
b. Check the cluster peer status:
gluster peer status
9. A dedicated partition is needed for GlusterFS to create its volumes on; here we assume the disk /dev/vdb carries a single partition /dev/vdb1:
a. mkdir /brick; mkfs.xfs /dev/vdb1
b. mount /dev/vdb1 /brick
c. Create the GlusterFS volume:
gluster volume create gv1 replica 2 gluster1:/brick/gv1 gluster2:/brick/gv1
d. Start the gv1 volume:
gluster volume start gv1
e. Check the gv1 volume's status:
gluster volume info
10. Mount test from a client:
a. Configure the client's hosts file first:
cat >> /etc/hosts <<EOF
172.16.12.9 gluster1
172.16.12.10 gluster2
EOF
b. mount -t glusterfs gluster1:/gv1 /mnt
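To confirm that the replica 2 volume actually mirrors data (a quick test, assuming the mount from step b): write a file through the mount, then check that it shows up in the brick directory on both servers:
# echo hello > /mnt/testfile
# ls /brick/gv1/        (run on both gluster1 and gluster2; testfile should appear on each)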
11. Integrating with Kubernetes:
a. Create the GlusterFS Endpoints (they serve as the entry point into the GlusterFS filesystem):
cat > glusterfs-endpoints.yml <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 172.16.12.9
  ports:
  - port: 1
- addresses:
  - ip: 172.16.12.10
  ports:
  - port: 1
EOF
b. Create the Service backing the glusterfs-cluster Endpoints:
cat > glusterfs-service.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
EOF
c. Deploy an application that uses GlusterFS as backend storage, using ppc64le/httpd as the example:
cat > apache-deployment.yml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: ppc64le/httpd
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/local/apache2/htdocs
          name: glusterfsvol
      volumes:
      - name: glusterfsvol
        glusterfs:
          endpoints: glusterfs-cluster
          path: gv1
EOF
d. Define the Service for the ppc64le/httpd application:
cat > apache-service.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: apache
EOF
e. Create everything in order (a verification sketch follows):
kubectl create -f glusterfs-endpoints.yml
kubectl create -f glusterfs-service.yml
kubectl create -f apache-deployment.yml
kubectl create -f apache-service.yml
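One way to verify the whole chain (a sketch; the file name and content are arbitrary): write an index page into gv1 through a GlusterFS mount, then fetch it via the NodePort. Both replicas serve the same htdocs because they mount the same volume:
# mount -t glusterfs gluster1:/gv1 /mnt
# echo "hello from glusterfs" > /mnt/index.html
# curl http://<any-node-ip>:30080/
hello from glusterfs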
Containerized deployment of a Kubernetes cluster:
Master node:
1. Install kubernetes-node, kubernetes-client, etcd, and flannel:
a. yum -y install etcd flannel kubernetes-node kubernetes-client
b. Configure flannel and docker:
Same as in the daemon-based deployment above. If the Docker version is > 1.11.*, also add the following to docker.service:
--exec-root=/var/lib/docker
2. Change /etc/kubernetes/kubelet as follows:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=172.16.8.68"
KUBELET_API_SERVER="--api-servers=http://172.16.8.68:8080"
KUBELET_ARGS="--pod-infra-container-image=docker.repo:5000/kube/pause:0.8.0 \
--config=/etc/kubernetes/manifests \
--register-schedulable=false \
--cluster-dns=10.254.53.53 \
--cluster-domain=cluster.local"
3. Change /etc/kubernetes/config as follows:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBE_MASTER="--master=http://172.16.8.68:8080"
4. Add the following to the [Service] section of /usr/lib/systemd/system/kubelet.service:
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
5. Configure /etc/kubernetes/manifests/kube-apiserver.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: index.tenxcloud.com/google_containers/kube-apiserver-ppc64le:v1.3.0
    command:
    - /usr/local/bin/kube-apiserver
    - --insecure-bind-address=0.0.0.0
    - --etcd-servers=http://172.16.11.244:4001
    - --advertise-address=172.16.11.244
    - --allow-privileged=true
    - --service-cluster-ip-range=10.254.0.0/16
    - --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota
    - --runtime-config=extensions/v1beta1=true,extensions/v1beta1/thirdpartyresources=true
    ports:
    - containerPort: 8080
      hostPort: 8080
      name: local
6. Configure /etc/kubernetes/manifests/kube-proxy.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: index.tenxcloud.com/google_containers/kube-proxy-ppc64le:v1.3.0
    command:
    - /usr/local/bin/kube-proxy
    - --master=http://172.16.11.244:8080
    - --proxy-mode=iptables
    securityContext:
      privileged: true
7. Configure /etc/kubernetes/manifests/kube-controller-manager.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: index.tenxcloud.com/google_containers/kube-controller-manager-ppc64le:v1.3.0
    command:
    - /usr/local/bin/kube-controller-manager
    - --master=http://172.16.11.244:8080
    - --leader-elect=true
8. Configure /etc/kubernetes/manifests/kube-scheduler.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: index.tenxcloud.com/google_containers/kube-scheduler-ppc64le:v1.3.0
    command:
    - /usr/local/bin/kube-scheduler
    - --master=http://172.16.11.244:8080
    - --leader-elect=true
9. Start the kubelet service:
systemctl start kubelet
10. Verify that apiserver port 8080 is listening:
ss -an src :8080
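Two more checks that the static pods came up (not in the original; run on the master):
# curl -s http://localhost:8080/version
# kubectl -s http://localhost:8080 get pods --namespace=kube-system
The kube-apiserver, kube-proxy, kube-controller-manager and kube-scheduler pods should all show as Running.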
Minion node:
1. Install kubernetes-node, kubernetes-client, and flannel:
yum -y install flannel kubernetes-node kubernetes-client
2. Change /etc/kubernetes/kubelet as follows:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override={{ inventory_hostname }}"    ({{ inventory_hostname }} is an Ansible template variable; substitute the node's own IP or hostname)
KUBELET_API_SERVER="--api-servers=http://172.16.8.68:8080"
KUBELET_ARGS="--pod-infra-container-image=docker.repo:5000/kube/pause:0.8.0 \
--config=/etc/kubernetes/manifests \
--register-node=true \
--cluster-dns=10.254.53.53 \
--cluster-domain=cluster.local"
3. Change /etc/kubernetes/config as follows:
Same as on the master node.
4. Add the ExecStartPre line to the [Service] section of /usr/lib/systemd/system/kubelet.service:
Same as on the master node.
5. Configure /etc/kubernetes/manifests/kube-proxy.yaml (identical to the master's):
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: index.tenxcloud.com/google_containers/kube-proxy-ppc64le:v1.3.0
    command:
    - /usr/local/bin/kube-proxy
    - --master=http://172.16.11.244:8080
    - --proxy-mode=iptables
    securityContext:
      privileged: true
6. Start the services:
a. systemctl daemon-reload
b. for i in flanneld docker kubelet; do
systemctl enable $i
systemctl restart $i
systemctl status $i
done
7. On the master node, check whether the minion has registered:
kubectl get nodes
Original post: http://noican.blog.51cto.com/4081966/1825495