Docker makes single-host container virtualization much easier to manage; it sits between the operating-system layer and the application layer.
Compared with traditional virtualization (KVM, Xen):
Docker is more flexible for implementing application-layer functionality, and it uses resources more efficiently.
Compared with bare applications:
Docker binds the application and the operating system together (as an image), lowering the cost of deployment and maintenance.
Sitting in this position, Docker gives a qualitative improvement for deploying services on a single machine. But across machines, at large scale, and when quality of service must be guaranteed, Docker by itself falls short, and traditional ops-automation tools feel out of place whether they are deployed inside Docker or used to manage Docker.
Kubernetes manages Docker clusters that are large-scale, distributed, and highly available.
Kubernetes can be understood as an ops-automation tool at the container level. Earlier OS-level (Linux, Windows) automation tools such as Puppet, SaltStack, and Chef make sure that code, configuration files, and processes are in the correct state; in essence they maintain state. Kubernetes also maintains state, just at the container level; beyond maintaining state, it additionally has to solve communication between Docker containers across machines.
1: pod
2: ReplicationController
3: service
The architecture splits broadly into control nodes and compute nodes: the control node issues commands, the compute nodes do the work.
First, let's try to understand the architecture from the diagram itself.
Host environment:
110 and 111 run etcd; 110 is the Kubernetes control node; 111 and 112 are compute nodes.
Environment preparation:
yum install -y epel-release
systemctl stop firewalld
systemctl disable firewalld
etcd is a distributed, high-performance, highly available key-value store, developed and maintained by CoreOS and inspired by ZooKeeper and Doozer. It is written in Go and uses the Raft consensus algorithm for log replication to guarantee strong consistency.
Reliable: Raft guarantees consistency.
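Raft only makes progress while a majority (quorum) of members is reachable, which is worth keeping in mind here: the two-member cluster built below (110 and 111) tolerates zero failures, while three members would tolerate one. A quick sketch of the arithmetic:

```python
# Raft makes progress only while a majority (quorum) of members is alive.
def quorum(n):
    """Smallest majority of an n-member cluster."""
    return n // 2 + 1

def tolerated_failures(n):
    """Members that can fail while the cluster stays available."""
    return n - quorum(n)

for n in (1, 2, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

Adding a second member therefore improves durability (data is replicated) but not availability; an odd member count is the usual recommendation.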
1: Install the package:
yum install etcd -y
2: Edit the configuration, /etc/etcd/etcd.conf:
# [member]
ETCD_NAME=192.168.56.110 # member name; must match the entries in ETCD_INITIAL_CLUSTER below
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" # data directory
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.56.110:2380" # peer (cluster replication) listen address and port
ETCD_LISTEN_CLIENT_URLS="http://192.168.56.110:4001" # client listen address and port
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.56.110:2380" # advertised peer URL
ETCD_INITIAL_CLUSTER="192.168.56.110=http://192.168.56.110:2380,192.168.56.111=http://192.168.56.111:2380" # cluster members, format: $name=$peer_url, comma-separated
ETCD_INITIAL_CLUSTER_STATE="new" # initial state; becomes "existing" after the cluster is initialized
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # cluster token (name)
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.110:4001" # advertised client URL
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#
#[security]
#ETCD_CA_FILE=""
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_PEER_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
systemctl enable etcd
systemctl start etcd
#etcdctl member list
dial tcp 127.0.0.1:2379: connection refused
# by default etcdctl connects to 127.0.0.1:2379, while we configured 192.168.56.110:4001
# etcdctl -C 192.168.56.110:4001 member list
no endpoints available
# if the error persists, check whether the service is actually listening
# netstat -lnp | grep etcd
tcp 0 0 192.168.56.110:4001 0.0.0.0:* LISTEN 18869/etcd
tcp 0 0 192.168.56.110:2380 0.0.0.0:* LISTEN 18869/etcd
# then check whether the port is reachable from the other node
telnet 192.168.56.111 4001
Trying 192.168.56.111...
Connected to 192.168.56.111.
Escape character is '^]'.
^C
# etcdctl -C 192.168.56.110:4001 member list
10f1c239a15ba875: name=192.168.56.110 peerURLs=http://192.168.56.110:2380 clientURLs=http://192.168.56.110:4001
f7132cc88f7a39fa: name=192.168.56.111 peerURLs=http://192.168.56.111:2380 clientURLs=http://192.168.56.111:4001
# etcdctl -C 192.168.56.110:4001 mk /coreos.com/network/config '{"Network":"10.0.0.0/16"}'
{"Network":"10.0.0.0/16"}
# etcdctl -C 192.168.56.110:4001 get /coreos.com/network/config
{"Network":"10.0.0.0/16"}
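This /coreos.com/network/config key is what flannel (installed on the compute nodes later) reads via FLANNEL_ETCD_KEY. Assuming flannel's default SubnetLen of 24, each host leases one /24 out of the 10.0.0.0/16, so this pool supports up to 256 hosts; a quick check with Python's ipaddress module:

```python
import ipaddress

# the overlay network flannel manages, as written into etcd above
overlay = ipaddress.ip_network("10.0.0.0/16")

# flannel's default SubnetLen is 24: one /24 leased per host
host_subnets = list(overlay.subnets(new_prefix=24))
print(len(host_subnets))                  # number of per-host subnets
print(host_subnets[0], host_subnets[-1])  # first and last lease
```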
1: Install on the control node
yum -y install kubernetes
2: Configuration file, /etc/kubernetes/apiserver:
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"
# Comma separated list of nodes in the etcd cluster
#KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.56.110:4001,http://192.168.56.111:4001"
# point this at the etcd cluster we configured
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--portal_net=192.168.56.150/28"
# the service (portal) network; Kubernetes exposes services through this address range
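Note that 192.168.56.150/28 is not itself a network address; the enclosing /28 is 192.168.56.144/28, a block of 16 addresses, and the service IP 192.168.56.156 allocated later in this walkthrough falls inside it:

```python
import ipaddress

# strict=False accepts a host address and returns its enclosing network
portal = ipaddress.ip_network("192.168.56.150/28", strict=False)
print(portal)                  # the actual /28 network
print(portal.num_addresses)    # addresses in a /28
print(ipaddress.ip_address("192.168.56.156") in portal)
```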
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
3: Start the services
The stock startup script for the API server has a problem; the corrected /usr/lib/systemd/system/kube-apiserver.service:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
PermissionsStartOnly=true
ExecStartPre=-/usr/bin/mkdir /var/run/kubernetes
ExecStartPre=-/usr/bin/chown -R kube:kube /var/run/kubernetes/
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl restart kube-apiserver kube-controller-manager kube-scheduler
# ps aux | grep kube
kube 20505 5.4 1.6 45812 30808 ? Ssl 22:05 0:07 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://192.168.56.110:2380,http://192.168.56.110:2380 --address=0.0.0.0 --allow_privileged=false --portal_net=192.168.56.0/24 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota
kube 20522 1.8 0.6 24036 12064 ? Ssl 22:05 0:02 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --machines=127.0.0.1 --master=http://127.0.0.1:8080
kube 20539 1.3 0.4 17420 8760 ? Ssl 22:05 0:01 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
2: Install on the compute nodes
yum -y install kubernetes docker flannel bridge-utils net-tools
Common configuration, /etc/kubernetes/config:
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.56.110:8080" # change this IP to the control node's IP

Kubelet configuration, /etc/kubernetes/kubelet:
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.56.111" # this node's address
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=192.168.56.111" # this node's address
# location of the api-server
KUBELET_API_SERVER="--api_servers=http://192.168.56.110:8080" # control node address
# Add your own!
KUBELET_ARGS="--pod-infra-container-image=docker.io/kubernetes/pause:latest"
# kubelet needs the pause image to start pods; by default it pulls from Google's registry, which is unreachable here for well-known reasons, so we point it at the Docker Hub copy instead
# pre-pull the image: docker pull docker.io/kubernetes/pause

Flannel configuration, /etc/sysconfig/flanneld:
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://192.168.56.110:4001,http://192.168.56.111:4001" # set this to the etcd client endpoints
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
3: Fix the service units
The default Kubernetes unit files have problems and need some adjustments:
cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS
LimitNOFILE=65535
LimitNPROC=10240
Restart=on-failure
[Install]
WantedBy=multi-user.target
Docker's default docker0 bridge conflicts with the subnet flannel assigns, so remove it before restarting:
systemctl start docker
systemctl stop docker
ifconfig docker0 down
brctl delbr docker0
Start the services:
systemctl enable kube-proxy kubelet flanneld docker
systemctl restart kube-proxy kubelet flanneld docker
# kubectl get nodes
NAME LABELS STATUS
192.168.56.111 kubernetes.io/hostname=192.168.56.111 Ready
192.168.56.112 kubernetes.io/hostname=192.168.56.112 Ready
Managing Kubernetes really means managing pods, ReplicationControllers, and services. Command-line management is best driven from configuration files; that is easier to manage and more standardized.
kubectl create -h
Create a resource by filename or stdin.
JSON and YAML formats are accepted.
Usage:
kubectl create -f FILENAME [flags]
Examples:
// Create a pod using the data in pod.json.
$ kubectl create -f pod.json
// Create a pod based on the JSON passed into stdin.
$ cat pod.json | kubectl create -f -
Format conventions:
apiVersion: v1beta3 # API version; must be one listed by kubectl api-versions
kind: ReplicationController # Pod, ReplicationController, or Service
metadata: # metadata, mainly name and labels
  name: test
spec: # configuration; the concrete fields depend on the kind
  ***
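As a concrete instance of this skeleton, a minimal v1beta3 Pod could look like the following (the name here is made up for illustration; the image is the stock redis from Docker Hub):

```yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: redis-test          # hypothetical name
  labels:
    name: redis-test
spec:
  containers:
  - name: redis
    image: redis            # pulled from Docker Hub
    ports:
    - containerPort: 6379
```

Saved as redis-test.yaml, it would be submitted with kubectl create -f redis-test.yaml.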
     +-----------+
     |           |
     |   logic   |   # logic (application) service
     |           |
     +---+--+----+
         |  |
     +---+  +-----+
     |            |
     |            |
+----v-----+ +----v----+
|          | |         |
|    DB    | |  redis  |   # backing services it calls
|          | |         |
+----------+ +---------+
Approach: each pod provides one complete set of the services.
1: Prepare the images.
2: RC configuration, wechat-rc.yaml:
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: wechatv4
  labels:
    name: wechatv4
spec:
  replicas: 1
  selector:
    name: wechatv4
  template:
    metadata:
      labels:
        name: wechatv4
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
      - name: postgres
        image: opslib/wechat_db
        ports:
        - containerPort: 5432
      - name: wechat
        image: opslib/wechat1
        ports:
        - containerPort: 80
# kubectl create -f wechat-rc.yaml
replicationcontrollers/wechat
Because containers in the same pod share a network namespace, the application reaches postgres and redis over 127.0.0.1:
sql_connection='postgresql://wechat:wechatpassword@127.0.0.1/wechat'
cached_backend='redis://127.0.0.1:6379/0'
3: Service configuration, wechat-service.yaml:
apiVersion: v1beta3
kind: Service
metadata:
  name: wechat
  labels:
    name: wechat
spec:
  ports:
  - port: 80
  selector:
    name: wechatv4
# kubectl create -f wechat-service.yaml
services/wechat
kubectl get service wechat
NAME LABELS SELECTOR IP(S) PORT(S)
wechat name=wechat name=wechatv4 192.168.56.156 80/TCP
# curl -i http://192.168.56.156
HTTP/1.1 200 OK
Content-Length: 0
Access-Control-Allow-Headers: X-Auth-Token, Content-type
Server: TornadoServer/4.2
Etag: "da39a3ee5e6b4b0d3255bfef95601890afd80709"
Date: Mon, 06 Jul 2015 09:04:49 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Content-Type: application/json
With the basic deployment in place, Kubernetes can use rolling updates when the service needs upgrading, essentially achieving hot updates of the service.
#kubectl rolling-update wechatv3 -f wechatv3.yaml
Creating wechatv4
At beginning of loop: wechatv3 replicas: 0, wechatv4 replicas: 1
Updating wechatv3 replicas: 0, wechatv4 replicas: 1
At end of loop: wechatv3 replicas: 0, wechatv4 replicas: 1
Update succeeded. Deleting wechatv3
wechatv4
When the same service must run as multiple instances, with identical binaries but different startup configuration,
we generally have three kinds of needs:
different settings can be applied to different containers in the configuration file.
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: new
  labels:
    name: new
spec:
  replicas: 1
  selector:
    name: new
  template:
    metadata:
      labels:
        name: new
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
      - name: postgres
        image: opslib/wechat_db
        ports:
        - containerPort: 5432
      - name: wechat
        image: opslib/wechat1
        command: # the container's start command, defined here instead of in the image
        - '/bin/bash'
        - '-c'
        - '/usr/bin/wechat_api'
        - '--config=/etc/wechat/wechat.conf'
        resources: # resource constraints for the container
          request: # requested resources
            cpu: "0.5"
            memory: "512Mi"
          limits: # maximum allowed resources
            cpu: "1"
            memory: "1024Mi"
        ports:
        - containerPort: 80
        volumeMounts: # mount points inside the container
        - name: data
          mountPath: /data
      volumes:
      - name: data
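The memory quantities above use binary suffixes: 512Mi is 512 × 2^20 bytes, not 512 × 10^6. A simplified parser to make that concrete (real Kubernetes quantities also allow decimal suffixes like M and G, which this sketch ignores):

```python
# binary suffixes used by Kubernetes resource quantities
UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def parse_quantity(s):
    """Parse a binary quantity like '512Mi' into bytes (simplified)."""
    for suffix, mult in UNITS.items():
        if s.endswith(suffix):
            return int(s[: -len(suffix)]) * mult
    return int(s)  # bare number: already bytes

print(parse_quantity("512Mi"))
print(parse_quantity("1024Mi") == parse_quantity("1Gi"))
```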
Original article: http://www.cnblogs.com/ilinuxer/p/5866706.html