I. Environment Preparation
1. Prepare three virtual machines with the details below. On each one, configure the root account and install Docker; for the installation procedure see https://www.cnblogs.com/liangyuntao-ts/p/10657009.html
OS Type   IP Address        Node Role   CPU   Memory   Hostname
centos7   192.168.100.101   worker      1     2G       work01
centos7   192.168.100.102   master      1     2G       master
centos7   192.168.100.103   worker      1     2G       work02
2. Start Docker on all three servers
[root@server02 ~]# systemctl start docker
[root@server02 ~]# systemctl enable docker
[root@server02 ~]# docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77156
 Built:             Sat May 4 02:34:58 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May 4 02:02:43 2019
  OS/Arch:          linux/amd64
  Experimental:     false
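The same two systemctl commands have to run on every node. A minimal sketch of scripting that over SSH (the hostnames come from the table above; `build_node_cmd` is a hypothetical helper, and the loop only prints the commands instead of executing them):

```shell
#!/bin/sh
# build_node_cmd NODE CMD: print the ssh invocation for one node (dry run only,
# nothing is executed here).
build_node_cmd() {
  printf "ssh root@%s '%s'\n" "$1" "$2"
}

for node in work01 master work02; do
  build_node_cmd "$node" 'systemctl enable docker && systemctl start docker'
done
```

Dropping the `echo`-style indirection and calling ssh directly would run the commands for real, assuming root SSH keys are distributed.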
3. System settings: disable the firewall and SELinux, enable IP routing/forwarding, and let iptables see bridged traffic
[root@server02 ~]# systemctl stop firewalld
[root@server02 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@server02 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead) since Sat 2019-05-18 12:35:54 CST; 55s ago
     Docs: man:firewalld(1)
 Main PID: 525 (code=exited, status=0/SUCCESS)
May 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...n?).
May 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...ame.
May 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...n?).
May 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...ame.
May 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...n?).
May 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...ame.
May 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...n?).
May 18 11:47:59 server02 firewalld[525]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -...ame.
May 18 12:35:51 server02 systemd[1]: Stopping firewalld - dynamic firewall daemon...
May 18 12:35:54 server02 systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.

# Write the sysctl config file
[root@server02 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the config file
[root@server02 ~]# sysctl -p /etc/sysctl.d/k8s.conf
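A typo in k8s.conf silently breaks pod networking later, so it is worth checking the file before applying it. This `check_k8s_sysctl` function is a sketch (not part of the original tutorial) that verifies all three required keys are set to 1 in a given file:

```shell
#!/bin/sh
# check_k8s_sysctl FILE: print "ok" only if all three required keys are
# set to 1 in FILE (e.g. /etc/sysctl.d/k8s.conf); otherwise report the
# first missing key and fail.
check_k8s_sysctl() {
  for key in net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables; do
    grep -q "^$key = 1" "$1" || { echo "missing: $key"; return 1; }
  done
  echo "ok"
}

# Usage: check_k8s_sysctl /etc/sysctl.d/k8s.conf
```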
4. Configure the hosts file
# Add hosts entries so that every node can resolve the others by name
[root@server02 ~]# vi /etc/hosts
# Append the following (replace the IP addresses and hostnames with your own)
192.168.100.101 server01
192.168.100.102 master
192.168.100.103 server02
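Hand-editing /etc/hosts on three machines is easy to get wrong, and running the edit twice duplicates entries. A sketch of an idempotent helper (hypothetical, not from the tutorial) that appends an entry only when the hostname is not already present:

```shell
#!/bin/sh
# add_host FILE IP NAME: append "IP NAME" to FILE unless NAME already
# appears as a whole word in the file.
add_host() {
  grep -qw "$3" "$1" 2>/dev/null || echo "$2 $3" >> "$1"
}

# Against the real file this would be, e.g.:
#   add_host /etc/hosts 192.168.100.102 master
```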
5. Prepare the binaries
Download:
Link: https://pan.baidu.com/s/13izNNZ3Bkem61Zemhkj8gQ
Extraction code: 0ykv
After downloading, upload the file to the /home directory on every server.
6. Prepare the configuration files
# Clone the project into the /home directory
[root@server02 ~]# cd /home
[root@server02 home]# git clone https://github.com/liuyi01/kubernetes-starter.git
# Take a look at the repository contents
[root@server02 home]# cd /home/kubernetes-starter && ls
7. Edit the configuration file to generate settings that fit your environment; do this on all three servers
[root@server02 home]# vim kubernetes-starter/config.properties
# Directory holding the kubernetes binaries, e.g. /home/michael/bin
BIN_PATH=/home/bin
# IP of the current node, e.g. 192.168.1.102
NODE_IP=192.168.100.102
# etcd cluster endpoint list, e.g. http://192.168.1.102:2379
# If you already have an etcd cluster, list it here; otherwise use
# http://${MASTER_IP}:2379 (replace MASTER_IP with your master node's IP)
# If certificates are enabled, this must be https://${MASTER_IP}:2379
ETCD_ENDPOINTS=http://192.168.100.102:2379
# IP of the kubernetes master node, e.g. 192.168.1.102
MASTER_IP=192.168.100.102

#### Adjust the values above to your own setup, then generate the configs
[root@server02 home]# cd kubernetes-starter && ./gen-config.sh simple && cd /home
# Rename the extracted binaries directory to /home/bin and add it to the PATH
[root@server02 home]# mv kubernetes-bins/ bin
[root@server02 home]# vi ~/.bash_profile          # append: PATH=$PATH:/home/bin
[root@server02 home]# export PATH=$PATH:/home/bin  # take effect in the current shell
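The values in config.properties are needed again in later steps (etcd endpoints, master IP). A small reader function can pull them out of the file; this is a sketch, and `get_prop` is not part of the tutorial's own scripts:

```shell
#!/bin/sh
# get_prop FILE KEY: print the value of the first KEY=VALUE line in a
# properties file, skipping comment lines that start with '#'.
get_prop() {
  grep -v '^#' "$1" | grep "^$2=" | head -n 1 | cut -d= -f2-
}

# e.g.: get_prop kubernetes-starter/config.properties MASTER_IP
```

`cut -f2-` keeps everything after the first '=', so values containing '=' or '://' survive intact.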
II. Base Service Deployment
1. Deploy the etcd service. The binary is already in place; turn it into a systemd service and start it (run on the master node)
# Copy the service unit file into the systemd directory
[root@server02 ~]# cp /home/kubernetes-starter/target/master-node/etcd.service /lib/systemd/system/
# Enable the service
[root@server02 ~]# systemctl enable etcd.service
# Create the working directory (where etcd stores its data)
[root@server02 ~]# mkdir -p /var/lib/etcd
# Start the service
[root@server02 ~]# systemctl start etcd
# Follow the service log and check for errors to make sure the service is healthy
[root@server02 ~]# journalctl -f -u etcd.service
May 18 12:17:31 server02 etcd[2179]: dialing to target with scheme: ""
May 18 12:17:31 server02 etcd[2179]: could not get resolver for scheme: ""
May 18 12:17:31 server02 etcd[2179]: serving insecure client requests on 192.168.100.102:2379, this is strongly discouraged!
May 18 12:17:31 server02 etcd[2179]: ready to serve client requests
May 18 12:17:31 server02 etcd[2179]: dialing to target with scheme: ""
May 18 12:17:31 server02 etcd[2179]: could not get resolver for scheme: ""
May 18 12:17:31 server02 etcd[2179]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
May 18 12:17:31 server02 etcd[2179]: set the initial cluster version to 3.2
May 18 12:17:31 server02 etcd[2179]: enabled capabilities for version 3.2
May 18 12:17:31 server02 systemd[1]: Started Etcd Server.
#### The etcd service started normally
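Besides the journal, etcd exposes an HTTP health endpoint that can be probed with curl. A sketch of building the check against the ETCD_ENDPOINTS value configured above (`etcd_health_url` is a hypothetical helper; the curl line is commented out because it needs the running cluster):

```shell
#!/bin/sh
# etcd_health_url ENDPOINT: build the /health URL for an etcd client endpoint.
etcd_health_url() {
  printf '%s/health\n' "${1%/}"   # strip any trailing slash before appending
}

# Against the cluster built above this would be:
#   curl -s "$(etcd_health_url http://192.168.100.102:2379)"
# A healthy member answers with a JSON body reporting health "true".
```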
2. Deploy the APIServer (master node)
Overview:
kube-apiserver is one of the most important core components of Kubernetes. It mainly provides:
- the REST API for cluster management (resource CRUD, authentication, authorization, validation, and cluster state changes)
- the hub for data exchange and communication between all other components (which do not talk to etcd directly; only the apiserver does)
[root@server02 ~]# cd /home/
[root@server02 home]# cp kubernetes-starter/target/master-node/kube-apiserver.service /lib/systemd/system/
[root@server02 home]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@server02 home]# systemctl start kube-apiserver
[root@server02 home]# journalctl -f -u kube-apiserver
-- Logs begin at Sat 2019-05-18 11:47:54 CST. --
May 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.688480 2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.certificates.k8s.io/status: (46.900994ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.691365 2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.policy/status: (40.847972ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.692039 2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.storage.k8s.io/status: (41.81334ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.703752 2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.rbac.authorization.k8s.io/status: (11.64213ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.704980 2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1.networking.k8s.io/status: (13.967816ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.710226 2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.rbac.authorization.k8s.io/status: (5.19179ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:19 server02 kube-apiserver[2333]: I0518 12:57:19.710252 2333 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.storage.k8s.io/status: (5.695826ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.559583 2333 wrap.go:42] GET /api/v1/namespaces/default: (4.524421ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.563896 2333 wrap.go:42] GET /api/v1/namespaces/default/services/kubernetes: (2.544183ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
May 18 12:57:29 server02 kube-apiserver[2333]: I0518 12:57:29.566296 2333 wrap.go:42] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.280719ms) 200 [[kube-apiserver/v1.9.0 (linux/amd64) kubernetes/925c127] 127.0.0.1:49330]
#### The log contains only informational messages; no errors.
Check that the ports are listening:
[root@server02 home]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address   State    PID/Program name
tcp        0      0 192.168.100.102:2379    0.0.0.0:*         LISTEN   2179/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*         LISTEN   2179/etcd
tcp        0      0 127.0.0.1:2380          0.0.0.0:*         LISTEN   2179/etcd
tcp        0      0 0.0.0.0:22              0.0.0.0:*         LISTEN   836/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*         LISTEN   1069/master
tcp6       0      0 :::6443                 :::*              LISTEN   2333/kube-apiserver
tcp6       0      0 :::8080                 :::*              LISTEN   2333/kube-apiserver
tcp6       0      0 :::22                   :::*              LISTEN   836/sshd
tcp6       0      0 ::1:25                  :::*              LISTEN   1069/master
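Instead of eyeballing the netstat output, the expected listeners can be checked mechanically. A sketch (`port_listening` is a hypothetical helper that reads `netstat -ntlp` or `ss -ltn` output from stdin):

```shell
#!/bin/sh
# port_listening PORT: succeed if stdin (netstat/ss listener output)
# contains a socket bound to PORT.
port_listening() {
  grep -Eq "[:.]$1[[:space:]]"
}

# On the master the interesting ports are 2379 (etcd) plus 6443 and 8080
# (apiserver), e.g.:
#   netstat -ntlp | port_listening 6443 && echo "apiserver is up"
```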
3. Deploy the ControllerManager
The Controller Manager consists of kube-controller-manager and cloud-controller-manager. It is the brain of Kubernetes: it watches the state of the whole cluster through the apiserver and keeps the cluster in its desired state. kube-controller-manager is made up of a series of controllers, such as the Replication Controller (maintaining replica counts), the Node Controller (managing nodes), and the Deployment Controller (managing deployments). cloud-controller-manager is only needed when a Cloud Provider is enabled; it handles the integration with the cloud vendor's control plane.
Original article: https://www.cnblogs.com/liangyuntao-ts/p/10885352.html