OpenShift Introduction

As microservice architectures gain ever wider adoption, Docker and Kubernetes have become indispensable. Red Hat OpenShift 3 is a container application platform built on top of Docker and Kubernetes for developing and deploying enterprise applications. It is available in several distributions:

OpenShift Dedicated (Enterprise)
OpenShift Container Platform (Enterprise)
OKD, the open-source community edition (Origin Community Distribution of Kubernetes)
Installing OpenShift with Ansible only requires configuring some node information and parameters to complete a cluster installation, which greatly speeds things up. The cluster hosts fall into two roles:
Masters
Nodes
| | RPM-based Installations | System Container Installations |
| --- | --- | --- |
| Delivery Mechanism | RPM packages using yum | System container images using docker |
| Service Management | systemd | docker and systemd units |
| Operating System | Red Hat Enterprise Linux (RHEL) | RHEL Atomic Host |
An RPM installation installs and configures services through the package manager, while a system container installation uses system container images, with each service running in its own container.
Starting with OKD 3.10, OKD components are installed with the RPM method on Red Hat Enterprise Linux (RHEL) and with the system container method on RHEL Atomic Host. Both installation types provide the same functionality; the choice depends on your operating system and on how you want to manage services and system upgrades.
This article uses the RPM installation method.
Node configuration is now defined through ConfigMaps, and OKD 3.10 ignores the openshift_node_labels value. The following ConfigMaps are created by default: node-config-master, node-config-infra, node-config-compute, plus combined variants such as node-config-all-in-one and node-config-master-infra.
For this cluster installation we select node-config-master, node-config-infra and node-config-compute.
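After a successful installation you can confirm these ConfigMaps exist in the openshift-node project:
$ oc get configmaps -n openshift-node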
To get to know the OpenShift installation quickly, we start with the first scenario and move on to the second only after it succeeds. Ansible usually runs on a separate machine, so the two scenarios require 4 and 10 EC2 instances respectively.
First bring every instance up to date:
# yum update
Installing OpenShift requires a Red Hat account with an active RHEL subscription. Run the following commands in order to enable the required repos:
# subscription-manager register
# subscription-manager list --available
# subscription-manager attach --pool=8a85f98b62dd96fc0162f04efb0e6350
# subscription-manager repos --list
# subscription-manager repos --enable rhel-7-server-ansible-2.6-rpms
# subscription-manager repos --enable rhel-7-server-rpms
# subscription-manager repos --enable rhel-7-server-extras-rpms
SELinux must be in enforcing mode. Check /etc/selinux/config and make sure it contains:
SELINUX=enforcing
SELINUXTYPE=targeted
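You can confirm SELinux is actually running in enforcing mode:
# getenforce
Enforcing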
To use clearer names, set up an additional DNS server and give the EC2 instances proper domain names, for example:
master1.itrunner.org A 10.64.33.100
master2.itrunner.org A 10.64.33.103
node1.itrunner.org A 10.64.33.101
node2.itrunner.org A 10.64.33.102
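From any instance you can check that the records resolve against the new DNS server (dig is part of bind-utils, installed below):
$ dig +short master1.itrunner.org @10.164.18.18
10.64.33.100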
The EC2 instances must be configured to use this DNS server. Create the dhclient.conf file:
# vi /etc/dhcp/dhclient.conf
and add the following line:
supersede domain-name-servers 10.164.18.18;
A reboot is required for the change to take effect. After rebooting, /etc/resolv.conf should look like this:
# Generated by NetworkManager
search cn-north-1.compute.internal
nameserver 10.164.18.18
OKD uses dnsmasq and configures all nodes automatically after a successful installation: /etc/resolv.conf is modified so that the nameserver becomes the node's own IP. Pods use their node as their DNS server, and the node forwards the queries:
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
# Generated by NetworkManager
search cluster.local cn-north-1.compute.internal itrunner.org
nameserver 10.64.33.100
Set a static hostname on each node, for example:
hostnamectl set-hostname --static master1.itrunner.org
So that cloud-init does not reset the hostname on reboot, edit /etc/cloud/cloud.cfg and add the following at the bottom:
preserve_hostname: true
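After a reboot, hostnamectl should report the static hostname:
# hostnamectl
   Static hostname: master1.itrunner.org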
Install the base packages on all nodes:
# yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
Install Docker on all nodes:
# yum install docker
# systemctl enable docker
# systemctl start docker
Check the Docker installation:
# docker info
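By default Docker on RHEL uses a loopback device for storage, which is not suitable for production. A common setup, along the lines of the OpenShift 3 host preparation docs, dedicates a spare block device to Docker; /dev/xvdb below is an assumed extra EBS volume, so adjust it to your instances (if Docker has already been started, stop it and clear /var/lib/docker first):
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/xvdb
VG=docker-vg
EOF
# docker-storage-setup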
Install Ansible on the Ansible EC2 instance only:
# yum install ansible
Ansible must be able to reach all of the other machines to perform the installation, so passwordless SSH login has to be configured. Copy the key into the ec2-user .ssh directory and restrict its permissions:
$ cd .ssh/
$ chmod 600 *
Once configured, test the connection to each host one by one:
ssh master1.itrunner.org
If you log in with a password, or with a key that requires a passphrase, use keychain.
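A minimal keychain setup might look like this (keychain is available from EPEL; it caches the key passphrase in a long-running ssh-agent and writes the agent variables to ~/.keychain):
$ sudo yum install keychain
$ keychain ~/.ssh/id_rsa
$ source ~/.keychain/$(hostname)-sh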
| Security Group | Port |
| --- | --- |
| All OKD Hosts | tcp/22 from host running the installer/Ansible |
| etcd Security Group | tcp/2379 from masters, tcp/2380 from etcd hosts |
| Master Security Group | tcp/8443 from 0.0.0.0/0, tcp/53 from all OKD hosts, udp/53 from all OKD hosts, tcp/8053 from all OKD hosts, udp/8053 from all OKD hosts |
| Node Security Group | tcp/10250 from masters, udp/4789 from nodes |
| Infrastructure Nodes | tcp/443 from 0.0.0.0/0, tcp/80 from 0.0.0.0/0 |
In the second scenario an ELB must be configured.
When using an external ELB, the inventory file does not define the lb group; instead the three parameters openshift_master_cluster_hostname, openshift_master_cluster_public_hostname and openshift_master_default_subdomain must be specified (see the sections below).
openshift_master_cluster_hostname and openshift_master_cluster_public_hostname load-balance the masters, so their ELBs point to the master nodes. openshift_master_cluster_hostname serves internal traffic, while openshift_master_cluster_public_hostname serves external access (the web console). The two may share one domain name, but the ELB behind openshift_master_cluster_hostname must be configured as passthrough (TCP, no TLS termination).
For security, in production openshift_master_cluster_hostname and openshift_master_cluster_public_hostname should be set to two different domain names.
openshift_master_default_subdomain defines the domain under which OpenShift exposes deployed applications; its ELB points to the infra nodes.
In total, then, three ELBs are needed:
- an internal passthrough ELB for openshift_master_cluster_hostname, targeting the masters
- a public ELB for openshift_master_cluster_public_hostname, targeting the masters
- a public ELB for openshift_master_default_subdomain, targeting the infra nodes
For convenience, openshift_master_cluster_public_hostname and openshift_master_default_subdomain are usually mapped to your own corporate domain names rather than the raw DNS names of the AWS ELBs.
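As a sketch, the internal master ELB could be created as a classic ELB with the AWS CLI roughly as follows; the load balancer name, subnet and instance IDs are placeholders, and the plain TCP listener keeps it passthrough:
$ aws elb create-load-balancer --load-balancer-name openshift-master-internal \
    --listeners "Protocol=TCP,LoadBalancerPort=8443,InstanceProtocol=TCP,InstancePort=8443" \
    --subnets subnet-0123456789abcdef0 --scheme internal
$ aws elb register-instances-with-load-balancer --load-balancer-name openshift-master-internal \
    --instances i-0123456789abcdef0 i-0123456789abcdef1 i-0123456789abcdef2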
Download openshift-ansible on the Ansible machine and check out the branch matching the target release:
$ cd ~
$ git clone https://github.com/openshift/openshift-ansible
$ cd openshift-ansible
$ git checkout release-3.10
The inventory file defines the hosts and configuration information; the default file is /etc/ansible/hosts.
Scenario 1
One node each for master, compute and infra; etcd is deployed on the master.
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=ec2-user
# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true
openshift_deployment_type=origin
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}
# or
#openshift_master_htpasswd_file=<path to local pre-generated htpasswd file>
# host group for masters
[masters]
master1.itrunner.org
# host group for etcd
[etcd]
master1.itrunner.org
# host group for nodes, includes region info
[nodes]
master1.itrunner.org openshift_node_group_name='node-config-master'
compute1.itrunner.org openshift_node_group_name='node-config-compute'
infra1.itrunner.org openshift_node_group_name='node-config-infra'
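The pre-hashed passwords for openshift_master_htpasswd_users can be generated with the htpasswd utility (from httpd-tools); the hash follows the colon in the output, for example (your hash will differ):
$ htpasswd -nb user1 secret
user1:$apr1$<hash>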
Scenario 2
Three masters plus multiple compute and infra nodes. Outside production, load balancing does not have to use an external ELB; HAProxy can be used instead. etcd can be deployed on dedicated hosts or co-located with the masters.
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com
# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}
# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true
# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com
# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com
# Specify load balancer host
[lb]
lb.example.com
# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
If etcd is instead co-located with the masters, the inventory looks like this:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com
# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com
# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com
# Specify load balancer host
[lb]
lb.example.com
# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
With an external ELB, the lb group is not defined; openshift_master_cluster_hostname, openshift_master_cluster_public_hostname and openshift_master_default_subdomain must be specified instead:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
# Since we are providing a pre-configured LB VIP, no need for this group
#lb
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=ec2-user
# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true
openshift_deployment_type=origin
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}
# or
#openshift_master_htpasswd_file=<path to local pre-generated htpasswd file>
# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-master-internal-123456b57ac7be6c.elb.cn-north-1.amazonaws.com.cn
openshift_master_cluster_public_hostname=openshift.itrunner.org
openshift_master_default_subdomain=apps.itrunner.org
#openshift_master_api_port=443
#openshift_master_console_port=443
# host group for masters
[masters]
master1.itrunner.org
master2.itrunner.org
master3.itrunner.org
# host group for etcd
[etcd]
master1.itrunner.org
master2.itrunner.org
master3.itrunner.org
# Since we are providing a pre-configured LB VIP, no need for this group
#[lb]
#lb.itrunner.org
# host group for nodes, includes region info
[nodes]
master[1:3].itrunner.org openshift_node_group_name='node-config-master'
node1.itrunner.org openshift_node_group_name='node-config-compute'
node2.itrunner.org openshift_node_group_name='node-config-compute'
infra-node1.itrunner.org openshift_node_group_name='node-config-infra'
infra-node2.itrunner.org openshift_node_group_name='node-config-infra'
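Before running the playbooks, it is worth confirming that Ansible can reach every host defined in the inventory:
$ ansible all [-i /path/to/inventory] -m ping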
With everything in place, installing OpenShift with Ansible is very simple: just run the prerequisites.yml and deploy_cluster.yml playbooks.
$ ansible-playbook ~/openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook ~/openshift-ansible/playbooks/deploy_cluster.yml
If you are not using the default inventory file, specify its location with -i:
$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/deploy_cluster.yml
If an error occurs during deployment, fix the cause, re-run the playbook suggested in the error output to verify the fix, and then run deploy_cluster.yml again.
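For example, if the master configuration phase failed, that component playbook can be re-run on its own (playbook paths as laid out in the release-3.10 branch of openshift-ansible) before retrying the full deploy:
$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/openshift-master/config.yml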
When the installation completes, verify on a master that all nodes are ready:
# oc get nodes
In scenario 1, access the web console via the master hostname: https://master1.itrunner.org:8443/console
In scenario 2, access the web console via the cluster's public domain name: https://openshift.itrunner.org:8443/console
Create two users:
# htpasswd /etc/origin/master/htpasswd admin
# htpasswd /etc/origin/master/htpasswd developer
Log in as system:admin:
# oc login -u system:admin
Grant the users their roles:
# oc adm policy add-cluster-role-to-user cluster-admin admin
# oc adm policy add-role-to-user admin admin
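You can then check that htpasswd authentication works by logging in as one of the new users and creating a project (the project name here is just an example):
$ oc login -u developer
$ oc new-project demo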
CLI configuration file
The oc login command automatically creates and manages the CLI configuration file ~/.kube/config.
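Its current contents (with certificate data redacted) can be inspected with:
$ oc config view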
To uninstall the entire cluster, run the uninstall playbook with the inventory file used during installation:
$ ansible-playbook ~/openshift-ansible/playbooks/adhoc/uninstall.yml
To uninstall only specific nodes, create a new inventory file listing the nodes to remove:
[OSEv3:children]
nodes
[OSEv3:vars]
ansible_ssh_user=ec2-user
openshift_deployment_type=origin
[nodes]
node3.example.com openshift_node_group_name='node-config-infra'
Then run the uninstall.yml playbook against the new inventory file:
$ ansible-playbook -i /path/to/new/file ~/openshift-ansible/playbooks/adhoc/uninstall.yml
References:
OpenShift
OpenShift Github
OpenShift Documentation
OKD
OKD Latest Documentation
Ansible Documentation
External Load Balancer Integrations with OpenShift Enterprise 3
Red Hat OpenShift on AWS
Docker Documentation
Kubernetes Documentation
Kubernetes Chinese Community (Kubernetes中文社区)
Kubernetes: Unified Log Management with EFK
SSL For Free
Original post: http://blog.51cto.com/7308310/2171091