
Quick Installation and Configuration of OpenShift on AWS RHEL 7


Introduction to OpenShift

Microservice architectures are seeing ever wider adoption, and Docker and Kubernetes are indispensable to them. Red Hat OpenShift 3 is a container application platform built on Docker and Kubernetes for developing and deploying enterprise applications.

OpenShift Editions

OpenShift Dedicated (Enterprise)

  • Private, high-availability OpenShift clusters hosted on Amazon Web Services or Google Cloud Platform
  • Delivered as a hosted service and supported by Red Hat

OpenShift Container Platform (Enterprise)

  • Across cloud and on-premise infrastructure
  • Customizable, with full administrative control

OKD
The open source community edition of OpenShift (Origin Community Distribution of Kubernetes)

OpenShift Architecture


  • Master Node: provides the API Server (which handles client requests from nodes, users, administrators, and other infrastructure systems), the Controller Manager Server (which includes the scheduler and replication controller), and the OpenShift client tools (oc)
  • Compute Node (Application Node): runs application workloads
  • Infra Node: runs the router, the image registry, and other infrastructure services
  • etcd: stores shared data such as master state and image, build, and deployment metadata; it can be deployed on the Master Nodes or on separate hosts
  • Pod: the smallest Kubernetes object; it runs one or more containers (a minimal example follows this list)
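To make the Pod concept concrete, here is a minimal Pod definition. This is purely illustrative; the name and image are hypothetical and not part of the installation:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  containers:
  - name: web              # one container; a Pod may define several
    image: nginx:latest    # any container image
    ports:
    - containerPort: 80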

Installation Plan

Software Environment

  • AWS RHEL 7.5
  • OKD 3.10
  • Ansible 2.6.3
  • Docker 1.13.1
  • Kubernetes 1.10

With Ansible, installing OpenShift only requires configuring some node information and parameters, which greatly speeds up cluster installation.

Hardware Requirements

Masters

  • At least 4 vCPUs
  • At least 16 GB RAM
  • At least 40 GB of disk space for /var/
  • At least 1 GB of disk space for /usr/local/bin/
  • At least 1 GB of disk space for the temporary directory

Nodes

  • 1 vCPU
  • At least 8 GB RAM
  • At least 15 GB of disk space for /var/
  • At least 1 GB of disk space for /usr/local/bin/
  • At least 1 GB of disk space for the temporary directory

Installation Types

                     RPM-based Installations            System Container Installations
Delivery Mechanism   RPM packages using yum             System container images using docker
Service Management   systemd                            docker and systemd units
Operating System     Red Hat Enterprise Linux (RHEL)    RHEL Atomic Host

An RPM installation installs and configures services through the package manager; a system container installation uses system container images, with each service running in its own container.
Starting with OKD 3.10, the RPM method is used to install OKD components when the host runs Red Hat Enterprise Linux (RHEL), and the system container method is used on RHEL Atomic Host. Both installation types provide the same functionality; the choice depends on your operating system and on how you want to manage services and system upgrades.

This article uses the RPM installation method.

Node ConfigMaps

ConfigMaps define node configuration; OKD 3.10 ignores the openshift_node_labels value. The following ConfigMaps are created by default:

  • node-config-master
  • node-config-infra
  • node-config-compute
  • node-config-all-in-one
  • node-config-master-infra

During cluster installation, choose among node-config-master, node-config-infra, and node-config-compute.
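After the cluster is up, you can confirm these ConfigMaps were created (a quick check, assuming the openshift-node project that OKD 3.10 uses for node configuration):

# oc get configmaps -n openshift-node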

Environment Scenarios

  • One Master, one Compute, and one Infra Node, with etcd deployed on the master
  • Three Masters, three Compute, and three Infra Nodes, with etcd deployed on the masters

To get familiar with the OpenShift installation quickly, we start with the first scenario and move on to the second after it succeeds. Ansible usually runs on a separate machine, so the two scenarios require creating 4 and 10 EC2 instances, respectively.

Preparation

Update the System

# yum update

Red Hat Subscription

Installing OpenShift requires a Red Hat account with a RHEL subscription. Run the following commands in order to enable the required repos:

# subscription-manager register
# subscription-manager list --available
# subscription-manager attach --pool=8a85f98b62dd96fc0162f04efb0e6350
# subscription-manager repos --list
# subscription-manager repos --enable rhel-7-server-ansible-2.6-rpms
# subscription-manager repos --enable rhel-7-server-rpms
# subscription-manager repos --enable rhel-7-server-extras-rpms

Check SELinux

Check /etc/selinux/config and make sure it reads:

SELINUX=enforcing
SELINUXTYPE=targeted
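You can confirm the runtime state with getenforce, which should print Enforcing:

# getenforce
Enforcing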

Configure DNS

To use cleaner names, set up an additional DNS server and give the EC2 instances appropriate domain names, for example:

master1.itrunner.org    A   10.64.33.100
master2.itrunner.org    A   10.64.33.103
node1.itrunner.org      A   10.64.33.101
node2.itrunner.org      A   10.64.33.102

The EC2 instances must be configured to use this DNS server. Create a dhclient.conf file:

# vi /etc/dhcp/dhclient.conf

Add the following content:

supersede domain-name-servers 10.164.18.18;

A reboot is required for the change to take effect. After rebooting, /etc/resolv.conf should read:

# Generated by NetworkManager
search cn-north-1.compute.internal
nameserver 10.164.18.18

OKD uses dnsmasq. After a successful installation, all nodes are configured automatically: /etc/resolv.conf is modified so that the nameserver becomes the node's own IP. Pods use their node as their DNS server, and the node forwards requests upstream.

# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
# Generated by NetworkManager
search cluster.local cn-north-1.compute.internal itrunner.org
nameserver 10.64.33.100

Configure the Hostname

hostnamectl set-hostname --static master1.itrunner.org

Edit /etc/cloud/cloud.cfg and add the following line at the bottom so the hostname persists across reboots:

preserve_hostname: true
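Verify the static hostname after a reboot:

# hostnamectl status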

Install Base Packages

Required on all nodes.

# yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct

Install Docker

Required on all nodes.

# yum install docker
# systemctl enable docker
# systemctl start docker

Verify the Docker installation:

# docker info
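This article later disables the docker_storage check in the inventory, but if you prefer a dedicated Docker storage volume on a fresh host, here is a sketch of /etc/sysconfig/docker-storage-setup, assuming a spare EBS volume attached as /dev/xvdb (device name and volume group name are assumptions):

# /etc/sysconfig/docker-storage-setup
DEVS=/dev/xvdb
VG=docker-vg

Then run docker-storage-setup before starting Docker:

# docker-storage-setup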

Install Ansible

Required only on the Ansible EC2 instance.

# yum install ansible

Ansible must be able to reach all of the other machines to perform the installation, so passwordless SSH login has to be configured. Copy the key into the ec2-user's .ssh directory and set its permissions:

$ cd .ssh/
$ chmod 600 *

Once configured, test the connection to each host:

ssh master1.itrunner.org

If you log in with a password or a passphrase-protected key, use keychain.
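A typical keychain setup, assuming keychain is installed (it is available from EPEL) and the key file is ~/.ssh/id_rsa, adds the following to ~/.bash_profile so the agent persists across logins:

eval $(keychain --eval --agents ssh ~/.ssh/id_rsa)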

Configure Security Groups

Security Group          Ports
All OKD Hosts           tcp/22 from the host running the installer/Ansible
etcd Security Group     tcp/2379 from masters, tcp/2380 from etcd hosts
Master Security Group   tcp/8443 from 0.0.0.0/0; tcp/53, udp/53, tcp/8053, and udp/8053 from all OKD hosts
Node Security Group     tcp/10250 from masters, udp/4789 from nodes
Infrastructure Nodes    tcp/443 and tcp/80 from 0.0.0.0/0
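These rules can also be added with the AWS CLI; as an illustration, opening the master API port, where sg-xxxxxxxx stands in for your Master security group ID:

$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8443 --cidr 0.0.0.0/0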

Configure ELBs

ELBs are required in the second scenario.
When external ELBs are used, the inventory file does not define an lb group; instead, the three parameters openshift_master_cluster_hostname, openshift_master_cluster_public_hostname, and openshift_master_default_subdomain must be set (see the sections below).
openshift_master_cluster_hostname and openshift_master_cluster_public_hostname load-balance the masters, so their ELBs point to the Master Nodes. openshift_master_cluster_hostname is for internal use, while openshift_master_cluster_public_hostname is for external access (the Web Console). The two can be the same domain name, but the ELB behind openshift_master_cluster_hostname must be configured as passthrough.
For security, in production openshift_master_cluster_hostname and openshift_master_cluster_public_hostname should be two different domain names.
openshift_master_default_subdomain is the domain under which applications deployed on OpenShift are exposed; its ELB points to the Infra Nodes.
In total, three ELBs are therefore required:

  • openshift_master_cluster_hostname: must be a Network Load Balancer, protocol TCP, default port 8443, with IP-type targets.
  • openshift_master_cluster_public_hostname: ELB/ALB, protocol HTTPS, default port 8443.
  • openshift_master_default_subdomain: ELB/ALB, protocol HTTPS on default port 443 and HTTP on default port 80.

For convenience, openshift_master_cluster_public_hostname and openshift_master_default_subdomain are usually set to company domain names rather than the AWS ELB DNS names; a sketch of creating the internal NLB with the AWS CLI follows.
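As a sketch, the internal Network Load Balancer for openshift_master_cluster_hostname could be created with the AWS CLI as follows; the subnet ID, VPC ID, and ARNs are placeholders, and the registered target IPs are the master addresses from the DNS section:

$ aws elbv2 create-load-balancer --name openshift-master-internal --type network --scheme internal --subnets subnet-xxxxxxxx
$ aws elbv2 create-target-group --name openshift-masters --protocol TCP --port 8443 --target-type ip --vpc-id vpc-xxxxxxxx
$ aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=10.64.33.100 Id=10.64.33.103
$ aws elbv2 create-listener --load-balancer-arn <lb-arn> --protocol TCP --port 8443 --default-actions Type=forward,TargetGroupArn=<target-group-arn>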

Install OpenShift

Download openshift-ansible

$ cd ~
$ git clone https://github.com/openshift/openshift-ansible
$ cd openshift-ansible
$ git checkout release-3.10

Configure the Inventory File

The inventory file defines hosts and configuration information; the default file is /etc/ansible/hosts.
Scenario One
One master, one compute, and one infra node, with etcd deployed on the master.

# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=ec2-user

# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true

openshift_deployment_type=origin
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}
# or
#openshift_master_htpasswd_file=<path to local pre-generated htpasswd file>

# host group for masters
[masters]
master1.itrunner.org

# host group for etcd
[etcd]
master1.itrunner.org

# host group for nodes, includes region info
[nodes]
master1.itrunner.org openshift_node_group_name='node-config-master'
compute1.itrunner.org openshift_node_group_name='node-config-compute'
infra1.itrunner.org openshift_node_group_name='node-config-infra'
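The commented openshift_master_htpasswd_users entries above expect pre-hashed passwords. One way to generate them is with the htpasswd tool from httpd-tools; the user:hash pair it prints can be pasted into the inventory:

$ yum install httpd-tools
$ htpasswd -nb admin '<password>'
admin:$apr1$...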

Scenario Two
Three masters, three compute, and three infra nodes. In non-production environments, the load balancer does not have to be an external ELB; HAProxy can be used instead. etcd can be deployed separately or co-located with the masters.

  1. Multiple Masters Using Native HA with External Clustered etcd
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
  2. Multiple Masters Using Native HA with Co-located Clustered etcd
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
  3. ELB Load Balancer

With external ELBs, the lb group is not defined; openshift_master_cluster_hostname, openshift_master_cluster_public_hostname, and openshift_master_default_subdomain must be specified.

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
# Since we are providing a pre-configured LB VIP, no need for this group
#lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=ec2-user

# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true

openshift_deployment_type=origin
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}
# or
#openshift_master_htpasswd_file=<path to local pre-generated htpasswd file>

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-master-internal-123456b57ac7be6c.elb.cn-north-1.amazonaws.com.cn
openshift_master_cluster_public_hostname=openshift.itrunner.org
openshift_master_default_subdomain=apps.itrunner.org
#openshift_master_api_port=443
#openshift_master_console_port=443

# host group for masters
[masters]
master1.itrunner.org
master2.itrunner.org
master3.itrunner.org

# host group for etcd
[etcd]
master1.itrunner.org
master2.itrunner.org
master3.itrunner.org

# Since we are providing a pre-configured LB VIP, no need for this group
#[lb]
#lb.itrunner.org

# host group for nodes, includes region info
[nodes]
master[1:3].itrunner.org openshift_node_group_name='node-config-master'
node1.itrunner.org openshift_node_group_name='node-config-compute'
node2.itrunner.org openshift_node_group_name='node-config-compute'
infra-node1.itrunner.org openshift_node_group_name='node-config-infra'
infra-node2.itrunner.org openshift_node_group_name='node-config-infra'

Run the Installation

With everything in place, installing OpenShift with Ansible is very simple: just run two playbooks, prerequisites.yml and deploy_cluster.yml.

$ ansible-playbook ~/openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook ~/openshift-ansible/playbooks/deploy_cluster.yml

If you are not using the default inventory file, specify its location with -i:

$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/deploy_cluster.yml

If an error occurs during deployment, fix the problem, re-run the playbook suggested in the error message to verify the fix, and then run deploy_cluster.yml again.
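Before re-running deploy_cluster.yml, the health checks bundled with openshift-ansible can help confirm the fix; the path below assumes the release-3.10 checkout from the download step:

$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/openshift-checks/health.yml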

Verify the Installation

  1. Verify that all nodes installed successfully by running the following on a master (sample output is shown after this list):
# oc get nodes
  2. Verify the Web Console.

In scenario one, access the web console via the master hostname: https://master1.itrunner.org:8443/console
In scenario two, access it via the cluster domain name: https://openshift.itrunner.org:8443/console
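For reference, oc get nodes output similar to the following indicates a healthy scenario-one cluster (ages and version strings are illustrative):

# oc get nodes
NAME                    STATUS    ROLES     AGE       VERSION
compute1.itrunner.org   Ready     compute   10m       v1.10.0+b81c8f8
infra1.itrunner.org     Ready     infra     10m       v1.10.0+b81c8f8
master1.itrunner.org    Ready     master    15m       v1.10.0+b81c8f8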

Users and Permissions

Create two users:

# htpasswd /etc/origin/master/htpasswd admin
# htpasswd /etc/origin/master/htpasswd developer

Log in as system:admin:

# oc login -u system:admin

Grant the users roles:

# oc adm policy add-cluster-role-to-user cluster-admin admin
# oc adm policy add-role-to-user admin admin

CLI configuration file
The oc login command automatically creates and manages the CLI configuration file, ~/.kube/config.
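For example, logging in as one of the htpasswd users and then inspecting the resulting configuration (the URL is the scenario-two public hostname):

$ oc login -u developer https://openshift.itrunner.org:8443
$ oc config view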

Uninstall OpenShift

  • Uninstall all nodes

Use the inventory file from the installation:

$ ansible-playbook ~/openshift-ansible/playbooks/adhoc/uninstall.yml
  • Uninstall specific nodes

Create a new inventory file listing the nodes to uninstall:

[OSEv3:children]
nodes 

[OSEv3:vars]
ansible_ssh_user=ec2-user
openshift_deployment_type=origin

[nodes]
node3.example.com openshift_node_group_name='node-config-infra'

Specify the new inventory file and run the uninstall.yml playbook:

$ ansible-playbook -i /path/to/new/file ~/openshift-ansible/playbooks/adhoc/uninstall.yml

References

OpenShift
OpenShift Github
OpenShift Documentation
OKD
OKD Latest Documentation
Ansible Documentation
External Load Balancer Integrations with OpenShift Enterprise 3
Red Hat OpenShift on AWS
Docker Documentation
Kubernetes Documentation
Kubernetes中文社区 (Kubernetes Chinese Community)
Kubernetes: Unified Log Management Based on EFK
SSL For Free


Source: http://blog.51cto.com/7308310/2171091
