
oVirt: Giving Virtual Machines Internet Access via NAT


Tags: ovirt, kvm, linux, nat, vm

Environment

  • OS: CentOS Linux release 7.1.1503 (Core)

  • Ovirt-engine: ovirt-engine-3.5.3.1-1.el7

  • VDSM: vdsm-4.16.20-0.el7

  • GuestOS: CentOS release 6.5 (Final)

  • Hardware: a single NIC with a single IP, 10.10.19.100 (with outbound Internet access)

  • Note: this host acts as both the engine and the node

1. Install CentOS 7 minimal and run yum update (omitted)

2. Install and configure oVirt

  • Initialize basic system settings

    [root@localhost ~]# echo "ovirthost01.ctcnet.com" >/etc/hostname
    
    [root@localhost ~]# systemctl stop NetworkManager
    [root@localhost ~]# systemctl disable NetworkManager
    
    [root@ovirthost01 ~]# cat >/etc/sysconfig/network-scripts/ifcfg-p4p1 <<EOF
    > DEVICE=p4p1
    > TYPE=Ethernet
    > ONBOOT=yes
    > BOOTPROTO=static
    > IPADDR=10.10.19.100
    > PREFIX=24
    > GATEWAY=10.10.19.254
    > DNS1=10.10.19.254
    > EOF
    
    [root@ovirthost01 ~]# echo "10.10.19.100 ovirthost01 ovirthost01.ctcnet.com" >>/etc/hosts
    
    [root@ovirthost01 ~]# echo "export PATH=/bin:/sbin:$PATH" >>~/.bashrc &&source .bashrc &&echo $PATH
    
    [root@ovirthost01 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
    
    [root@ovirthost01 ~]# systemctl stop firewalld && systemctl disable firewalld    
    
    [root@ovirthost01 ~]# systemctl stop iptables && systemctl disable iptables    
    
    [root@ovirthost01 ~]# yum install net-tools
    
    [root@ovirthost01 ~]# service postgresql initdb
    [root@ovirthost01 ~]# service postgresql start
    [root@ovirthost01 ~]# chkconfig postgresql on
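
    Optionally, a quick sanity check that the settings above took effect (just a verification sketch; the interface name p4p1 matches the one configured above):

    [root@ovirthost01 ~]# systemctl restart network && ip addr show p4p1   # the static IP should be up
    [root@ovirthost01 ~]# hostnamectl                                      # static hostname should be ovirthost01.ctcnet.com
    [root@ovirthost01 ~]# getenforce                                       # Permissive now, Disabled after a reboot
    [root@ovirthost01 ~]# systemctl is-active firewalld                    # should print: inactive
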
  • Install and configure ovirt-engine

    [root@ovirthost01 ~]# yum localinstall -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm 
    
    [root@ovirthost01 ~]# yum install -y ovirt-engine
    
    [root@ovirthost01 ~]# engine-setup

    The following error came up:

    [ INFO  ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf‘
    [ INFO  ] Stage: Transaction commit
    [ INFO  ] Stage: Closing up
    [ INFO  ] Restarting nfs services
    [ ERROR ] Failed to execute stage ‘Closing up‘: Command ‘/bin/systemctl‘ failed to execute
    [ INFO  ] Stage: Clean up
              Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150804004920-op260z.log
    [ INFO  ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20150804005258-setup.conf‘
    [ INFO  ] Stage: Pre-termination
    [ INFO  ] Stage: Termination
    [ ERROR ] Execution of setup failed

    Check the log:

    less /var/log/ovirt-engine/setup/ovirt-engine-setup-20150804004920-op260z.log

    The log records the following error (starting nfs-server failed):

    2015-08-04 00:52:58 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:932 execute-output: (‘/bin/systemctl‘, ‘start‘, ‘nfs-server.service‘) stdout:
    
    2015-08-04 00:52:58 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:937 execute-output: (‘/bin/systemctl‘, ‘start‘, ‘nfs-server.service‘) stderr:
    Job for nfs-server.service failed. See ‘systemctl status nfs-server.service‘ and ‘journalctl -xn‘ for details.
    
    2015-08-04 00:52:58 DEBUG otopi.context context._executeMethod:152 method exception
    Traceback (most recent call last):
      File "/usr/lib/python2.7/site-packages/otopi/context.py", line 142, in _executeMethod
        method[‘method‘]()
      File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/system/nfs.py", line 307, in _closeup
        state=state,
      File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 138, in state
        ‘start‘ if state else ‘stop‘
      File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 77, in _executeServiceCommand
        raiseOnError=raiseOnError
      File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 942, in execute
        command=args[0],
    RuntimeError: Command ‘/bin/systemctl‘ failed to execute

    Following the hint in the log, check the status of the nfs-server service (failed):

    [root@ovirthost01 ~]# systemctl status nfs-server.service
    
    nfs-server.service - NFS server and services
       Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled)
       Active: failed (Result: exit-code) since Tue 2015-08-04 00:52:58 EDT; 7min ago
      Process: 21922 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=1/FAILURE)
      Process: 21919 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
     Main PID: 21922 (code=exited, status=1/FAILURE)
       CGroup: /system.slice/nfs-server.service
    
    Aug 04 00:52:58 ovirthost01.ctcnet.com rpc.nfsd[21922]: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
    Aug 04 00:52:58 ovirthost01.ctcnet.com rpc.nfsd[21922]: rpc.nfsd: unable to set any sockets for nfsd
    Aug 04 00:52:58 ovirthost01.ctcnet.com systemd[1]: nfs-server.service: main process exited, code=exited, status=1/FAILURE
    Aug 04 00:52:58 ovirthost01.ctcnet.com systemd[1]: Failed to start NFS server and services.
    Aug 04 00:52:58 ovirthost01.ctcnet.com systemd[1]: Unit nfs-server.service entered failed state.

    A quick search shows that rpcbind must be started first:

    [root@ovirthost01 ~]# systemctl start rpcbind
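
    To keep this fix across reboots it is probably worth enabling rpcbind at boot as well:

    [root@ovirthost01 ~]# systemctl enable rpcbind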

    Start nfs-server again and check its status (success):

    [root@ovirthost01 ~]# systemctl start  nfs-server.service
    
    [root@ovirthost01 ~]# systemctl status nfs-server.service
    
    nfs-server.service - NFS server and services
       Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled)
       Active: active (exited) since Tue 2015-08-04 01:10:26 EDT; 11s ago
      Process: 22042 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
      Process: 22040 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
     Main PID: 22042 (code=exited, status=0/SUCCESS)
       CGroup: /system.slice/nfs-server.service
    
    Aug 04 01:10:26 ovirthost01.ctcnet.com systemd[1]: Starting NFS server and services...
    Aug 04 01:10:26 ovirthost01.ctcnet.com systemd[1]: Started NFS server and services.

    Running engine-setup again succeeds:

    [ INFO  ] Cleaning stale zombie tasks and commands
    
      --== CONFIGURATION PREVIEW ==--
    
      Firewall manager                        : firewalld
      Update Firewall                         : True
      Host FQDN                               : ovirthost01.ctcnet.com
      Engine database name                    : engine
      Engine database secured connection      : False
      Engine database host                    : localhost
      Engine database user name               : engine
      Engine database host name validation    : False
      Engine database port                    : 5432
      Engine installation                     : True
      PKI organization                        : ctcnet.com
      NFS mount point                         : /data/iso
      Configure WebSocket Proxy               : True
      Engine Host FQDN                        : ovirthost01.ctcnet.com
    
      Please confirm installation settings (OK, Cancel) [OK]: 
    [ INFO  ] Cleaning async tasks and compensations
    [ INFO  ] Checking the Engine database consistency
    [ INFO  ] Stage: Transaction setup
    [ INFO  ] Stopping engine service
    [ INFO  ] Stopping ovirt-fence-kdump-listener service
    [ INFO  ] Stopping websocket-proxy service
    [ INFO  ] Stage: Misc configuration
    [ INFO  ] Stage: Package installation
    [ INFO  ] Stage: Misc configuration
    [ INFO  ] Backing up database localhost:engine to ‘/var/lib/ovirt-engine/backups/engine-20150804012706.WAg6oL.dump‘.
    [ INFO  ] Creating/refreshing Engine database schema
    [ INFO  ] Configuring WebSocket Proxy
    [ INFO  ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf‘
    [ INFO  ] Stage: Transaction commit
    [ INFO  ] Stage: Closing up
    
              --== SUMMARY ==--
    
    [WARNING] Less than 16384MB of memory is available
              SSH fingerprint: F6:83:42:C9:FF:0F:A3:CE:ED:F5:85:EC:27:22:5F:E7
              Internal CA A3:CF:B5:C1:B4:29:8B:36:A5:9B:EB:69:99:A6:8D:5B:55:81:36:8F
              Web access is enabled at:
                  http://ovirthost01.ctcnet.com:80/ovirt-engine
                  https://ovirthost01.ctcnet.com:443/ovirt-engine
    
              --== END OF SUMMARY ==--
    
    [ INFO  ] Starting engine service
    [ INFO  ] Restarting httpd
    [ INFO  ] Stage: Clean up
              Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150804012601-6zut1o.log
    [ INFO  ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20150804012750-setup.conf‘
    [ INFO  ] Stage: Pre-termination
    [ INFO  ] Stage: Termination
    [ INFO  ] Execution of setup completed successfully
  • Back up the ovirt-engine files and database

    [root@ovirthost01 ~]# engine-backup --mode=backup --file=engine_backup00 --log=./engine_backup00.log

    To see exactly which configuration files are included, extract engine_backup00 and take a look:

    [root@ovirthost01 ~]# tar -jxvf engine_backup00
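
    For reference, the counterpart operation is a restore; a minimal sketch (run against a clean engine database, and depending on the environment the --change-db-credentials options may also be needed):

    [root@ovirthost01 ~]# engine-backup --mode=restore --file=engine_backup00 --log=./engine_restore00.log
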
  • Add the data center, cluster, host, data storage domain, and ISO domain (omitted)

      Item              Name             Path/IP
      ------------------------------------------------------------
      DataCenter        ctc_dc           --
      Cluster           ctc_cluster01    --
      Host              host01           10.10.19.100
      Data_Stor_Dom     data_stor        10.10.19.99:/volume1/nfsstor
      ISO_Stor_Dom      iso_stor         10.10.19.100:/data/iso

    Note: because vdsm and the engine run on the same host here, adding this host to the cluster (which installs vdsm) automatically turns iptables back on, which blocks access to the webadmin portal (ports 443/80). Fix this by adding the relevant rules to /etc/sysconfig/iptables, as follows:

    [root@ovirthost01 ~]# cat /etc/sysconfig/iptables
    
    # Generated by iptables-save v1.4.21 on Tue Aug  4 02:58:26 2015
    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [250:63392]
    -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -p icmp -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 54321 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
    -A INPUT -p udp -m udp --dport 111 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
    -A INPUT -p udp -m udp --dport 161 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 16514 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
    -A INPUT -p tcp -m multiport --dports 5900:6923 -j ACCEPT
    -A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
    -A INPUT -j REJECT --reject-with icmp-host-prohibited
    -A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
    COMMIT
    # Completed on Tue Aug  4 02:58:26 2015
    
    [root@ovirthost01 ~]# systemctl restart iptables

3. Create a virtual machine (Internet access via NAT)

1. Import the ISO image files

    [root@ovirthost01 ~]# engine-iso-uploader -i iso_stor upload  /data/iso/*.iso

    Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort): 
    Uploading, please wait...
    INFO: Start uploading /data/iso/CentOS-6.5-x86_64-minimal.iso 
    INFO: /data/iso/CentOS-6.5-x86_64-minimal.iso uploaded successfully
    INFO: Start uploading /data/iso/CentOS-7.0-1406-x86_64-Minimal.iso 
    INFO: /data/iso/CentOS-7.0-1406-x86_64-Minimal.iso uploaded successfully
    INFO: Start uploading /data/iso/virtio-win-0.1.96.iso 
    INFO: /data/iso/virtio-win-0.1.96.iso uploaded successfully
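
To double-check that the images actually landed in the ISO storage domain (the uploader copies them into a storage-domain UUID directory under /data/iso), something like:

    [root@ovirthost01 ~]# find /data/iso -name "*.iso"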

2. Create the NAT network definition /etc/libvirt/qemu/networks/nat.xml with the following content

    <network>
        <name>nat</name>
        <uuid>b09d09a8-ebbd-476d-9045-e66012c9e83d</uuid>
        <forward mode='nat'/>
        <bridge name='natbr0' stp='on' delay='0' />
        <mac address='52:54:00:9D:82:DE'/>
        <ip address='192.168.1.1' netmask='255.255.255.0'>
            <dhcp>
                <range start='192.168.1.2' end='192.168.1.250' />
            </dhcp>
        </ip>
    </network>
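
Note: the uuid and mac lines are optional when reusing this definition; virsh generates fresh values if they are omitted, or a new UUID can be produced by hand:

    [root@ovirthost01 ~]# uuidgen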

3. Create the NAT network with libvirt/virsh

    [root@ovirthost01 ~]# cat /etc/pki/vdsm/keys/libvirt_password 
    shibboleth

    [root@ovirthost01 ~]# virsh

    Welcome to virsh, the virtualization interactive terminal.

    Type:  'help' for help with commands
           'quit' to quit

    virsh # connect qemu:///system
    Please enter your authentication name: vdsm@ovirt
    Please enter your password: shibboleth

    virsh # net-list
     Name                 State      Autostart     Persistent
    ----------------------------------------------------------
     ;vdsmdummy;          active     no            no
     vdsm-ovirtmgmt       active     yes           yes

    virsh # net-define /etc/libvirt/qemu/networks/nat.xml
    Network nat defined from /etc/libvirt/qemu/networks/nat.xml

    virsh # net-autostart nat
    Network nat marked as autostarted

    virsh # net-start nat
    Network nat started

    virsh # net-list --all
     Name                 State      Autostart     Persistent
    ----------------------------------------------------------
     ;vdsmdummy;          active     no            no
     nat                  active     yes           yes
     vdsm-ovirtmgmt       active     yes           yes

    The steps above create the NAT-capable bridge, as shown below:
    [root@ovirthost01 ~]# brctl show
    bridge name     bridge id               STP enabled     interfaces
    ;vdsmdummy;             8000.000000000000       no
    natbr0          8000.5254009d82de       yes             natbr0-nic
    ovirtmgmt               8000.b083fea27fed       no              p4p1
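
Behind the scenes libvirt also starts a dnsmasq instance for the 192.168.1.0/24 range and inserts the MASQUERADE rules that do the actual address translation; this can be checked with something like:

    [root@ovirthost01 ~]# ps -ef | grep dnsmasq | grep -v grep
    [root@ovirthost01 ~]# iptables -t nat -S | grep 192.168.1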

4. Install vdsm-hook-extnet

    [root@ovirthost01 ~]# yum install -y vdsm-hook-extnet

Note: this installs the extnet hook script into the following two directories:

    [root@ovirthost01 ~]# ll /usr/libexec/vdsm/hooks/before_device_create
    total 4
    -rwxr-xr-x. 1 root root 1925 Jun  5 01:47 50_extnet
    [root@ovirthost01 ~]# ll /usr/libexec/vdsm/hooks/before_nic_hotplug
    total 4
    -rwxr-xr-x. 1 root root 1925 Jun  5 01:47 50_extnet
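
The hook runs before a NIC is created or hot-plugged: if the vNIC carries the custom property extnet, it rewrites the interface definition that vdsm hands to libvirt so the NIC attaches to the named libvirt network instead of an oVirt bridge, roughly turning

    <interface type='bridge'>
        <source bridge='ovirtmgmt'/>
    </interface>

into

    <interface type='network'>
        <source network='nat'/>
    </interface>

so the guest ends up behind natbr0 and its NAT rules.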

5. Add the custom device property extnet

    [root@ovirthost01 ~]# engine-config -s CustomDeviceProperties='{type=interface;prop={extnet=^[a-zA-Z0-9_ ---]+$}}'
    Please select a version:
    1. 3.0
    2. 3.1
    3. 3.2
    4. 3.3
    5. 3.4
    6. 3.5
    6

    [root@ovirthost01 ~]# engine-config -g CustomDeviceProperties
    CustomDeviceProperties:  version: 3.0
    CustomDeviceProperties:  version: 3.1
    CustomDeviceProperties:  version: 3.2
    CustomDeviceProperties:  version: 3.3
    CustomDeviceProperties: {type=interface;prop={SecurityGroups=^(?:(?:[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}, *)*[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}|)$}} version: 3.4
    CustomDeviceProperties: {type=interface;prop={extnet=^[a-zA-Z0-9_ ---]+$}} version: 3.5

    [root@ovirthost01 ~]# systemctl restart ovirt-engine

6. Create a virtual machine through the webadmin portal and install the guest OS from the ISO (omitted)

7. Add a nat vNIC profile (in the webadmin portal, create a vNIC profile and set its custom property extnet to nat, the libvirt network defined above)


8. Add a vNIC to the virtual machine and associate it with the nat vNIC profile


9. Verify inside the virtual machine (success)

Check that the new NIC is present:

    [root@VM01-CentOS6 ~]# ifconfig -a


Obtain an IP address from the DHCP server on the vdsm host:

    [root@VM01-CentOS6 ~]# dhclient eth0

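Note that dhclient only configures the interface for the current boot; to make DHCP persistent inside a CentOS 6 guest, something along these lines should work:

    [root@VM01-CentOS6 ~]# cat >/etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
    > DEVICE=eth0
    > TYPE=Ethernet
    > ONBOOT=yes
    > BOOTPROTO=dhcp
    > EOF

    [root@VM01-CentOS6 ~]# service network restart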

Check that NAT works by pinging an external host:

    [root@VM01-CentOS6 ~]# ping www.ovirt.org

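If the hostname does not resolve but an external IP address does respond, check /etc/resolv.conf in the guest; the dnsmasq instance listening on 192.168.1.1 should have been handed out as the DNS server via DHCP:

    [root@VM01-CentOS6 ~]# ping -c 3 8.8.8.8
    [root@VM01-CentOS6 ~]# cat /etc/resolv.conf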

10. References

http://www.ovirt.org/VDSM-Hooks/network-nat  

http://blog.lofyer.org/add-nat-ovirt-vdsm-hooks/   

http://users.ovirt.narkive.com/WVp1moNk/ovirt-users-ovirt-3-5-nat

https://access.redhat.com/documentation/zh-CN/RedHatEnterpriseVirtualization/3.5/html-single/InstallationGuide/index.html

This article originally appeared on the "向往" blog: http://zhongq.blog.51cto.com/276337/1696734
