
Using Fabric for Batch Deployment and Production Environment Monitoring

Posted: 2017-07-19 21:55:22

Tags: code, linux, environment, monit, stop, how-to, cpu, group

This article describes how to use Fabric to deploy a release to multiple machines in one batch.

For small applications, this avoids building a dedicated deployment platform, or writing inelegant scripts with linux expect.

Prerequisites:

1. TCP port 22 is reachable from the machine running the Fabric script to every target machine.

2. You can log in over SSH, i.e. you have an account and password.
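Prerequisite 1 can be verified up front with a quick TCP probe. A minimal sketch using only the standard library (`ssh_reachable` is a helper introduced here, not part of the original script):

```python
import socket

def ssh_reachable(host, port=22, timeout=3):
    """Return True if a TCP connection to host:port can be opened."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except (socket.timeout, OSError):
        return False
```

Run it against each entry you plan to put in `env.hosts` before the first deploy; a `False` means Fabric will hang or error on that host anyway.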

 

1. Batch deployment

The code first, then a detailed walkthrough. The script:

# -*- coding:utf-8 -*-
from fabric.colors import *
from fabric.api import *
from contextlib import contextmanager as _contextmanager

# picked up automatically by Fabric
env.user = "data_monitor"
env.hosts = ["10.93.21.21", "10.93.18.34", "10.93.18.35"]
env.password = "datamonitor@123"
# custom settings added by hand
env.activate = "source /home/data_monitor/.bash_profile"
env.directory = "/home/data_monitor/dmonitor/dmonitor"


@_contextmanager
def virtualenv():
    with cd(env.directory):
        with prefix(env.activate):
            yield


@task
def update():
    with virtualenv():
        run("git pull origin master")


@task
def start():
    with virtualenv():
        run("$(nohup gunicorn --worker-class=gevent dmonitor.wsgi:application -b 0.0.0.0:8009 -w 4 &> /dev/null &) && sleep 1", warn_only=True)
        run("$(nohup python manage.py celery worker -Q high -c 30 &> /dev/null &) && sleep 1 ", warn_only=True)
        run("$(nohup python manage.py celery worker -Q mid -c 30 &> /dev/null &) && sleep 1 ", warn_only=True)
        run("$(nohup python manage.py celery worker -Q low -c 30 &> /dev/null &) && sleep 1", warn_only=True)


@task
def stop():
    with virtualenv():
        run("ps -ef | grep gunicorn | grep -v grep | awk '{print $2}' | xargs kill -9", warn_only=True)
        run("ps -ef | grep celery | grep worker | grep -v grep | awk '{print $2}' | xargs kill -9", warn_only=True)


@task
def deploy():
    update()
    stop()
    start()
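With this saved as a fabfile, the whole pipeline runs as `fab deploy`, and a single step as e.g. `fab stop`. The odd-looking `$(nohup ... &)` wrapper in `start()` deserves a note: Fabric's `run` waits for the remote command's output streams to close, so a plainly backgrounded daemon would hang it. Wrapping the backgrounded command in a command substitution with its output redirected away detaches it cleanly; the substitution's exit status then feeds the `&&` chain. The trick can be demonstrated in any shell session:

```shell
# Background a long-running process and return at once:
# the subshell's stdout/stderr are redirected, so the command
# substitution does not wait for it, and its exit status (0)
# lets the && chain continue immediately.
$(nohup sleep 5 >/dev/null 2>&1 &) && echo started
```

Without the redirection, the substitution would block until `sleep` finished, which is exactly the hang the wrapper exists to avoid.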

 

2. Production environment monitoring

Admittedly, production environments are rarely monitored with Fabric; but development and test environments are usually virtual machines that nobody else looks after.

So a small home-grown monitoring program that watches disk, CPU and memory, or a few processes (redis/mysql...), is still quite useful.

The code first.

This file contains the tasks:

import logging

from fabric.api import *
from fabric.context_managers import *
from fabric.colors import red, yellow, green
from common.redis import Redis
from common.config import redis as redis_config

logger = logging.getLogger(__name__)
redis = Redis(redis_config.get("ip"), redis_config.get("port"))


# hard_disk_monitor, item_name=hard_disk
@task
def hard_disk_monitor(item_group, item_name, threshold):
    with settings(hide("warnings", "running", "stdout", "stderr"), parallel=True, warn_only=True):
        host = run("hostname -i")
        hard_disk = run("df -hl | grep /dev/vda3 | awk -F ' ' '{print $5}'")
        print green(host + ":" + hard_disk)
        if int(hard_disk.strip("%")) > int(threshold):  # fab CLI args arrive as strings
            redis("lpush %s %s" % (":".join(["machine", item_group, item_name]), host))


# memory_monitor, item_name=memory
@task
def memory_monitor(item_group, item_name, threshold):
    with settings(hide("warnings", "running", "stdout", "stderr"), parallel=True, warn_only=True):
        host = run("hostname -i")
        memory = run("cat /proc/meminfo | grep MemFree | awk -F ' ' '{print $2}'")
        print yellow(host + ":" + memory)
        if int(memory.strip()) < int(threshold):
            redis("lpush %s %s" % (":".join(["machine", item_group, item_name]), host))


# base_services_monitor, item_name != hard_disk or item_name != memory
@task
def base_services_monitor(item_group, item_name, threshold):
    with settings(hide("warnings", "running", "stdout", "stderr"), parallel=True, warn_only=True):
        host = run("hostname -i")
        count = run("ps -ef | grep %s | grep -v grep | wc -l" % item_name)
        print red(host + ":" + count)
        if int(count.strip()) != int(threshold):
            redis("hset %s %s %s" % (":".join(["machine", item_group, item_name]), host, count))
            redis("incr %s" % ":".join(["machine", item_group, item_name, host]))
            redis("expire %s 1800" % ":".join(["machine", item_group, item_name, host]))


# restart_services_monitor, item_name = tomcat-7.0.57-mis or item_name = tomcat-httpapi
@task
def restart_services_monitor(item_start):
    with settings(hide("warnings", "running", "stdout", "stderr"), parallel=True, warn_only=True):
        host = run("hostname -i")
        run(item_start)
        print green(host + ":" + item_start)
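The numeric checks in these tasks hinge on parsing shell output, and since fab task arguments arrive from the command line as strings, both sides of each comparison need an `int()` conversion. That parsing logic can be exercised standalone (the helper names here are mine, not from the original):

```python
def disk_usage_exceeds(df_percent, threshold):
    """df_percent is the '%'-suffixed usage field from `df`, e.g. '87%'."""
    return int(df_percent.strip().rstrip("%")) > int(threshold)

def free_memory_kb(meminfo_text):
    """Extract the MemFree value (in kB) from /proc/meminfo content."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemFree:"):
            return int(line.split()[1])
    raise ValueError("MemFree not found")
```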

This file drives the tasks (an excerpt; `monitors`, `item_param` and `self.redis` come from the surrounding scheduler code, which is not shown):

# -*- coding:utf-8 -*-
import json

from fabric.api import *
from fabric.context_managers import *

# dispatch a monitor task to the hosts configured for this item
execute(monitors.hard_disk_monitor, item_group, item_name, item_threshold,
        hosts=json.loads(item_param.get("item_hosts")))
# read back the hosts that raised an alarm
hosts = self.redis("lrange %s 0 -1" % ":".join(["machine", item_group, item_name]))
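The Redis keys shared between the tasks and this driver are plain colon-joined strings; `machine` appears unquoted in the original, so treating it as a literal prefix is an assumption on my part. A small helper makes the naming scheme explicit:

```python
def alarm_key(item_group, item_name, host=None):
    """Build the colon-joined alarm key used above, e.g.
    'machine:cpu:mysql' or, with a host suffix for the per-host
    counter, 'machine:cpu:mysql:10.93.21.21'.
    The 'machine' prefix is assumed literal (it is unquoted in
    the original snippet)."""
    parts = ["machine", item_group, item_name]
    if host is not None:
        parts.append(host)
    return ":".join(parts)
```

Centralizing the key format like this keeps the `lpush`/`hset` writers and the `lrange` reader from drifting apart.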

 



Original post: http://www.cnblogs.com/kangoroo/p/7207782.html
