Docker Daily Management

Date: 2020-07-25 23:25:06


Linux download: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
Windows download: https://download.docker.com/win/static/stable/x86_64/

A container can be thought of as a memory bubble with holes: through those holes it can directly access the physical host's resources.

Image naming:

  1. If no registry is involved, an image can be named anything.
  2. If the image will be pushed to a registry, the name must follow this pattern:
    • server_IP:port/category/image_name:tag
    • port defaults to 80
    • tag defaults to latest
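The naming rule can be sketched as a tiny helper. `normalize_image_ref` and its parameter names are hypothetical, and this is only a simplified sketch of the rule above (port defaults to 80, tag to latest), not Docker's full image-reference grammar:

```python
def normalize_image_ref(server_ip, name, category=None, port=80, tag="latest"):
    """Build a pushable image reference: server:port/category/name:tag.

    Defaults follow the rules above: port 80, tag "latest".
    """
    path = f"{category}/{name}" if category else name
    return f"{server_ip}:{port}/{path}:{tag}"

print(normalize_image_ref("192.168.108.101", "centos", category="cka", port=5000, tag="v1"))
# 192.168.108.101:5000/cka/centos:v1
```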

Image management:

  1. docker pull image #pull an image
  2. docker push image #push an image
  3. docker rmi image #delete an image
  4. docker tag image new_name #tag an image
  5. docker images #list local images
  6. docker save docker.io/mysql > mysql.tar #export an image
  7. docker load -i mysql.tar #import an image
  8. docker save docker.io/nginx docker.io/mysql hub.c.163.com/mengkzhaoyun/cloud/ansible-kubernetes hub.c.163.com/public/centos > all.tar #export several images into one archive
  9. docker history docker.io/mysql:latest #show an image's layers
  10. docker history docker.io/mysql:latest --no-trunc #show full (untruncated) output
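The tar written by `docker save` carries a top-level `manifest.json` describing every image inside. A minimal sketch of reading the tags back out (the function name is made up):

```python
import json
import tarfile

def saved_image_tags(tar_path):
    """Return the RepoTags of every image inside a `docker save` tarball."""
    with tarfile.open(tar_path) as tar:
        # manifest.json is a list with one entry per saved image
        manifest = json.load(tar.extractfile("manifest.json"))
    return [tag for entry in manifest for tag in (entry.get("RepoTags") or [])]
```

For example, `saved_image_tags("all.tar")` on the combined archive above would list all four image names.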

Container management:

  1. docker ps #list running containers
    • -a list all containers
  2. Life cycle (by default a container runs a single process):
    • The process the image runs is the soul; the container is the body.
    • docker run docker.io/nginx — the container exits the instant it runs
    • docker ps shows nothing
    • docker ps -a shows the container that ran
  3. docker run -t -i -d docker.io/nginx #start a container
    • -t allocate a terminal
    • -i keep stdin open (interactive)
    • -d run in the background
  4. docker run -t -i -d --restart=always docker.io/nginx #the container is not shut down after you exit it
  5. docker attach 0d182c82cc13 #attach to a container running in the background
  6. docker rm -f 0d182c82cc13 #force-remove a container
  7. docker run -dit --name=c1 --restart=always docker.io/nginx #give the container a name
  8. docker stop c1 #stop container c1
  9. docker start c1 #start container c1
  10. docker run -dit --name=c1 --rm docker.io/nginx #temporary container, removed automatically when it exits
  11. docker run -dit --name=c1 docker.io/nginx sleep 20
  12. docker run -it --name=c2 --restart=always -e name1=tom1 -e name2=tom2 docker.io/tomcat
    • -e set variables that are passed into the container (echo $name1; echo $name2)
  13. docker run -it --name=db --restart=always -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=devin -e MYSQL_PASSWORD=redhat -e MYSQL_DATABASE=students docker.io/mysql #variables understood by the mysql image
  14. docker inspect db #show container db's details
  15. docker exec db ip a #run the "ip a" command inside the db container
  16. docker exec -it db bash #open an extra bash process inside the container
  17. docker cp 1.txt db:/opt
  18. docker exec db ls /opt
  19. docker cp db:/etc/hosts .
  20. docker attach db #attach to a container running in the background
  21. docker run -dit --name=db -p 3306 docker.io/mysql bash #expose container port 3306 on a random host port
  22. docker run -dit --name=db -p 8080:3306 docker.io/mysql bash #map container port 3306 to host port 8080
  23. docker top db #show the processes running inside the container
  24. docker logs -f db
    • -f follow (keep refreshing)
  25. Script to delete all images:
    #!/bin/bash
    file=$(mktemp)
    docker images | grep -v TAG | awk '{print $1 ":" $2}' > $file
    while read aa
    do
        docker rmi $aa
    done < $file
    rm -rf $file

Data volume management:

  1. docker run -it --name=web -v /data hub.c.163.com/public/centos bash #/data inside the container is backed by a randomly assigned directory on the host
  2. docker run -it --name=web -v /xx:/data hub.c.163.com/public/centos bash #/data inside the container maps to /xx on the host
  3. docker run -it --name=web -v /xx:/data:rw hub.c.163.com/public/centos bash #same mapping, mounted read-write (use :ro for read-only)
  4. docker inspect web #the Mounts entry shows the mappings
    • "Source": "/xx"
    • "Destination": "/data"
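Since `docker inspect` emits JSON, the Mounts entry can also be picked out programmatically. A sketch using made-up sample data rather than a live `docker inspect` call:

```python
import json

def mount_map(inspect_output):
    """Map each container Destination to its host Source, given `docker inspect <container>` JSON."""
    data = json.loads(inspect_output)  # inspect returns a list of container objects
    return {m["Destination"]: m["Source"] for m in data[0].get("Mounts", [])}

sample = json.dumps([{"Mounts": [{"Source": "/xx", "Destination": "/data", "Mode": "rw"}]}])
print(mount_map(sample))  # {'/data': '/xx'}
```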

Network management:

   Docker networks can be understood by analogy with VMware Workstation's networks.

  1. docker network list
    • bridge #like Workstation's NAT network
    • host #shares the physical host's network stack
  2. docker run -it --name=db --restart=always --net bridge -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=devin -e MYSQL_PASSWORD=redhat -e MYSQL_DATABASE=students docker.io/mysql #specify the bridge network
  3. man -k docker
  4. man docker-network-create
  5. docker network create --driver=bridge --subnet=10.254.0.0/16 --ip-range=10.254.97.0/24 --gateway=10.254.97.254 br0
  6. docker network inspect br0
  7. docker network rm br0

Build a personal blog with wordpress+mysql

  • docker run -dit --name=db --restart=always -v /db:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_DATABASE=wordpress hub.c.163.com/library/mysql:5.7
  • docker run -dit --name=blog -v /web:/var/www/html -p 80:80 -e WORDPRESS_DB_HOST=172.17.0.2 -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=redhat -e WORDPRESS_DB_NAME=wordpress hub.c.163.com/library/wordpress #binds by IP address; once the db container stops, its IP is released and may be taken by another container, so prefer the next command
  • docker run -it --name=blog -v /web:/var/www/html -p 80:80 --link db:xx -e WORDPRESS_DB_HOST=xx -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=redhat -e WORDPRESS_DB_NAME=wordpress hub.c.163.com/public/wordpress bash #gives the db container the alias xx; connecting by alias avoids the released-IP problem. With a custom alias such as xx, the variables "-e WORDPRESS_DB_HOST=xx -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=redhat -e WORDPRESS_DB_NAME=wordpress" must be given; if the alias is mysql, they can be omitted

Custom images

         Base image + dockerfile produce a temporary container; once that container is exported as the new image, it is removed automatically.

  1. A custom dockerfile:
    FROM hub.c.163.com/library/centos
    MAINTAINER devin

    RUN rm -rf /etc/yum.repos.d/*
    COPY CentOS-Base.repo /etc/yum.repos.d/
    ADD epel.repo /etc/yum.repos.d/
    ENV aa=xyz
    RUN yum makecache
    RUN yum install openssh-clients openssh-server -y
    RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
    RUN ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key
    RUN ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key
    RUN sed -i '/UseDNS/cUseDNS no' /etc/ssh/sshd_config
    RUN useradd devin
    RUN echo 'root:redhat' | chpasswd
    USER devin
    VOLUME /data1
    EXPOSE 22
    CMD ["/usr/sbin/sshd","-D"]
    • ADD  copies files; compressed archives are unpacked as well
    • COPY plain copy only
    • RUN run an OS command
    • ENV set a variable
    • VOLUME declare a directory inside the container
    • USER run subsequent steps (and the container) as devin
    • EXPOSE only a marker; it does not publish the port
  2. docker build -t centos:ssh . -f dockerfile_v3
    • -t: name of the new image
    • -f: which dockerfile to use
    • .: the current directory (build context)

Set up a local docker registry

  1. docker pull hub.c.163.com/library/registry #download the registry image
  2. docker run -dit --name=myregistry -p 5000:5000 -v /myregistry:/var/lib/registry hub.c.163.com/library/registry #run the registry container
  3. docker tag docker.io/centos:latest 192.168.108.101:5000/cka/centos:v1 #re-tag the image to push; the image name decides which server it is uploaded to
  4. docker push 192.168.108.101:5000/cka/centos:v1 #fails with the error below, because docker defaults to https while this registry speaks http; two fixes are shown in item 5
        The push refers to a repository [192.168.108.101:5000/cka/centos]
         Get https://192.168.108.101:5000/v1/_ping: http: server gave HTTP response to HTTPS client
  5. Allow http communication (either option works):
    vim /etc/docker/daemon.json
    {
    "insecure-registries": ["192.168.108.101:5000"]
    }
    or vim /etc/sysconfig/docker and append to the end of OPTIONS:
    --insecure-registry=192.168.108.101:5000
  6. systemctl restart docker
  7. docker start myregistry
  8. curl -s 192.168.108.101:5000/v2/_catalog | json_reformat #list all images in the registry
  9. curl -s 192.168.108.101:5000/v2/cka/centos/tags/list #list the tags of an image in the registry
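Both endpoints return JSON, so the two queries can be combined into full image names. A sketch where the responses are hard-coded samples, not live HTTP calls:

```python
import json

def registry_images(catalog_json, tags_json_by_repo):
    """Turn /v2/_catalog and /v2/<repo>/tags/list responses into repo:tag names."""
    names = []
    for repo in json.loads(catalog_json)["repositories"]:
        tags = json.loads(tags_json_by_repo[repo])["tags"] or []
        names.extend(f"{repo}:{tag}" for tag in tags)
    return names

catalog = '{"repositories": ["cka/centos"]}'
tag_lists = {"cka/centos": '{"name": "cka/centos", "tags": ["v1"]}'}
print(registry_images(catalog, tag_lists))  # ['cka/centos:v1']
```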
  10. Point a client at a default registry for downloads (the default is docker.io)
  11. Edit the configuration:
    vim /etc/sysconfig/docker
    ADD_REGISTRY="--add-registry 192.168.108.101:5000"
  12. From docker run -dit --name=myregistry -p 5000:5000 -v /myregistry:/var/lib/registry hub.c.163.com/library/registry we know the images are physically stored on the host under /myregistry
  13. Delete an image from the registry with delete_docker_registry_image:
    #!/usr/bin/env python
    """
    Usage:
    Shut down your registry service to avoid race conditions and possible data loss
    and then run the command with an image repo like this:
    delete_docker_registry_image.py --image awesomeimage --dry-run
    """
    
    import argparse
    import json
    import logging
    import os
    import sys
    import shutil
    import glob
    
    logger = logging.getLogger(__name__)
    
    
    def del_empty_dirs(s_dir, top_level):
        """recursively delete empty directories"""
        b_empty = True
    
        for s_target in os.listdir(s_dir):
            s_path = os.path.join(s_dir, s_target)
            if os.path.isdir(s_path):
                if not del_empty_dirs(s_path, False):
                    b_empty = False
            else:
                b_empty = False
    
        if b_empty:
            logger.debug("Deleting empty directory '%s'", s_dir)
            if not top_level:
                os.rmdir(s_dir)
    
        return b_empty
    
    
    def get_layers_from_blob(path):
        """parse json blob and get set of layer digests"""
        try:
            with open(path, "r") as blob:
                data_raw = blob.read()
                data = json.loads(data_raw)
                if data["schemaVersion"] == 1:
                    result = set([entry["blobSum"].split(":")[1] for entry in data["fsLayers"]])
                else:
                    result = set([entry["digest"].split(":")[1] for entry in data["layers"]])
                    if "config" in data:
                        result.add(data["config"]["digest"].split(":")[1])
                return result
        except Exception as error:
            logger.critical("Failed to read layers from blob:%s", error)
            return set()
    
    
    def get_digest_from_blob(path):
        """parse file and get digest"""
        try:
            with open(path, "r") as blob:
                return blob.read().split(":")[1]
        except Exception as error:
            logger.critical("Failed to read digest from blob:%s", error)
            return ""
    
    
    def get_links(path, _filter=None):
        """recursively walk `path` and parse every link inside"""
        result = []
        for root, _, files in os.walk(path):
            for each in files:
                if each == "link":
                    filepath = os.path.join(root, each)
                    if not _filter or _filter in filepath:
                        result.append(get_digest_from_blob(filepath))
        return result
    
    
    class RegistryCleanerError(Exception):
        pass
    
    
    class RegistryCleaner(object):
        """Clean registry"""
    
        def __init__(self, registry_data_dir, dry_run=False):
            self.registry_data_dir = registry_data_dir
            if not os.path.isdir(self.registry_data_dir):
                raise RegistryCleanerError("No repositories directory found inside "
                                           "REGISTRY_DATA_DIR '{0}'.".
                                           format(self.registry_data_dir))
            self.dry_run = dry_run
    
        def _delete_layer(self, repo, digest):
            """remove blob directory from filesystem"""
            path = os.path.join(self.registry_data_dir, "repositories", repo, "_layers/sha256", digest)
            self._delete_dir(path)
    
        def _delete_blob(self, digest):
            """remove blob directory from filesystem"""
            path = os.path.join(self.registry_data_dir, "blobs/sha256", digest[0:2], digest)
            self._delete_dir(path)
    
        def _blob_path_for_revision(self, digest):
            """where we can find the blob that contains the json describing this digest"""
            return os.path.join(self.registry_data_dir, "blobs/sha256",
                                digest[0:2], digest, "data")
    
        def _blob_path_for_revision_is_missing(self, digest):
            """for each revision, there should be a blob describing it"""
            return not os.path.isfile(self._blob_path_for_revision(digest))
    
        def _get_layers_from_blob(self, digest):
            """get layers from blob by digest"""
            return get_layers_from_blob(self._blob_path_for_revision(digest))
    
        def _delete_dir(self, path):
            """remove directory from filesystem"""
            if self.dry_run:
                logger.info("DRY_RUN: would have deleted %s", path)
            else:
                logger.info("Deleting %s", path)
                try:
                    shutil.rmtree(path)
                except Exception as error:
                    logger.critical("Failed to delete directory:%s", error)
    
        def _delete_from_tag_index_for_revision(self, repo, digest):
            """delete revision from tag indexes"""
            paths = glob.glob(
                os.path.join(self.registry_data_dir, "repositories", repo,
                             "_manifests/tags/*/index/sha256", digest)
            )
            for path in paths:
                self._delete_dir(path)
    
        def _delete_revisions(self, repo, revisions, blobs_to_keep=None):
            """delete revisions from list of directories"""
            if blobs_to_keep is None:
                blobs_to_keep = []
            for revision_dir in revisions:
                digests = get_links(revision_dir)
                for digest in digests:
                    self._delete_from_tag_index_for_revision(repo, digest)
                    if digest not in blobs_to_keep:
                        self._delete_blob(digest)
    
                self._delete_dir(revision_dir)
    
        def _get_tags(self, repo):
            """get all tags for given repository"""
            path = os.path.join(self.registry_data_dir, "repositories", repo, "_manifests/tags")
            if not os.path.isdir(path):
                logger.critical("No repository '%s' found in repositories directory %s",
                                repo, self.registry_data_dir)
                return None
            result = []
            for each in os.listdir(path):
                filepath = os.path.join(path, each)
                if os.path.isdir(filepath):
                    result.append(each)
            return result
    
        def _get_repositories(self):
            """get all repository repos"""
            result = []
            root = os.path.join(self.registry_data_dir, "repositories")
            for each in os.listdir(root):
                filepath = os.path.join(root, each)
                if os.path.isdir(filepath):
                    inside = os.listdir(filepath)
                    if "_layers" in inside:
                        result.append(each)
                    else:
                        for inner in inside:
                            result.append(os.path.join(each, inner))
            return result
    
        def _get_all_links(self, except_repo=""):
            """get links for every repository"""
            result = []
            repositories = self._get_repositories()
            for repo in [r for r in repositories if r != except_repo]:
                path = os.path.join(self.registry_data_dir, "repositories", repo)
                for link in get_links(path):
                    result.append(link)
            return result
    
        def prune(self):
            """delete all empty directories in registry_data_dir"""
            del_empty_dirs(self.registry_data_dir, True)
    
        def _layer_in_same_repo(self, repo, tag, layer):
            """check if layer is found in other tags of same repository"""
            for other_tag in [t for t in self._get_tags(repo) if t != tag]:
                path = os.path.join(self.registry_data_dir, "repositories", repo,
                                    "_manifests/tags", other_tag, "current/link")
                manifest = get_digest_from_blob(path)
                try:
                    layers = self._get_layers_from_blob(manifest)
                    if layer in layers:
                        return True
                except IOError:
                    if self._blob_path_for_revision_is_missing(manifest):
                        logger.warn("Blob for digest %s does not exist. Deleting tag manifest: %s", manifest, other_tag)
                        tag_dir = os.path.join(self.registry_data_dir, "repositories", repo,
                                               "_manifests/tags", other_tag)
                        self._delete_dir(tag_dir)
                    else:
                        raise
            return False
    
        def _manifest_in_same_repo(self, repo, tag, manifest):
            """check if manifest is found in other tags of same repository"""
            for other_tag in [t for t in self._get_tags(repo) if t != tag]:
                path = os.path.join(self.registry_data_dir, "repositories", repo,
                                    "_manifests/tags", other_tag, "current/link")
                other_manifest = get_digest_from_blob(path)
                if other_manifest == manifest:
                    return True
    
            return False
    
        def delete_entire_repository(self, repo):
            """delete all blobs for given repository repo"""
            logger.debug("Deleting entire repository '%s'", repo)
            repo_dir = os.path.join(self.registry_data_dir, "repositories", repo)
            if not os.path.isdir(repo_dir):
                raise RegistryCleanerError("No repository '{0}' found in repositories "
                                           "directory {1}/repositories".
                                           format(repo, self.registry_data_dir))
            links = set(get_links(repo_dir))
            all_links_but_current = set(self._get_all_links(except_repo=repo))
            for layer in links:
                if layer in all_links_but_current:
                    logger.debug("Blob found in another repository. Not deleting: %s", layer)
                else:
                    self._delete_blob(layer)
            self._delete_dir(repo_dir)
    
        def delete_repository_tag(self, repo, tag):
            """delete all blobs only for given tag of repository"""
            logger.debug("Deleting repository '%s' with tag '%s'", repo, tag)
            tag_dir = os.path.join(self.registry_data_dir, "repositories", repo, "_manifests/tags", tag)
            if not os.path.isdir(tag_dir):
                raise RegistryCleanerError("No repository '{0}' tag '{1}' found in repositories "
                                           "directory {2}/repositories".
                                           format(repo, tag, self.registry_data_dir))
            manifests_for_tag = set(get_links(tag_dir))
            revisions_to_delete = []
            blobs_to_keep = []
            layers = []
            all_links_not_in_current_repo = set(self._get_all_links(except_repo=repo))
            for manifest in manifests_for_tag:
                logger.debug("Looking up filesystem layers for manifest digest %s", manifest)
    
                if self._manifest_in_same_repo(repo, tag, manifest):
                    logger.debug("Not deleting since we found another tag using manifest: %s", manifest)
                    continue
                else:
                    revisions_to_delete.append(
                        os.path.join(self.registry_data_dir, "repositories", repo,
                                     "_manifests/revisions/sha256", manifest)
                    )
                    if manifest in all_links_not_in_current_repo:
                        logger.debug("Not deleting the blob data since we found another repo using manifest: %s", manifest)
                        blobs_to_keep.append(manifest)
    
                    layers.extend(self._get_layers_from_blob(manifest))
    
            layers_uniq = set(layers)
            for layer in layers_uniq:
                if self._layer_in_same_repo(repo, tag, layer):
                    logger.debug("Not deleting since we found another tag using digest: %s", layer)
                    continue
    
                self._delete_layer(repo, layer)
                if layer in all_links_not_in_current_repo:
                    logger.debug("Blob found in another repository. Not deleting: %s", layer)
                else:
                    self._delete_blob(layer)
    
            self._delete_revisions(repo, revisions_to_delete, blobs_to_keep)
            self._delete_dir(tag_dir)
    
        def delete_untagged(self, repo):
            """delete all untagged data from repo"""
            logger.debug("Deleting untagged data from repository '%s'", repo)
            repositories_dir = os.path.join(self.registry_data_dir, "repositories")
            repo_dir = os.path.join(repositories_dir, repo)
            if not os.path.isdir(repo_dir):
                raise RegistryCleanerError("No repository '{0}' found in repositories "
                                           "directory {1}/repositories".
                                           format(repo, self.registry_data_dir))
            tagged_links = set(get_links(repositories_dir, _filter="current"))
            layers_to_protect = []
            for link in tagged_links:
                layers_to_protect.extend(self._get_layers_from_blob(link))
    
            unique_layers_to_protect = set(layers_to_protect)
            for layer in unique_layers_to_protect:
                logger.debug("layer_to_protect: %s", layer)
    
            tagged_revisions = set(get_links(repo_dir, _filter="current"))
    
            revisions_to_delete = []
            layers_to_delete = []
    
            dir_for_revisions = os.path.join(repo_dir, "_manifests/revisions/sha256")
            for rev in os.listdir(dir_for_revisions):
                if rev not in tagged_revisions:
                    revisions_to_delete.append(os.path.join(dir_for_revisions, rev))
                    for layer in self._get_layers_from_blob(rev):
                        if layer not in unique_layers_to_protect:
                            layers_to_delete.append(layer)
    
            unique_layers_to_delete = set(layers_to_delete)
    
            self._delete_revisions(repo, revisions_to_delete)
            for layer in unique_layers_to_delete:
                self._delete_blob(layer)
                self._delete_layer(repo, layer)
    
    
        def get_tag_count(self, repo):
            logger.debug("Get tag count of repository '%s'", repo)
            repo_dir = os.path.join(self.registry_data_dir, "repositories", repo)
            tags_dir = os.path.join(repo_dir, "_manifests/tags")
    
            if os.path.isdir(tags_dir):
                tags = os.listdir(tags_dir)
                return len(tags)
            else:
                logger.info("Tags directory does not exist: '%s'", tags_dir)
                return -1
    
    def main():
        """cli entrypoint"""
        parser = argparse.ArgumentParser(description="Cleanup docker registry")
        parser.add_argument("-i", "--image",
                            dest="image",
                            required=True,
                            help="Docker image to cleanup")
        parser.add_argument("-v", "--verbose",
                            dest="verbose",
                            action="store_true",
                            help="verbose")
        parser.add_argument("-n", "--dry-run",
                            dest="dry_run",
                            action="store_true",
                            help="Dry run")
        parser.add_argument("-f", "--force",
                            dest="force",
                            action="store_true",
                            help="Force delete (deprecated)")
        parser.add_argument("-p", "--prune",
                            dest="prune",
                            action="store_true",
                            help="Prune")
        parser.add_argument("-u", "--untagged",
                            dest="untagged",
                            action="store_true",
                            help="Delete all untagged blobs for image")
        args = parser.parse_args()
    
    
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(u"%(levelname)-8s [%(asctime)s]  %(message)s"))
        logger.addHandler(handler)
    
        if args.verbose:
            logger.setLevel(logging.DEBUG)
        else:
            logger.setLevel(logging.INFO)
    
    
        # make sure not to log before logging is set up; that'll hose your logging config.
        if args.force:
            logger.info(
                "You supplied the force switch, which is deprecated. It has no effect now, and the script defaults to doing what used to be only happen when force was true")
    
        splitted = args.image.split(":")
        if len(splitted) == 2:
            image = splitted[0]
            tag = splitted[1]
        else:
            image = args.image
            tag = None
    
        if "REGISTRY_DATA_DIR" in os.environ:
            registry_data_dir = os.environ["REGISTRY_DATA_DIR"]
        else:
            registry_data_dir = "/opt/registry_data/docker/registry/v2"
    
        try:
            cleaner = RegistryCleaner(registry_data_dir, dry_run=args.dry_run)
            if args.untagged:
                cleaner.delete_untagged(image)
            else:
                if tag:
                    tag_count = cleaner.get_tag_count(image)
                    if tag_count == 1:
                        cleaner.delete_entire_repository(image)
                    else:
                        cleaner.delete_repository_tag(image, tag)
                else:
                    cleaner.delete_entire_repository(image)
    
            if args.prune:
                cleaner.prune()
        except RegistryCleanerError as error:
            logger.fatal(error)
            sys.exit(1)
    
    
    if __name__ == "__main__":
        main()
  14. export REGISTRY_DATA_DIR=/myregistry/docker/registry/v2 #export the root path where the registry stores its data
  15. ./delete_docker_registry_image -i cka/centos:v1 #delete the image cka/centos:v1

Monitoring containers

  1. docker stats #text-mode view of the resources containers use
  2. cadvisor, a graphical monitoring tool developed by Google, runs as a container itself and works essentially through volume mounts: the host directories backing containers c1, c2 and c3 are mounted into the cadvisor container, which analyses them to monitor those containers.
    docker pull hub.c.163.com/xbingo/cadvisor:latest

    docker run \
    -v /var/run:/var/run \
    -v /sys/:/sys:ro \
    -v /var/lib/docker:/var/lib/docker:ro -d \
    -p 8080:8080 \
    --name=mon hub.c.163.com/xbingo/cadvisor:latest

    192.168.108.101:8080 #open the web UI

The orchestration tool compose

  1. yum install docker -y
  2. systemctl enable docker --now
  3. yum install docker-compose -y
  4. vim docker-compose.yaml #format
    blog:
            image: hub.c.163.com/public/wordpress:4.5.2
            restart: always
            links:
                    - db:mysql
            ports:
                    - "80:80"

    db:
            image: hub.c.163.com/library/mysql
            restart: always
            environment:
                    - MYSQL_ROOT_PASSWORD=redhat
                    - MYSQL_DATABASE=wordpress
            volumes:
                    - /xx:/var/lib/mysql
  5. echo 1 > /proc/sys/net/ipv4/ip_forward
  6. docker-compose up [ -d ]
  7. 192.168.108.102:80 #open the wordpress web UI
  8. docker-compose ps
  9. docker-compose stop
  10. docker-compose start
  11. docker-compose rm (the containers must be stopped first)

Using harbor (web-based management, built with compose)

  1. harbor (harbor-offline-installer-v1.10.4.tgz) download: https://github.com/goharbor/harbor/releases
  2. tar zxvf harbor-offline-installer-v1.10.4.tgz
  3. cd harbor
  4. docker load -i harbor.v1.10.4.tar.gz
  5. vim harbor.yml (change hostname: 192.168.108.103; adjust the rest to your environment, the defaults are used here)
  6. ?????

Container resource limits (based on Linux Cgroups)

  1. docker run -dit -m 512m --name=c1 centos:v1 #give container c1 512m of memory
  2. docker run -dit --cpuset-cpus=0,1 --name=c1 centos:v1 #pin the container to CPUs 0 and 1
  3. ps mo pid,comm,psr $(pgrep cat) #see which CPUs the processes run on

 


Original source: https://www.cnblogs.com/hym-by/p/13237099.html
