Python Modules (continued): configparser, shutil, XML, paramiko, system commands


I. configparser

# comment 1
;  comment 2
 
[section1] # section
k1 = v1    # value
k2:v2       # value
 
[section2] # section
k1 = v1    # value

 1. Get all sections

import configparser
config = configparser.ConfigParser()
config.read('test', encoding='utf-8')
ret = config.sections()
print(ret)  # ['section1', 'section2']

 2. Get all key-value pairs under a given section

import configparser
config = configparser.ConfigParser()
config.read('test', encoding='utf-8')
ret1 = config.items('section1')
print(ret1)  # [('k1', 'v1    # value'), ('k2', 'v2       # value')]

 3. Get all keys under a given section

import configparser
config = configparser.ConfigParser()
config.read('test', encoding='utf-8')
ret2 = config.options('section1')
print(ret2)  # ['k1', 'k2']

 4. Get the value of a given key under a given section

import configparser
config = configparser.ConfigParser()
config.read('test', encoding='utf-8')
v = config.get('section1', 'k1')
print(v)  # v1    # value
# v = config.getint('section1', 'k1')
# v = config.getfloat('section1', 'k1')
# v = config.getboolean('section1', 'k1')
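
The typed getters only work when the stored value can actually be converted. A minimal sketch, assuming the config file also contains a purely numeric option such as port = 8080 (with no inline comment) under [section1]:

import configparser

config = configparser.ConfigParser()
config.read('test', encoding='utf-8')

# getint/getfloat/getboolean convert the raw string for you and
# raise ValueError if the value cannot be converted
port = config.getint('section1', 'port')
print(port, type(port))  # 8080 <class 'int'>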

 5. Check, delete, and add sections

import configparser
config = configparser.ConfigParser()
config.read('test', encoding='utf-8')
 
 
# Check
has_sec = config.has_section('section1')
print(has_sec)  # True
 
# Add a section
config.add_section("section3")
config.write(open('test', 'w'))
 
# Remove a section
config.remove_section("section2")
config.write(open('test', 'w'))


6. Check, delete, and set key-value pairs within a section

import configparser
 
config = configparser.ConfigParser()
config.read('test', encoding='utf-8')
 
# Check
has_opt = config.has_option('section1', 'k1')
print(has_opt)  # True
 
# Delete
config.remove_option('section1', 'k1')
config.write(open('test', 'w'))
 
# Set
config.set('section1', 'k10', "123")
config.write(open('test', 'w'))
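
Note that config.write(open('test', 'w')), as used above, never closes the file handle explicitly. A minimal variant of the same write using a with block:

import configparser

config = configparser.ConfigParser()
config.read('test', encoding='utf-8')
config.set('section1', 'k10', "123")

# The with block closes the file deterministically once the write is done
with open('test', 'w') as f:
    config.write(f)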


II. XML

XML is a protocol for exchanging data between different languages and programs. An XML file looks like this:

<data>
    <country name="Liechtenstein">
        <rank updated="yes">2</rank>
        <year>2023</year>
        <gdppc>141100</gdppc>
        <neighbor direction="E" name="Austria" />
        <neighbor direction="W" name="Switzerland" />
    </country>
    <country name="Singapore">
        <rank updated="yes">5</rank>
        <year>2026</year>
        <gdppc>59900</gdppc>
        <neighbor direction="N" name="Malaysia" />
    </country>
    <country name="Panama">
        <rank updated="yes">69</rank>
        <year>2026</year>
        <gdppc>13600</gdppc>
        <neighbor direction="W" name="Costa Rica" />
        <neighbor direction="E" name="Colombia" />
    </country>
</data>

 1. Parsing XML

from xml.etree import ElementTree as ET

# Open the file and read the XML content
str_xml = open('xo.xml', 'r').read()

# Parse the string into an XML object; root refers to the root node of the XML document
root = ET.XML(str_xml)

Parse a string into an XML object with ElementTree.XML

 Parse a file directly into an XML object with ElementTree.parse

from xml.etree import ElementTree as ET

# Parse the xml file directly
tree = ET.parse("xo.xml")

# Get the root node of the xml file
root = tree.getroot()

2. Working with XML

XML is a format of nested nodes. Every node provides the following methods for operating on the current node:

class Element:
    """An XML element.

    This class is the reference implementation of the Element interface.

    An element‘s length is its number of subelements.  That means if you
    want to check if an element is truly empty, you should check BOTH
    its length AND its text attribute.

    The element tag, attribute names, and attribute values can be either
    bytes or strings.

    *tag* is the element name.  *attrib* is an optional dictionary containing
    element attributes. *extra* are additional element attributes given as
    keyword arguments.

    Example form:
        <tag attrib>text<child/>...</tag>tail

    """

    # The tag name of the current node
    tag = None
    """The element‘s name."""

    # The attributes of the current node

    attrib = None
    """Dictionary of the element‘s attributes."""

    # The text content of the current node
    text = None
    """
    Text before first subelement. This is either a string or the value None.
    Note that if there is no text, this attribute may be either
    None or the empty string, depending on the parser.

    """

    tail = None
    """
    Text after this element‘s end tag, but before the next sibling element‘s
    start tag.  This is either a string or the value None.  Note that if there
    was no text, this attribute may be either None or an empty string,
    depending on the parser.

    """

    def __init__(self, tag, attrib={}, **extra):
        if not isinstance(attrib, dict):
            raise TypeError("attrib must be dict, not %s" % (
                attrib.__class__.__name__,))
        attrib = attrib.copy()
        attrib.update(extra)
        self.tag = tag
        self.attrib = attrib
        self._children = []

    def __repr__(self):
        return "<%s %r at %#x>" % (self.__class__.__name__, self.tag, id(self))

    def makeelement(self, tag, attrib):
        # Create a new node
        """Create a new element with the same type.

        *tag* is a string containing the element name.
        *attrib* is a dictionary containing the element attributes.

        Do not call this method, use the SubElement factory function instead.

        """
        return self.__class__(tag, attrib)

    def copy(self):
        """Return copy of current element.

        This creates a shallow copy. Subelements will be shared with the
        original tree.

        """
        elem = self.makeelement(self.tag, self.attrib)
        elem.text = self.text
        elem.tail = self.tail
        elem[:] = self
        return elem

    def __len__(self):
        return len(self._children)

    def __bool__(self):
        warnings.warn(
            "The behavior of this method will change in future versions.  "
            "Use specific ‘len(elem)‘ or ‘elem is not None‘ test instead.",
            FutureWarning, stacklevel=2
            )
        return len(self._children) != 0 # emulate old behaviour, for now

    def __getitem__(self, index):
        return self._children[index]

    def __setitem__(self, index, element):
        # if isinstance(index, slice):
        #     for elt in element:
        #         assert iselement(elt)
        # else:
        #     assert iselement(element)
        self._children[index] = element

    def __delitem__(self, index):
        del self._children[index]

    def append(self, subelement):
        # Append a child node to the current node
        """Add *subelement* to the end of this element.

        The new element will appear in document order after the last existing
        subelement (or directly after the text, if it‘s the first subelement),
        but before the end tag for this element.

        """
        self._assert_is_element(subelement)
        self._children.append(subelement)

    def extend(self, elements):
        # Extend the current node with n child nodes
        """Append subelements from a sequence.

        *elements* is a sequence with zero or more elements.

        """
        for element in elements:
            self._assert_is_element(element)
        self._children.extend(elements)

    def insert(self, index, subelement):
        # Insert a child node at the given position among the current node's children
        """Insert *subelement* at position *index*."""
        self._assert_is_element(subelement)
        self._children.insert(index, subelement)

    def _assert_is_element(self, e):
        # Need to refer to the actual Python implementation, not the
        # shadowing C implementation.
        if not isinstance(e, _Element_Py):
            raise TypeError('expected an Element, not %s' % type(e).__name__)

    def remove(self, subelement):
        # Remove a node from the current node's children
        """Remove matching subelement.

        Unlike the find methods, this method compares elements based on
        identity, NOT ON tag value or contents.  To remove subelements by
        other means, the easiest way is to use a list comprehension to
        select what elements to keep, and then use slice assignment to update
        the parent element.

        ValueError is raised if a matching element could not be found.

        """
        # assert iselement(element)
        self._children.remove(subelement)

    def getchildren(self):
        # Get all child nodes (deprecated)
        """(Deprecated) Return all subelements.

        Elements are returned in document order.

        """
        warnings.warn(
            "This method will be removed in future versions.  "
            "Use ‘list(elem)‘ or iteration over elem instead.",
            DeprecationWarning, stacklevel=2
            )
        return self._children

    def find(self, path, namespaces=None):
        # Get the first matching child node
        """Find first matching element by tag name or path.

        *path* is a string having either an element tag or an XPath,
        *namespaces* is an optional mapping from namespace prefix to full name.

        Return the first matching element, or None if no element was found.

        """
        return ElementPath.find(self, path, namespaces)

    def findtext(self, path, default=None, namespaces=None):
        # Get the text of the first matching child node
        """Find text for first matching element by tag name or path.

        *path* is a string having either an element tag or an XPath,
        *default* is the value to return if the element was not found,
        *namespaces* is an optional mapping from namespace prefix to full name.

        Return text content of first matching element, or default value if
        none was found.  Note that if an element is found having no text
        content, the empty string is returned.

        """
        return ElementPath.findtext(self, path, default, namespaces)

    def findall(self, path, namespaces=None):
        # Get all matching child nodes
        """Find all matching subelements by tag name or path.

        *path* is a string having either an element tag or an XPath,
        *namespaces* is an optional mapping from namespace prefix to full name.

        Returns list containing all matching elements in document order.

        """
        return ElementPath.findall(self, path, namespaces)

    def iterfind(self, path, namespaces=None):
        # Get all matching nodes and return an iterator (usable in a for loop)
        """Find all matching subelements by tag name or path.

        *path* is a string having either an element tag or an XPath,
        *namespaces* is an optional mapping from namespace prefix to full name.

        Return an iterable yielding all matching elements in document order.

        """
        return ElementPath.iterfind(self, path, namespaces)

    def clear(self):
        # Clear the node
        """Reset element.

        This function removes all subelements, clears all attributes, and sets
        the text and tail attributes to None.

        """
        self.attrib.clear()
        self._children = []
        self.text = self.tail = None

    def get(self, key, default=None):
        # Get an attribute value of the current node
        """Get element attribute.

        Equivalent to attrib.get, but some implementations may handle this a
        bit more efficiently.  *key* is what attribute to look for, and
        *default* is what to return if the attribute was not found.

        Returns a string containing the attribute value, or the default if
        attribute was not found.

        """
        return self.attrib.get(key, default)

    def set(self, key, value):
        # Set an attribute value on the current node
        """Set element attribute.

        Equivalent to attrib[key] = value, but some implementations may handle
        this a bit more efficiently.  *key* is what attribute to set, and
        *value* is the attribute value to set it to.

        """
        self.attrib[key] = value

    def keys(self):
        # Get the keys of all attributes of the current node

        """Get list of attribute names.

        Names are returned in an arbitrary order, just like an ordinary
        Python dict.  Equivalent to attrib.keys()

        """
        return self.attrib.keys()

    def items(self):
        # Get all attributes of the current node; each attribute is a key-value pair
        """Get element attributes as a sequence.

        The attributes are returned in arbitrary order.  Equivalent to
        attrib.items().

        Return a list of (name, value) tuples.

        """
        return self.attrib.items()

    def iter(self, tag=None):
        # Find all matching nodes by tag name among the current node's descendants and return an iterator (usable in a for loop)
        """Create tree iterator.

        The iterator loops over the element and all subelements in document
        order, returning all elements with a matching tag.

        If the tree structure is modified during iteration, new or removed
        elements may or may not be included.  To get a stable set, use the
        list() function on the iterator, and loop over the resulting list.

        *tag* is what tags to look for (default is to return all elements)

        Return an iterator containing all the matching elements.

        """
        if tag == "*":
            tag = None
        if tag is None or self.tag == tag:
            yield self
        for e in self._children:
            yield from e.iter(tag)

    # compatibility
    def getiterator(self, tag=None):
        # Change for a DeprecationWarning in 1.4
        warnings.warn(
            "This method will be removed in future versions.  "
            "Use ‘elem.iter()‘ or ‘list(elem.iter())‘ instead.",
            PendingDeprecationWarning, stacklevel=2
        )
        return list(self.iter(tag))

    def itertext(self):
        # Find the text of all matching nodes among the current node's descendants and return an iterator (usable in a for loop)
        """Create text iterator.

        The iterator loops over the element and all subelements in document
        order, returning all inner text.

        """
        tag = self.tag
        if not isinstance(tag, str) and tag is not None:
            return
        if self.text:
            yield self.text
        for e in self:
            yield from e.itertext()
            if e.tail:
                yield e.tail
Overview of Element node methods

Since every node provides the methods above, and the parsing step gave us root (the root node of the XML file), these methods can be used to operate on the XML file.

a. Traverse the entire XML document

from xml.etree import ElementTree as ET

############ Parsing method 1 ############
"""
# Open the file and read the XML content
str_xml = open('xo.xml', 'r').read()

# Parse the string into an XML object; root refers to the root node of the XML document
root = ET.XML(str_xml)
"""
############ Parsing method 2 ############

# Parse the xml file directly
tree = ET.parse("xo.xml")

# Get the root node of the xml file
root = tree.getroot()


### Operations

# Top-level tag
print(root.tag)


# Traverse the second level of the XML document
for child in root:
    # Tag name and attributes of the second-level node
    print(child.tag, child.attrib)
    # Traverse the third level of the XML document
    for i in child:
        # Tag name and text of the third-level node
        print(i.tag, i.text)

 b. Traverse specific nodes in the XML

from xml.etree import ElementTree as ET

############ Parsing method 1 ############
"""
# Open the file and read the XML content
str_xml = open('xo.xml', 'r').read()

# Parse the string into an XML object; root refers to the root node of the XML document
root = ET.XML(str_xml)
"""
############ Parsing method 2 ############

# Parse the xml file directly
tree = ET.parse("xo.xml")

# Get the root node of the xml file
root = tree.getroot()


### Operations

# Top-level tag
print(root.tag)


# Traverse all year nodes in the XML
for node in root.iter('year'):
    # Tag name and text of the node
    print(node.tag, node.text)

c. Modify node content

Modifications to nodes happen in memory and do not affect the file on disk; to persist a change, the in-memory content must be written back to the file.

from xml.etree import ElementTree as ET

############ Parsing method 1 ############

# Open the file and read the XML content
str_xml = open('xo.xml', 'r').read()

# Parse the string into an XML object; root refers to the root node of the XML document
root = ET.XML(str_xml)

############ Operations ############

# Top-level tag
print(root.tag)

# Loop over all year nodes
for node in root.iter('year'):
    # Increment the content of the year node by one
    new_year = int(node.text) + 1
    node.text = str(new_year)

    # Set attributes
    node.set('name', 'alex')
    node.set('age', '18')
    # Delete an attribute
    del node.attrib['name']


############ Save the file ############
tree = ET.ElementTree(root)
tree.write("newnew.xml", encoding='utf-8')
Parse from a string, modify, save
from xml.etree import ElementTree as ET

############ Parsing method 2 ############

# Parse the xml file directly
tree = ET.parse("xo.xml")

# Get the root node of the xml file
root = tree.getroot()

############ Operations ############

# Top-level tag
print(root.tag)

# Loop over all year nodes
for node in root.iter('year'):
    # Increment the content of the year node by one
    new_year = int(node.text) + 1
    node.text = str(new_year)

    # Set attributes
    node.set('name', 'alex')
    node.set('age', '18')
    # Delete an attribute
    del node.attrib['name']


############ Save the file ############
tree.write("newnew.xml", encoding='utf-8')
Parse from a file, modify, save

d. Delete nodes

from xml.etree import ElementTree as ET

############ Open by parsing a string ############

# Open the file and read the XML content
str_xml = open('xo.xml', 'r').read()

# Parse the string into an XML object; root refers to the root node of the XML document
root = ET.XML(str_xml)

############ Operations ############

# Top-level tag
print(root.tag)

# Traverse all country nodes under data
for country in root.findall('country'):
    # Get the content of the rank node under each country node
    rank = int(country.find('rank').text)

    if rank > 50:
        # Remove the matching country node
        root.remove(country)

############ Save the file ############
tree = ET.ElementTree(root)
tree.write("newnew.xml", encoding='utf-8')
Parse from a string, delete, save
from xml.etree import ElementTree as ET

############ Parse from a file ############

# Parse the xml file directly
tree = ET.parse("xo.xml")

# Get the root node of the xml file
root = tree.getroot()

############ Operations ############

# Top-level tag
print(root.tag)

# Traverse all country nodes under data
for country in root.findall('country'):
    # Get the content of the rank node under each country node
    rank = int(country.find('rank').text)

    if rank > 50:
        # Remove the matching country node
        root.remove(country)

############ Save the file ############
tree.write("newnew.xml", encoding='utf-8')
Parse from a file, delete, save

3. Creating XML documents

from xml.etree import ElementTree as ET


# Create the root node
root = ET.Element("famliy")


# Create the elder son node
son1 = ET.Element('son', {'name': '儿1'})
# Create the younger son node
son2 = ET.Element('son', {"name": "儿2"})

# Create two grandsons under the elder son
grandson1 = ET.Element('grandson', {'name': '儿11'})
grandson2 = ET.Element('grandson', {'name': '儿12'})
son1.append(grandson1)
son1.append(grandson2)


# Add the sons to the root node
root.append(son1)
root.append(son2)

tree = ET.ElementTree(root)
tree.write('oooo.xml', encoding='utf-8', short_empty_elements=False)
Creation method (1)
from xml.etree import ElementTree as ET

# Create the root node
root = ET.Element("famliy")


# Create the elder son
# son1 = ET.Element('son', {'name': '儿1'})
son1 = root.makeelement('son', {'name': '儿1'})
# Create the younger son
# son2 = ET.Element('son', {"name": '儿2'})
son2 = root.makeelement('son', {"name": "儿2"})

# Create two grandsons under the elder son
# grandson1 = ET.Element('grandson', {'name': '儿11'})
grandson1 = son1.makeelement('grandson', {'name': '儿11'})
# grandson2 = ET.Element('grandson', {'name': '儿12'})
grandson2 = son1.makeelement('grandson', {'name': '儿12'})

son1.append(grandson1)
son1.append(grandson2)


# Add the sons to the root node
root.append(son1)
root.append(son2)

tree = ET.ElementTree(root)
tree.write('oooo.xml', encoding='utf-8', short_empty_elements=False)
Creation method (2)
from xml.etree import ElementTree as ET


# Create the root node
root = ET.Element("famliy")


# Create the elder son node
son1 = ET.SubElement(root, "son", attrib={'name': '儿1'})
# Create the younger son node
son2 = ET.SubElement(root, "son", attrib={"name": "儿2"})

# Create one grandson under the elder son
grandson1 = ET.SubElement(son1, "age", attrib={'name': '儿11'})
grandson1.text = '孙子'


et = ET.ElementTree(root)  # Generate the document object
et.write("test.xml", encoding="utf-8", xml_declaration=True, short_empty_elements=False)
Creation method (3)

XML saved this way has no indentation by default. To produce indented output, change how the document is saved:

from xml.etree import ElementTree as ET
from xml.dom import minidom


def prettify(elem):
    """Convert a node to a string and add indentation.
    """
    rough_string = ET.tostring(elem, 'utf-8')
    reparsed = minidom.parseString(rough_string)
    return reparsed.toprettyxml(indent="\t")

# Create the root node
root = ET.Element("famliy")


# Create the elder son
# son1 = ET.Element('son', {'name': '儿1'})
son1 = root.makeelement('son', {'name': '儿1'})
# Create the younger son
# son2 = ET.Element('son', {"name": '儿2'})
son2 = root.makeelement('son', {"name": "儿2"})

# Create two grandsons under the elder son
# grandson1 = ET.Element('grandson', {'name': '儿11'})
grandson1 = son1.makeelement('grandson', {'name': '儿11'})
# grandson2 = ET.Element('grandson', {'name': '儿12'})
grandson2 = son1.makeelement('grandson', {'name': '儿12'})

son1.append(grandson1)
son1.append(grandson2)


# Add the sons to the root node
root.append(son1)
root.append(son2)


raw_str = prettify(root)

f = open("xxxoo.xml", 'w', encoding='utf-8')
f.write(raw_str)
f.close()
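On Python 3.9 and newer, ElementTree also ships an indent() helper, so the minidom round trip above can be skipped. A minimal sketch:

from xml.etree import ElementTree as ET

root = ET.Element("famliy")
ET.SubElement(root, "son", attrib={"name": "儿1"})

tree = ET.ElementTree(root)
# ET.indent (Python 3.9+) mutates the tree in place, adding whitespace for pretty-printing
ET.indent(tree, space="\t")
tree.write("indented.xml", encoding="utf-8", xml_declaration=True)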

4. Namespaces

For a more detailed introduction, click here.

from xml.etree import ElementTree as ET

ET.register_namespace('com', "http://www.company.com")  # some name

# build a tree structure
root = ET.Element("{http://www.company.com}STUFF")
body = ET.SubElement(root, "{http://www.company.com}MORE_STUFF", attrib={"{http://www.company.com}hhh": "123"})
body.text = "STUFF EVERYWHERE!"

# wrap it in an ElementTree instance, and save as XML
tree = ET.ElementTree(root)

tree.write("page.xml",
           xml_declaration=True,
           encoding='utf-8',
           method="xml")
Namespaces
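
Reading namespaced XML works the same way, except that fully qualified tag names carry the {uri} prefix. A minimal sketch, assuming the page.xml written above:

from xml.etree import ElementTree as ET

tree = ET.parse("page.xml")
root = tree.getroot()

# Qualified tag names include the namespace URI in braces
print(root.tag)  # {http://www.company.com}STUFF

# find()/findall() accept a prefix-to-URI mapping as their second argument
ns = {"com": "http://www.company.com"}
body = root.find("com:MORE_STUFF", ns)
print(body.text)  # STUFF EVERYWHERE!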

 III. subprocess and system commands

  • os.system

The shell-command functionality of the module and function above (os.system) is also implemented in the subprocess module, which provides richer features.
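
For comparison, a minimal os.system call: it runs the command through the shell and only gives back an exit status, not the command's output.

import os

# stdout/stderr go straight to the terminal; only the status comes back
ret = os.system('ls -l')
print(ret)  # 0 if the command succeeded (the exact status encoding is platform-dependent)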

call 

Run a command and return its status code.

ret = subprocess.call(["ls", "-l"], shell=False)
ret = subprocess.call("ls -l", shell=True)

 check_call

Run a command; if the exit code is 0 it returns 0, otherwise it raises subprocess.CalledProcessError.

subprocess.check_call(["ls", "-l"])
subprocess.check_call("exit 1", shell=True)

 check_output

Run a command; if the exit code is 0 it returns the command's output (as bytes), otherwise it raises subprocess.CalledProcessError.

subprocess.check_output(["echo", "Hello World!"])
subprocess.check_output("exit 1", shell=True)

 subprocess.Popen(...)

Used to execute more complex system commands.

Parameters:

  • args: the shell command, either a string or a sequence (e.g. list or tuple)
  • bufsize: buffering policy; 0 means unbuffered, 1 line-buffered, any other positive value is the buffer size, and a negative value means the system default
  • stdin, stdout, stderr: the program's standard input, output, and error handles
  • preexec_fn: Unix only; a callable that is invoked in the child process just before the command runs
  • close_fds: on Windows, if close_fds is True, the newly created child process will not inherit the parent's input, output, and error handles.
  • Consequently, close_fds cannot be set to True while also redirecting the child's standard input, output, and error (stdin, stdout, stderr).
  • shell: as above
  • cwd: sets the child process's working directory
  • env: specifies the child process's environment variables; if env is None, they are inherited from the parent process
  • universal_newlines: line endings differ across systems; True means \n is used uniformly
  • startupinfo and creationflags are Windows-only
  • They are passed to the underlying CreateProcess() call to set properties of the child process, such as the main window's appearance and the process priority
import subprocess
ret1 = subprocess.Popen(["mkdir","t1"])
ret2 = subprocess.Popen("mkdir t2", shell=True)

Commands typed at a terminal fall into two kinds:

  • those that produce output right away, e.g. ifconfig
  • those that enter an interactive environment and expect further input, e.g. python
import subprocess
ret3 = subprocess.Popen('mkdir t3', shell=True, cwd='/Users/JasonWang/PycharmProjects/sd13/day7')
import subprocess

obj = subprocess.Popen(["python"],stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE,universal_newlines=True)
obj.stdin.write("print(Hello)\n")
obj.stdin.write("print(World)\n")
obj.stdin.close()

cmd_out = obj.stdout.read()
obj.stdout.close()
cmd_error = obj.stderr.read()
obj.stderr.close()

print(cmd_out)
print(cmd_error)
import subprocess

obj = subprocess.Popen(["python"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
obj.stdin.write("print(1)\n")
obj.stdin.write("print(2)")

out_error_list = obj.communicate()
print(out_error_list)
# ('1\n2\n', '')
import subprocess

obj = subprocess.Popen(["python"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
out_error_list = obj.communicate(‘print("hello")‘)
print(out_error_list)#(‘hello\n‘, ‘‘)
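
A minimal sketch of the same one-shot pattern with the exit code and a timeout (the timeout parameter exists since Python 3.3):

import subprocess

obj = subprocess.Popen(["python", "-c", "print('hi')"],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                       universal_newlines=True)
# communicate() waits for the process to exit and returns (stdout, stderr);
# it raises subprocess.TimeoutExpired if the timeout elapses first
out, err = obj.communicate(timeout=10)
print(obj.returncode)  # 0
print(out)             # hi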

 IV. shutil

A high-level module for working with files, directories, and archives.
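
The full annotated source listing follows; first, a quick sketch of the most common calls, using hypothetical paths:

import shutil

# Copy the contents of one open file object into another
with open('a.log', 'r') as fsrc, open('a_copy.log', 'w') as fdst:
    shutil.copyfileobj(fsrc, fdst)

# Copy a file; copy2 also preserves stat info (mode bits, atime, mtime)
shutil.copyfile('a.log', 'b.log')
shutil.copy2('a.log', 'c.log')

# Recursively copy, move, and delete directory trees
shutil.copytree('src_dir', 'dst_dir', ignore=shutil.ignore_patterns('*.pyc'))
shutil.move('dst_dir', 'renamed_dir')
shutil.rmtree('renamed_dir')

# Pack everything under data_dir into archive_name.tar.gz
shutil.make_archive('archive_name', 'gztar', root_dir='data_dir')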

"""Utility functions for copying and archiving files and directory trees.

XXX The functions here don‘t copy the resource fork or other metadata on Mac.

"""

import os
import sys
import stat
import fnmatch
import collections
import errno
import tarfile

try:
    import bz2
    del bz2
    _BZ2_SUPPORTED = True
except ImportError:
    _BZ2_SUPPORTED = False

try:
    import lzma
    del lzma
    _LZMA_SUPPORTED = True
except ImportError:
    _LZMA_SUPPORTED = False

try:
    from pwd import getpwnam
except ImportError:
    getpwnam = None

try:
    from grp import getgrnam
except ImportError:
    getgrnam = None

__all__ = ["copyfileobj", "copyfile", "copymode", "copystat", "copy", "copy2",
           "copytree", "move", "rmtree", "Error", "SpecialFileError",
           "ExecError", "make_archive", "get_archive_formats",
           "register_archive_format", "unregister_archive_format",
           "get_unpack_formats", "register_unpack_format",
           "unregister_unpack_format", "unpack_archive",
           "ignore_patterns", "chown", "which", "get_terminal_size",
           "SameFileError"]
           # disk_usage is added later, if available on the platform

class Error(OSError):
    pass

class SameFileError(Error):
    """Raised when source and destination are the same file."""

class SpecialFileError(OSError):
    """Raised when trying to do a kind of operation (e.g. copying) which is
    not supported on a special file (e.g. a named pipe)"""

class ExecError(OSError):
    """Raised when a command could not be executed"""

class ReadError(OSError):
    """Raised when an archive cannot be read"""

class RegistryError(Exception):
    """Raised when a registry operation with the archiving
    and unpacking registeries fails"""

# Copy the contents of one file object into another
def copyfileobj(fsrc, fdst, length=16*1024):
    """copy data from file-like object fsrc to file-like object fdst"""
    while 1:
        buf = fsrc.read(length)
        if not buf:
            break
        fdst.write(buf)

def _samefile(src, dst):
    # Macintosh, Unix.
    if hasattr(os.path, 'samefile'):
        try:
            return os.path.samefile(src, dst)
        except OSError:
            return False

    # All other platforms: check for same pathname.
    return (os.path.normcase(os.path.abspath(src)) ==
            os.path.normcase(os.path.abspath(dst)))
# Copy a file
def copyfile(src, dst, *, follow_symlinks=True):
    """Copy data from src to dst.

    If follow_symlinks is not set and src is a symbolic link, a new
    symlink will be created instead of copying the file it points to.

    """
    if _samefile(src, dst):
        raise SameFileError("{!r} and {!r} are the same file".format(src, dst))

    for fn in [src, dst]:
        try:
            st = os.stat(fn)
        except OSError:
            # File most likely does not exist
            pass
        else:
            # XXX What about other special files? (sockets, devices...)
            if stat.S_ISFIFO(st.st_mode):
                raise SpecialFileError("`%s` is a named pipe" % fn)

    if not follow_symlinks and os.path.islink(src):
        os.symlink(os.readlink(src), dst)
    else:
        with open(src, 'rb') as fsrc:
            with open(dst, 'wb') as fdst:
                copyfileobj(fsrc, fdst)
    return dst
# Copy only the permission bits; content, group, and owner are unchanged
def copymode(src, dst, *, follow_symlinks=True):
    """Copy mode bits from src to dst.

    If follow_symlinks is not set, symlinks aren‘t followed if and only
    if both `src` and `dst` are symlinks.  If `lchmod` isn‘t available
    (e.g. Linux) this method does nothing.

    """
    if not follow_symlinks and os.path.islink(src) and os.path.islink(dst):
        if hasattr(os, 'lchmod'):
            stat_func, chmod_func = os.lstat, os.lchmod
        else:
            return
    elif hasattr(os, 'chmod'):
        stat_func, chmod_func = os.stat, os.chmod
    else:
        return

    st = stat_func(src)
    chmod_func(dst, stat.S_IMODE(st.st_mode))

if hasattr(os, 'listxattr'):
    def _copyxattr(src, dst, *, follow_symlinks=True):
        """Copy extended filesystem attributes from `src` to `dst`.

        Overwrite existing attributes.

        If `follow_symlinks` is false, symlinks won‘t be followed.

        """

        try:
            names = os.listxattr(src, follow_symlinks=follow_symlinks)
        except OSError as e:
            if e.errno not in (errno.ENOTSUP, errno.ENODATA):
                raise
            return
        for name in names:
            try:
                value = os.getxattr(src, name, follow_symlinks=follow_symlinks)
                os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
            except OSError as e:
                if e.errno not in (errno.EPERM, errno.ENOTSUP, errno.ENODATA):
                    raise
else:
    def _copyxattr(*args, **kwargs):
        pass
# Copy only the stat info: mode bits, atime, mtime, flags
def copystat(src, dst, *, follow_symlinks=True):
    """Copy all stat info (mode bits, atime, mtime, flags) from src to dst.

    If the optional flag `follow_symlinks` is not set, symlinks aren‘t followed if and
    only if both `src` and `dst` are symlinks.

    """
    def _nop(*args, ns=None, follow_symlinks=None):
        pass

    # follow symlinks (aka don‘t not follow symlinks)
    follow = follow_symlinks or not (os.path.islink(src) and os.path.islink(dst))
    if follow:
        # use the real function if it exists
        def lookup(name):
            return getattr(os, name, _nop)
    else:
        # use the real function only if it exists
        # *and* it supports follow_symlinks
        def lookup(name):
            fn = getattr(os, name, _nop)
            if fn in os.supports_follow_symlinks:
                return fn
            return _nop

    st = lookup("stat")(src, follow_symlinks=follow)
    mode = stat.S_IMODE(st.st_mode)
    lookup("utime")(dst, ns=(st.st_atime_ns, st.st_mtime_ns),
        follow_symlinks=follow)
    try:
        lookup("chmod")(dst, mode, follow_symlinks=follow)
    except NotImplementedError:
        # if we got a NotImplementedError, it‘s because
        #   * follow_symlinks=False,
        #   * lchown() is unavailable, and
        #   * either
        #       * fchownat() is unavailable or
        #       * fchownat() doesn‘t implement AT_SYMLINK_NOFOLLOW.
        #         (it returned ENOSUP.)
        # therefore we‘re out of options--we simply cannot chown the
        # symlink.  give up, suppress the error.
        # (which is what shutil always did in this circumstance.)
        pass
    if hasattr(st, 'st_flags'):
        try:
            lookup("chflags")(dst, st.st_flags, follow_symlinks=follow)
        except OSError as why:
            for err in 'EOPNOTSUPP', 'ENOTSUP':
                if hasattr(errno, err) and why.errno == getattr(errno, err):
                    break
            else:
                raise
    _copyxattr(src, dst, follow_symlinks=follow)
# Copy the file and its permission bits
def copy(src, dst, *, follow_symlinks=True):
    """Copy data and mode bits ("cp src dst"). Return the file‘s destination.

    The destination may be a directory.

    If follow_symlinks is false, symlinks won‘t be followed. This
    resembles GNU‘s "cp -P src dst".

    If source and destination are the same file, a SameFileError will be
    raised.

    """
    if os.path.isdir(dst):
        dst = os.path.join(dst, os.path.basename(src))
    copyfile(src, dst, follow_symlinks=follow_symlinks)
    copymode(src, dst, follow_symlinks=follow_symlinks)
    return dst
# Copy the file and its stat info
def copy2(src, dst, *, follow_symlinks=True):
    """Copy data and all stat info ("cp -p src dst"). Return the file‘s
    destination."

    The destination may be a directory.

    If follow_symlinks is false, symlinks won‘t be followed. This
    resembles GNU‘s "cp -P src dst".

    """
    if os.path.isdir(dst):
        dst = os.path.join(dst, os.path.basename(src))
    copyfile(src, dst, follow_symlinks=follow_symlinks)
    copystat(src, dst, follow_symlinks=follow_symlinks)
    return dst

def ignore_patterns(*patterns):
    """Function that can be used as copytree() ignore parameter.

    Patterns is a sequence of glob-style patterns
    that are used to exclude files"""
    def _ignore_patterns(path, names):
        ignored_names = []
        for pattern in patterns:
            ignored_names.extend(fnmatch.filter(names, pattern))
        return set(ignored_names)
    return _ignore_patterns
# Recursively copy a directory tree
def copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2,
             ignore_dangling_symlinks=False):
    """Recursively copy a directory tree.

    The destination directory must not already exist.
    If exception(s) occur, an Error is raised with a list of reasons.

    If the optional symlinks flag is true, symbolic links in the
    source tree result in symbolic links in the destination tree; if
    it is false, the contents of the files pointed to by symbolic
    links are copied. If the file pointed by the symlink doesn‘t
    exist, an exception will be added in the list of errors raised in
    an Error exception at the end of the copy process.

    You can set the optional ignore_dangling_symlinks flag to true if you
    want to silence this exception. Notice that this has no effect on
    platforms that don‘t support os.symlink.

    The optional ignore argument is a callable. If given, it
    is called with the `src` parameter, which is the directory
    being visited by copytree(), and `names` which is the list of
    `src` contents, as returned by os.listdir():

        callable(src, names) -> ignored_names

    Since copytree() is called recursively, the callable will be
    called once for each directory that is copied. It returns a
    list of names relative to the `src` directory that should
    not be copied.

    The optional copy_function argument is a callable that will be used
    to copy each file. It will be called with the source path and the
    destination path as arguments. By default, copy2() is used, but any
    function that supports the same signature (like copy()) can be used.

    """
    names = os.listdir(src)
    if ignore is not None:
        ignored_names = ignore(src, names)
    else:
        ignored_names = set()

    os.makedirs(dst)
    errors = []
    for name in names:
        if name in ignored_names:
            continue
        srcname = os.path.join(src, name)
        dstname = os.path.join(dst, name)
        try:
            if os.path.islink(srcname):
                linkto = os.readlink(srcname)
                if symlinks:
                    # We can‘t just leave it to `copy_function` because legacy
                    # code with a custom `copy_function` may rely on copytree
                    # doing the right thing.
                    os.symlink(linkto, dstname)
                    copystat(srcname, dstname, follow_symlinks=not symlinks)
                else:
                    # ignore dangling symlink if the flag is on
                    if not os.path.exists(linkto) and ignore_dangling_symlinks:
                        continue
                    # otherwise let the copy occurs. copy2 will raise an error
                    if os.path.isdir(srcname):
                        copytree(srcname, dstname, symlinks, ignore,
                                 copy_function)
                    else:
                        copy_function(srcname, dstname)
            elif os.path.isdir(srcname):
                copytree(srcname, dstname, symlinks, ignore, copy_function)
            else:
                # Will raise a SpecialFileError for unsupported file types
                copy_function(srcname, dstname)
        # catch the Error from the recursive copytree so that we can
        # continue with other files
        except Error as err:
            errors.extend(err.args[0])
        except OSError as why:
            errors.append((srcname, dstname, str(why)))
    try:
        copystat(src, dst)
    except OSError as why:
        # Copying file access times may fail on Windows
        if getattr(why, 'winerror', None) is None:
            errors.append((src, dst, str(why)))
    if errors:
        raise Error(errors)
    return dst

# version vulnerable to race conditions
def _rmtree_unsafe(path, onerror):
    try:
        if os.path.islink(path):
            # symlinks to directories are forbidden, see bug #1669
            raise OSError("Cannot call rmtree on a symbolic link")
    except OSError:
        onerror(os.path.islink, path, sys.exc_info())
        # can‘t continue even if onerror hook returns
        return
    names = []
    try:
        names = os.listdir(path)
    except OSError:
        onerror(os.listdir, path, sys.exc_info())
    for name in names:
        fullname = os.path.join(path, name)
        try:
            mode = os.lstat(fullname).st_mode
        except OSError:
            mode = 0
        if stat.S_ISDIR(mode):
            _rmtree_unsafe(fullname, onerror)
        else:
            try:
                os.unlink(fullname)
            except OSError:
                onerror(os.unlink, fullname, sys.exc_info())
    try:
        os.rmdir(path)
    except OSError:
        onerror(os.rmdir, path, sys.exc_info())

# Version using fd-based APIs to protect against races
def _rmtree_safe_fd(topfd, path, onerror):
    names = []
    try:
        names = os.listdir(topfd)
    except OSError as err:
        err.filename = path
        onerror(os.listdir, path, sys.exc_info())
    for name in names:
        fullname = os.path.join(path, name)
        try:
            orig_st = os.stat(name, dir_fd=topfd, follow_symlinks=False)
            mode = orig_st.st_mode
        except OSError:
            mode = 0
        if stat.S_ISDIR(mode):
            try:
                dirfd = os.open(name, os.O_RDONLY, dir_fd=topfd)
            except OSError:
                onerror(os.open, fullname, sys.exc_info())
            else:
                try:
                    if os.path.samestat(orig_st, os.fstat(dirfd)):
                        _rmtree_safe_fd(dirfd, fullname, onerror)
                        try:
                            os.rmdir(name, dir_fd=topfd)
                        except OSError:
                            onerror(os.rmdir, fullname, sys.exc_info())
                    else:
                        try:
                            # This can only happen if someone replaces
                            # a directory with a symlink after the call to
                            # stat.S_ISDIR above.
                            raise OSError("Cannot call rmtree on a symbolic "
                                          "link")
                        except OSError:
                            onerror(os.path.islink, fullname, sys.exc_info())
                finally:
                    os.close(dirfd)
        else:
            try:
                os.unlink(name, dir_fd=topfd)
            except OSError:
                onerror(os.unlink, fullname, sys.exc_info())

_use_fd_functions = ({os.open, os.stat, os.unlink, os.rmdir} <=
                     os.supports_dir_fd and
                     os.listdir in os.supports_fd and
                     os.stat in os.supports_follow_symlinks)
# Recursively delete a directory tree
def rmtree(path, ignore_errors=False, onerror=None):
    """Recursively delete a directory tree.

    If ignore_errors is set, errors are ignored; otherwise, if onerror
    is set, it is called to handle the error with arguments (func,
    path, exc_info) where func is platform and implementation dependent;
    path is the argument to that function that caused it to fail; and
    exc_info is a tuple returned by sys.exc_info().  If ignore_errors
    is false and onerror is None, an exception is raised.

    """
    if ignore_errors:
        def onerror(*args):
            pass
    elif onerror is None:
        def onerror(*args):
            raise
    if _use_fd_functions:
        # While the unsafe rmtree works fine on bytes, the fd based does not.
        if isinstance(path, bytes):
            path = os.fsdecode(path)
        # Note: To guard against symlink races, we use the standard
        # lstat()/open()/fstat() trick.
        try:
            orig_st = os.lstat(path)
        except Exception:
            onerror(os.lstat, path, sys.exc_info())
            return
        try:
            fd = os.open(path, os.O_RDONLY)
        except Exception:
            onerror(os.lstat, path, sys.exc_info())
            return
        try:
            if os.path.samestat(orig_st, os.fstat(fd)):
                _rmtree_safe_fd(fd, path, onerror)
                try:
                    os.rmdir(path)
                except OSError:
                    onerror(os.rmdir, path, sys.exc_info())
            else:
                try:
                    # symlinks to directories are forbidden, see bug #1669
                    raise OSError("Cannot call rmtree on a symbolic link")
                except OSError:
                    onerror(os.path.islink, path, sys.exc_info())
        finally:
            os.close(fd)
    else:
        return _rmtree_unsafe(path, onerror)

# Allow introspection of whether or not the hardening against symlink
# attacks is supported on the current platform
rmtree.avoids_symlink_attacks = _use_fd_functions

def _basename(path):
    # A basename() variant which first strips the trailing slash, if present.
    # Thus we always get the last component of the path, even for directories.
    sep = os.path.sep + (os.path.altsep or '')
    return os.path.basename(path.rstrip(sep))
# Recursively move a file or directory, similar to the mv command (essentially a rename when possible)
def move(src, dst, copy_function=copy2):
    """Recursively move a file or directory to another location. This is
    similar to the Unix "mv" command. Return the file or directory‘s
    destination.

    If the destination is a directory or a symlink to a directory, the source
    is moved inside the directory. The destination path must not already
    exist.

    If the destination already exists but is not a directory, it may be
    overwritten depending on os.rename() semantics.

    If the destination is on our current filesystem, then rename() is used.
    Otherwise, src is copied to the destination and then removed. Symlinks are
    recreated under the new name if os.rename() fails because of cross
    filesystem renames.

    The optional `copy_function` argument is a callable that will be used
    to copy the source or it will be delegated to `copytree`.
    By default, copy2() is used, but any function that supports the same
    signature (like copy()) can be used.

    A lot more could be done here...  A look at a mv.c shows a lot of
    the issues this implementation glosses over.

    """
    real_dst = dst
    if os.path.isdir(dst):
        if _samefile(src, dst):
            # We might be on a case insensitive filesystem,
            # perform the rename anyway.
            os.rename(src, dst)
            return

        real_dst = os.path.join(dst, _basename(src))
        if os.path.exists(real_dst):
            raise Error("Destination path ‘%s‘ already exists" % real_dst)
    try:
        os.rename(src, real_dst)
    except OSError:
        if os.path.islink(src):
            linkto = os.readlink(src)
            os.symlink(linkto, real_dst)
            os.unlink(src)
        elif os.path.isdir(src):
            if _destinsrc(src, dst):
                raise Error("Cannot move a directory ‘%s‘ into itself"
                            " ‘%s‘." % (src, dst))
            copytree(src, real_dst, copy_function=copy_function,
                     symlinks=True)
            rmtree(src)
        else:
            copy_function(src, real_dst)
            os.unlink(src)
    return real_dst

def _destinsrc(src, dst):
    src = os.path.abspath(src)
    dst = os.path.abspath(dst)
    if not src.endswith(os.path.sep):
        src += os.path.sep
    if not dst.endswith(os.path.sep):
        dst += os.path.sep
    return dst.startswith(src)

def _get_gid(name):
    """Returns a gid, given a group name."""
    if getgrnam is None or name is None:
        return None
    try:
        result = getgrnam(name)
    except KeyError:
        result = None
    if result is not None:
        return result[2]
    return None

def _get_uid(name):
    """Returns an uid, given a user name."""
    if getpwnam is None or name is None:
        return None
    try:
        result = getpwnam(name)
    except KeyError:
        result = None
    if result is not None:
        return result[2]
    return None

def _make_tarball(base_name, base_dir, compress="gzip", verbose=0, dry_run=0,
                  owner=None, group=None, logger=None):
    """Create a (possibly compressed) tar file from all the files under
    ‘base_dir‘.

    ‘compress‘ must be "gzip" (the default), "bzip2", "xz", or None.

    ‘owner‘ and ‘group‘ can be used to define an owner and a group for the
    archive that is being built. If not provided, the current owner and group
    will be used.

    The output tar file will be named ‘base_name‘ +  ".tar", possibly plus
    the appropriate compression extension (".gz", ".bz2", or ".xz").

    Returns the output filename.
    """
    tar_compression = {'gzip': 'gz', None: ''}
    compress_ext = {'gzip': '.gz'}

    if _BZ2_SUPPORTED:
        tar_compression['bzip2'] = 'bz2'
        compress_ext['bzip2'] = '.bz2'

    if _LZMA_SUPPORTED:
        tar_compression['xz'] = 'xz'
        compress_ext['xz'] = '.xz'

    # flags for compression program, each element of list will be an argument
    if compress is not None and compress not in compress_ext:
        raise ValueError("bad value for ‘compress‘, or compression format not "
                         "supported : {0}".format(compress))

    archive_name = base_name + '.tar' + compress_ext.get(compress, '')
    archive_dir = os.path.dirname(archive_name)

    if archive_dir and not os.path.exists(archive_dir):
        if logger is not None:
            logger.info("creating %s", archive_dir)
        if not dry_run:
            os.makedirs(archive_dir)

    # creating the tarball
    if logger is not None:
        logger.info('Creating tar archive')

    uid = _get_uid(owner)
    gid = _get_gid(group)

    def _set_uid_gid(tarinfo):
        if gid is not None:
            tarinfo.gid = gid
            tarinfo.gname = group
        if uid is not None:
            tarinfo.uid = uid
            tarinfo.uname = owner
        return tarinfo

    if not dry_run:
        tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress])
        try:
            tar.add(base_dir, filter=_set_uid_gid)
        finally:
            tar.close()

    return archive_name

def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None):
    """Create a zip file from all the files under ‘base_dir‘.

    The output zip file will be named ‘base_name‘ + ".zip".  Uses either the
    "zipfile" Python module (if available) or the InfoZIP "zip" utility
    (if installed and found on the default search path).  If neither tool is
    available, raises ExecError.  Returns the name of the output zip
    file.
    """
    import zipfile

    zip_filename = base_name + ".zip"
    archive_dir = os.path.dirname(base_name)

    if archive_dir and not os.path.exists(archive_dir):
        if logger is not None:
            logger.info("creating %s", archive_dir)
        if not dry_run:
            os.makedirs(archive_dir)

    if logger is not None:
        logger.info("creating ‘%s‘ and adding ‘%s‘ to it",
                    zip_filename, base_dir)

    if not dry_run:
        with zipfile.ZipFile(zip_filename, "w",
                             compression=zipfile.ZIP_DEFLATED) as zf:
            for dirpath, dirnames, filenames in os.walk(base_dir):
                for name in filenames:
                    path = os.path.normpath(os.path.join(dirpath, name))
                    if os.path.isfile(path):
                        zf.write(path, path)
                        if logger is not None:
                            logger.info("adding ‘%s‘", path)

    return zip_filename

_ARCHIVE_FORMATS = {
    'gztar': (_make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"),
    'tar':   (_make_tarball, [('compress', None)], "uncompressed tar file"),
    'zip':   (_make_zipfile, [], "ZIP file")
    }

if _BZ2_SUPPORTED:
    _ARCHIVE_FORMATS['bztar'] = (_make_tarball, [('compress', 'bzip2')],
                                "bzip2'ed tar-file")

if _LZMA_SUPPORTED:
    _ARCHIVE_FORMATS['xztar'] = (_make_tarball, [('compress', 'xz')],
                                "xz'ed tar-file")

def get_archive_formats():
    """Returns a list of supported formats for archiving and unarchiving.

    Each element of the returned sequence is a tuple (name, description)
    """
    formats = [(name, registry[2]) for name, registry in
               _ARCHIVE_FORMATS.items()]
    formats.sort()
    return formats

def register_archive_format(name, function, extra_args=None, description=''):
    """Registers an archive format.

    name is the name of the format. function is the callable that will be
    used to create archives. If provided, extra_args is a sequence of
    (name, value) tuples that will be passed as arguments to the callable.
    description can be provided to describe the format, and will be returned
    by the get_archive_formats() function.
    """
    if extra_args is None:
        extra_args = []
    if not callable(function):
        raise TypeError('The %s object is not callable' % function)
    if not isinstance(extra_args, (tuple, list)):
        raise TypeError('extra_args needs to be a sequence')
    for element in extra_args:
        if not isinstance(element, (tuple, list)) or len(element) !=2:
            raise TypeError('extra_args elements are : (arg_name, value)')

    _ARCHIVE_FORMATS[name] = (function, extra_args, description)
# Create an archive (e.g. zip or tar) and return its file path -- see make_archive below
def unregister_archive_format(name):
    del _ARCHIVE_FORMATS[name]

def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0,
                 dry_run=0, owner=None, group=None, logger=None):
    """Create an archive file (eg. zip or tar).

    ‘base_name‘ is the name of the file to create, minus any format-specific
    extension; ‘format‘ is the archive format: one of "zip", "tar", "bztar"
    or "gztar".

    ‘root_dir‘ is a directory that will be the root directory of the
    archive; ie. we typically chdir into ‘root_dir‘ before creating the
    archive.  ‘base_dir‘ is the directory where we start archiving from;
    ie. ‘base_dir‘ will be the common prefix of all files and
    directories in the archive.  ‘root_dir‘ and ‘base_dir‘ both default
    to the current directory.  Returns the name of the archive file.

    ‘owner‘ and ‘group‘ are used when creating a tar archive. By default,
    uses the current owner and group.
    """
    save_cwd = os.getcwd()
    if root_dir is not None:
        if logger is not None:
            logger.debug("changing into ‘%s‘", root_dir)
        base_name = os.path.abspath(base_name)
        if not dry_run:
            os.chdir(root_dir)

    if base_dir is None:
        base_dir = os.curdir

    kwargs = {'dry_run': dry_run, 'logger': logger}

    try:
        format_info = _ARCHIVE_FORMATS[format]
    except KeyError:
        raise ValueError("unknown archive format ‘%s‘" % format)

    func = format_info[0]
    for arg, val in format_info[1]:
        kwargs[arg] = val

    if format != 'zip':
        kwargs['owner'] = owner
        kwargs['group'] = group

    try:
        filename = func(base_name, base_dir, **kwargs)
    finally:
        if root_dir is not None:
            if logger is not None:
                logger.debug("changing back to ‘%s‘", save_cwd)
            os.chdir(save_cwd)

    return filename


def get_unpack_formats():
    """Returns a list of supported formats for unpacking.

    Each element of the returned sequence is a tuple
    (name, extensions, description)
    """
    formats = [(name, info[0], info[3]) for name, info in
               _UNPACK_FORMATS.items()]
    formats.sort()
    return formats

def _check_unpack_options(extensions, function, extra_args):
    """Checks what gets registered as an unpacker."""
    # first make sure no other unpacker is registered for this extension
    existing_extensions = {}
    for name, info in _UNPACK_FORMATS.items():
        for ext in info[0]:
            existing_extensions[ext] = name

    for extension in extensions:
        if extension in existing_extensions:
            msg = '%s is already registered for "%s"'
            raise RegistryError(msg % (extension,
                                       existing_extensions[extension]))

    if not callable(function):
        raise TypeError('The registered function must be a callable')


def register_unpack_format(name, extensions, function, extra_args=None,
                           description=''):
    """Registers an unpack format.

    `name` is the name of the format. `extensions` is a list of extensions
    corresponding to the format.

    `function` is the callable that will be
    used to unpack archives. The callable will receive archives to unpack.
    If it‘s unable to handle an archive, it needs to raise a ReadError
    exception.

    If provided, `extra_args` is a sequence of
    (name, value) tuples that will be passed as arguments to the callable.
    description can be provided to describe the format, and will be returned
    by the get_unpack_formats() function.
    """
    if extra_args is None:
        extra_args = []
    _check_unpack_options(extensions, function, extra_args)
    _UNPACK_FORMATS[name] = extensions, function, extra_args, description

def unregister_unpack_format(name):
    """Removes the pack format from the registery."""
    del _UNPACK_FORMATS[name]

def _ensure_directory(path):
    """Ensure that the parent directory of `path` exists"""
    dirname = os.path.dirname(path)
    if not os.path.isdir(dirname):
        os.makedirs(dirname)

def _unpack_zipfile(filename, extract_dir):
    """Unpack zip `filename` to `extract_dir`
    """
    try:
        import zipfile
    except ImportError:
        raise ReadError('zlib not supported, cannot unpack this archive.')

    if not zipfile.is_zipfile(filename):
        raise ReadError("%s is not a zip file" % filename)

    zip = zipfile.ZipFile(filename)
    try:
        for info in zip.infolist():
            name = info.filename

            # don‘t extract absolute paths or ones with .. in them
            if name.startswith('/') or '..' in name:
                continue

            target = os.path.join(extract_dir, *name.split('/'))
            if not target:
                continue

            _ensure_directory(target)
            if not name.endswith('/'):
                # file
                data = zip.read(info.filename)
                f = open(target, 'wb')
                try:
                    f.write(data)
                finally:
                    f.close()
                    del data
    finally:
        zip.close()

def _unpack_tarfile(filename, extract_dir):
    """Unpack tar/tar.gz/tar.bz2/tar.xz `filename` to `extract_dir`
    """
    try:
        tarobj = tarfile.open(filename)
    except tarfile.TarError:
        raise ReadError(
            "%s is not a compressed or uncompressed tar file" % filename)
    try:
        tarobj.extractall(extract_dir)
    finally:
        tarobj.close()

_UNPACK_FORMATS = {
    'gztar': (['.tar.gz', '.tgz'], _unpack_tarfile, [], "gzip'ed tar-file"),
    'tar':   (['.tar'], _unpack_tarfile, [], "uncompressed tar file"),
    'zip':   (['.zip'], _unpack_zipfile, [], "ZIP file")
    }

if _BZ2_SUPPORTED:
    _UNPACK_FORMATS['bztar'] = (['.tar.bz2', '.tbz2'], _unpack_tarfile, [],
                                "bzip2'ed tar-file")

if _LZMA_SUPPORTED:
    _UNPACK_FORMATS['xztar'] = (['.tar.xz', '.txz'], _unpack_tarfile, [],
                                "xz'ed tar-file")

def _find_unpack_format(filename):
    for name, info in _UNPACK_FORMATS.items():
        for extension in info[0]:
            if filename.endswith(extension):
                return name
    return None

def unpack_archive(filename, extract_dir=None, format=None):
    """Unpack an archive.

    `filename` is the name of the archive.

    `extract_dir` is the name of the target directory, where the archive
    is unpacked. If not provided, the current working directory is used.

    `format` is the archive format: one of "zip", "tar", or "gztar". Or any
    other registered format. If not provided, unpack_archive will use the
    filename extension and see if an unpacker was registered for that
    extension.

    In case none is found, a ValueError is raised.
    """
    if extract_dir is None:
        extract_dir = os.getcwd()

    if format is not None:
        try:
            format_info = _UNPACK_FORMATS[format]
        except KeyError:
            raise ValueError("Unknown unpack format ‘{0}‘".format(format))

        func = format_info[1]
        func(filename, extract_dir, **dict(format_info[2]))
    else:
        # we need to look at the registered unpackers supported extensions
        format = _find_unpack_format(filename)
        if format is None:
            raise ReadError("Unknown archive format ‘{0}‘".format(filename))

        func = _UNPACK_FORMATS[format][1]
        kwargs = dict(_UNPACK_FORMATS[format][2])
        func(filename, extract_dir, **kwargs)


if hasattr(os, 'statvfs'):

    __all__.append('disk_usage')
    _ntuple_diskusage = collections.namedtuple('usage', 'total used free')

    def disk_usage(path):
        """Return disk usage statistics about the given path.

        Returned value is a named tuple with attributes ‘total‘, ‘used‘ and
        ‘free‘, which are the amount of total, used and free space, in bytes.
        """
        st = os.statvfs(path)
        free = st.f_bavail * st.f_frsize
        total = st.f_blocks * st.f_frsize
        used = (st.f_blocks - st.f_bfree) * st.f_frsize
        return _ntuple_diskusage(total, used, free)

elif os.name == 'nt':

    import nt
    __all__.append('disk_usage')
    _ntuple_diskusage = collections.namedtuple('usage', 'total used free')

    def disk_usage(path):
        """Return disk usage statistics about the given path.

        Returned values is a named tuple with attributes ‘total‘, ‘used‘ and
        ‘free‘, which are the amount of total, used and free space, in bytes.
        """
        total, free = nt._getdiskusage(path)
        used = total - free
        return _ntuple_diskusage(total, used, free)


def chown(path, user=None, group=None):
    """Change owner user and group of the given path.

    user and group can be the uid/gid or the user/group names, and in that case,
    they are converted to their respective uid/gid.
    """

    if user is None and group is None:
        raise ValueError("user and/or group must be set")

    _user = user
    _group = group

    # -1 means don't change it
    if user is None:
        _user = -1
    # user can either be an int (the uid) or a string (the system username)
    elif isinstance(user, str):
        _user = _get_uid(user)
        if _user is None:
            raise LookupError("no such user: {!r}".format(user))

    if group is None:
        _group = -1
    elif not isinstance(group, int):
        _group = _get_gid(group)
        if _group is None:
            raise LookupError("no such group: {!r}".format(group))

    os.chown(path, _user, _group)

def get_terminal_size(fallback=(80, 24)):
    """Get the size of the terminal window.

    For each of the two dimensions, the environment variable, COLUMNS
    and LINES respectively, is checked. If the variable is defined and
    the value is a positive integer, it is used.

    When COLUMNS or LINES is not defined, which is the common case,
    the terminal connected to sys.__stdout__ is queried
    by invoking os.get_terminal_size.

    If the terminal size cannot be successfully queried, either because
    the system doesn't support querying, or because we are not
    connected to a terminal, the value given in fallback parameter
    is used. Fallback defaults to (80, 24) which is the default
    size used by many terminal emulators.

    The value returned is a named tuple of type os.terminal_size.
    """
    # columns, lines are the working values
    try:
        columns = int(os.environ['COLUMNS'])
    except (KeyError, ValueError):
        columns = 0

    try:
        lines = int(os.environ['LINES'])
    except (KeyError, ValueError):
        lines = 0

    # only query if necessary
    if columns <= 0 or lines <= 0:
        try:
            size = os.get_terminal_size(sys.__stdout__.fileno())
        except (NameError, OSError):
            size = os.terminal_size(fallback)
        if columns <= 0:
            columns = size.columns
        if lines <= 0:
            lines = size.lines

    return os.terminal_size((columns, lines))

def which(cmd, mode=os.F_OK | os.X_OK, path=None):
    """Given a command, mode, and a PATH string, return the path which
    conforms to the given mode on the PATH, or None if there is no such
    file.

    `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
    of os.environ.get("PATH"), or can be overridden with a custom search
    path.

    """
    # Check that a given file can be accessed with the correct mode.
    # Additionally check that `file` is not a directory, as on Windows
    # directories pass the os.access check.
    def _access_check(fn, mode):
        return (os.path.exists(fn) and os.access(fn, mode)
                and not os.path.isdir(fn))

    # If we're given a path with a directory part, look it up directly rather
    # than referring to PATH directories. This includes checking relative to the
    # current directory, e.g. ./script
    if os.path.dirname(cmd):
        if _access_check(cmd, mode):
            return cmd
        return None

    if path is None:
        path = os.environ.get("PATH", os.defpath)
    if not path:
        return None
    path = path.split(os.pathsep)

    if sys.platform == "win32":
        # The current directory takes precedence on Windows.
        if not os.curdir in path:
            path.insert(0, os.curdir)

        # PATHEXT is necessary to check on Windows.
        pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
        # See if the given file matches any of the expected path extensions.
        # This will allow us to short circuit when given "python.exe".
        # If it does match, only test that one, otherwise we have to try
        # others.
        if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
            files = [cmd]
        else:
            files = [cmd + ext for ext in pathext]
    else:
        # On other platforms you don't have things like PATHEXT to tell you
        # what file suffixes are executable, so just pass on cmd as-is.
        files = [cmd]

    seen = set()
    for dir in path:
        normdir = os.path.normcase(dir)
        if not normdir in seen:
            seen.add(normdir)
            for thefile in files:
                name = os.path.join(dir, thefile)
                if _access_check(name, mode):
                    return name
    return None
shutil module source (excerpt)
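
Beyond the copy helpers documented below, the source above also defines disk_usage, which and get_terminal_size; a minimal usage sketch (the path and command name here are just examples):

import shutil

# Disk usage of the root filesystem, as a named tuple of byte counts
print(shutil.disk_usage('/'))        # usage(total=..., used=..., free=...)

# Locate an executable on PATH; returns its full path, or None if not found
print(shutil.which('python3'))

# Terminal size, falling back to 80x24 when not attached to a terminal
print(shutil.get_terminal_size())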

 shutil.copyfileobj(fsrc, fdst[, length])
Copy the contents of one file object into another.

import shutil

shutil.copyfileobj(open('new1.xml', 'r'), open('new2.xml', 'w'))

 shutil.copyfile(src, dst)
Copy the contents of a file to another file.

shutil.copyfile('new.xml', 'new4.xml')

 shutil.copymode(src, dst)

Copy only the permission bits; the file content, group and owner are unchanged.

shutil.copymode('new.xml', 'new4.xml')
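
 shutil.copystat(src, dst)
Copy only the file's metadata: the permission bits, last access time, last modification time and flags. The content, owner and group are left untouched.

shutil.copystat('new.xml', 'new4.xml')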

 shutil.copy(src, dst)
Copy the file data and the permission bits.

import shutil
shutil.copy('new.xml', 'new4.xml')

 shutil.copy2(src, dst)
Copy the file data along with its metadata (timestamps and so on).

shutil.copy2('new.xml', 'new4.xml')

 shutil.ignore_patterns(*patterns)
shutil.copytree(src, dst, symlinks=False, ignore=None)
Recursively copy a directory tree.

import shutil

shutil.copytree('folder1', 'folder2', ignore=shutil.ignore_patterns('*.pyc', 'tmp*'))
shutil.copytree('/Users/JasonWang/PycharmProjects/sd13/day7', 'tt', symlinks=True, ignore=shutil.ignore_patterns('*.pyc', 'tmp*'))

 shutil.rmtree(path[, ignore_errors[, onerror]])
Recursively delete a directory tree.

shutil.rmtree('tt')

 shutil.move(src, dst)
Recursively move a file or directory; similar to the mv command, and on the same filesystem it is essentially a rename.

import shutil

shutil.move('folder1', 'folder3')

shutil.make_archive(base_name, format,...)

Create an archive (e.g. zip or tar) and return its path.

  • base_name: the archive's file name, or a path to it. With a bare file name the archive is saved in the current directory; with a path it is saved there,
  • e.g. www                        => saved in the current directory
  • e.g. /Users/wupeiqi/www => saved under /Users/wupeiqi/
  • format: the archive type: "zip", "tar", "bztar" or "gztar"
  • root_dir: the directory to archive (defaults to the current directory)
  • owner: owner, defaults to the current user
  • group: group, defaults to the current group
  • logger: used for logging, usually a logging.Logger object
# Archive the contents of /Users/JasonWang/PycharmProjects/sd13/day7 and place the archive in the current program directory
ret = shutil.make_archive('tt_arch', 'gztar', root_dir='/Users/JasonWang/PycharmProjects/sd13/day7')

# Archive the contents of /Users/JasonWang/PycharmProjects/sd13/day7 and save it as tt.tar.bz2 under /Users/JasonWang/PycharmProjects/sd13/day7
ret = shutil.make_archive('/Users/JasonWang/PycharmProjects/sd13/day7/tt', 'bztar', root_dir='/Users/JasonWang/PycharmProjects/sd13/day7')
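
shutil.unpack_archive(filename, extract_dir=None, format=None)

The reverse operation (its source is shown above): unpack an archive, inferring the format from the file extension when format is not given. A minimal sketch reusing the gztar archive created above; the extract_dir path is just an example:

import shutil

# Unpack tt_arch.tar.gz into the given directory (format inferred from the .tar.gz extension)
shutil.unpack_archive('tt_arch.tar.gz', extract_dir='/Users/JasonWang/PycharmProjects/sd13/day7/tt_unpacked')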

 Internally, shutil handles archives by calling the ZipFile and TarFile modules; in detail:

import zipfile
# compress
z = zipfile.ZipFile('test.zip', 'w')
z.write('newnew1.xml')
z.write('new2.xml')
z.close()
# decompress
z = zipfile.ZipFile('test.zip', 'r')
z.extract('newnew1.xml', path='/Users/JasonWang/PycharmProjects/sd13/day7/')
z.close()
zipfile compression and decompression
import tarfile
# compress
tar = tarfile.open('my.tar', 'w')
tar.add('newnew1.xml', arcname='newnew1.xml')
tar.add('/Users/JasonWang/PycharmProjects/sd13/day7/tt/new4.xml', arcname='new4.xml')
tar.close()
# decompress
tar = tarfile.open('my.tar', 'r')
# tar.extractall()
tar.extract('newnew1.xml')
tar.close()
tarfile compression and decompression

五、paramiko

 paramiko is a module for remote control: with it you can run commands on a remote server and transfer files to and from it over SSH. It is worth noting that the remote management inside fabric and ansible is implemented on top of paramiko.

1、Download and install

paramiko depends on pycrypto internally, so download and install pycrypto first:
pip3 install pycrypto
pip3 install paramiko

 2、Using the module

#!/usr/bin/env python
#coding:utf-8

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.108', 22, 'alex', '123')
stdin, stdout, stderr = ssh.exec_command('df')
print(stdout.read())
ssh.close()
Run a command - username + password
import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('hostname', port, 'username', pkey=key)  # fill in the host name, port and user name

stdin, stdout, stderr = ssh.exec_command('df')
print(stdout.read())
ssh.close()
Run a command - private key
import os, sys
import paramiko

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', password='123')
sftp = paramiko.SFTPClient.from_transport(t)
sftp.put('/tmp/test.py', '/tmp/test.py')    # upload the local file to the remote path
t.close()


import os, sys
import paramiko

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', password='123')
sftp = paramiko.SFTPClient.from_transport(t)
sftp.get('/tmp/test.py', '/tmp/test2.py')   # download the remote file to a local path
t.close()
Upload or download files - username + password
import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', pkey=key)

sftp = paramiko.SFTPClient.from_transport(t)
sftp.put('/tmp/test3.py', '/tmp/test3.py')   # upload

t.close()

import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', pkey=key)

sftp = paramiko.SFTPClient.from_transport(t)
sftp.get('/tmp/test3.py', '/tmp/test4.py')   # download

t.close()
Upload or download files - private key
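
If an SSHClient connection is already open, an SFTP session can also be obtained directly from it with open_sftp() instead of building a Transport by hand. A minimal sketch, assuming the same placeholder host and credentials as in the examples above:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.108', 22, 'alex', '123')

sftp = ssh.open_sftp()                      # SFTP session over the existing SSH connection
sftp.put('/tmp/test.py', '/tmp/test.py')    # upload
sftp.close()
ssh.close()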

六、The shelve module

The shelve module is an extension of pickle: it lets you access data persisted via pickle serialization in a key/value fashion.

import shelve
sw = shelve.open('shelve_test.pkl')  # create a shelve object
name = ['12', '34', '56', 7]         # a list
dict_test = {'k1': 'v1', 'k2': 'v2'}
sw['name'] = name                    # persist the list
sw['dict_test'] = dict_test
sw.close()                           # close the file; this is required

sr = shelve.open('shelve_test.pkl')
print(sr['name'])                    # read the list back
print(sr['dict_test'])               # read the dict back
sr.close()

Notes:

1、shelve is really just an extension of pickle: you can read persisted data back directly by key, instead of reading objects out one by one in the order they were written, as with plain pickle.

2、The 'name' in sw['name'] is simply the key, a name you define yourself; when reading, that key is all you need to get the value back.

3、shelve_test.pkl is not necessarily the final file name; depending on the underlying dbm backend, shelve generates files with the following three suffixes:

shelve_test.pkl.bak, shelve_test.pkl.dat, shelve_test.pkl.dir
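
One more caveat: mutating an object that is already stored in a shelf is not written back automatically unless the shelf is opened with writeback=True. A minimal sketch:

import shelve

s = shelve.open('shelve_test.pkl', writeback=True)  # cache entries and write them back on sync/close
s['name'] = ['12', '34']
s['name'].append('56')   # persisted at close only because writeback=True is set
s.close()

s = shelve.open('shelve_test.pkl')
print(s['name'])         # ['12', '34', '56']
s.close()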





 

Original post: http://www.cnblogs.com/jasonwang-2016/p/5615188.html
