Background:
Device Mapper is a kernel-based framework that underpins many advanced volume management technologies on Linux. Docker's devicemapper storage driver leverages the thin provisioning and snapshotting capabilities of this framework for image and container management. This article refers to the Device Mapper storage driver as devicemapper, and to the kernel framework as Device Mapper.
Docker originally ran on Ubuntu and Debian Linux and used AUFS for its storage backend. As Docker became popular, many of the companies that wanted to use it were running Red Hat Enterprise Linux (RHEL). Unfortunately, because the upstream mainline Linux kernel did not include AUFS, RHEL did not use AUFS either.
To correct this, Red Hat developers investigated getting AUFS into the mainline kernel. Ultimately, though, they decided a better idea was to develop a new storage backend, and to base this new storage backend on the existing Device Mapper technology.
Red Hat collaborated with Docker Inc. to contribute this new driver. As a result of this collaboration, Docker's Engine was re-engineered to make the storage backend pluggable. So it was that devicemapper became the second storage driver Docker supported.
Device Mapper has been included in the mainline Linux kernel since version 2.6.9. It is a core part of the RHEL family of Linux distributions. This means that the devicemapper storage driver is based on stable code that has a lot of real-world production deployments and strong community support.
The devicemapper driver stores every image and container on its own virtual device. These devices are thin-provisioned copy-on-write snapshot devices. Device Mapper technology works at the block level rather than the file level. This means that the devicemapper storage driver's thin provisioning and copy-on-write operations work with blocks rather than entire files.
How to configure?
The devicemapper driver is the default Docker storage driver on some Linux distributions, including RHEL and most of its forks.
Docker hosts running the devicemapper storage driver default to a configuration mode known as loop-lvm. This mode uses sparse files to build the thin pool used by image and container snapshots. The mode is designed to work out-of-the-box with no additional configuration. However, production deployments should not run under loop-lvm mode.
You can detect the mode by viewing the output of the docker info command:
$ sudo docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: docker-202:2-25220302-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
[...]
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
[...]
The output above shows a Docker host running with the devicemapper storage driver operating in loop-lvm mode. This is indicated by the fact that the Data loop file and Metadata loop file are on files under /var/lib/docker/devicemapper/devicemapper. These are loopback-mounted sparse files.
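The same check can be scripted: if docker info mentions a Data loop file, the host is in loop-lvm mode. This one-liner is a sketch and assumes docker is on the PATH:

```shell
# Report the mode based on whether loopback files appear in `docker info`.
if sudo docker info 2>/dev/null | grep -q "Data loop file"; then
    echo "loop-lvm mode (not recommended for production)"
else
    echo "no loop files reported (direct-lvm or another driver)"
fi
```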
The preferred configuration for production deployments is direct-lvm. This mode uses block devices to create the thin pool. The following procedure shows you how to configure a Docker host to use the devicemapper storage driver in a direct-lvm configuration.
Caution: If you have already run the Docker daemon on your Docker host and have images you want to keep, push them to Docker Hub or your private Docker Trusted Registry before attempting this procedure.
The procedure below creates a logical volume configured as a thin pool to use as backing for the storage pool. It assumes that you have a spare block device at /dev/xvdf with enough free space to complete the task. The device identifier and volume sizes may be different in your environment, and you should substitute your own values throughout the procedure. The procedure also assumes that the Docker daemon is in the stopped state.
Log in to the Docker host you want to configure and stop the Docker daemon.
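On a systemd-based host (such as RHEL 7), stopping the daemon looks like this; on older init systems the equivalent would be service docker stop:

```shell
$ sudo systemctl stop docker
```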
Install the LVM2 and thin-provisioning-tools packages.
The LVM2 package includes the userspace toolset that provides logical volume management facilities on Linux. The thin-provisioning-tools package allows you to activate and manage your pool.
$ sudo yum install -y lvm2 device-mapper-persistent-data
Create a physical volume, replacing /dev/xvdf with your block device.
$ pvcreate /dev/xvdf
Create a docker volume group.
$ vgcreate docker /dev/xvdf
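At this point you can sanity-check the physical volume and volume group before creating logical volumes; pvs and vgs are part of the lvm2 package installed above:

```shell
$ sudo pvs /dev/xvdf
$ sudo vgs docker
```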
Create two logical volumes named thinpool and thinpoolmeta.
In this example, the data logical volume is 95% of the docker volume group size. Leaving this free space allows for auto-expansion of either the data or metadata volumes if space runs low, as a temporary stopgap.
$ lvcreate --wipesignatures y -n thinpool docker -l 95%VG
$ lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
Convert the pool to a thin pool.
$ lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
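After the conversion you can confirm that docker/thinpool is now a thin pool; in the lvs attribute column a thin pool starts with t (column names below are from the lvm reporting options):

```shell
$ sudo lvs -o lv_name,lv_attr,lv_size docker
```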
Configure autoextension of thin pools via an lvm profile.
$ vi /etc/lvm/profile/docker-thinpool.profile
Specify the thin_pool_autoextend_threshold value.
The value should be the percentage of space used before lvm attempts to autoextend the available space (100 = disabled).
thin_pool_autoextend_threshold = 80
Modify the thin_pool_autoextend_percent for when thin pool autoextension occurs.
This value is the percentage of space by which to increase the thin pool (100 = disabled).
thin_pool_autoextend_percent = 20
Check your work. Your docker-thinpool.profile file should appear similar to the following example /etc/lvm/profile/docker-thinpool.profile file:
activation {
thin_pool_autoextend_threshold=80
thin_pool_autoextend_percent=20
}
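If you prefer not to edit the file interactively, the same profile can be written in one step with a heredoc (same values as the example above):

```shell
$ sudo tee /etc/lvm/profile/docker-thinpool.profile <<'EOF'
activation {
  thin_pool_autoextend_threshold=80
  thin_pool_autoextend_percent=20
}
EOF
```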
Apply your new lvm profile.
$ lvchange --metadataprofile docker-thinpool docker/thinpool
Verify that the lv is monitored.
$ lvs -o+seg_monitor
If the Docker daemon was previously started, move your existing graph driver directory out of the way.
Moving the graph driver removes any images, containers, and volumes in your Docker installation. These commands move the contents of the /var/lib/docker directory to a new directory named /var/lib/docker.bk. If any of the following steps fail and you need to restore, you can remove /var/lib/docker and replace it with /var/lib/docker.bk.
$ mkdir /var/lib/docker.bk
$ mv /var/lib/docker/* /var/lib/docker.bk
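As noted above, if a later step fails you can restore by reversing the move:

```shell
$ sudo rm -rf /var/lib/docker
$ sudo mv /var/lib/docker.bk /var/lib/docker
```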
Configure the Docker daemon with specific devicemapper options.
Now that your storage is configured, configure the Docker daemon to use it. There are two ways to do this. You can set options on the command line if you start the daemon there:
Note: The deferred deletion option, dm.use_deferred_deletion=true, is not yet supported on CentOS, RHEL, or Ubuntu 14.04 when using the default kernel. Support was added in the upstream kernel version 3.18.
--storage-driver=devicemapper \
--storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool \
--storage-opt=dm.use_deferred_removal=true \
--storage-opt=dm.use_deferred_deletion=true
You can also set them for startup in the daemon configuration file, which defaults to /etc/docker/daemon.json. For example:
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.thinpooldev=/dev/mapper/docker-thinpool",
"dm.use_deferred_removal=true",
"dm.use_deferred_deletion=true"
]
}
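A malformed daemon.json prevents the daemon from starting, so it is worth validating the JSON before restarting. One way is Python's built-in json.tool (this assumes a Python interpreter is installed on the host):

```shell
$ python -m json.tool /etc/docker/daemon.json
```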
If using systemd and modifying the daemon configuration via a unit or drop-in file, reload systemd to scan for changes.
$ systemctl daemon-reload
Start the Docker daemon.
$ systemctl start docker
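Once the daemon is up, docker info should report the thin pool device and no loop files. A quick filter for the relevant lines:

```shell
$ sudo docker info | grep -E 'Storage Driver|Pool Name|loop file'
```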
After you start the Docker daemon, ensure you monitor your thin pool and volume group free space. While the volume group auto-extends, it can still fill up. To monitor logical volumes, use lvs without options, or lvs -a to see the data and metadata sizes. To monitor volume group free space, use the vgs command.
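For example, the following commands (column names from the lvm reporting options) show the thin pool's data and metadata usage and the volume group's free space:

```shell
$ sudo lvs -o lv_name,data_percent,metadata_percent docker
$ sudo vgs -o vg_name,vg_free docker
```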
Logs can show the auto-extension of the thin pool when it hits the threshold. To view the logs, use:
$ journalctl -fu dm-event.service
You can use the lsblk command to see the device files created above and the pool that the devicemapper storage driver creates on top of them.
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 10G 0 disk
├─vg--docker-data 253:0 0 90G 0 lvm
│ └─docker-202:1-1032-pool 253:2 0 10G 0 dm
└─vg--docker-metadata 253:1 0 4G 0 lvm
└─docker-202:1-1032-pool 253:2 0 10G 0 dm
The diagram below shows the image from prior examples updated with the detail from the lsblk command above.
It is important to understand the impact that allocate-on-demand and copy-on-write operations can have on overall container performance.
The devicemapper storage driver allocates new blocks to a container via an allocate-on-demand operation. This means that each time an app writes to somewhere new inside a container, one or more empty blocks have to be located in the pool and mapped into the container.
All blocks are 64 KB. A write that uses less than 64 KB still results in a single 64 KB block being allocated. Writing more than 64 KB of data uses multiple 64 KB blocks. This can impact container performance, especially in containers that perform lots of small writes. However, once a block is allocated to a container, subsequent reads and writes can operate directly on that block.
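The 64 KB granularity means the space allocated for a write is the write size rounded up to the next 64 KB block. A small sketch of the arithmetic (pure shell, no Docker involved):

```shell
# Blocks allocated for a write, at 64 KB granularity (ceiling division).
write_kb=100
block_kb=64
blocks=$(( (write_kb + block_kb - 1) / block_kb ))
echo "${write_kb} KB write -> ${blocks} blocks ($(( blocks * block_kb )) KB allocated)"
# A 100 KB write therefore consumes 2 blocks, i.e. 128 KB of pool space.
```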
Each time a container updates existing data for the first time, the devicemapper storage driver has to perform a copy-on-write operation. This copies the data from the image snapshot to the container's snapshot. This process can have a noticeable impact on container performance.
All copy-on-write operations have a 64 KB granularity. As a result, updating 32 KB of a 1 GB file causes the driver to copy a single 64 KB block into the container's snapshot. This has obvious performance advantages over file-level copy-on-write operations, which would require copying the entire 1 GB file into the container layer.
In practice, however, containers that perform lots of small block writes (<64 KB) can perform worse with devicemapper than with AUFS.
There are several other things that impact the performance of the devicemapper storage driver:
The mode. The default mode for Docker running the devicemapper storage driver is loop-lvm. This mode uses sparse files and suffers from poor performance. It is not recommended for production. The recommended mode for production environments is direct-lvm, where the storage driver writes directly to raw block devices.
High speed storage. For best performance you should place the Data file and Metadata file on high speed storage such as SSD. This can be direct-attached storage or storage from a SAN or NAS array.
Memory usage. devicemapper is not the most memory-efficient Docker storage driver. Launching n copies of the same container loads n copies of its files into memory. This can have a memory impact on your Docker host. As a result, the devicemapper storage driver may not be the best choice for PaaS and other high-density use cases.
One final point: data volumes provide the best and most predictable performance. This is because they bypass the storage driver and do not incur any of the potential overheads introduced by thin provisioning and copy-on-write. For this reason, you should place heavy write workloads on data volumes.
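For example, a write-heavy service can keep its data directory on a host-mounted volume, bypassing the storage driver entirely; the image name and paths below are purely illustrative:

```shell
$ docker run -d -v /mnt/ssd/appdata:/var/lib/app mywriteheavyapp
```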
Reference:
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#other-device-mapper-performance-considerations
Docker: How to configure Docker with devicemapper
Original article: http://blog.csdn.net/yexianyi/article/details/68065589