fio, also known as the Flexible IO Tester, is a benchmarking tool written by Jens Axboe, the maintainer of the block I/O subsystem in the Linux kernel, so its authority is beyond doubt. It supports a wide range of operating systems, including Linux, Windows, AIX, HP-UX, and FreeBSD.
fio is extremely capable; this article only covers its most commonly used basic features.
The official fio site is: http://freecode.com/projects/fio
Windows installers can be downloaded from: http://bluestop.org/fio/
On CentOS, fio can be installed directly with yum.
# install via yum
yum install libaio-devel fio
# manual installation
yum install libaio-devel
wget http://brick.kernel.dk/snaps/fio-2.2.10.tar.gz
tar -zxvf fio-2.2.10.tar.gz
cd fio-2.2.10
make && make install
On Ubuntu, fio can be installed directly with apt.
apt install fio
apt install sysstat    # provides the iostat tool for monitoring performance
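Once installed, it can help to confirm the version and to watch the device from a second terminal while a test runs; the two commands below are only a suggested quick check, not part of the original procedure.
# confirm fio is installed and print its version
fio --version
# in another terminal during a test: per-device utilization and latency, refreshed every second
iostat -x 1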
Random write
fio -filename=/dev/mapper/multipath0 -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest
Random read
fio -filename=/dev/mapper/multipath0 -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest
Mixed random read/write (70:30)
fio -filename=/dev/mapper/multipath0 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest
Sequential read
fio -filename=/dev/mapper/multipath0 -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest
Sequential write
fio -filename=/dev/mapper/multipath0 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest
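Note that the write and mixed tests above write directly to the raw device /dev/mapper/multipath0, which destroys any data on it. If the device already holds a filesystem with data, an alternative is to point -filename at a regular file instead; the path /data/fio_testfile below is only a placeholder for illustration.
# same random-write test, but against a regular file on a mounted filesystem
fio -filename=/data/fio_testfile -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest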
Parameter explanations:
-filename=/dev/mapper/multipath0    The test target; this example uses a multipath device.
-direct=1    Bypass the OS buffer cache during the test so the results reflect the device itself.
-iodepth=1    Asynchronous I/O queue depth; an iodepth greater than 1 has no effect with synchronous I/O engines (unless the verify_async option is set).
-thread    By default fio creates jobs with fork(); when this option is set it uses pthread_create to create threads instead.
-rw=randwrite    Test random-write I/O.
-rw=randrw    Test mixed random read and write I/O.
-bs=16k    Each I/O request uses a 16 KB block.
-size=20G    The amount of data transferred in the test is 20 GB, issued in blocks of the configured size (16 KB here).
-numjobs=30    Run the test with 30 threads.
-runtime=100    Run for 100 seconds; if omitted, fio runs until the full 20 GB has been transferred.
-ioengine=psync    Use the psync I/O engine.
-rwmixwrite=30    In mixed read/write mode, writes make up 30% of the I/O.
-rwmixread=70    In mixed read/write mode, reads make up 70% of the I/O.
-group_reporting    Affects how results are displayed: statistics are aggregated across all jobs instead of being reported per job.
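The same options can also be kept in a fio job file instead of being passed on the command line, which is convenient for repeated runs. The sketch below is only one possible equivalent of the 70:30 mixed test above; the file name mytest.fio is arbitrary.
# mytest.fio
[global]
ioengine=psync
direct=1
thread
bs=16k
size=20G
numjobs=30
runtime=100
group_reporting

[mytest]
filename=/dev/mapper/multipath0
rw=randrw
rwmixread=70
It is then run with: fio mytest.fio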
root@s3:/etc# fio -filename=/dev/mapper/multipath1 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=20G -numjobs=30 -runtime=100 -group_reporting -name=mytest
mytest: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
fio-2.2.10
Starting 30 threads
Jobs: 30 (f=30): [m(30)] [100.0% done] [957.5MB/399.6MB/0KB /s] [61.3K/25.6K/0 iops] [eta 00m:00s]
mytest: (groupid=0, jobs=30): err= 0: pid=11893: Mon Mar 6 13:21:42 2017
read : io=88731MB, bw=908595KB/s, iops=56787, runt=100001msec
clat (usec): min=73, max=5835, avg=343.76, stdev=263.49
lat (usec): min=73, max=5835, avg=343.89, stdev=263.51
clat percentiles (usec):
| 1.00th=[ 109], 5.00th=[ 141], 10.00th=[ 165], 20.00th=[ 201],
| 30.00th=[ 237], 40.00th=[ 278], 50.00th=[ 318], 60.00th=[ 354],
| 70.00th=[ 386], 80.00th=[ 414], 90.00th=[ 466], 95.00th=[ 524],
| 99.00th=[ 2024], 99.50th=[ 2352], 99.90th=[ 2736], 99.95th=[ 2896],
| 99.99th=[ 3472]
bw (KB /s): min=25344, max=36256, per=3.33%, avg=30294.36, stdev=1408.26
write: io=38157MB, bw=390721KB/s, iops=24420, runt=100001msec
clat (usec): min=125, max=7006, avg=423.43, stdev=272.92
lat (usec): min=125, max=7006, avg=423.96, stdev=272.96
clat percentiles (usec):
| 1.00th=[ 187], 5.00th=[ 233], 10.00th=[ 262], 20.00th=[ 306],
| 30.00th=[ 338], 40.00th=[ 366], 50.00th=[ 390], 60.00th=[ 410],
| 70.00th=[ 434], 80.00th=[ 470], 90.00th=[ 524], 95.00th=[ 604],
| 99.00th=[ 2160], 99.50th=[ 2480], 99.90th=[ 2896], 99.95th=[ 3024],
| 99.99th=[ 3568]
bw (KB /s): min=11232, max=16096, per=3.33%, avg=13026.44, stdev=652.21
lat (usec) : 100=0.28%, 250=25.09%, 500=66.28%, 750=5.96%, 1000=0.58%
lat (msec) : 2=0.73%, 4=1.09%, 10=0.01%
cpu : usr=0.67%, sys=6.10%, ctx=8154915, majf=0, minf=8899
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=5678775/w=2442030/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=88731MB, aggrb=908594KB/s, minb=908594KB/s, maxb=908594KB/s, mint=100001msec, maxt=100001msec
WRITE: io=38157MB, aggrb=390720KB/s, minb=390720KB/s, maxb=390720KB/s, mint=100001msec, maxt=100001msec
Random read/write performance is determined mainly by IOPS, while sequential read/write performance is determined by bandwidth (bw). The table below shows actual measurements for comparison.
| Device | Random read | Random write | Sequential read | Sequential write | Mixed random R/W (70:30) |
| --- | --- | --- | --- | --- | --- |
| SanDisk 128G USB stick | bw=26877KB/s, iops=1679 | bw=2595.2KB/s, iops=162 | bw=36393KB/s, iops=2274 | bw=11136KB/s, iops=695 | read: bw=4938.5KB/s, iops=308 |
| 3PAR 8*480G SSD | bw=1515.1MB/s, iops=97019 | bw=529079KB/s, iops=33067 | bw=2014.8MB/s, iops=128900 | bw=1011.7MB/s, iops=64747 | read: bw=908595KB/s, iops=56787 |
| 3PAR 12*1.2T SAS | bw=452745KB/s, iops=28296 | bw=37639KB/s, iops=2352 | bw=1907.6MB/s, iops=122051 | NA | read: bw=91726KB/s, iops=5732 |
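As a rough cross-check between the two metrics, bandwidth is approximately IOPS multiplied by the block size. For the 16 KB mixed-test read result above: 56787 iops × 16 KB ≈ 908592 KB/s, which matches the reported bw=908595KB/s.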
Original article: http://leesbing.blog.51cto.com/1344594/1903927