Tags: Linux tuning for data-transfer hosts connected at speeds of 1Gbps or higher
<I. Brief record of this OpenStack debugging session>
1. dmesg log: pinpointing the key cause of the packet loss;
[101231.909932] net_ratelimit: 85 callbacks suppressed
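The net_ratelimit message by itself only says that the kernel suppressed further log output; to confirm that the interface is really dropping packets, the per-interface counters can be checked. A minimal sketch, assuming the management interface is p5p1 as above (counter names vary by driver):
# kernel-level RX/TX drop and overrun counters for the interface
ip -s link show p5p1
# driver/firmware counters such as rx_missed_errors or rx_fifo_errors
ethtool -S p5p1 | grep -iE 'drop|miss|fifo|error'
# protocol-level signs of pressure: pruned/collapsed packets, listen queue drops
netstat -s | grep -iE 'prune|collaps|drop'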
2. ethstatus -i p5p1: track the NIC's TX/RX status in real time;
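If ethstatus is not available, roughly equivalent real-time views can be had with standard tools (assuming the sysstat and iftop packages are installed; these are alternatives, not what the original debugging used):
# per-second throughput and packet rates for every interface
sar -n DEV 1
# per-connection bandwidth breakdown on the management interface
iftop -i p5p1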
3. Kernel and related parameter adjustments
# recommended default congestion control is htcp (Hamilton TCP)
net.ipv4.tcp_congestion_control=htcp
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1
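To make these two settings survive a reboot, they can be written to the sysctl configuration and reloaded; a minimal sketch using the standard mechanism:
# append to /etc/sysctl.conf (or a file under /etc/sysctl.d/)
net.ipv4.tcp_congestion_control = htcp
net.ipv4.tcp_mtu_probing = 1
# then reload the settings without rebooting
sysctl -p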
You should leave net.ipv4.tcp_mem alone, as the defaults are fine. A number of performance experts say to also increase net.core.optmem_max to match net.core.rmem_max and net.core.wmem_max, but we have not found that this makes any difference. Some experts also say to set net.ipv4.tcp_timestamps and net.ipv4.tcp_sack to 0, as doing so reduces CPU load. We strongly disagree with that recommendation for WAN performance: we have observed that the default value of 1 helps in more cases than it hurts, and can help a lot.
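For reference, the net.core.rmem_max / net.core.wmem_max settings mentioned above are the socket-buffer limits. The figures below are values commonly suggested for 10-Gigabit data-transfer hosts, not values taken from this deployment, and should be sized to the actual bandwidth-delay product:
# allow up to 64MB socket buffers
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
# TCP autotuning: min, default, max buffer per socket
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432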
# list the congestion control algorithms currently available to the kernel
sysctl net.ipv4.tcp_available_congestion_control
# load the htcp and cubic congestion control modules if they are not built in
/sbin/modprobe tcp_htcp
/sbin/modprobe tcp_cubic
# switch the default congestion control to htcp
sysctl -w net.ipv4.tcp_congestion_control=htcp
# increase the transmit queue length on the data-transfer interface (ethN is a placeholder)
/sbin/ifconfig ethN txqueuelen 10000
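A quick read-back to confirm the changes took effect (the interface name is a placeholder, as above):
# should now report htcp
sysctl net.ipv4.tcp_congestion_control
# the qlen field should show 10000 on the tuned interface
ip link show ethN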
After these parameter adjustments, ssh sessions to the node no longer suffer the frequent disconnects and long-latency stalls, and the cloud platform management portal no longer hangs on long Loading delays; we will continue to observe and test.
<II. Additional Linux tuning: network and kernel parameters, UDP/TCP tuning>
https://www.myricom.com/software/myri10ge/347-what-is-the-performance-impact-of-vlan-tagging-with-the-myri10ge-driver.html
1. What is the performance impact of VLAN tagging? A slight performance impact is incurred when using VLAN tagging.
There are two issues:
Capabilities of vlan shim drivers to do offloads
On all OSes, there is a shim driver to handle vlan traffic and configuration. This driver sits between the TCP/IP stack and the hardware driver. On Windows, every hardware vendor must ship their own vlan shim driver. (Our Windows VLAN driver is included in the Windows Myri10GE software distribution on the Myri10GE Download page). However, on Linux, the vlan shim driver is shared by all adapters, regardless of vendor. It is not, to our knowledge, possible to bypass this shim driver.
Prior to Linux 2.6.26, the Linux vlan driver did not propagate any advanced offload flags from the hardware device to the vlan shim device. That means, for example, that doing TCP Segmentation Offload was not possible on a vlan device, even though the driver/hardware supports it. This is true for all vendors, not just our hardware.
There are other limitations. For example, prior to 2.6.32, the vlan driver would transmit all packets through TX queue 0 rather than the correct TX queue. RHEL 5.5 enables TSO on VLAN devices but does not enable S/G, which results in a memory copy of most packets, and so on. (A quick way to check which offloads a VLAN device actually exposes is sketched below.)
Capabilities of the adapter to handle vlan packets
Our adapter handles VLAN traffic where it is required to (emitting TSO frames, for example).
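To see whether offloads such as TSO and scatter-gather are actually exposed on a VLAN device under a given kernel, ethtool can report them and, where supported, toggle them; p5p1.100 below is a hypothetical VLAN sub-interface, not one from the original setup:
# offload flags on the physical port
ethtool -k p5p1 | grep -E 'tcp-segmentation-offload|scatter-gather'
# offload flags exposed on the VLAN device
ethtool -k p5p1.100 | grep -E 'tcp-segmentation-offload|scatter-gather'
# enable TSO and scatter-gather on the VLAN device if the kernel permits it
ethtool -K p5p1.100 tso on sg on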
OpenStack cloud platform management node: very heavy traffic on the management NIC causing severe packet loss and frequently interrupted terminal sessions; debugging notes and the currently most effective tested approach
Original article: http://www.cnblogs.com/ruiy/p/4905657.html