Check the maximum number of files the system allows Elasticsearch to open.
(1) Check the Linux file-handle limit (the default is 1024):
ulimit -a
open files (-n) 1024
(2) Edit the configuration file:
vi /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535
(3) Reboot the virtual machine that hosts Elasticsearch.
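After the reboot, it is worth confirming that the new limit actually took effect for the user that launches Elasticsearch. A minimal sketch (the 65535 threshold simply mirrors the limits.conf values above):

```shell
# Print the current shell's open-files limit and warn when it is still
# below the 65535 configured in /etc/security/limits.conf above.
limit=$(ulimit -n)
echo "open files limit: ${limit}"
if [ "${limit}" != "unlimited" ] && [ "${limit}" -lt 65535 ]; then
    echo "WARNING: limit below 65535; check limits.conf and PAM settings"
fi
```

Run it as the Elasticsearch user, since limits.conf entries are applied per user at login.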
Requirement: with a heap size of [9.9gb], roughly 12 GB of memory is normally allocated.
[2018-01-19 14:58:12,732][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
Killed
[2018-01-19 15:52:59,321][INFO ][env ] [sa-node-1] heap size [9.9gb], compressed ordinary object pointers [true]
This happens because the configuration file tells ES to acquire all of its memory in one go at startup:
cat /opt/elasticsearch-2.4.3/config/elasticsearch.yml
bootstrap.memory_lock: true
Alternatively, configure it not to acquire all memory up front:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false (this setting must come after the previous one)
After restarting ES, the exception disappears.
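To verify that the memory-lock setting took effect, ES 2.x reports whether it locked its memory via the nodes-info API. A hedged one-liner, assuming a node listening on localhost:9200 (it prints a fallback message when the cluster is unreachable):

```shell
# process.mlockall is reported as true when bootstrap.memory_lock succeeded.
curl -s 'localhost:9200/_nodes/process?pretty' | grep mlockall \
    || echo "cluster not reachable (or mlockall is false/absent)"
```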
Adjust the ES JVM memory settings to suit the machine's hardware. As a rule of thumb, allocate half of the physical memory, and set ES_MIN_MEM and ES_MAX_MEM to the same value so that the JVM heap size is fixed rather than growing and shrinking; in practice this improves node performance.
vim /opt/elasticsearch-2.4.3/elasticsearch.in.sh
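The relevant lines in elasticsearch.in.sh might look like the sketch below; 12g matches the ~12 GB figure above, but treat the value as an assumption to be replaced with half of your machine's physical RAM:

```shell
# Fixed-size heap: min == max, so the JVM never resizes the heap.
ES_MIN_MEM=12g
ES_MAX_MEM=12g
# ES 2.x also honors ES_HEAP_SIZE, which sets both values at once:
# ES_HEAP_SIZE=12g
```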
```
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onFirstPhaseResult(AbstractSearchAsyncAction.java:206)
at org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:152)
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:46)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:874)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:852)
at org.elasticsearch.transport.TransportService$4.onFailure(TransportService.java:389)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchException: ElasticsearchException[CircuitBreakingException[[fielddata] Data too large, data for [zone] would be larger than limit of [6390113894/5.9gb]]]; nested: UncheckedExecutionException[CircuitBreakingException[[fielddata] Data too large, data for [zone] would be larger than limit of [6390113894/5.9gb]]]; nested: CircuitBreakingException[[fielddata] Data too large, data for [zone] would be larger than limit of [6390113894/5.9gb]];
at org.elasticsearch.index.fielddata.plain.AbstractIndexOrdinalsFieldData.loadGlobal(AbstractIndexOrdinalsFieldData.java:91)
at org.elasticsearch.search.aggregations.support.ValuesSource$Bytes$WithOrdinals$FieldData.globalOrdinalsValues(ValuesSource.java:144)
at org.elasticsearch.search.aggregations.support.ValuesSource$Bytes$WithOrdinals.globalMaxOrd(ValuesSource.java:117)
at org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory.doCreateInternal(TermsAggregatorFactory.java:217)
at org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory.createInternal(ValuesSourceAggregatorFactory.java:64)
at org.elasticsearch.search.aggregations.AggregatorFactory.create(AggregatorFactory.java:102)
at org.elasticsearch.search.aggregations.AggregatorFactories.createTopLevelAggregators(AggregatorFactories.java:87)
```
Cause: the fielddata cache overflowed — loading fielddata for [zone] would exceed the circuit-breaker limit (the 5.9gb in the message is the default indices.breaker.fielddata.limit of 60% of the 9.9gb heap).
To inspect cache usage on Linux:
free -hl
buffers (buffer cache): buffers for block devices, mediating between memory and disk.
cached (page cache): caches the contents of files that have been opened, mediating between CPU and memory.
Solution:
(1) Manually clear the cache:
curl -XPOST '127.0.0.1:9200/_cache/clear'
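Since the breaker that tripped here is the fielddata one, the clear-cache API can also target just fielddata instead of dropping every cache; the `fielddata=true` parameter exists in ES 2.x, and the host/port below are assumptions:

```shell
# Clear only the fielddata cache on all indices.
curl -s -XPOST 'localhost:9200/_cache/clear?fielddata=true' \
    || echo "request failed; is the node running?"
```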
(2) Edit the Elasticsearch configuration file:
indices.fielddata.cache.size: 20%
When the fielddata cache reaches 20% of the heap, old entries are evicted automatically.
indices.fielddata.cache.size is unbounded by default, so fielddata keeps growing until the circuit breaker trips with the "Data too large" exception.
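Putting this together, an elasticsearch.yml fragment might look like the sketch below. The 20% figure comes from the text above; indices.breaker.fielddata.limit is the breaker whose 60% default produced the 5.9gb cap in the error, and the 40% shown is an illustrative value, not a recommendation from the original post:

```yaml
# Evict old fielddata entries once the cache reaches 20% of the heap.
indices.fielddata.cache.size: 20%
# Hard circuit-breaker limit; default is 60% of heap (0.6 * 9.9gb = 5.9gb).
# Keep this higher than indices.fielddata.cache.size so eviction kicks in
# before the breaker trips.
indices.breaker.fielddata.limit: 40%
```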
Original post (in Chinese): https://www.cnblogs.com/sunzhuli/p/9695231.html