Analysis
Analysis can be understood as tokenization: breaking text into terms.
Analysis is performed by an analyzer. Analyzers come in two kinds: built-in and user-defined.
doc:https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-analyzers.html
Standard Analyzer: splits on word boundaries, removes most punctuation, lowercases terms, and supports stop-word removal.
Simple Analyzer: splits on non-letter characters and lowercases terms.
Whitespace Analyzer: splits on whitespace characters and does not lowercase.
Stop Analyzer: like the simple analyzer, but also supports stop-word removal.
Pattern Analyzer: splits using a regular expression.
Language Analyzers: analyzers for specific languages (e.g. english).
Fingerprint Analyzer:
The fingerprint analyzer is a specialist analyzer which creates a fingerprint which can be used for duplicate detection.
Not covered here.
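These differences are easy to see through the _analyze API. A minimal sketch, assuming a local Elasticsearch and the elasticsearch-py client (the sample sentence is made up):
# compare two built-in analyzers via the _analyze API
from elasticsearch import Elasticsearch

es = Elasticsearch()
text = "The 2 QUICK Brown-Foxes!"
for name in ("simple", "whitespace"):
    rv = es.indices.analyze(body={"analyzer": name, "text": text})
    print(name, [t["token"] for t in rv["tokens"]])
# simple     -> ['the', 'quick', 'brown', 'foxes']
# whitespace -> ['The', '2', 'QUICK', 'Brown-Foxes!']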
Index-time analysis is straightforward: text is tokenized at write time, and the tokens form the inverted index.
Each text field can specify its own analyzer;
if none is specified, the default is taken from the index settings (the analysis.analyzer.default setting), which is in effect the standard analyzer.
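A minimal sketch of overriding that default when creating an index (the index name test_default is hypothetical):
# set an index-wide default analyzer at creation time (hypothetical index name)
body = {
    "settings": {
        "analysis": {
            "analyzer": {
                "default": {"type": "english"}
            }
        }
    }
}
rv = es.indices.create(index="test_default", body=body)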
Search-time analysis
Query strings are analyzed as well; by default the analyzer used at index time is reused;
a separate search-time analyzer can be configured per field via search_analyzer, but this is usually unnecessary.
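If a separate search-time analyzer really is needed, a hypothetical mapping snippet would look like this (the field name title is made up):
# hypothetical mapping: separate index-time and search-time analyzers
mapping = {"properties": {
    "title": {
        "type": "text",
        "analyzer": "english",          # used when indexing documents
        "search_analyzer": "standard"   # used when analyzing query strings
    }
}}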
Take the built-in english analyzer as an example:
"The QUICK brown foxes jumped over the lazy dog!"
It first lowercases the text, removes high-frequency stop words, and stems each word to its root form; the final result is the token sequence:
[ quick, brown, fox, jump, over, lazi, dog ]
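This can be reproduced directly with the _analyze API (same client setup as in the test code below):
rv = es.indices.analyze(body={
    "analyzer": "english",
    "text": "The QUICK brown foxes jumped over the lazy dog!"
})
print([t["token"] for t in rv["tokens"]])
# ['quick', 'brown', 'fox', 'jump', 'over', 'lazi', 'dog']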
Environment setup:
Create the index test_i.
Create the field msg with the default configuration, i.e. the standard analyzer.
Create the field msg_english with the english analyzer.
# set up the test environment
from pprint import pprint as pr
from elasticsearch import Elasticsearch

es = Elasticsearch()

# indexing this document auto-creates test_i; msg is mapped dynamically
d = {"msg": "Eating an apple a day keeps doctor away."}
rv = es.index(index="test_i", body=d)
pr(rv)

# add msg_english, analyzed with the english analyzer
d = {"properties": {
    "msg_english": {
        "type": "text",
        "analyzer": "english"
    }
}}
rv = es.indices.put_mapping(body=d, index=["test_i"])  # returns {'acknowledged': True} on success
# inspect the resulting mapping; note that dynamic mapping gave msg a keyword sub-field
rv = es.indices.get_mapping(index="test_i")
pr(rv)
{
  "test_i": {
    "mappings": {
      "properties": {
        "msg": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "msg_english": {
          "type": "text",
          "analyzer": "english"
        }
      }
    }
  }
}
Insert a document:
d = {"msg_english": "Eating an apple a day keeps doctor away."}
rv = es.index(index="test_i", body=d)
Query: the test comes in two parts. The first matches "eat" against the msg field and returns no hits, because the standard analyzer indexed the token "eating" rather than "eat" (a sketch follows the result below). The second queries the msg_english field:
# search APIs
def search_api_test():
    data = {"query": {"match": {"msg_english": "eat"}}}
    rv = es.search(index="test_i", body=data)
    pr(rv)

search_api_test()
Result:
{ "took": 2,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": 0.2876821,
"hits": [
{
"_index": "test_i",
"_type": "_doc",
"_id": "XG7KFG0BpAsDZnvvGLz2",
"_score": 0.2876821,
"_source": {
"msg_english": "Eating an apple a day keeps doctor away."
} } ] }}
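For completeness, the first part of the test, querying the standard-analyzed msg field for "eat", can be run the same way; as noted above it finds nothing:
# the same query against msg matches nothing: "eating" was indexed, not "eat"
data = {"query": {"match": {"msg": "eat"}}}
rv = es.search(index="test_i", body=data)
print(rv["hits"]["total"])  # expected: {'value': 0, 'relation': 'eq'}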
Addendum: an analyzer test that shows the difference between the standard analyzer and the english analyzer at a glance.
Test code:
# analyzer comparison
d1 = {"analyzer": "standard", "text": "Eating an apple a day keeps doctor away."}
d2 = {"analyzer": "english", "text": "Eating an apple a day keeps doctor away."}
rv1 = es.indices.analyze(body=d1)
rv2 = es.indices.analyze(body=d2)
print([x["token"] for x in rv1["tokens"]])  # tokens from the standard analyzer
print([x["token"] for x in rv2["tokens"]])  # tokens from the english analyzer
Output:
['eating', 'an', 'apple', 'a', 'day', 'keeps', 'doctor', 'away']
['eat', 'appl', 'dai', 'keep', 'doctor', 'awai']
Tokens such as dai and awai look odd because the english analyzer's stemmer produces root forms that need not be real words; this is harmless, since query strings pass through the same analyzer and are stemmed identically.
Source: https://www.cnblogs.com/wodeboke-y/p/11562809.html