Continuing from the previous article on whoosh.
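As a quick recap, the ix index object used below was set up in the previous article. Here is a minimal sketch of how it might have been created (the "indexdir" directory name is an assumption of mine; the field names match the add_document calls that follow):

from whoosh.index import create_in
from whoosh.fields import Schema, TEXT, ID, KEYWORD, STORED
import os

# Schema with the same field names used by the add_document calls below
schema = Schema(title=TEXT(stored=True),
                content=TEXT,
                path=ID(stored=True),
                tags=KEYWORD,
                icon=STORED)

# "indexdir" is an assumed directory name for this sketch
if not os.path.exists("indexdir"):
    os.mkdir("indexdir")
ix = create_in("indexdir", schema)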
The next step is to write some documents into the index. The process is as follows:
writer = ix.writer()
writer.add_document(title=u"my document", content=u"this is my document",
                    path=u"/a", tags=u"first short", icon=u"/icons/star.png")
writer.add_document(title=u"my second document", content=u"this is my second document",
                    path=u"/b", tags=u"second short", icon=u"/icons/sheep.png")
writer.commit()
For more details, see the official documentation on how to index documents.
To start searching, you need to create a searcher object, for example:
searcher = ix.searcher()
It is better to open the searcher with a with statement, so that it is closed automatically:

with ix.searcher() as searcher:
    ...  # do something with the searcher

which is equivalent to:

try:
    searcher = ix.searcher()
    ...  # do something with the searcher
finally:
    searcher.close()
With a searcher, build a query against the "content" field using QueryParser and run the search:

from whoosh.qparser import QueryParser

with ix.searcher() as searcher:
    query = QueryParser("content", ix.schema).parse("second")
    results = searcher.search(query)
    results[0]
{"title":u"my second document","path":u"/a"}
I had originally planned to write the code for Chinese word segmentation support myself, but after googling I found that 阿小信's blog already gives a very good solution (forgive me for not providing the link: with a link in it this article could not be published; the complete version is on my GitHub, this is the cut-down version). I copied that code here for anyone who needs it:
# -*- coding: utf-8 -*-
import jieba
from whoosh.analysis import Tokenizer, Token
from whoosh.compat import text_type

class ChineseTokenizer(Tokenizer):
    def __call__(self, value, positions=False, chars=False,
                 keeporiginal=False, removestops=True,
                 start_pos=0, start_char=0, mode='', **kwargs):
        assert isinstance(value, text_type), "%r is not unicode" % value
        t = Token(positions, chars, removestops=removestops, mode=mode, **kwargs)
        seglist = jieba.cut_for_search(value)  # segment the text with the jieba library
        for w in seglist:
            t.original = t.text = w
            t.boost = 1.0
            if positions:
                t.pos = start_pos + value.find(w)
            if chars:
                t.startchar = start_char + value.find(w)
                t.endchar = start_char + value.find(w) + len(w)
            yield t  # yield each segmented token from the generator

def ChineseAnalyzer():
    return ChineseTokenizer()
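To put the analyzer to work, pass it to the TEXT fields of a schema and then index and search as before. The following is only a usage sketch under my own assumptions (the "indexdir_cn" directory name and the sample Chinese sentence are mine, and the ChineseAnalyzer defined above is assumed to be in scope):

from whoosh.index import create_in
from whoosh.fields import Schema, TEXT, ID
from whoosh.qparser import QueryParser
import os

analyzer = ChineseAnalyzer()
schema = Schema(title=TEXT(stored=True, analyzer=analyzer),
                path=ID(stored=True),
                content=TEXT(stored=True, analyzer=analyzer))

if not os.path.exists("indexdir_cn"):   # assumed directory name
    os.mkdir("indexdir_cn")
ix_cn = create_in("indexdir_cn", schema)

writer = ix_cn.writer()
writer.add_document(title=u"第一篇文档", path=u"/c",
                    content=u"这是我们增加的第一篇中文文档")
writer.commit()

with ix_cn.searcher() as searcher:
    query = QueryParser("content", ix_cn.schema).parse(u"文档")
    results = searcher.search(query)
    print(results[0])

Because the same analyzer is attached to the field, both the indexed text and the query string are segmented by jieba, so a search for u"文档" finds the document above.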
This article is the cut-down version. For the complete version, please go to my GitHub: the algorithm repository under qiwsir.
Original article: http://blog.csdn.net/qiwsir/article/details/37697651