
Elasticsearch Learning (7): Elasticsearch Analysis

howsweet
Published on 2017/08/17 16:18

I. Analysis

1. Analysis

  • First, tokenize a block of text into the individual terms that go into the inverted index.
  • Then normalize those terms into a standard form, to improve their "searchability" (recall).

Analysis is performed by an analyzer.

2. Analyzer

  • Character filter: pre-processes the string (for example, removing extra whitespace) so that it is "tidier" before tokenization. An analyzer may contain zero or more character filters.
  • Tokenizer: splits the string into individual terms (for example, splitting on whitespace into words). An analyzer must contain exactly one tokenizer.
  • Token filters: every term then passes through the token filters, which may modify, add, or remove tokens.

An analyzer is only applied when a field is a full-text field; when a field holds an exact value, the field is not analyzed (see the example below).

  • Full-text fields: e.g. string, text
  • Exact values: e.g. numeric and date types
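
To see analysis in action, you can run a quick test against the built-in standard analyzer. This is a minimal sketch (the sample sentence is made up); it should return roughly the lowercased tokens set, the, shape, to, semi, transparent.

POST _analyze

{
  "analyzer": "standard",
  "text": "Set the shape to semi-transparent"
}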

II. Custom Analyzers

1. char_filter (character filters)

  • html_strip (strips HTML tags). Parameters:
    • escaped_tags: an array of HTML tags that should not be stripped from the original text
  • mapping (replaces characters according to a custom mapping). Parameters:
    • mappings: an array of mappings, each entry in the form key => value
    • mappings_path: an absolute path, or a path relative to the config directory, of a UTF-8 encoded file containing one key => value mapping per line
  • pattern_replace (matches characters with a regular expression and replaces them with the given string; see the sketch after this list). Parameters:
    • pattern: the regular expression to match
    • replacement: the replacement string, which may reference capture groups such as $1
    • flags: Java regular-expression flags, if any
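
For example, a pattern_replace character filter can normalize digit groups before tokenization. The request below is a minimal sketch (the pattern and sample text are only illustrative); it rewrites 123-456-789 to 123_456_789 before the standard tokenizer runs.

POST _analyze

{
  "tokenizer": "standard",
  "char_filter": [
    {
      "type": "pattern_replace",
      "pattern": "(\\d+)-(?=\\d)",
      "replacement": "$1_"
    }
  ],
  "text": "My credit card is 123-456-789"
}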

2. tokenizer (tokenizers)

Only a few commonly used tokenizers are listed here; see the official documentation for more.

  • standard (the standard tokenizer, used by default. It splits text on word boundaries as defined by the Unicode Consortium and removes most punctuation, which makes it a good choice for most languages). Parameters:
    • max_token_length: the maximum token length. If a token exceeds this length it is split at that length. Defaults to 255.
  • letter (splits whenever it encounters a character that is not a letter). Parameters: none
  • lowercase (like letter, but also lowercases every token). Parameters: none
  • whitespace (splits on whitespace; see the example after this list). Parameters: none
  • keyword (effectively no tokenization: it emits the entire input as a single token). Parameters:
    • buffer_size: the term buffer size, 256 by default. The buffer grows by this amount until all the text has been consumed. It is recommended not to change this setting.
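
A quick way to compare tokenizers is to call _analyze with just a tokenizer name (a minimal sketch; the sample text is made up). With whitespace the text below comes back as The, 2, QUICK, Brown-Foxes; swapping in letter instead splits at the hyphen and drops the digit, and lowercase additionally lowercases the result.

POST _analyze

{
  "tokenizer": "whitespace",
  "text": "The 2 QUICK Brown-Foxes"
}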

3. filter (token filters)

There are too many token filters to cover one by one here; see the official documentation.
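
Token filters can also be tried out directly in _analyze by mixing built-in filter names with inline definitions (a minimal sketch with made-up stopwords). Here lowercase runs first, then a stop filter removes "the" and "a", leaving quick, brown, fox, and, lazy, dog.

POST _analyze

{
  "tokenizer": "standard",
  "filter": [
    "lowercase",
    {
      "type": "stop",
      "stopwords": ["the", "a"]
    }
  ],
  "text": "The QUICK brown fox and a lazy dog"
}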

4. Custom analyzer

PUT newindex

{
  "settings": {
    "analysis": {
      "char_filter": {
        "my_char_filter": {
          "type": "mapping",
          "mappings": [
            "&=>and",
            ":)=>happy",
            ":(=>sad"
          ]
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      },
      "filter": {
        "my_filter": {
          "type": "stop",
          "stopwords": [
            "the",
            "a"
          ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": [
            "html_strip",
            "my_char_filter"
          ],
          "tokenizer": "my_tokenizer",
          "filter": [
            "lowercase",
            "my_filter"
          ]
        }
      }
    }
  }
}

Then analyze a string with the custom analyzer:

POST newindex/_analyze

{
  "analyzer": "my_analyzer",
  "text": "<span>If you are :(, I will be :).</span> The people & a banana",
  "explain": true
}

The response shows every stage of the analysis. Note that because my_tokenizer sets max_token_length to 5, longer words are split: "people" becomes "peopl" and "e", and "banana" becomes "banan" and "a":

{
  "detail": {
    "custom_analyzer": true,
    "charfilters": [
      {
        "name": "html_strip",
        "filtered_text": [
          "if you are :(, I will be :). the people & a banana"
        ]
      },
      {
        "name": "my_char_filter",
        "filtered_text": [
          "if you are sad, I will be happy. the people and a banana"
        ]
      }
    ],
    "tokenizer": {
      "name": "my_tokenizer",
      "tokens": [
        {
          "token": "if",
          "start_offset": 6,
          "end_offset": 8,
          "type": "<ALPHANUM>",
          "position": 0,
          "bytes": "[69 66]",
          "positionLength": 1
        },
        {
          "token": "you",
          "start_offset": 9,
          "end_offset": 12,
          "type": "<ALPHANUM>",
          "position": 1,
          "bytes": "[79 6f 75]",
          "positionLength": 1
        },
        {
          "token": "are",
          "start_offset": 13,
          "end_offset": 16,
          "type": "<ALPHANUM>",
          "position": 2,
          "bytes": "[61 72 65]",
          "positionLength": 1
        },
        {
          "token": "sad",
          "start_offset": 17,
          "end_offset": 19,
          "type": "<ALPHANUM>",
          "position": 3,
          "bytes": "[73 61 64]",
          "positionLength": 1
        },
        {
          "token": "I",
          "start_offset": 21,
          "end_offset": 22,
          "type": "<ALPHANUM>",
          "position": 4,
          "bytes": "[49]",
          "positionLength": 1
        },
        {
          "token": "will",
          "start_offset": 23,
          "end_offset": 27,
          "type": "<ALPHANUM>",
          "position": 5,
          "bytes": "[77 69 6c 6c]",
          "positionLength": 1
        },
        {
          "token": "be",
          "start_offset": 28,
          "end_offset": 30,
          "type": "<ALPHANUM>",
          "position": 6,
          "bytes": "[62 65]",
          "positionLength": 1
        },
        {
          "token": "happy",
          "start_offset": 31,
          "end_offset": 33,
          "type": "<ALPHANUM>",
          "position": 7,
          "bytes": "[68 61 70 70 79]",
          "positionLength": 1
        },
        {
          "token": "the",
          "start_offset": 42,
          "end_offset": 45,
          "type": "<ALPHANUM>",
          "position": 8,
          "bytes": "[74 68 65]",
          "positionLength": 1
        },
        {
          "token": "peopl",
          "start_offset": 46,
          "end_offset": 51,
          "type": "<ALPHANUM>",
          "position": 9,
          "bytes": "[70 65 6f 70 6c]",
          "positionLength": 1
        },
        {
          "token": "e",
          "start_offset": 51,
          "end_offset": 52,
          "type": "<ALPHANUM>",
          "position": 10,
          "bytes": "[65]",
          "positionLength": 1
        },
        {
          "token": "and",
          "start_offset": 53,
          "end_offset": 54,
          "type": "<ALPHANUM>",
          "position": 11,
          "bytes": "[61 6e 64]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 55,
          "end_offset": 56,
          "type": "<ALPHANUM>",
          "position": 12,
          "bytes": "[61]",
          "positionLength": 1
        },
        {
          "token": "banan",
          "start_offset": 57,
          "end_offset": 62,
          "type": "<ALPHANUM>",
          "position": 13,
          "bytes": "[62 61 6e 61 6e]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 62,
          "end_offset": 63,
          "type": "<ALPHANUM>",
          "position": 14,
          "bytes": "[61]",
          "positionLength": 1
        }
      ]
    },
    "tokenfilters": [
      {
        "name": "lowercase",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "the",
            "start_offset": 42,
            "end_offset": 45,
            "type": "<ALPHANUM>",
            "position": 8,
            "bytes": "[74 68 65]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 55,
            "end_offset": 56,
            "type": "<ALPHANUM>",
            "position": 12,
            "bytes": "[61]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 62,
            "end_offset": 63,
            "type": "<ALPHANUM>",
            "position": 14,
            "bytes": "[61]",
            "positionLength": 1
          }
        ]
      },
      {
        "name": "my_filter",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          }
        ]
      }
    ]
  }
}
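
To have the custom analyzer actually applied at index time, reference it from a field mapping. The sketch below assumes a mapping type named my_type and a field named comment, both made up for illustration; on Elasticsearch versions without mapping types the type segment is dropped from the URL.

PUT newindex/_mapping/my_type

{
  "properties": {
    "comment": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}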
