
Elasticsearch Study Notes (7): Elasticsearch Analysis

田忌赛码 · Published 2017/08/17 16:18

I. Analysis

1. Analysis (analysis)

  • First, tokenize a block of text into the individual terms (term) suitable for the inverted index
  • Then normalize those terms into a standard form, improving their "searchability" or recall (a quick demo follows this list)

Analysis is performed by analyzers (analyzer).
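Both steps are easy to observe with the _analyze API. A minimal sketch (the built-in standard analyzer is used here, so this runs on any cluster):

POST _analyze

{
  "analyzer": "standard",
  "text": "The QUICK Brown-Foxes!"
}

The response contains the terms the, quick, brown, and foxes: the text has been split on word boundaries with punctuation dropped (tokenization), then lowercased (normalization).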

2. Analyzers (analyzer)

  • Character filters (character filter): preprocess the string before it is tokenized (for example, stripping extra whitespace) so that it is "tidier"; an analyzer may contain zero or more character filters.
  • Tokenizer (tokenizer): breaks the string into individual terms (for example, splitting on spaces into single words); an analyzer must contain exactly one tokenizer.
  • Token filters (token filters): every term then passes through the token filters, which may modify, add, or remove tokens.

An analyzer is only applied when a field is a full-text field (full-text fields); when a field holds an exact value (exact value), the field is not analyzed.

  • Full-text fields: e.g. string/text
  • Exact values: e.g. numbers and dates
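For example, in the hypothetical mapping below only title goes through an analyzer, while created and views are indexed as exact values (the index and field names are made up; this is the typeless mapping syntax of Elasticsearch 7+, older versions nest properties under a mapping type):

PUT myindex

{
  "mappings": {
    "properties": {
      "title":   { "type": "text" },
      "created": { "type": "date" },
      "views":   { "type": "integer" }
    }
  }
}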

II. Custom Analyzers

1. char_filter (character filters)

  • html_strip (strips HTML elements). Parameters:
    • escaped_tags: an array of HTML tags that should not be stripped from the original text
  • mapping (replaces characters according to a custom mapping). Parameters:
    • mappings: an array of mappings, each entry in the form key => value
    • mappings_path: the absolute path, or path relative to the config directory, of a UTF-8 encoded file with one key => value mapping per line
  • pattern_replace (matches characters with a regular expression and replaces them with the given string; see the sketch after this list). Parameters:
    • pattern: the Java regular expression to match
    • replacement: the replacement string, which may reference capture groups as $1 to $9
    • flags: Java regular-expression flags, separated by |
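As a quick sketch of pattern_replace, the request below rewrites hyphens to underscores before tokenization; defining the char filter inline in _analyze keeps the example self-contained:

POST _analyze

{
  "tokenizer": "keyword",
  "char_filter": [
    {
      "type": "pattern_replace",
      "pattern": "-",
      "replacement": "_"
    }
  ],
  "text": "123-456-789"
}

Because the keyword tokenizer does not split, the response is the single token 123_456_789.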

2. tokenizer (tokenizers)

Only a few common tokenizers are listed here; see the official documentation for the rest.

  • standard (the standard tokenizer, used by default; it splits text on the word boundaries defined by the Unicode Text Segmentation algorithm and discards most punctuation, which makes it a good choice for most languages; a comparison sketch follows this list). Parameters:
    • max_token_length: the maximum token length; any token longer than this is split at that length. Defaults to 255
  • letter (splits whenever it meets a character that is not a letter). Parameters: none
  • lowercase (like letter, but additionally lowercases every token). Parameters: none
  • whitespace (splits on whitespace). Parameters: none
  • keyword (effectively no tokenization: whatever it receives, it outputs as a single token). Parameters:
    • buffer_size: the buffer size, 256 by default; the buffer grows by this amount until all the text has been consumed. Changing this setting is not recommended
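The differences are easiest to see by running the same text through different tokenizers, e.g. (a sketch):

POST _analyze

{
  "tokenizer": "whitespace",
  "text": "It's a good-day"
}

whitespace returns It's, a, and good-day untouched; swapping in letter also splits on the apostrophe and hyphen, giving It, s, a, good, day; standard keeps It's together but splits good-day into good and day.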

3. filter (token filters)

There are too many token filters to introduce one by one here; see the official documentation.
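As one illustration, _analyze can chain filters behind a tokenizer; here lowercase runs first and then stop, which removes English stopwords by default (a sketch):

POST _analyze

{
  "tokenizer": "standard",
  "filter": [ "lowercase", "stop" ],
  "text": "The Quick Fox"
}

Only quick and fox survive: The is first lowercased and then dropped as a stopword.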

4. A custom analyzer

PUT newindex

{
  "settings": {
    "analysis": {
      "char_filter": {
        "my_char_filter": {
          "type": "mapping",
          "mappings": [
            "&=>and",
            ":)=>happy",
            ":(=>sad"
          ]
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      },
      "filter": {
        "my_filter": {
          "type": "stop",
          "stopwords": [
            "the",
            "a"
          ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": [
            "html_strip",
            "my_char_filter"
          ],
          "tokenizer": "my_tokenizer",
          "filter": [
            "lowercase",
            "my_filter"
          ]
        }
      }
    }
  }
}

Then analyze a string with the custom analyzer:

POST newindex/_analyze

{
  "analyzer": "my_analyzer",
  "text": "<span>If you are :(, I will be :).</span> The people & a banana",
  "explain": true
}

The response shows each stage of the analysis:

{
  "detail": {
    "custom_analyzer": true,
    "charfilters": [
      {
        "name": "html_strip",
        "filtered_text": [
          "if you are :(, I will be :). the people & a banana"
        ]
      },
      {
        "name": "my_char_filter",
        "filtered_text": [
          "if you are sad, I will be happy. the people and a banana"
        ]
      }
    ],
    "tokenizer": {
      "name": "my_tokenizer",
      "tokens": [
        {
          "token": "if",
          "start_offset": 6,
          "end_offset": 8,
          "type": "<ALPHANUM>",
          "position": 0,
          "bytes": "[69 66]",
          "positionLength": 1
        },
        {
          "token": "you",
          "start_offset": 9,
          "end_offset": 12,
          "type": "<ALPHANUM>",
          "position": 1,
          "bytes": "[79 6f 75]",
          "positionLength": 1
        },
        {
          "token": "are",
          "start_offset": 13,
          "end_offset": 16,
          "type": "<ALPHANUM>",
          "position": 2,
          "bytes": "[61 72 65]",
          "positionLength": 1
        },
        {
          "token": "sad",
          "start_offset": 17,
          "end_offset": 19,
          "type": "<ALPHANUM>",
          "position": 3,
          "bytes": "[73 61 64]",
          "positionLength": 1
        },
        {
          "token": "I",
          "start_offset": 21,
          "end_offset": 22,
          "type": "<ALPHANUM>",
          "position": 4,
          "bytes": "[49]",
          "positionLength": 1
        },
        {
          "token": "will",
          "start_offset": 23,
          "end_offset": 27,
          "type": "<ALPHANUM>",
          "position": 5,
          "bytes": "[77 69 6c 6c]",
          "positionLength": 1
        },
        {
          "token": "be",
          "start_offset": 28,
          "end_offset": 30,
          "type": "<ALPHANUM>",
          "position": 6,
          "bytes": "[62 65]",
          "positionLength": 1
        },
        {
          "token": "happy",
          "start_offset": 31,
          "end_offset": 33,
          "type": "<ALPHANUM>",
          "position": 7,
          "bytes": "[68 61 70 70 79]",
          "positionLength": 1
        },
        {
          "token": "the",
          "start_offset": 42,
          "end_offset": 45,
          "type": "<ALPHANUM>",
          "position": 8,
          "bytes": "[74 68 65]",
          "positionLength": 1
        },
        {
          "token": "peopl",
          "start_offset": 46,
          "end_offset": 51,
          "type": "<ALPHANUM>",
          "position": 9,
          "bytes": "[70 65 6f 70 6c]",
          "positionLength": 1
        },
        {
          "token": "e",
          "start_offset": 51,
          "end_offset": 52,
          "type": "<ALPHANUM>",
          "position": 10,
          "bytes": "[65]",
          "positionLength": 1
        },
        {
          "token": "and",
          "start_offset": 53,
          "end_offset": 54,
          "type": "<ALPHANUM>",
          "position": 11,
          "bytes": "[61 6e 64]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 55,
          "end_offset": 56,
          "type": "<ALPHANUM>",
          "position": 12,
          "bytes": "[61]",
          "positionLength": 1
        },
        {
          "token": "banan",
          "start_offset": 57,
          "end_offset": 62,
          "type": "<ALPHANUM>",
          "position": 13,
          "bytes": "[62 61 6e 61 6e]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 62,
          "end_offset": 63,
          "type": "<ALPHANUM>",
          "position": 14,
          "bytes": "[61]",
          "positionLength": 1
        }
      ]
    },
    "tokenfilters": [
      {
        "name": "lowercase",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "the",
            "start_offset": 42,
            "end_offset": 45,
            "type": "<ALPHANUM>",
            "position": 8,
            "bytes": "[74 68 65]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 55,
            "end_offset": 56,
            "type": "<ALPHANUM>",
            "position": 12,
            "bytes": "[61]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 62,
            "end_offset": 63,
            "type": "<ALPHANUM>",
            "position": 14,
            "bytes": "[61]",
            "positionLength": 1
          }
        ]
      },
      {
        "name": "my_filter",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          }
        ]
      }
    ]
  }
}
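Defining my_analyzer in the index settings does not apply it anywhere by itself; it has to be referenced, typically from a field mapping. A minimal sketch (the field name content is made up; Elasticsearch 7+ typeless syntax):

PUT newindex/_mapping

{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}

Documents indexed into content will then pass through html_strip, my_char_filter, my_tokenizer, lowercase, and my_filter, in that order.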
