
Learning Elasticsearch (7): Elasticsearch Analysis

howsweet · Published 2017/08/17 16:18

I. Analysis

1. Analysis

  • First, tokenize a block of text into the individual terms suited to an inverted index.
  • Then normalize those terms into a standard form to improve their "searchability", or recall.

Analysis is performed by an analyzer.

2. Analyzers

  • Character filters: preprocess the raw string (for example, removing extra whitespace) so it is "tidier" before tokenization. An analyzer may contain zero or more character filters.
  • Tokenizer: splits the string into individual terms (for example, breaking on spaces into words). An analyzer must contain exactly one tokenizer.
  • Token filters: every term then passes through the token filters, which may modify, add, or remove tokens.

Analyzers are only applied to full-text fields; a field that holds an exact value is not analyzed.

  • Full-text fields: e.g. string, text
  • Exact values: e.g. numbers, dates
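The analysis chain described above — character filters, then a tokenizer, then token filters — can be sketched in a few lines of Python. This is only a conceptual illustration, not how Elasticsearch implements it; the regex-based tag stripping and `\w+` splitting are deliberate simplifications:

```python
import re

def analyze(text):
    # 1) character filter: tidy the raw string (here: strip HTML-like tags)
    text = re.sub(r"<[^>]+>", "", text)
    # 2) tokenizer: split the string into individual terms
    tokens = re.findall(r"\w+", text)
    # 3) token filters: normalize each term (here: lowercase)
    return [t.lower() for t in tokens]
```

For example, `analyze("<b>Quick Foxes!</b>")` yields `["quick", "foxes"]`: the tags are removed by the character filter, the tokenizer splits and drops punctuation, and the token filter lowercases.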

II. Custom Analyzers

1. char_filter (character filters)

  • html_strip (strips HTML tags). Parameters:
    • escaped_tags: an array of HTML tags that should not be stripped from the original text
  • mapping (custom mapping replacement). Parameters:
    • mappings: an array of mappings, each entry in the form key => value
    • mappings_path: an absolute path, or a path relative to the config directory, to a UTF-8 encoded file containing one key => value mapping per line
  • pattern_replace (matches characters with a regular expression and replaces them with the specified string). Parameters:
    • pattern: the regular expression to match
    • replacement: the replacement string
    • flags: Java regular-expression flags
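The semantics of the mapping char filter can be approximated in Python (a sketch only; the real filter matches keys over the character stream rather than doing sequential whole-string replacement):

```python
def mapping_char_filter(text, mappings):
    # Apply each "key => value" rule to the raw character stream,
    # before any tokenization happens.
    for rule in mappings:
        key, value = rule.split("=>")
        text = text.replace(key, value)
    return text

print(mapping_char_filter("tom & jerry :)", ["&=>and", ":)=>happy"]))
# → tom and jerry happy
```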

2. tokenizer (tokenizers)

Only the commonly used tokenizers are listed here; see the official documentation for the rest.

  • standard (the standard tokenizer, used by default; splits text on the word boundaries defined by the Unicode Text Segmentation algorithm and removes most punctuation, making it a good choice for most languages). Parameters:
    • max_token_length: the maximum token length; a token longer than this is split at max_token_length intervals. Defaults to 255
  • letter (splits whenever it meets a character that is not a letter). Parameters: none
  • lowercase (like letter, but also lowercases every token). Parameters: none
  • whitespace (splits on whitespace). Parameters: none
  • keyword (effectively no tokenization: outputs whatever it receives as a single term). Parameters:
    • buffer_size: the term buffer size, 256 by default; the buffer grows in increments of this size until all the text is consumed. Changing this setting is not recommended
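The effect of max_token_length is easy to see with a simplified stand-in for the standard tokenizer (splitting on `\w+` here instead of the full Unicode word-boundary rules):

```python
import re

def standard_like_tokenize(text, max_token_length=255):
    # Split on runs of word characters, then cut any token longer
    # than max_token_length into chunks of at most that size.
    tokens = []
    for word in re.findall(r"\w+", text):
        for i in range(0, len(word), max_token_length):
            tokens.append(word[i:i + max_token_length])
    return tokens

print(standard_like_tokenize("a banana", max_token_length=5))
# → ['a', 'banan', 'a']
```

With max_token_length=5, "banana" is split into "banan" and "a" — exactly what the _analyze output later in this post shows.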

3. filter (token filters)

There are far too many token filters to describe one by one here; see the official documentation.
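As one concrete example, the stop filter used in the next section simply drops the configured stopwords from the token stream — roughly:

```python
def stop_filter(tokens, stopwords=("the", "a")):
    # A token filter receives the tokenizer's output and may modify,
    # add, or remove tokens; `stop` removes the listed stopwords.
    return [t for t in tokens if t not in stopwords]
```

For example, `stop_filter(["the", "people", "and", "a", "banana"])` returns `["people", "and", "banana"]`.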

4. Building a custom analyzer

PUT newindex

{
  "settings": {
    "analysis": {
      "char_filter": {
        "my_char_filter": {
          "type": "mapping",
          "mappings": [
            "&=>and",
            ":)=>happy",
            ":(=>sad"
          ]
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      },
      "filter": {
        "my_filter": {
          "type": "stop",
          "stopwords": [
            "the",
            "a"
          ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": [
            "html_strip",
            "my_char_filter"
          ],
          "tokenizer": "my_tokenizer",
          "filter": [
            "lowercase",
            "my_filter"
          ]
        }
      }
    }
  }
}
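Registering the analyzer in the index settings does not apply it to anything by itself: it has to be referenced from a field mapping, or passed explicitly to _analyze as below. A hypothetical mapping (the type name my_type and field name content are illustrative; in the 5.x era mappings are nested under a document type):

```json
PUT newindex/_mapping/my_type

{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}
```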

Then analyze a string with the custom analyzer:

POST newindex/_analyze

{
  "analyzer": "my_analyzer",
  "text": "<span>If you are :(, I will be :).</span> The people & a banana",
  "explain": true
}

The explain output shows each stage of the analysis:

{
  "detail": {
    "custom_analyzer": true,
    "charfilters": [
      {
        "name": "html_strip",
        "filtered_text": [
          "If you are :(, I will be :). The people & a banana"
        ]
      },
      {
        "name": "my_char_filter",
        "filtered_text": [
          "If you are sad, I will be happy. The people and a banana"
        ]
      }
    ],
    "tokenizer": {
      "name": "my_tokenizer",
      "tokens": [
        {
          "token": "If",
          "start_offset": 6,
          "end_offset": 8,
          "type": "<ALPHANUM>",
          "position": 0,
          "bytes": "[69 66]",
          "positionLength": 1
        },
        {
          "token": "you",
          "start_offset": 9,
          "end_offset": 12,
          "type": "<ALPHANUM>",
          "position": 1,
          "bytes": "[79 6f 75]",
          "positionLength": 1
        },
        {
          "token": "are",
          "start_offset": 13,
          "end_offset": 16,
          "type": "<ALPHANUM>",
          "position": 2,
          "bytes": "[61 72 65]",
          "positionLength": 1
        },
        {
          "token": "sad",
          "start_offset": 17,
          "end_offset": 19,
          "type": "<ALPHANUM>",
          "position": 3,
          "bytes": "[73 61 64]",
          "positionLength": 1
        },
        {
          "token": "I",
          "start_offset": 21,
          "end_offset": 22,
          "type": "<ALPHANUM>",
          "position": 4,
          "bytes": "[49]",
          "positionLength": 1
        },
        {
          "token": "will",
          "start_offset": 23,
          "end_offset": 27,
          "type": "<ALPHANUM>",
          "position": 5,
          "bytes": "[77 69 6c 6c]",
          "positionLength": 1
        },
        {
          "token": "be",
          "start_offset": 28,
          "end_offset": 30,
          "type": "<ALPHANUM>",
          "position": 6,
          "bytes": "[62 65]",
          "positionLength": 1
        },
        {
          "token": "happy",
          "start_offset": 31,
          "end_offset": 33,
          "type": "<ALPHANUM>",
          "position": 7,
          "bytes": "[68 61 70 70 79]",
          "positionLength": 1
        },
        {
          "token": "The",
          "start_offset": 42,
          "end_offset": 45,
          "type": "<ALPHANUM>",
          "position": 8,
          "bytes": "[74 68 65]",
          "positionLength": 1
        },
        {
          "token": "peopl",
          "start_offset": 46,
          "end_offset": 51,
          "type": "<ALPHANUM>",
          "position": 9,
          "bytes": "[70 65 6f 70 6c]",
          "positionLength": 1
        },
        {
          "token": "e",
          "start_offset": 51,
          "end_offset": 52,
          "type": "<ALPHANUM>",
          "position": 10,
          "bytes": "[65]",
          "positionLength": 1
        },
        {
          "token": "and",
          "start_offset": 53,
          "end_offset": 54,
          "type": "<ALPHANUM>",
          "position": 11,
          "bytes": "[61 6e 64]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 55,
          "end_offset": 56,
          "type": "<ALPHANUM>",
          "position": 12,
          "bytes": "[61]",
          "positionLength": 1
        },
        {
          "token": "banan",
          "start_offset": 57,
          "end_offset": 62,
          "type": "<ALPHANUM>",
          "position": 13,
          "bytes": "[62 61 6e 61 6e]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 62,
          "end_offset": 63,
          "type": "<ALPHANUM>",
          "position": 14,
          "bytes": "[61]",
          "positionLength": 1
        }
      ]
    },
    "tokenfilters": [
      {
        "name": "lowercase",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "the",
            "start_offset": 42,
            "end_offset": 45,
            "type": "<ALPHANUM>",
            "position": 8,
            "bytes": "[74 68 65]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 55,
            "end_offset": 56,
            "type": "<ALPHANUM>",
            "position": 12,
            "bytes": "[61]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 62,
            "end_offset": 63,
            "type": "<ALPHANUM>",
            "position": 14,
            "bytes": "[61]",
            "positionLength": 1
          }
        ]
      },
      {
        "name": "my_filter",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          }
        ]
      }
    ]
  }
}
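The whole chain can be reproduced offline with a small Python approximation (a sketch: regex-based html_strip, sequential mapping replacement, `\w+` splitting with the 5-character cap, lowercasing, then the stop filter). It produces the same 12 final tokens as the _analyze response above:

```python
import re

def my_analyzer_sim(text):
    # char_filter: html_strip (simplified regex tag removal)
    text = re.sub(r"<[^>]+>", "", text)
    # char_filter: my_char_filter mappings
    for src, dst in [("&", "and"), (":)", "happy"), (":(", "sad")]:
        text = text.replace(src, dst)
    # tokenizer: my_tokenizer (standard-like split, max_token_length=5)
    tokens = []
    for word in re.findall(r"\w+", text):
        for i in range(0, len(word), 5):
            tokens.append(word[i:i + 5])
    # filter: lowercase
    tokens = [t.lower() for t in tokens]
    # filter: my_filter (stop words "the" and "a")
    return [t for t in tokens if t not in {"the", "a"}]

print(my_analyzer_sim(
    "<span>If you are :(, I will be :).</span> The people & a banana"))
# → ['if', 'you', 'are', 'sad', 'i', 'will', 'be', 'happy',
#    'peopl', 'e', 'and', 'banan']
```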

© Copyright belongs to the author.
