
How can I get analyzed word counts with Elasticsearch?

0

I want to count the occurrences of each analyzed token.

First, I tried the following:

mapping

{
  "docs": {
    "mappings": {
      "doc": {
        "dynamic": "false",
        "properties": {
          "text": {
            "type": "string",
            "analyzer": "kuromoji"
          }
        }
      }
    }
  }
}

query

{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "word-count": {
      "terms": {
        "field": "text",
        "size": "1000"
      }
    }
  },
  "size": 0
}

After inserting data, I queried the index and got this result:

{
  "took": 41,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 10000,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "word-count": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 36634,
      "buckets": [
        {
          "key": "はい",
          "doc_count": 4734
        },
        {
          "key": "いただく",
          "doc_count": 2440
        },
        ...
      ]
    }
  }
}

Unfortunately, the terms aggregation only provides a doc_count, which is not a word count. So I thought of a way to get an approximate word count using _index['text']['TERM'].df() and _index['text']['TERM'].ttf().

Perhaps the approximate word count is given by the following equation:

WordCount = doc_count['TERM'] / _index['text']['TERM'].df() * _index['text']['TERM'].ttf()

where 'TERM' is the key of a bucket. I tried to write a scripted metric aggregation, but I don't know how to get the key of each bucket.

{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "doc-count": {
      "terms": {
        "field": "text",
        "size": "1000"
      },
      "aggs": {
        "word-count": {
          "scripted_metric": {
             // ???
          }
        }
      }
    }
  },
  "size": 0
}

How can I get the key of the bucket? If that is not possible, how else can I get analyzed word counts?

2 Answers

  • 0

    You could try the token count data type. Just add a sub-field of that type to the text field:

    {
      "docs": {
        "mappings": {
          "doc": {
            "dynamic": "false",
            "properties": {
              "text": {
                "type": "string",
                "analyzer": "kuromoji",
                "fields": {
                  "nb_tokens": {
                    "type": "token_count",
                    "analyzer": "kuromoji"
                  }
                }
              }
            }
          }
        }
      }
    }
    

    Then you can use text.nb_tokens in your aggregations.
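
    For instance, a sum aggregation over that sub-field would give the total number of analyzed tokens across all matching documents. This is only a sketch: it assumes the documents have been re-indexed with the new mapping so that text.nb_tokens is populated.

    {
      "query": {
        "match_all": {}
      },
      "aggs": {
        "total-word-count": {
          "sum": {
            "field": "text.nb_tokens"
          }
        }
      },
      "size": 0
    }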

  • 0

    Could you try dynamic scripting, although it will impact performance:

    {
      "query": {
        "match_all": {}
      },
      "aggs": {
        "word-count": {
          "terms": {
            "script": "_source.text",
            "size": "1000"
          }
        }
      },
      "size": 0
    }
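
    Note that dynamic scripting is disabled by default in older Elasticsearch releases, so a terms aggregation with an inline script will be rejected until it is enabled. A sketch of the relevant setting in elasticsearch.yml, assuming the 1.x line (the exact key name is version-dependent; later versions use script.inline instead):

    # elasticsearch.yml (ES 1.x) -- allow inline/dynamic scripts
    script.disable_dynamic: false

    The node must be restarted for the setting to take effect.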
    
