
Solr 6.0.0 - SolrCloud Java example


I have installed Solr on my localhost.

I started the standard Solr cloud example with the embedded ZooKeeper.

Collection: gettingstarted, shards: 2, replication: 2

Indexing 500 records/docs took 115 seconds [localhost testing]. Why does it take this long to process just 500 records? Is there a way to bring this down to milliseconds/nanoseconds?

Note:

I tested the same thing against a Solr instance on a remote machine, with localhost indexing data onto the remote Solr [see the comments inside the Java code].

I started it as an ensemble with a single ZooKeeper.

2 Solr nodes, 1 standalone ZooKeeper ensemble

Collection: myCloudData, shards: 2, replication: 2
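
For the ensemble test, only the ZooKeeper connection string and the default collection change. A minimal sketch, using the host and collection name that appear commented out in the full code below:

// Sketch: point CloudSolrClient at the standalone ZooKeeper ensemble instead of
// the embedded one; host/port and collection name are taken from the commented-out
// lines in the full code below.
String zkHosts = "64.101.49.57:2181/solr";
CloudSolrClient solrCloudClient = new CloudSolrClient(zkHosts, true);
solrCloudClient.setDefaultCollection("myCloudData");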

SolrCloud Java code

package com.test.solr.basic;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrjPopulatorCloudClient2 {
    public static void main(String[] args) throws IOException, SolrServerException {

        //String zkHosts = "64.101.49.57:2181/solr";
        String zkHosts = "localhost:9983";
        CloudSolrClient solrCloudClient = new CloudSolrClient(zkHosts, true);
        //solrCloudClient.setDefaultCollection("myCloudData");
        solrCloudClient.setDefaultCollection("gettingstarted");
        /*
        // Thread Safe
        solrClient = new ConcurrentUpdateSolrClient(urlString, queueSize, threadCount);
        */
        // Deprecated - client
        //HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        long start = System.nanoTime();
        for (int i = 0; i < 500; ++i) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("cat", "book");
            doc.addField("id", "book-" + i);
            doc.addField("name", "The Legend of the Hobbit part " + i);
            solrCloudClient.add(doc);
            if (i % 100 == 0)
                System.out.println(" Every 100 records flush it");
            solrCloudClient.commit(); // periodically flush
        }
        solrCloudClient.commit();
        solrCloudClient.close();
        long end = System.nanoTime();
        long seconds = TimeUnit.NANOSECONDS.toSeconds(end - start);
        System.out.println(" All records are indexed, took " + seconds + " seconds");
    }
}

1 Answer

  • 3

    You are committing after every new document, which is unnecessary. It will run much faster if you change the if (i % 100 == 0) block to read:

    if (i % 100 == 0) {
        System.out.println(" Every 100 records flush it");
        solrCloudClient.commit(); // periodically flush
    }
    

    On my machine, this indexes your 500 records in 14 seconds. If I remove the commit() call from the for loop entirely, it finishes indexing in 7 seconds.
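
    For reference, a minimal sketch of the loop with the in-loop commit() removed entirely (the 7-second variant), relying only on the single commit() that already follows the loop:

    for (int i = 0; i < 500; ++i) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("cat", "book");
        doc.addField("id", "book-" + i);
        doc.addField("name", "The Legend of the Hobbit part " + i);
        solrCloudClient.add(doc); // no commit here; commit once after the loop
    }
    solrCloudClient.commit();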

    Alternatively, you can add the commitWithinMs parameter to the solrCloudClient.add() call:

    solrCloudClient.add(doc, 15000);
    

    This guarantees that your records are committed within 15 seconds, and it also improves your indexing speed.
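
    A related sketch, assuming the same client setup as above (not shown in the original answer): the documents can also be batched into a collection and sent in a single add() call together with commitWithinMs, which removes the per-document HTTP round trip:

    // Sketch (assumption): requires java.util.List and java.util.ArrayList imports.
    List<SolrInputDocument> docs = new ArrayList<>();
    for (int i = 0; i < 500; ++i) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("cat", "book");
        doc.addField("id", "book-" + i);
        doc.addField("name", "The Legend of the Hobbit part " + i);
        docs.add(doc);
    }
    solrCloudClient.add(docs, 15000); // one batched request, committed within 15 seconds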
