Cassandra replica(s) failed when trying to read from keyspace


I have 3 nodes in my Cassandra (3.0.2) cluster. My consistency level is ONE. In the beginning, all of my keyspaces had a replication factor equal to 1. I changed it with an alter on the table and ran "nodetool repair" on all nodes. Now, when I try to select some data (not from every keyspace), I get something like this (on "select * from keyspace.table"):

    Traceback (most recent call last):
      File "/usr/bin/cqlsh.py", line 1258, in perform_simple_statement
        result = future.result()
      File "cassandra/cluster.py", line 3781, in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:73073)
        raise self._final_exception
    ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

In "/var/log/cassandra/system.log" I get:

    WARN  [SharedPool-Worker-2] 2017-04-07 12:46:20,036 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-2,5,main]: {}
    java.lang.AssertionError: null
        at org.apache.cassandra.db.columniterator.AbstractSSTableIterator$IndexState.updateBlock(AbstractSSTableIterator.java:463) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:268) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:158) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:352) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:426) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:286) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:298) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1721) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2375) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_121]
        at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-3.0.2.jar:3.0.2]
        at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.0.2.jar:3.0.2]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
    DEBUG [SharedPool-Worker-1] 2017-04-07 12:46:20,037 ReadCallback.java:126 - Failed; received 0 of 1 responses

I also get:

    DEBUG [SharedPool-Worker-1] 2017-04-07 13:20:30,002 ReadCallback.java:126 - Timed out; received 0 of 1 responses

I checked that there is connectivity between the nodes on ports 9042 and 7000 (a sketch of the check is below). I changed options in "/etc/cassandra/cassandra.yaml" such as "read_request_timeout_in_ms", "range_request_timeout_in_ms", "write_request_timeout_in_ms" and "truncate_request_timeout_in_ms". I also changed the file "~/.cassandra/cqlshrc", setting the option "client_timeout = 3600". Additionally, when I execute "select * from keyspace.table where column1 = 'value' and column2 = value", I get:

    ReadTimeout: Error from server: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
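
For reference, the connectivity check was along these lines (using netcat here is my own illustration; the IPs are the ones from "nodetool status" further down):

    # from each node, against each of the other nodes
    nc -zv 192.168.1.14 7000    # inter-node (storage) port
    nc -zv 192.168.1.14 9042    # CQL native transport port

and the cqlshrc change looked roughly like this (assuming the "client_timeout" option sits in the "[connection]" section, where cqlsh of this era reads it; the value is in seconds):

    ; ~/.cassandra/cqlshrc
    [connection]
    client_timeout = 3600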

Any ideas?

3 Answers

  • 0

    This is more or less a comment, but since there is a lot to say it won't fit into one.

    It would be really helpful if you included the replication factor you changed the value to. I'm just going to assume it's 3, because that's pretty standard; then again, since you only have 3 nodes, clusters like this sometimes set RF to 2. You also mention you updated the replication factor on the table. As far as I know, the replication factor is set at the keyspace level (a sketch of the statement is at the end of this answer).

    It would also be very useful if you posted the description of a keyspace where the errors occur.

    Take into account that "select * from something" can get very intensive on the cluster, especially if you have a lot of data. If you run this query in cqlsh you probably only get the first 10 000 rows back anyway; then again, you only mention cqlsh and no application code, so I'm just noting this.

    Could you provide the output of "nodetool status", just to make sure you aren't actually running the query against some downed nodes? That's what the first error looks like.

    With the second error, the stack trace you posted looks as if you were missing some SSTables on disk. Is it possible that some other process manipulated the SSTables somehow?

    You also changed a lot of properties in cassandra.yaml; basically you shortened the expected response times by almost 50%, and I would guess that's also why the nodes don't have time to respond... a count over a whole table usually takes more than 3.6 seconds (the stock defaults are listed at the end of this answer).

    The reasoning behind why those values were changed is also missing.
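
    As an aside, here is a minimal sketch of moving a keyspace to RF 3 plus the repair (my own illustration; "engine" is the keyspace described in the next answer):

    -- in cqlsh: replication is a property of the keyspace, not of a table
    ALTER KEYSPACE engine
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

    # afterwards, on every node in the cluster:
    nodetool repair engine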
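
    And for comparison, these are the stock Cassandra 3.0 defaults for the timeouts mentioned above (taken from an unmodified cassandra.yaml; values are in milliseconds):

    read_request_timeout_in_ms: 5000
    range_request_timeout_in_ms: 10000
    write_request_timeout_in_ms: 2000
    truncate_request_timeout_in_ms: 60000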

  • 0

    Marko Švaljek, yes, I changed the replication factor from 1 to 3 (because I have 3 nodes in my cluster). You are right: you change the replication factor of the keyspace, and that is what I did. Here you have the description of a keyspace where I usually get the errors (though of course it happens with other keyspaces too):

    soi@cqlsh> desc keyspace engine;
    
    CREATE KEYSPACE engine WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true;
    
    CREATE TABLE engine.messages (
        persistence_id text,
        partition_nr bigint,
        sequence_nr bigint,
        timestamp timeuuid,
        timebucket text,
        message blob,
        tag1 text,
        tag2 text,
        tag3 text,
        used boolean static,
        PRIMARY KEY ((persistence_id, partition_nr), sequence_nr, timestamp, timebucket)
    ) WITH CLUSTERING ORDER BY (sequence_nr ASC, timestamp ASC, timebucket ASC)
        AND bloom_filter_fp_chance = 0.01
        AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
        AND comment = ''
        AND compaction = {'bucket_high': '1.5', 'bucket_low': '0.5', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'enabled': 'true', 'max_threshold': '32', 'min_sstable_size': '50', 'min_threshold': '4', 'tombstone_compaction_interval': '86400', 'tombstone_threshold': '0.2', 'unchecked_tombstone_compaction': 'false'}
        AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
        AND crc_check_chance = 1.0
        AND dclocal_read_repair_chance = 0.1
        AND default_time_to_live = 0
        AND gc_grace_seconds = 864000
        AND max_index_interval = 2048
        AND memtable_flush_period_in_ms = 0
        AND min_index_interval = 128
        AND read_repair_chance = 0.0
        AND speculative_retry = '99PERCENTILE';
    
    CREATE MATERIALIZED VIEW engine.eventsbytag1 AS
        SELECT tag1, timebucket, timestamp, persistence_id, partition_nr, sequence_nr, message
        FROM engine.messages
        WHERE persistence_id IS NOT NULL AND partition_nr IS NOT NULL AND sequence_nr IS NOT NULL AND tag1 IS NOT NULL AND timestamp IS NOT NULL AND timebucket IS NOT NULL
        PRIMARY KEY ((tag1, timebucket), timestamp, persistence_id, partition_nr, sequence_nr)
        WITH CLUSTERING ORDER BY (timestamp ASC, persistence_id ASC, partition_nr ASC, sequence_nr ASC)
        AND bloom_filter_fp_chance = 0.01
        AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
        AND comment = ''
        AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
        AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
        AND crc_check_chance = 1.0
        AND dclocal_read_repair_chance = 0.1
        AND default_time_to_live = 0
        AND gc_grace_seconds = 864000
        AND max_index_interval = 2048
        AND memtable_flush_period_in_ms = 0
        AND min_index_interval = 128
        AND read_repair_chance = 0.0
        AND speculative_retry = '99PERCENTILE';
    
    CREATE TABLE engine.config (
        property text PRIMARY KEY,
        value text
    ) WITH bloom_filter_fp_chance = 0.01
        AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
        AND comment = ''
        AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
        AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
        AND crc_check_chance = 1.0
        AND dclocal_read_repair_chance = 0.1
        AND default_time_to_live = 0
        AND gc_grace_seconds = 864000
        AND max_index_interval = 2048
        AND memtable_flush_period_in_ms = 0
        AND min_index_interval = 128
        AND read_repair_chance = 0.0
        AND speculative_retry = '99PERCENTILE';
    
    CREATE TABLE engine.metadata (
        persistence_id text PRIMARY KEY,
        deleted_to bigint,
        properties map<text, text>
    ) WITH bloom_filter_fp_chance = 0.01
        AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
        AND comment = ''
        AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
        AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
        AND crc_check_chance = 1.0
        AND dclocal_read_repair_chance = 0.1
        AND default_time_to_live = 0
        AND gc_grace_seconds = 864000
        AND max_index_interval = 2048
        AND memtable_flush_period_in_ms = 0
        AND min_index_interval = 128
        AND read_repair_chance = 0.0
        AND speculative_retry = '99PERCENTILE';
    

    Usually I get error code no. 1200 or 1300, which you can see in the first post. Here you have my "nodetool status":

    ubuntu@cassandra-db1:~$ nodetool status
    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address       Load       Tokens       Owns    Host ID                               Rack
    UN  192.168.1.13  3.94 MB    256          ?       8ebcc3fe-9869-44c5-b7a5-e4f0f5a0beb1  rack1
    UN  192.168.1.14  4.26 MB    256          ?       977831cb-98fe-4170-ab15-2b4447559003  rack1
    UN  192.168.1.15  4.94 MB    256          ?       7515a967-cbdc-4d89-989b-c0a2f124173f  rack1
    
    Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
    

    I don't think some other process could have manipulated the data on disk. I will add that I have a similar cluster with even more data where I don't have problems like this.
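
    For what it's worth, one way to check the data files on disk for corruption would be something like this ("nodetool verify" checks SSTable checksums; the keyspace/table pair is just an example taken from the schema above):

    nodetool verify engine messages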

  • 0

    Fixed! I upgraded Cassandra from version 3.0.2 to 3.0.9 and the problem went away.
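
    For anyone repeating this, a rolling upgrade looks roughly like the following on each node, one node at a time (a sketch, assuming the Apache Debian/Ubuntu packages; not verbatim from my shell history):

    nodetool drain                         # flush memtables, stop accepting traffic
    sudo service cassandra stop
    sudo apt-get install cassandra=3.0.9   # version pin assumes the Apache repo
    sudo service cassandra start
    nodetool upgradesstables               # rewrite sstables under the new version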
