I'm playing with Cassandra in a test environment. I have 3 C* (version 3.11) nodes in the cluster, with a replication factor of 2 and 3 column families using SizeTieredCompactionStrategy. Each node is deployed on an AWS r4.large instance with two EBS gp2 disks - 100 GB for the commit log and 300 GB for data. Two of the column families are write-only: our application writes heavily to these tables (roughly 800-1000 inserts per second in total) and never reads them. The third column family is a counter table used in an "increment and get" pattern: about 200 increments and gets per second. Each individual counter is updated infrequently - around 30 times over the whole test.
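
For reference, this is roughly what the counter table and the "increment and get" pattern look like (a sketch, not the exact schema: the key columns are taken from the traced query below, and the counter column name "cnt" is a placeholder):

-- counter table; all non-counter columns have to be part of the primary key
CREATE TABLE geo.gpm (
    usr text,
    lat double,
    lon double,
    cnt counter,
    PRIMARY KEY ((usr, lat, lon))
);

-- "increment and get", ~200 times per second in total
UPDATE geo.gpm SET cnt = cnt + 1
    WHERE usr = '57469' AND lat = 55.617 AND lon = 37.509;
SELECT * FROM geo.gpm
    WHERE usr = '57469' AND lat = 55.617 AND lon = 37.509;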

Everything ran fine for 5 days, but then read operations per second on one of the nodes started to grow, reaching >1000 IOPS and hitting the disk IOPS limit:
[IOPS graph]

When it hits the limit, the response time of selects from the counter table jumps from double-digit milliseconds to seconds, and read I/O starts to grow on the second node as well.
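
To confirm that it is really the data volume (and not the commit log disk) that saturates, I watch the disks with iostat, e.g.:

# per-device extended stats every 5 seconds; the r/s column is read requests
# per second, which for the data disk should match the graph above
iostat -x -d 5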

Here is the output of nodetool cfstats:

Keyspace : geo
Read Count: 51744711
Read Latency: 12.840126352082631 ms.
Write Count: 201887146
Write Latency: 0.02199966441647553 ms.
Pending Flushes: 0
    Table: gpm
    SSTable count: 5
    Space used (live): 5127907030
    Space used (total): 5127907030
    Space used by snapshots (total): 0
    Off heap memory used (total): 102252166
    SSTable Compression Ratio: 0.41318162863841235
    Number of keys (estimate): 32583794
    Memtable cell count: 51001
    Memtable data size: 3072328
    Memtable off heap memory used: 0
    Memtable switch count: 176
    Local read count: 51530318
    Local read latency: NaN ms
    Local write count: 97595727
    Local write latency: NaN ms
    Pending flushes: 0
    Percent repaired: 0.0
    Bloom filter false positives: 0
    Bloom filter false ratio: 0.00000
    Bloom filter space used: 80436992
    Bloom filter off heap memory used: 80436952
    Index summary off heap memory used: 21046326
    Compression metadata off heap memory used: 768888
    Compacted partition minimum bytes: 87
    Compacted partition maximum bytes: 149
    Compacted partition mean bytes: 112
    Average live cells per slice (last five minutes): NaN
    Maximum live cells per slice (last five minutes): 0
    Average tombstones per slice (last five minutes): NaN
    Maximum tombstones per slice (last five minutes): 0
    Dropped Mutations: 0
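
If it helps, I can also collect the read-path histograms and the compaction backlog; a sketch of the commands (output omitted):

# per-read SSTable count and read latency percentiles for the counter table
nodetool tablehistograms geo gpm

# whether compaction is keeping up with the write load
nodetool compactionstats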

I can't find any explanation for why this happens. I ran the same test twice (starting from an empty database), and both times it failed in the same way 5-6 days after the start. Here is the trace of a typical query that selects a counter (the only type of select executed during the test):

Execute CQL3 query | 2017-08-28 16:06:44.110000 | 172.31.16.220 |              0 | 172.31.16.220
                                            Parsing select * from geo.gpm where usr = '57469' and lat = 55.617 and lon = 37.509; [Native-Transport-Requests-1] | 2017-08-28 16:06:44.110000 | 172.31.16.220 |            187 | 172.31.16.220
                                                                                                             Preparing statement [Native-Transport-Requests-1] | 2017-08-28 16:06:44.110000 | 172.31.16.220 |            411 | 172.31.16.220
                                                                                                reading data from /172.31.32.220 [Native-Transport-Requests-1] | 2017-08-28 16:06:44.110000 | 172.31.16.220 |            736 | 172.31.16.220
   Sending READ message to cassandra-test-skywalker.aws.local/172.31.32.220 [MessagingService-Outgoing-cassandra-test-skywalker.aws.local/172.31.32.220-Small] | 2017-08-28 16:06:44.111000 | 172.31.16.220 |           1184 | 172.31.16.220
                                                                          READ message received from /172.31.16.220 [MessagingService-Incoming-/172.31.16.220] | 2017-08-28 16:06:44.111000 | 172.31.32.220 |             35 | 172.31.16.220
                                                                                                         Executing single-partition query on gpm [ReadStage-3] | 2017-08-28 16:06:44.117000 | 172.31.32.220 |           5675 | 172.31.16.220
                                                                                                                    Acquiring sstable references [ReadStage-3] | 2017-08-28 16:06:44.117000 | 172.31.32.220 |           5745 | 172.31.16.220
                                                                       Skipped 0/5 non-slice-intersecting sstables, included 0 due to tombstones [ReadStage-3] | 2017-08-28 16:06:44.117000 | 172.31.32.220 |           5812 | 172.31.16.220
                                                                                                                    Key cache hit for sstable 85 [ReadStage-3] | 2017-08-28 16:06:44.117000 | 172.31.32.220 |           5906 | 172.31.16.220
                                                                                                                   Key cache hit for sstable 170 [ReadStage-3] | 2017-08-28 16:06:44.117000 | 172.31.32.220 |           5963 | 172.31.16.220
                                                                                                                   Key cache hit for sstable 191 [ReadStage-3] | 2017-08-28 16:06:44.117000 | 172.31.32.220 |           5996 | 172.31.16.220
                                                                                                                   Key cache hit for sstable 212 [ReadStage-3] | 2017-08-28 16:06:44.117001 | 172.31.32.220 |           6028 | 172.31.16.220
                                                                                                        Bloom filter allows skipping sstable 217 [ReadStage-3] | 2017-08-28 16:06:44.118000 | 172.31.32.220 |           6069 | 172.31.16.220
                                                                                                       Merged data from memtables and 4 sstables [ReadStage-3] | 2017-08-28 16:06:44.118000 | 172.31.32.220 |           6151 | 172.31.16.220
                                                                                                               Read 1 live and 0 tombstone cells [ReadStage-3] | 2017-08-28 16:06:44.118000 | 172.31.32.220 |           6203 | 172.31.16.220
                                                                                                            Enqueuing response to /172.31.16.220 [ReadStage-3] | 2017-08-28 16:06:44.118001 | 172.31.32.220 |           6228 | 172.31.16.220
                                                              REQUEST_RESPONSE message received from /172.31.32.220 [MessagingService-Incoming-/172.31.32.220] | 2017-08-28 16:06:44.119000 | 172.31.16.220 |           9891 | 172.31.16.220
Sending REQUEST_RESPONSE message to cassandra-test-kenobi.aws.local/172.31.16.220 [MessagingService-Outgoing-cassandra-test-kenobi.aws.local/172.31.16.220-Small] | 2017-08-28 16:06:44.119000 | 172.31.32.220 |           7466 | 172.31.16.220
                                                                                              Processing response from /172.31.32.220 [RequestResponseStage-3] | 2017-08-28 16:06:44.120000 | 172.31.16.220 |          10013 | 172.31.16.220
                                                                                                                                              Request complete | 2017-08-28 16:06:44.124753 | 172.31.16.220 |          14753 | 172.31.16.220
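
The trace above was captured in cqlsh with tracing enabled, roughly like this:

cqlsh> TRACING ON;
cqlsh> select * from geo.gpm where usr = '57469' and lat = 55.617 and lon = 37.509;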

Any help would be greatly appreciated. Are there any profiling tools that could help me figure out what is going wrong?