
Stackdriver Monitoring floods syslog with collectd "uc_update: Value too old" messages


Let me start by saying I am not a DevOps engineer, so my Linux administration experience is limited.

I basically followed this guide (https://cloud.google.com/monitoring/agent/install-agent) and installed the agent on my Google Compute Engine instance.

Everything works and I am getting the new metrics in my Stackdriver account, but my syslog is being flooded with these messages:

instance-name collectd[26092]: uc_update: Value too old: name = <RandomNumber>/processes-all/ps_vm; value time = 1517218302.393; last cache update = 1517218302.393;

So I looked in /opt/stackdriver/collectd/etc/collectd.conf and found this:

Hostname "RandomNumber"
Interval 60

This makes sense, since we do not use collectd for anything other than Stackdriver. So I discovered that the ID in the messages causing the problem is the same as the Stackdriver hostname.
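As a quick sanity check (my own sketch, not part of the original question), the numeric ID in the log lines can be compared against the configured Hostname. Sample strings taken from the question stand in for the real config file and syslog line on the instance:

```shell
# Hypothetical check: the numeric ID in the uc_update messages should match
# the Hostname in /opt/stackdriver/collectd/etc/collectd.conf.
# Sample strings from the question stand in for the real file and log here.
conf='Hostname "5281367784029328076"
Interval 60'
logline='collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_vm;'

host=$(printf '%s\n' "$conf" | sed -n 's/^Hostname "\([^"]*\)".*/\1/p')
metric_id=$(printf '%s\n' "$logline" | sed -n 's/.*name = \([0-9]*\)\/.*/\1/p')

echo "hostname:  $host"
echo "metric id: $metric_id"
```

On the real instance you would read the config file and syslog directly instead of the sample strings.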

Next I checked https://collectd.org/faq.shtml.

I ran the following command against both /etc/collectd.conf and /opt/stackdriver/collectd/etc/collectd.conf:

grep -i LoadPlugin /etc/collectd.conf | egrep -v '^[[:space:]]*#' | sort | uniq -c
  1 LoadPlugin cpu
  1 LoadPlugin interface
  1 LoadPlugin load
  1 LoadPlugin memory
  1 LoadPlugin network
  1 LoadPlugin syslog
grep -i LoadPlugin /opt/stackdriver/collectd/etc/collectd.conf | egrep -v '^[[:space:]]*#' | sort | uniq -c
  1 LoadPlugin "match_regex"
  1 LoadPlugin aggregation
  1 LoadPlugin cpu
  1 LoadPlugin df
  1 LoadPlugin disk
  1 LoadPlugin exec
  1 LoadPlugin interface
  1 LoadPlugin load
  1 LoadPlugin match_regex
  1 LoadPlugin match_throttle_metadata_keys
  1 LoadPlugin memory
  1 LoadPlugin processes
  1 LoadPlugin stackdriver_agent
  1 LoadPlugin swap
  1 LoadPlugin syslog
  1 LoadPlugin tcpconns
  1 LoadPlugin write_gcm

As you can see, there are no duplicate LoadPlugin entries.

I have run out of ideas. Can anyone help?

Thanks.

P.S. We are running Debian Stretch with lighttpd and PHP.

P.P.S. For more information, here is a more detailed log excerpt containing the errors, so you can see the timestamps:

Jan 30 10:47:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_cputime; value time = 1517309269.877; last cache update = 1517309269.877;
Jan 30 10:48:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_cputime; value time = 1517309329.884; last cache update = 1517309329.884;
Jan 30 10:50:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_rss; value time = 1517309449.881; last cache update = 1517309449.881;
Jan 30 10:50:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/io_octets; value time = 1517309449.881; last cache update = 1517309449.884;
Jan 30 10:52:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_vm; value time = 1517309569.889; last cache update = 1517309569.889;
Jan 30 10:52:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/disk_octets; value time = 1517309569.890; last cache update = 1517309569.890;
Jan 30 10:52:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/disk_octets; value time = 1517309569.890; last cache update = 1517309569.894;

Here is the output of the ps command:

ps -e
 PID TTY          TIME CMD
 1 ?        00:01:28 systemd
 2 ?        00:00:00 kthreadd
 3 ?        00:00:24 ksoftirqd/0
 5 ?        00:00:00 kworker/0:0H
 7 ?        00:41:17 rcu_sched
 8 ?        00:00:00 rcu_bh
 9 ?        00:00:02 migration/0
10 ?        00:00:00 lru-add-drain
11 ?        00:00:03 watchdog/0
12 ?        00:00:00 cpuhp/0
13 ?        00:00:00 cpuhp/1
14 ?        00:00:03 watchdog/1
15 ?        00:00:01 migration/1
16 ?        00:11:58 ksoftirqd/1
18 ?        00:00:00 kworker/1:0H
19 ?        00:00:00 cpuhp/2
20 ?        00:00:03 watchdog/2
21 ?        00:00:01 migration/2
22 ?        00:03:16 ksoftirqd/2
24 ?        00:00:00 kworker/2:0H
25 ?        00:00:00 cpuhp/3
26 ?        00:00:03 watchdog/3
27 ?        00:00:02 migration/3
28 ?        00:03:11 ksoftirqd/3
30 ?        00:00:00 kworker/3:0H
31 ?        00:00:00 kdevtmpfs
32 ?        00:00:00 netns
33 ?        00:00:00 khungtaskd
34 ?        00:00:00 oom_reaper
35 ?        00:00:00 writeback
36 ?        00:00:00 kcompactd0
38 ?        00:00:00 ksmd
39 ?        00:01:02 khugepaged
40 ?        00:00:00 crypto
41 ?        00:00:00 kintegrityd
42 ?        00:00:00 bioset
43 ?        00:00:00 kblockd
44 ?        00:00:00 devfreq_wq
45 ?        00:00:00 watchdogd
49 ?        00:01:16 kswapd0
50 ?        00:00:00 vmstat
62 ?        00:00:00 kthrotld
63 ?        00:00:00 ipv6_addrconf
130 ?        00:00:00 scsi_eh_0
131 ?        00:00:00 scsi_tmf_0
133 ?        00:00:00 bioset
416 ?        07:01:34 jbd2/sda1-8
417 ?        00:00:00 ext4-rsv-conver
443 ?        00:02:37 systemd-journal
447 ?        00:00:00 kauditd
452 ?        00:00:01 kworker/0:1H
470 ?        00:00:01 systemd-udevd
483 ?        00:00:26 cron
485 ?        00:00:37 rsyslogd
491 ?        00:00:00 acpid
496 ?        00:00:49 irqbalance
497 ?        00:00:21 systemd-logind
498 ?        00:00:36 dbus-daemon
524 ?        00:00:00 edac-poller
612 ?        00:00:02 kworker/2:1H
613 ?        00:00:00 dhclient
674 ?        00:00:00 vsftpd
676 ttyS0    00:00:00 agetty
678 tty1     00:00:00 agetty
687 ?        00:01:18 ntpd
795 ?        4-19:58:17 mysqld
850 ?        00:00:15 sshd
858 ?        00:04:06 google_accounts
859 ?        00:00:33 google_clock_sk
861 ?        00:01:05 google_ip_forwa
892 ?        01:31:57 kworker/1:1H
1154 ?        00:00:00 exim4
1160 ?        00:00:01 kworker/3:1H
4259 ?        00:00:00 kworker/2:1
6090 ?        00:00:00 kworker/0:1
6956 ?        00:00:00 sshd
6962 ?        00:00:00 sshd
6963 pts/0    00:00:00 bash
6968 pts/0    00:00:00 su
6969 pts/0    00:00:00 bash
6972 ?        00:00:00 kworker/u8:2
7127 ?        00:00:00 kworker/3:2
7208 ?        00:00:00 php-fpm7.0
7212 ?        00:00:00 kworker/0:0 
10516 ?        00:00:00 systemd
10517 ?        00:00:00 (sd-pam)
10633 ?        00:00:00 kworker/2:2
11569 ?        00:00:00 kworker/3:1
12539 ?        00:00:00 kworker/1:2
13625 ?        00:00:00 kworker/1:0
13910 ?        00:00:00 sshd
13912 ?        00:00:00 systemd
13913 ?        00:00:00 (sd-pam)
13920 ?        00:00:00 sshd
13921 ?        00:00:00 sftp-server
13924 ?        00:00:00 sftp-server
14016 pts/0    00:00:00 tail
14053 ?        00:00:03 php-fpm7.0
14084 ?        00:00:00 sshd
14090 ?        00:00:00 sshd
14091 pts/1    00:00:00 bash
14098 ?        00:00:01 php-fpm7.0
14099 pts/1    00:00:00 su
14100 pts/1    00:00:00 bash
14105 ?        00:00:00 sshd
14106 ?        00:00:00 sshd
14107 ?        00:00:00 php-fpm7.0
14108 pts/1    00:00:00 ps
17456 ?        00:00:03 kworker/u8:1
17704 ?        01:38:36 lighttpd
21624 ?        00:00:30 perl
25593 ?        00:00:00 sshd
25595 ?        00:00:00 systemd
25596 ?        00:00:00 (sd-pam)
25602 ?        00:00:00 sshd
25603 ?        00:00:00 sftp-server
25641 ?        00:00:00 sftp-server
27001 ?        00:00:00 gpg-agent
28953 ?        00:01:20 stackdriver-col

The ps command with grep, for shorter output:

root@instance-7:/home/# ps aux | grep collectd
root      6981  0.0  0.0  12756   976 pts/0    S+   13:40   0:00 grep collectd
root     28953  0.1  1.1 1105712 41960 ?       Ssl  Jan29   3:16 /opt/stackdriver/collectd/sbin/stackdriver-collectd -C /opt/stackdriver/collectd/etc/collectd.conf -P /var/run/stackdriver-agent.pid

2 Answers

  • 0

    These should be normal messages from the Stackdriver agent (if the rate is around 2-3 messages per minute).

    I suggest you install the ntp/ntpd service and sync it to a time server, so that your system keeps the correct time.

    Example NTP server: pool.ntp.org
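On Debian Stretch (which the question mentions), that would look roughly like the following. This is a sketch using the stock Debian package and commands, not something the answer itself specifies:

```shell
# Rough sketch, assuming root on Debian Stretch: install and enable the
# NTP daemon, then verify it is actually talking to the pool servers.
apt-get install -y ntp
systemctl enable --now ntp   # start now and on every boot
ntpq -p                      # list peers; pool.ntp.org servers should show offsets
```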

  • 0

    You are only getting a duplicate because your messages carry the same timestamp: the new value to be added to the internal cache and the last value with the same name already in the cache both have the same timestamp (value time = 1517218302.393; last cache update = 1517218302.393).

    You can refer to the collectd FAQ page (https://collectd.org/faq.shtml). It explains this kind of message and includes an example matching the ones you are getting.

    You should check:

    • Whether more than one collectd daemon is running on your instance. To see the collectd processes that are running:

    ps aux | grep collectd
    
    • Is the timestamp of each message increasing? If it is not, another host reporting data with the same hostname could be the cause.
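One rough way to check that (my own sketch, not from the original answer) is to extract the "value time" field from each uc_update line and verify it keeps increasing. Two of the question's own log lines stand in for /var/log/syslog here:

```shell
# Extract "value time = <ts>" from each uc_update line and check that the
# timestamps are strictly increasing. Sample lines stand in for the real log.
log='Jan 30 10:47:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_cputime; value time = 1517309269.877; last cache update = 1517309269.877;
Jan 30 10:48:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_cputime; value time = 1517309329.884; last cache update = 1517309329.884;'

times=$(printf '%s\n' "$log" | sed -n 's/.*value time = \([0-9.]*\);.*/\1/p')

verdict=$(printf '%s\n' "$times" |
  awk 'NR > 1 && $1 + 0 <= prev { bad = 1 }
       { prev = $1 + 0 }
       END { print (bad ? "NOT increasing (possible duplicate hostname)" : "increasing") }')
echo "$verdict"
```

On the real instance you would feed it something like `grep uc_update /var/log/syslog` instead of the sample lines.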
