
How can I get disk information for the hadoop fs via unix utils or nmon?


I have installed mapr with mfs (the hadoop fs implementation) and some scripts that gather information about the filesystem using df, fdisk and the nmon log files.

    root@spbswgvml10:/opt/nmon# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       8.8G  4.4G  4.0G  53% /
    none            4.0K     0  4.0K   0% /sys/fs/cgroup
    udev            2.0G  4.0K  2.0G   1% /dev
    tmpfs           396M  464K  395M   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none            2.0G     0  2.0G   0% /run/shm
    none            100M     0  100M   0% /run/user
    root@spbswgvml10:/opt/nmon# fdisk -l

    Disk /dev/sda: 10.7 GB, 10737418240 bytes
    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00038d7f

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048    18874367     9436160   83  Linux
    /dev/sda2        18876414    20969471     1046529    5  Extended
    /dev/sda5        18876416    20969471     1046528   82  Linux swap / Solaris

    Disk /dev/sdb: 32.2 GB, 32212254720 bytes
    64 heads, 51 sectors/track, 19275 cylinders, total 62914560 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x434da72d

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048    62914559    31456256   83  Linux
    root@spbswgvml10:/opt/nmon# mount
    /dev/sda1 on / type ext4 (rw,errors=remount-ro)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
    none on /sys/fs/cgroup type tmpfs (rw)
    none on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    udev on /dev type devtmpfs (rw,mode=0755)
    devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
    none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
    none on /run/shm type tmpfs (rw,nosuid,nodev)
    none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
    none on /sys/fs/pstore type pstore (rw)
    cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
    cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
    cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
    cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
    systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
    rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
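(For reference, the nmon log files mentioned above come from a capture run roughly like the one below; the 60-second interval and snapshot count are only example values.)

    # write snapshots to a .nmon file: one every 60 seconds, 1440 in total (about 24 hours)
    nmon -f -s 60 -c 1440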

Now I want to get information about the device /dev/sdb1, which mapr uses as the hadoop fs. I know I can use something like

    hadoop fs -df

but I was hoping there is another way to get the used space, total size and so on.
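(For comparison, the usual invocation with human-readable sizes would look something like the line below; note that it reports capacity for the cluster filesystem as a whole, not for the local /dev/sdb1.)

    # capacity, used and available space of the filesystem, human readable
    hadoop fs -df -h /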

I cannot mount /dev/sdb1 because some process is using it, and I cannot find any path where the partition might already be mounted.
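(To check which process is holding the raw device open, standard tools such as lsof or fuser should be enough; mfs typically opens the block device directly, so nothing shows up in mount.)

    # list processes that currently have /dev/sdb1 open
    lsof /dev/sdb1
    # alternatively, show them with fuser
    fuser -v /dev/sdb1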

1 Answer

  • 0

    Use the following command:

    maprcli disk list -host `hostname`
    

    Disks used by mfs do not show up in the regular mount output.
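    For scripting, that output can be filtered down to a single device; the exact column set depends on the MapR version, so treat this as a rough sketch:

        # keep only the line(s) for /dev/sdb; maprcli also accepts -json if structured output is easier to parse
        maprcli disk list -host `hostname` | grep /dev/sdb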
