
H2O: Unable to read a LARGE model from disk via `h2o.loadModel`

  • UPDATED 28Jun2017, below, in response to @Michal Kurka.

  • UPDATED 26Jun2017, below.

I am unable to load a large GBM model that was saved in H2O's native format (i.e., hex).

  • H2O v3.10.5.1

  • R v3.3.2

  • Linux 3.10.0-327.el7.x86_64 GNU/Linux

My ultimate goal is to save this model as a MOJO.

The model is so large that I had to initialize H2O with min/max memory of 100G/200G before model training would run successfully.

This is how I train the GBM model:

localH2O <- h2o.init(ip = 'localhost', port = port, nthreads = -1,
                     min_mem_size = '100G', max_mem_size = '200G')

iret <- h2o.gbm(x = predictors, y = response, training_frame = train.hex,
                validation_frame = holdout.hex, distribution="multinomial",
                ntrees = 3000, learn_rate = 0.01, max_depth = 5, nbins = numCats,
                model_id = basename_model)

gbm <- h2o.getModel(basename_model)
oPath <- h2o.saveModel(gbm, path = './', force = TRUE)

The training data contains 81,886 records with 1,413 columns. Of those columns, 19 are factors. The vast majority of the columns are 0/1.

$ wc -l training/*.txt
     81887 training/train.txt
     27294 training/holdout.txt
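
As a sanity check, the column types that H2O assigned can be confirmed roughly like this (a sketch only; it assumes h2o.getTypes is available in this H2O release):

# Sketch: confirm how H2O typed the columns of train.hex before training.
# h2o.getTypes() is assumed to be available in this H2O release.
col_types <- unlist(h2o.getTypes(train.hex))
table(col_types)   # the 0/1 predictors should show up as "int", not "enum"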

Here is the saved model written to disk:

$ ls -l

total 37G
-rw-rw-r-- 1 bfo7328 37G Jun 22 19:57 my_model.hex

And this is how I try to read the model back from disk, using the same large memory allocation of 100G/200G:

$ R

R version 3.3.2 (2016-10-31) -- "Sincere Pumpkin Patch"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-redhat-linux-gnu (64-bit)

> library(h2o)
> localH2O=h2o.init(ip='localhost', port=65432, nthreads=-1,
                  min_mem_size='100G', max_mem_size='200G')

H2O is not running yet, starting it now...

Note:  In case of errors look at the following log files:
    /tmp/RtmpVSwxXR/h2o_bfo7328_started_from_r.out
    /tmp/RtmpVSwxXR/h2o_bfo7328_started_from_r.err

openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)

Starting H2O JVM and connecting: .. Connection successful!

R is connected to the H2O cluster: 
    H2O cluster uptime:         3 seconds 550 milliseconds 
    H2O cluster version:        3.10.5.1 
    H2O cluster version age:    13 days  
    H2O cluster name:           H2O_started_from_R_bfo7328_kmt050 
    H2O cluster total nodes:    1 
    H2O cluster total memory:   177.78 GB 
    H2O cluster total cores:    64 
    H2O cluster allowed cores:  64 
    H2O cluster healthy:        TRUE 
    H2O Connection ip:          localhost 
    H2O Connection port:        65432 
    H2O Connection proxy:       NA 
    H2O Internal Security:      FALSE 
    R Version:                  R version 3.3.2 (2016-10-31)

From /tmp/RtmpVSwxXR/h2o_bfo7328_started_from_r.out:

INFO: Processed H2O arguments: [-name, H2O_started_from_R_bfo7328_kmt050, -ip, localhost, -port, 65432, -ice_root, /tmp/RtmpVSwxXR]
INFO: Java availableProcessors: 64
INFO: Java heap totalMemory: 95.83 GB
INFO: Java heap maxMemory: 177.78 GB
INFO: Java version: Java 1.8.0_121 (from Oracle Corporation)
INFO: JVM launch parameters: [-Xms100G, -Xmx200G, -ea]
INFO: OS version: Linux 3.10.0-327.el7.x86_64 (amd64)
INFO: Machine physical memory: 1.476 TB

Here is my call to h2o.loadModel:

if ( TRUE ) {
  now <- format(Sys.time(), "%a %b %d %Y %X")
  cat( sprintf( 'Begin %s\n', now ))

  model_filename <- './my_model.hex'
  in_model.hex <- h2o.loadModel( model_filename )

  now <- format(Sys.time(), "%a %b %d %Y %X")
  cat( sprintf( 'End   %s\n', now ))
}

From /tmp/RtmpVSwxXR/h2o_bfo7328_started_from_r.out:

INFO: GET /, parms: {}
INFO: GET /, parms: {}
INFO: GET /, parms: {}
INFO: GET /3/InitID, parms: {}
INFO: Locking cloud to new members, because water.api.schemas3.InitIDV3
INFO: POST /99/Models.bin/, parms: {dir=./my_model.hex}

After waiting about an hour, I see these Out Of Memory (OOM) error messages:

INFO: POST /99/Models.bin/, parms: {dir=./my_model.hex}
#e Thread WARN: Swapping!  GC CALLBACK, (K/V:24.86 GB + POJO:112.01 GB + FREE:40.90 GB == MEM_MAX:177.78 GB), desiredKV=22.22 GB OOM!
#e Thread WARN: Swapping!  GC CALLBACK, (K/V:26.31 GB + POJO:118.41 GB + FREE:33.06 GB == MEM_MAX:177.78 GB), desiredKV=22.22 GB OOM!
#e Thread WARN: Swapping!  GC CALLBACK, (K/V:27.36 GB + POJO:123.03 GB + FREE:27.39 GB == MEM_MAX:177.78 GB), desiredKV=22.22 GB OOM!
#e Thread WARN: Swapping!  GC CALLBACK, (K/V:28.21 GB + POJO:126.73 GB + FREE:22.83 GB == MEM_MAX:177.78 GB), desiredKV=22.22 GB OOM!

I would not expect to need this much memory just to read the model back from disk.

How can I read this model from disk into memory? And once I do, can I save it as a MOJO?
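
For reference, once the model does load, I would expect to export the MOJO roughly like this (a sketch only; it assumes h2o.download_mojo is available in this H2O release):

# Sketch: export the loaded model as a MOJO zip archive.
# h2o.download_mojo() is assumed to be available in this H2O release.
in_model.hex <- h2o.loadModel('./my_model.hex')
mojo_path <- h2o.download_mojo(in_model.hex, path = './')
cat(sprintf('MOJO written to: %s\n', mojo_path))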


UPDATE 1: 26Jun2017

I just noticed that the on-disk size of the GBM model increased dramatically between H2O versions:

H2O v3.10.2.1:
    -rw-rw-r-- 1 169M Jun 19 07:23 my_model.hex

H2O v3.10.5.1:
    -rw-rw-r-- 1  37G Jun 22 19:57 my_model.hex

Any ideas? Could this be the source of the problem?


UPDATE 2: 28Jun2017 in response to comments by @Michal Kurka.

When I load the training data via fread, the class (type) of each column is correct:

  • 24 columns are 'character';
  • 1,389 columns are 'integer' (all but one of these are 0/1);
  • 1,413 columns in total.

I then convert the native R data frame to an H2O frame and manually factorize 20 columns:

train.hex <- as.h2o(df.train, destination_frame = "train.hex")
length(factorThese)
[1] 20
train.hex[factorThese] <- as.factor(train.hex[factorThese])
str(train.hex)

An abbreviated version of the str(train.hex) output, showing only the 19 columns that are factors (1 factor is the response column):

- attr(*, "nrow")= int 81886
 - attr(*, "ncol")= int 1413
 - attr(*, "types")=List of 1413
  ..$ : chr "enum" : Factor w/ 72 levels
  ..$ : chr "enum" : Factor w/ 77 levels
  ..$ : chr "enum" : Factor w/ 51 levels
  ..$ : chr "enum" : Factor w/ 4226 levels
  ..$ : chr "enum" : Factor w/ 4183 levels
  ..$ : chr "enum" : Factor w/ 3854 levels
  ..$ : chr "enum" : Factor w/ 3194 levels
  ..$ : chr "enum" : Factor w/ 735 levels
  ..$ : chr "enum" : Factor w/ 133 levels
  ..$ : chr "enum" : Factor w/ 16 levels
  ..$ : chr "enum" : Factor w/ 25 levels
  ..$ : chr "enum" : Factor w/ 647 levels
  ..$ : chr "enum" : Factor w/ 715 levels
  ..$ : chr "enum" : Factor w/ 679 levels
  ..$ : chr "enum" : Factor w/ 477 levels
  ..$ : chr "enum" : Factor w/ 645 levels
  ..$ : chr "enum" : Factor w/ 719 levels
  ..$ : chr "enum" : Factor w/ 678 levels
  ..$ : chr "enum" : Factor w/ 478 levels

The results above are exactly the same between v3.10.2.1 (smaller model written to disk: 169M) and v3.10.5.1 (larger model written to disk: 37G).

The actual GBM training uses nbins <- 37:

numCats <- n_distinct(as.matrix(dplyr::select_(df.train,response)))
numCats
[1] 37

iret <- h2o.gbm(x = predictors, y = response, training_frame = train.hex,
          validation_frame = holdout.hex, distribution="multinomial",
          ntrees = 3000, learn_rate = 0.01, max_depth = 5, nbins = numCats,
          model_id = basename_model)

1 Answer


The difference in model size (169M vs 37G) is surprising. Can you please make sure that H2O recognizes all of your numeric columns as numeric, and not as categoricals with very high cardinality?

Do you use automatic detection of the column types, or do you specify them manually?
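
For illustration, manual type specification at parse time could look roughly like this (a sketch, not the code actually used above; it assumes the data is read with h2o.importFile, and factor_cols, marking the truly categorical columns, is illustrative):

# Sketch: force column types at parse time so the 0/1 predictors are parsed as
# numeric rather than as high-cardinality categoricals.
# 'factor_cols' (positions of the truly categorical columns) is illustrative.
col_types <- rep('numeric', 1413)
col_types[factor_cols] <- 'enum'
train.hex <- h2o.importFile('training/train.txt',
                            destination_frame = 'train.hex',
                            col.types = col_types)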
