
Converting RDD[org.apache.spark.sql.Row] to RDD[org.apache.spark.mllib.linalg.Vector]


I am relatively new to Spark and Scala.

I am starting with the following DataFrame (a single column made up of dense vectors of Doubles):

scala> val scaledDataOnly_pruned = scaledDataOnly.select("features")
scaledDataOnly_pruned: org.apache.spark.sql.DataFrame = [features: vector]

scala> scaledDataOnly_pruned.show(5)
+--------------------+
|            features|
+--------------------+
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
+--------------------+

Converting it directly to an RDD yields an instance of org.apache.spark.rdd.RDD[org.apache.spark.sql.Row]:

scala> val scaledDataOnly_rdd = scaledDataOnly_pruned.rdd
scaledDataOnly_rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[32] at rdd at <console>:66

Does anyone know how to convert this DataFrame to an instance of org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]? My various attempts so far have been unsuccessful.

Thanks in advance for any pointers!

3 Answers

  • 6

    EDIT: Use a more sophisticated way to interpret the fields in the Row.

    This worked for me:

    import org.apache.spark.mllib.linalg.Vectors

    // Coerce each field of the Row to a Double, then pack the values into
    // a dense MLlib vector. (A Double case is added here so numeric columns
    // are not silently zeroed out by the catch-all.)
    val featureVectors = features.map(row => {
      Vectors.dense(row.toSeq.toArray.map({
        case d: Double => d
        case s: String => s.toDouble
        case l: Long => l.toDouble
        case _ => 0.0
      }))
    })
    

    where features is a Spark SQL DataFrame.
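
    For illustration, a minimal usage sketch; the toy DataFrame, its column names, and the sqlContext handle are hypothetical (assuming a Spark 1.x shell, where DataFrame.map returns an RDD):

    import org.apache.spark.mllib.linalg.{Vector, Vectors}
    import org.apache.spark.rdd.RDD

    // Hypothetical DataFrame with one string-numeric and one Long column.
    val df = sqlContext.createDataFrame(Seq(
      ("1.5", 10L),
      ("2.5", 20L)
    )).toDF("score", "count")

    val vecs: RDD[Vector] = df.map { row =>
      Vectors.dense(row.toSeq.toArray.map {
        case d: Double => d
        case s: String => s.toDouble
        case l: Long => l.toDouble
        case _ => 0.0
      })
    }

    vecs.collect().foreach(println)   // [1.5,10.0] and [2.5,20.0]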

  • 5
    import org.apache.spark.mllib.linalg.Vectors

    // Note: getAs[Seq[Double]] assumes the "features" column is stored as a
    // sequence of Doubles; if it holds MLlib vectors (as the question's schema
    // suggests), reading it with getAs[Vector], as in the answer below, is the
    // more direct route.
    scaledDataOnly
      .rdd
      .map { row =>
        Vectors.dense(row.getAs[Seq[Double]]("features").toArray)
      }
    
  • 1

    Just found out:

    import org.apache.spark.mllib.linalg.Vector
    import org.apache.spark.sql.Row

    val scaledDataOnly_rdd = scaledDataOnly_pruned.map{ x: Row => x.getAs[Vector](0) }
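
    Putting it together, a minimal end-to-end sketch; the toy data and the sqlContext handle are hypothetical (assuming a Spark 1.x shell, where MLlib vectors round-trip through DataFrames via their UDT):

    import org.apache.spark.mllib.linalg.{Vector, Vectors}
    import org.apache.spark.rdd.RDD

    // Hypothetical single-column DataFrame of dense vectors, mirroring the
    // "features" column in the question.
    val df = sqlContext.createDataFrame(Seq(
      Tuple1(Vectors.dense(-0.09, 0.5)),
      Tuple1(Vectors.dense(-0.09, 1.5))
    )).toDF("features")

    // Pull the Vector out of each Row by position, as above.
    val vecs: RDD[Vector] = df.rdd.map(_.getAs[Vector](0))

    vecs.collect().foreach(println)   // [-0.09,0.5] and [-0.09,1.5]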
    
