
Outputting the contents of a DStream in Scala Apache Spark

The Spark code below does not seem to do anything with the file example.txt

import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.DStream

val conf = new org.apache.spark.SparkConf()
  .setMaster("local")
  .setAppName("filter")
  .setSparkHome("C:\\spark\\spark-1.2.1-bin-hadoop2.4")
  .set("spark.executor.memory", "2g");

val ssc = new StreamingContext(conf, Seconds(1))
val dataFile: DStream[String] = ssc.textFileStream("C:\\example.txt")

dataFile.print()
ssc.start() // Start the computation
ssc.awaitTermination() // Wait for the computation to terminate

I am trying to print the first 10 elements of the file with dataFile.print()

Some of the generated output:

15/03/12 12:23:53 INFO JobScheduler: Started JobScheduler
15/03/12 12:23:54 INFO FileInputDStream: Finding new files took 105 ms
15/03/12 12:23:54 INFO FileInputDStream: New files at time 1426163034000 ms:

15/03/12 12:23:54 INFO JobScheduler: Added jobs for time 1426163034000 ms
15/03/12 12:23:54 INFO JobScheduler: Starting job streaming job 1426163034000 ms.0 from job set of time 1426163034000 ms
-------------------------------------------
Time: 1426163034000 ms
-------------------------------------------

15/03/12 12:23:54 INFO JobScheduler: Finished job streaming job 1426163034000 ms.0 from job set of time 1426163034000 ms
15/03/12 12:23:54 INFO JobScheduler: Total delay: 0.157 s for time 1426163034000 ms (execution: 0.006 s)
15/03/12 12:23:54 INFO FileInputDStream: Cleared 0 old files that were older than 1426162974000 ms: 
15/03/12 12:23:54 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
15/03/12 12:23:55 INFO FileInputDStream: Finding new files took 2 ms
15/03/12 12:23:55 INFO FileInputDStream: New files at time 1426163035000 ms:

15/03/12 12:23:55 INFO JobScheduler: Added jobs for time 1426163035000 ms
15/03/12 12:23:55 INFO JobScheduler: Starting job streaming job 1426163035000 ms.0 from job set of time 1426163035000 ms
-------------------------------------------
Time: 1426163035000 ms
-------------------------------------------

15/03/12 12:23:55 INFO JobScheduler: Finished job streaming job 1426163035000 ms.0 from job set of time 1426163035000 ms
15/03/12 12:23:55 INFO JobScheduler: Total delay: 0.011 s for time 1426163035000 ms (execution: 0.001 s)
15/03/12 12:23:55 INFO MappedRDD: Removing RDD 1 from persistence list
15/03/12 12:23:55 INFO BlockManager: Removing RDD 1
15/03/12 12:23:55 INFO FileInputDStream: Cleared 0 old files that were older than 1426162975000 ms: 
15/03/12 12:23:55 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
15/03/12 12:23:56 INFO FileInputDStream: Finding new files took 3 ms
15/03/12 12:23:56 INFO FileInputDStream: New files at time 1426163036000 ms:

Format of example.txt:

gdaeicjdcg,194,155,98,107
jhbcfbdigg,73,20,122,172
ahdjfgccgd,28,47,40,178
afeidjjcef,105,164,37,53
afeiccfdeg,29,197,128,85
aegddbbcii,58,126,89,28
fjfdbfaeid,80,89,180,82

As the documentation for print states:

/** Print the first ten elements of each RDD generated in this DStream. This is an output operator, so this DStream will be registered as an output stream and there materialized. */

Does this mean that zero RDDs were generated for this stream? With Apache Spark, if you want to see the contents of an RDD you use the RDD's collect function. Is there anything similar for Streams? In short, how do I print the contents of a Stream to the console?
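
For reference, the closest streaming analogue of the RDD collect function appears to be running an action inside foreachRDD; a minimal sketch against the dataFile stream defined above (only sensible for small batches, since collect() pulls every record to the driver):

dataFile.foreachRDD { (rdd, time) =>
  // collect() materializes the micro-batch on the driver so it can be printed
  val records = rdd.collect()
  println(s"Batch at $time contains ${records.length} record(s)")
  records.foreach(println)
}
// like print(), this has to be registered before ssc.start()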

Update:

Updated the code based on @0x0FFF's comment. http://spark.apache.org/docs/1.2.0/streaming-programming-guide.html does not seem to give an example of reading from the local file system. Is this not as common as using Spark core, where there are explicit examples of reading data from a file?

Here is the updated code:

val conf = new org.apache.spark.SparkConf()
  .setMaster("local[2]")
  .setAppName("filter")
  .setSparkHome("C:\\spark\\spark-1.2.1-bin-hadoop2.4")
  .set("spark.executor.memory", "2g");

val ssc = new StreamingContext(conf, Seconds(1))
val dataFile: DStream[String] = ssc.textFileStream("file:///c:/data/")

dataFile.print()
ssc.start() // Start the computation
ssc.awaitTermination() // Wait for the computation to terminate

But the output is the same. When I add new files to the c:\\data dir (in the same format as the existing data files), they are not processed. I assume dataFile.print should print the first 10 lines to the console?

Update 2:

Perhaps this is related to the fact that I am running this code in a Windows environment?
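
One way to narrow this down, independent of the operating system, is to create a brand-new file in the monitored directory from a separate thread after the stream has started, since textFileStream only picks up files that appear in the directory after ssc.start(). A rough sketch (the C:\\data path and the file name are only placeholders):

import java.io.{File, PrintWriter}

ssc.start() // Start the computation

// Drop a fresh file into the monitored directory a few seconds after the
// stream has started, so its modification time falls into a new batch window
new Thread("test-file-writer") {
  override def run() {
    Thread.sleep(5000)
    val out = new PrintWriter(new File("C:\\data\\test-" + System.currentTimeMillis() + ".txt"))
    out.println("gdaeicjdcg,194,155,98,107")
    out.close()
  }
}.start()

ssc.awaitTermination() // Wait for the computation to terminate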

2 Answers

  • 2

    You are misunderstanding the use of textFileStream. Here is the description from the Spark documentation:

    Create an input stream that monitors a Hadoop-compatible filesystem for new files and reads them as text files (using key as LongWritable, value as Text and input format as TextInputFormat).

    So first of all, you should pass it a directory, and secondly, that directory should be accessible from the node running the receiver, so it is best to use HDFS for this purpose. Then, when you put a new file into this directory, it will be processed by the print() function and its first 10 lines will be printed.
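
    In practice the new files should also be moved into the monitored directory atomically (write them somewhere else first, then rename them into the directory), otherwise the stream may pick up half-written files. A small sketch using java.nio, where the paths are only examples:

    import java.nio.file.{Files, Paths, StandardCopyOption}

    // Write the file outside the monitored directory, then move it in.
    // A rename within the same filesystem is atomic, so the stream never sees a partial file.
    val src = Paths.get("C:\\staging\\example.txt")
    val dst = Paths.get("C:\\data\\example.txt")
    Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE)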

    Update:

    My code:

    [alex@sparkdemo tmp]$ pyspark --master local[2]
    Python 2.6.6 (r266:84292, Nov 22 2013, 12:16:22) 
    [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    Spark assembly has been built with Hive, including Datanucleus jars on classpath
    s15/03/12 06:37:49 WARN Utils: Your hostname, sparkdemo resolves to a loopback address: 127.0.0.1; using 192.168.208.133 instead (on interface eth0)
    15/03/12 06:37:49 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
    
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /__ / .__/\_,_/_/ /_/\_\   version 1.2.0
          /_/
    
    Using Python version 2.6.6 (r266:84292, Nov 22 2013 12:16:22)
    SparkContext available as sc.
    >>> from pyspark.streaming import StreamingContext
    >>> ssc = StreamingContext(sc, 30)
    >>> dataFile = ssc.textFileStream('file:///tmp')
    >>> dataFile.pprint()
    >>> ssc.start()
    >>> ssc.awaitTermination()
    -------------------------------------------
    Time: 2015-03-12 06:40:30
    -------------------------------------------
    
    -------------------------------------------
    Time: 2015-03-12 06:41:00
    -------------------------------------------
    
    -------------------------------------------
    Time: 2015-03-12 06:41:30
    -------------------------------------------
    1 2 3
    4 5 6
    7 8 9
    
    -------------------------------------------
    Time: 2015-03-12 06:42:00
    -------------------------------------------
    
  • 0

    Here is a custom receiver I wrote that listens for data in a specified directory:

    package receivers
    
    import java.io.File
    import org.apache.spark.{ SparkConf, Logging }
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{ Seconds, StreamingContext }
    import org.apache.spark.streaming.receiver.Receiver
    
    class CustomReceiver(dir: String)
      extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) with Logging {
    
      def onStart() {
        // Start the thread that receives data over a connection
        new Thread("File Receiver") {
          override def run() { receive() }
        }.start()
      }
    
      def onStop() {
        // There is nothing much to do as the thread calling receive()
        // is designed to stop by itself if isStopped() returns false
      }
    
      def recursiveListFiles(f: File): Array[File] = {
        val these = f.listFiles
        these ++ these.filter(_.isDirectory).flatMap(recursiveListFiles)
      }
    
      private def receive() {
    
        for (f <- recursiveListFiles(new File(dir))) {
    
          val source = scala.io.Source.fromFile(f)
          val lines = source.getLines
          store(lines)
          source.close()
          logInfo("Stopped receiving")
          restart("Trying to connect again")
    
        }
      }
    }
    

    One thing I want to note is that the files need to be processed within a time <= the configured batchDuration. In the example below it is set to 10 seconds, but if the time the receiver needs to process the files exceeds 10 seconds, then some data files will not be processed. I'm open to correction on this point. (A sketch for watching the batch delay follows the example below.)

    Here is how the custom receiver is used:

    val conf = new org.apache.spark.SparkConf()
      .setMaster("local[2]")
      .setAppName("filter")
      .setSparkHome("C:\\spark\\spark-1.2.1-bin-hadoop2.4")
      .set("spark.executor.memory", "2g");
    
    val ssc = new StreamingContext(conf, Seconds(10))
    
    val customReceiverStream: ReceiverInputDStream[String] = ssc.receiverStream(new CustomReceiver("C:\\data\\"))
    
    customReceiverStream.print
    
    customReceiverStream.foreachRDD(m => {
      println("size is " + m.collect.size)
    })
    
    ssc.start() // Start the computation
    ssc.awaitTermination() // Wait for the computation to terminate
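
    To see whether batches are falling behind the 10-second batchDuration mentioned above, a StreamingListener can be attached to the context; a hedged sketch (the anonymous listener below is not part of the original code):

    import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

    // Log how long each batch took so that batches exceeding the configured
    // batchDuration are easy to spot in the console output
    ssc.addStreamingListener(new StreamingListener {
      override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
        val info = batch.batchInfo
        println("batch " + info.batchTime +
          ": processing " + info.processingDelay.getOrElse(-1L) + " ms" +
          ", scheduling delay " + info.schedulingDelay.getOrElse(-1L) + " ms")
      }
    })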
    

    More information: http://spark.apache.org/docs/1.2.0/streaming-programming-guide.html and https://spark.apache.org/docs/1.2.0/streaming-custom-receivers.html
