
Convert pyspark string to date format

I have a pyspark dataframe with a string column in the format MM-dd-yyyy, and I am trying to convert it into a date column.

I tried:

df.select(to_date(df.STRING_COLUMN).alias('new_date')).show()

and I get a string of nulls. Can anyone help?

4 Answers

  • 0

    This can be done without a udf (and is arguably preferable):

    > from pyspark.sql.functions import unix_timestamp, from_unixtime
    
    > df = spark.createDataFrame([("11/25/1991",), ("11/24/1991",), ("11/30/1991",)], ['date_str'])
    
    > df2 = df.select('date_str', from_unixtime(unix_timestamp('date_str', 'MM/dd/yyyy')).alias('date'))
    
    > df2
    
    DataFrame[date_str: string, date: timestamp]
    
    > df2.show()
    
    +----------+--------------------+
    |  date_str|                date|
    +----------+--------------------+
    |11/25/1991|1991-11-25 00:00:...|
    |11/24/1991|1991-11-24 00:00:...|
    |11/30/1991|1991-11-30 00:00:...|
    +----------+--------------------+
    

    Update (1/10/2018):

    For Spark 2.2+, the best way to do this is probably to use the to_date or to_timestamp functions, which both support a format argument. From the docs:

    >>> df = spark.createDataFrame([('1997-02-28 10:30:00',)], ['t'])
    >>> df.select(to_timestamp(df.t, 'yyyy-MM-dd HH:mm:ss').alias('dt')).collect()
    [Row(dt=datetime.datetime(1997, 2, 28, 10, 30))]
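
    For the question's MM-dd-yyyy strings, a minimal sketch of this approach (assuming a Spark 2.2+ session named spark; STRING_COLUMN is the column name from the question):

    from pyspark.sql.functions import to_date

    # Hypothetical reproduction of the question's data: MM-dd-yyyy strings.
    df = spark.createDataFrame([("11-25-1991",), ("11-24-1991",)], ['STRING_COLUMN'])

    # An explicit format avoids the NULLs seen in the question, which come from
    # to_date defaulting to the yyyy-MM-dd pattern when no format is given.
    df.select(to_date(df.STRING_COLUMN, 'MM-dd-yyyy').alias('new_date')).show()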
    
  • 30
    from datetime import datetime
    from pyspark.sql.functions import col, udf
    from pyspark.sql.types import DateType
    
    
    
    # Creation of a dummy dataframe:
    df1 = sqlContext.createDataFrame([("11/25/1991", "11/24/1991", "11/30/1991"),
                                      ("11/25/1391", "11/24/1992", "11/30/1992")],
                                     schema=['first', 'second', 'third'])
    
    # Define a user-defined function (UDF) that converts a string cell into a date:
    func = udf(lambda x: datetime.strptime(x, '%m/%d/%Y'), DateType())
    
    df = df1.withColumn('test', func(col('first')))
    
    df.show()
    
    df.printSchema()
    

    Here is the output:

    +----------+----------+----------+----------+
    |     first|    second|     third|      test|
    +----------+----------+----------+----------+
    |11/25/1991|11/24/1991|11/30/1991|1991-01-25|
    |11/25/1391|11/24/1992|11/30/1992|1391-01-17|
    +----------+----------+----------+----------+
    
    root
     |-- first: string (nullable = true)
     |-- second: string (nullable = true)
     |-- third: string (nullable = true)
     |-- test: date (nullable = true)
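
    If several columns need the same conversion, the UDF above can simply be reapplied per column; a sketch under the same setup (df1, func, and col as defined in this answer):

    # Convert each string column with the same UDF, adding *_dt date columns.
    for c in ['first', 'second', 'third']:
        df1 = df1.withColumn(c + '_dt', func(col(c)))

    df1.printSchema()  # each *_dt column is date (nullable = true)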
    
  • 13

    The strptime() approach did not work for me. I found another, cleaner solution using cast:

    from pyspark.sql.types import DateType
    spark_df1 = spark_df.withColumn("record_date",spark_df['order_submitted_date'].cast(DateType()))
    # Below is the result:
    spark_df1.select('order_submitted_date','record_date').show(10,False)
    
    +---------------------+-----------+
    |order_submitted_date |record_date|
    +---------------------+-----------+
    |2015-08-19 12:54:16.0|2015-08-19 |
    |2016-04-14 13:55:50.0|2016-04-14 |
    |2013-10-11 18:23:36.0|2013-10-11 |
    |2015-08-19 20:18:55.0|2015-08-19 |
    |2015-08-20 12:07:40.0|2015-08-20 |
    |2013-10-11 21:24:12.0|2013-10-11 |
    |2013-10-11 23:29:28.0|2013-10-11 |
    |2015-08-20 16:59:35.0|2015-08-20 |
    |2015-08-20 17:32:03.0|2015-08-20 |
    |2016-04-13 16:56:21.0|2016-04-13 |
    +---------------------+-----------+
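
    Note that cast(DateType()) relies on the string already being in an ISO-like yyyy-MM-dd layout. A sketch (assuming a session named spark) of what happens with the question's MM-dd-yyyy strings:

    from pyspark.sql.types import DateType

    # The question's MM-dd-yyyy layout is not ISO, so the cast yields NULL;
    # an explicit format (to_date / unix_timestamp) is needed in that case.
    demo = spark.createDataFrame([("11-25-1991",)], ['s'])
    demo.withColumn('d', demo['s'].cast(DateType())).show()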
    
  • 46

    Try this:

    from pyspark.sql.functions import from_unixtime, unix_timestamp

    df = spark.createDataFrame([('2018-07-27 10:30:00',)], ['Date_col'])
    df2 = df.select(from_unixtime(unix_timestamp(df.Date_col, 'yyyy-MM-dd HH:mm:ss')).alias('dt_col'))
    df2.show()
    +-------------------+
    |             dt_col|
    +-------------------+
    |2018-07-27 10:30:00|
    +-------------------+
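
    On Spark 2.2+, the same conversion can be written in one step with to_timestamp, as noted in the update to the first answer; a sketch reusing the df above:

    from pyspark.sql.functions import to_timestamp

    df.select(to_timestamp(df.Date_col, 'yyyy-MM-dd HH:mm:ss').alias('dt_col')).show()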
    
