I am trying to load data into a DataFrame from a list of paths that contain Parquet data files. So the list looks like:

somelist = ["s3://some/path/col1=val1/col2=val2", "s3://some/path/col1=foo/col2=bar"]

I also want the partition columns to be loaded into the DataFrame. Normally I can do that with something like:

spark.read.option("basePath", "s3://some/path").parquet(*somelist)
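
For context, here is the working case as a minimal snippet (the S3 paths are placeholders, and it assumes an existing SparkSession):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

somelist = ["s3://some/path/col1=val1/col2=val2", "s3://some/path/col1=foo/col2=bar"]

# basePath tells Spark where partition discovery starts, so col1 and col2
# are inferred from the directory names and appear as columns in the schema
df = spark.read.option("basePath", "s3://some/path").parquet(*somelist)
df.printSchema()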

The problem, however, is that some of the locations in the list do not contain all of the partitions. So my list looks more like:

somelist = ["s3://some/path/col1=val1/col2=val2", "s3://some/path/col1=foo"]

When I try to run the same Spark command, I get this error:

Py4JJavaError: An error occurred while calling o256.parquet.
: java.lang.AssertionError: assertion failed: Conflicting partition column names detected:
    Partition column name list #0: transactiondate, timestamp
    Partition column name list #1: transactiondate
For partitioned table directories, data files should only live in leaf directories.
And directories at the same level should have the same partition column name.
Please check the following directories for unexpected files or inconsistent partition column names:
    s3://dlx-prod-core-shared/prod/data/dlx/dsi/db01/spirepanel6/basketdetail_parquet/transactiondate=2013-01-18
    s3://dlx-prod-core-shared/prod/data/dlx/dsi/db01/spirepanel6/basketdetail_parquet/transactiondate=2017-07-10/timestamp=20180127002819
    at scala.Predef$.assert(Predef.scala:170)
    at org.apache.spark.sql.execution.datasources.PartitioningUtils$.resolvePartitions(PartitioningUtils.scala:320)
    at org.apache.spark.sql.execution.datasources.PartitioningUtils$.parsePartitions(PartitioningUtils.scala:131)
    at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.inferPartitioning(PartitioningAwareFileIndex.scala:146)

So, is there a simple way to load this list into a DataFrame, with null/NA values wherever a partition field is missing? Or do I need to work around this with some ugly name parsing?
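
For reference, the kind of name parsing I am hoping to avoid would look something like the sketch below: read each path separately, pull the key=value segments out of the path suffix, fill in any missing partition columns with nulls, and union the results. load_with_partitions is a made-up helper, and it assumes every partition value can be treated as a string:

from functools import reduce
from pyspark.sql import functions as F

def load_with_partitions(spark, paths, base, partition_cols):
    frames = []
    for p in paths:
        # pull key=value segments out of the path suffix after the base
        suffix = p[len(base):].strip("/")
        kv = dict(seg.split("=", 1) for seg in suffix.split("/") if "=" in seg)
        df = spark.read.parquet(p)
        for col in partition_cols:
            # add partition columns encoded in the path; null when missing
            # (skip columns Spark already inferred from deeper subdirectories)
            if col not in df.columns:
                df = df.withColumn(col, F.lit(kv.get(col)).cast("string"))
        frames.append(df)
    # align frames by column name, since the column order may differ
    return reduce(lambda a, b: a.unionByName(b), frames)

df = load_with_partitions(spark, somelist, "s3://some/path", ["col1", "col2"])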

Thanks in advance.