I'm trying to read multiple Parquet files from multiple partitions with PySpark and concatenate them into one big DataFrame. The files look like this:
hdfs dfs -ls /data/customers/odysseyconsultants/logs_ch_blade_fwvpn
Found 180 items
drwxrwxrwx - impala impala 0 2018-03-01 10:31 /data/customers/odysseyconsultants/logs_ch_blade_fwvpn/_impala_insert_staging
drwxr-xr-x - impala impala 0 2017-08-23 17:55 /data/customers/odysseyconsultants/logs_ch_blade_fwvpn/cdateint=20170822
drwxr-xr-x - impala impala 0 2017-08-24 05:57 /data/customers/odysseyconsultants/logs_ch_blade_fwvpn/cdateint=20170823
drwxr-xr-x - impala impala 0 2017-08-25 06:00 /data/customers/odysseyconsultants/logs_ch_blade_fwvpn/cdateint=20170824
drwxr-xr-x - impala impala 0 2017-08-26 06:04 /data/customers/odysseyconsultants/logs_ch_blade_fwvpn/cdateint=20170825
Each partition contains one or more Parquet files, e.g.
hdfs dfs -ls /data/customers/odysseyconsultants/logs_ch_blade_fwvpn/cdateint=20170822
Found 1 items
-rw-r--r-- 2 impala impala 72252308 2017-08-23 17:55 /data/customers/odysseyconsultants/logs_ch_blade_fwvpn/cdateint=20170822/5b4bb1c5214fdffd-cc8dbcf600000008_1393229110_data.0.parq
What I'm trying to create is a generic function that takes from/to arguments, then loads and concatenates all the Parquet files in that date range into one big DataFrame.
I can generate the list of paths to read:
def read_files(table, from1, to):
    # Build one partition path per day in [from1, to],
    # e.g. .../logs_ch_blade_fwvpn/cdateint=20170506
    s1 = ', '.join('/data/customers/odysseyconsultants/' + table + '/'
                   + 'cdateint=' + str(i) for i in range(from1, to + 1))
    return s1.split(', ')
If I try to read the files as follows, I get an exception:
for i in read_files('logs_ch_blade_fwvpn', 20170506, 20170510):
    sqlContext.read.parquet(i).show()
And if I try to read them all at once:
x = read_files('logs_cs_blade_fwvpn', 20180109, 20180110)
d1 = sqlContext.read.parquet(*x)
I get the error:
pyspark.sql.utils.AnalysisException: u'Path does not exist: hdfs://nameservice1/data/customers/odysseyconsultants/logs_cs_blade_fwvpn/cdateint=20180109;'
2 Answers
How about using the directory names as partitions? For example:
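The example itself didn't survive in this copy. A minimal sketch of the idea, assuming standard Spark partition discovery: because the directories use Hive-style cdateint=YYYYMMDD names, reading the table root exposes cdateint as a partition column, and a range filter on that column is pushed down so only the matching directories are scanned. The read_range helper name is mine:

from pyspark.sql.functions import col

def read_range(sqlContext, table, from1, to):
    # Reading the table root lets Spark discover the cdateint=...
    # directories as values of a partition column named cdateint.
    base = '/data/customers/odysseyconsultants/' + table
    df = sqlContext.read.parquet(base)
    # The filter on the partition column is pushed down, so only the
    # partitions with from1 <= cdateint <= to are actually read.
    return df.where(col('cdateint').between(from1, to))

df = read_range(sqlContext, 'logs_ch_blade_fwvpn', 20170506, 20170510)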
This is one way to do it, though I'm open to alternatives:
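The code for this answer is also missing here; what follows is my reconstruction, staying close to the question's own approach. The idea: keep the generated path list, but drop the partitions that don't exist in HDFS before passing the list to the reader, which avoids the 'Path does not exist' AnalysisException for dates with no data (and for the impossible dates that range() produces across month boundaries). The py4j access to Hadoop's FileSystem is standard in PySpark; the read_existing name is mine:

def read_files(table, from1, to):
    return ['/data/customers/odysseyconsultants/%s/cdateint=%d' % (table, i)
            for i in range(from1, to + 1)]

def read_existing(sqlContext, table, from1, to):
    # Use the JVM's Hadoop FileSystem API (via py4j) to test which
    # partition directories actually exist before reading them.
    sc = sqlContext._sc
    Path = sc._jvm.org.apache.hadoop.fs.Path
    fs = sc._jvm.org.apache.hadoop.fs.FileSystem.get(sc._jsc.hadoopConfiguration())
    paths = [p for p in read_files(table, from1, to) if fs.exists(Path(p))]
    # Passing several paths to parquet() concatenates them into one DataFrame.
    return sqlContext.read.parquet(*paths)

d1 = read_existing(sqlContext, 'logs_cs_blade_fwvpn', 20180109, 20180110)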