I came across the Amazon article about flattening JSON files and loading them into Redshift.
My plan is to transform the JSON files and upload them to S3, then crawl the files with AWS Glue into the Data Catalog, and load the data as tables into Amazon Redshift.
However, the code from "Example 3: Python code to transform the nested JSON and output it to ORC" raises an error:
NameError: name 'spark' is not defined
Now I am lost, because I am new to AWS Glue and I need to load the JSON (which contains nested arrays) into Redshift.
Here is my code:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
#from awsglue.transforms import Relationalize
# Begin variables to customize with your information
glue_source_database = "DATABASE"
glue_source_table = "TABLE_NAME"
glue_temp_storage = "s3://XXXXX"
glue_relationalize_output_s3_path = "s3://XXXXX"
dfc_root_table_name = "root" #default value is "roottable"
# End variables to customize with your information
glueContext = GlueContext(spark.sparkContext)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = glue_source_database, table_name = glue_source_table, transformation_ctx = "datasource0")
dfc = Relationalize.apply(frame = datasource0, staging_path = glue_temp_storage, name = dfc_root_table_name, transformation_ctx = "dfc")
blogdata = dfc.select(dfc_root_table_name)
blogdataoutput = glueContext.write_dynamic_frame.from_options(frame = blogdata, connection_type = "s3", connection_options = {"path": glue_relationalize_output_s3_path}, format = "orc", transformation_ctx = "blogdataoutput")
2 Answers
You created the
GlueContext
incorrectly. Take a look at the Glue code examples from AWS for the correct setup.
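The AWS Glue sample jobs create the SparkContext explicitly and derive the Spark session from the GlueContext, rather than referencing a `spark` variable that was never defined. A minimal sketch of that job preamble, assuming the standard Glue job boilerplate (it only runs inside a Glue job environment):

```python
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

# 'spark' is not predefined in a Glue script: create the SparkContext
# yourself, wrap it in a GlueContext, and take the session from there.
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
```

With this preamble, the rest of the question's script (the `create_dynamic_frame.from_catalog`, `Relationalize.apply`, and `write_dynamic_frame.from_options` calls) can use `glueContext` as written.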
@beni
I followed the same guide as you and had the same problem. Once the Spark context is set up correctly, another issue comes up when writing with glueContext.write_dynamic_frame.from_options.
Checking the logs, I saw a null value error, so adding DropNullFields.apply solved the problem.
Hope this helps!
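In the Glue job this means applying DropNullFields to the DynamicFrame before the write, e.g. `blogdata = DropNullFields.apply(frame = blogdata, transformation_ctx = "dropnulls")` (the `transformation_ctx` name here is just an example). Conceptually, DropNullFields removes fields whose value is null from each record; a rough pure-Python illustration of that idea:

```python
# Rough pure-Python sketch of what Glue's DropNullFields transform does:
# drop keys whose value is None from each record, including nested ones.
def drop_null_fields(record):
    """Return a copy of record with None-valued fields removed recursively."""
    if isinstance(record, dict):
        return {k: drop_null_fields(v) for k, v in record.items() if v is not None}
    if isinstance(record, list):
        return [drop_null_fields(v) for v in record]
    return record

row = {"id": 1, "name": None, "meta": {"tag": "x", "note": None}}
print(drop_null_fields(row))  # {'id': 1, 'meta': {'tag': 'x'}}
```

Dropping the null fields this way avoids the null value error when the ORC writer serializes the records.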