I created a Glue crawler to read Apache access logs. Below is the table definition the crawler created in the Glue Data Catalog; I obtained the following DDL statement from Athena.

CREATE EXTERNAL TABLE crawler_access_log(
  -- .. other column names
  timestamp string COMMENT 'from deserializer'
)
ROW FORMAT SERDE
  'com.amazonaws.glue.serde.GrokSerDe'
WITH SERDEPROPERTIES (
  'input.format'='%{COMBINEDAPACHELOG}')
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  's3://some location'
TBLPROPERTIES (
  'CrawlerSchemaDeserializerVersion'='1.0',
  'CrawlerSchemaSerializerVersion'='1.0',
  'UPDATED_BY_CRAWLER'='crawler_access_log',
  'averageRecordSize'='268',
  'classification'='combinedapache',
  'compressionType'='gzip',
  'grokPattern'='%{COMBINEDAPACHELOG}',
  'objectCount'='2',
  'recordCount'='71552',
  'sizeKey'='25268746',
  'typeOfData'='file')

// Sample data from the timestamp column (declared type: string)
 20/Jul/2018:03:27:44 +0000
 20/Jul/2018:03:27:44 +0000
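For reference, these sample values follow the standard Apache combined-log timestamp layout. A self-contained sketch (not part of the Glue job; the object name and pattern are my own illustration) showing how the raw strings parse with `java.time`:

```scala
import java.time.OffsetDateTime
import java.time.format.DateTimeFormatter
import java.util.Locale

object ApacheTimestamp {
  // Apache access-log timestamp layout, e.g. "20/Jul/2018:03:27:44 +0000"
  private val fmt =
    DateTimeFormatter.ofPattern("dd/MMM/yyyy:HH:mm:ss Z", Locale.ENGLISH)

  def parse(s: String): OffsetDateTime = OffsetDateTime.parse(s, fmt)
}
```

Note that the raw value carries the time of day and a UTC offset, which is exactly what is lost when the column is truncated to a `date`.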

But when I read the same table through glueContext, the data type of the `timestamp` column becomes `date` instead of `string`. I use the code below to read from the table.

val rawDynamicDataFrame = glueContext.getCatalogSource(
  database = "someDB",
  tableName = "crawler_access_log",
  redshiftTmpDir = "",
  transformationContext = "rawDynamicDataFrame"
).getDynamicFrame()

When I run `printSchema` and look at the data in the dynamic frame, I see that the `timestamp` column has type `date` instead of `string`, so the data is truncated.

scala> rawDynamicDataFrame.printSchema
root
|-- xx: string
|-- xx: string
|-- xx: string
|-- timestamp: date
|-- xx: string
|-- xx: string
|-- xx: string
scala> rawDynamicDataFrame.show(2)
2018-07-20  // original: 20/Jul/2018:03:27:44 +0000
2018-07-20  // original: 20/Jul/2018:03:27:44 +0000

I cannot figure out why the data type changes, even though Glue is reading from its own Data Catalog.
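As a sketch of what I could try (this assumes the standard AWS Glue Scala `DynamicFrame.applyMapping` API and only runs inside a Glue job, so it is not a confirmed fix), the column can at least be forced back to `string` after reading:

```scala
// Sketch only: cast the `timestamp` column back to string after the read.
// Assumes a Glue job environment with com.amazonaws.services.glue.* on
// the classpath and the rawDynamicDataFrame from above.
val asStrings = rawDynamicDataFrame.applyMapping(
  // (source column, source type, target column, target type)
  Seq(("timestamp", "date", "timestamp", "string"))
)
asStrings.printSchema
```

This does not recover the truncated time-of-day, though, since the values were already narrowed to `date` when the frame was built, which is why I want to understand where the type change happens.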