
Latent Dirichlet Allocation (LDA) in Spark


I am trying to write a program in Spark that performs Latent Dirichlet Allocation (LDA). The Spark documentation page provides a good example of running LDA on sample data. Here is that program:

from pyspark.mllib.clustering import LDA, LDAModel
from pyspark.mllib.linalg import Vectors

# Load and parse the data
data = sc.textFile("data/mllib/sample_lda_data.txt")
parsedData = data.map(lambda line: Vectors.dense([float(x) for x in line.strip().split(' ')]))
# Index documents with unique IDs
corpus = parsedData.zipWithIndex().map(lambda x: [x[1], x[0]]).cache()

# Cluster the documents into three topics using LDA
ldaModel = LDA.train(corpus, k=3)

# Output topics. Each is a distribution over words (matching word count vectors)
print("Learned topics (as distributions over vocab of " + str(ldaModel.vocabSize())
      + " words):")
topics = ldaModel.topicsMatrix()
for topic in range(3):
    print("Topic " + str(topic) + ":")
    for word in range(0, ldaModel.vocabSize()):
        print(" " + str(topics[word][topic]))

# Save and load model
ldaModel.save(sc, "target/org/apache/spark/PythonLatentDirichletAllocationExample/LDAModel")
sameModel = LDAModel\
    .load(sc, "target/org/apache/spark/PythonLatentDirichletAllocationExample/LDAModel")
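
The example above is written for the interactive pyspark shell, where the SparkContext `sc` already exists. As a minimal sketch (the application name below is arbitrary), running it as a standalone script would require creating the context first:

from pyspark import SparkConf, SparkContext

# Assumed setup for a standalone script; in the pyspark shell `sc` is created for you
conf = SparkConf().setAppName("LDAExample")
sc = SparkContext(conf=conf)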

The sample input used (sample_lda_data.txt) is as follows:

1 2 6 0 2 3 1 1 0 0 3
1 3 0 1 3 0 0 2 0 0 1
1 4 1 0 0 4 9 0 1 2 0
2 1 0 3 0 0 5 0 2 3 9
3 1 1 9 3 0 2 0 0 1 3
4 2 0 3 4 5 1 1 1 4 0
2 1 0 3 0 0 5 0 2 2 9
1 1 1 9 2 1 2 0 0 1 3
4 4 0 3 4 2 1 3 0 0 0
2 8 2 0 3 0 2 0 2 7 2
1 1 1 9 0 2 2 0 0 3 3
4 1 0 0 4 5 1 3 0 1 0
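
Each line above is one document represented as a vector of word counts over an 11-word vocabulary, and after the zipWithIndex/map step each corpus element pairs a document id with that vector. A quick check in the shell (a sketch, assuming the code above has been run) confirms the shape:

# First corpus element: [document id, word-count vector]
corpus.take(1)
# [[0, DenseVector([1.0, 2.0, 6.0, 0.0, 2.0, 3.0, 1.0, 1.0, 0.0, 0.0, 3.0])]]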

How do I modify the program so that it runs on a file containing text data rather than numbers? Suppose the sample file contains the following text.

Latent Dirichlet Allocation (LDA) is a topic model which infers topics from a collection of text documents. LDA can be thought of as a clustering algorithm as follows: topics correspond to cluster centers, and documents correspond to examples (rows) in a dataset. Topics and documents both exist in a feature space, where feature vectors are vectors of word counts (bag of words). Rather than estimating a clustering using a traditional distance, LDA uses a function based on a statistical model of how text documents are generated.

1 Answer


    After doing some research, I attempted to answer this question. Below is sample code for running LDA on real text data from a text document using Spark.

    from pyspark.sql import SQLContext, Row
    from pyspark.ml.feature import CountVectorizer
    from pyspark.mllib.clustering import LDA, LDAModel
    from pyspark.mllib.linalg import Vector, Vectors
    
    path = "sample_text_LDA.txt"
    
    # `sc` and `spark` are the SparkContext/SparkSession provided by the pyspark shell.
    # Index each document with a unique id and split it into words.
    data = sc.textFile(path).zipWithIndex().map(lambda x: Row(idd=x[1], words=x[0].split(" ")))
    docDF = spark.createDataFrame(data)
    
    # Convert each document's word list into a vector of word counts (bag of words)
    cv = CountVectorizer(inputCol="words", outputCol="vectors")
    model = cv.fit(docDF)
    result = model.transform(docDF)
    
    # Pair each document id with its (mllib) word-count vector, as LDA.train expects
    corpus = result.select("idd", "vectors").rdd.map(lambda r: [r[0], Vectors.fromML(r[1])]).cache()
    
    # Cluster the documents into three topics using LDA
    ldaModel = LDA.train(corpus, k=3,maxIterations=100,optimizer='online')
    topics = ldaModel.topicsMatrix()
    vocabArray = model.vocabulary
    
    wordNumbers = 10  # number of words per topic
    topicIndices = sc.parallelize(ldaModel.describeTopics(maxTermsPerTopic = wordNumbers))
    
    def topic_render(topic):  # specify vector id of words to actual words
        terms = topic[0]
        result = []
        for i in range(wordNumbers):
            term = vocabArray[terms[i]]
            result.append(term)
        return result
    
    topics_final = topicIndices.map(lambda topic: topic_render(topic)).collect()
    
    for topic in range(len(topics_final)):
        print("Topic " + str(topic) + ":")
        for term in topics_final[topic]:
            print (term)
        print ('\n')
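
    describeTopics also returns the weight of each term within its topic. As a small sketch building on the same variables as above, topic_render can be extended to print each word together with its weight:

    def topic_render_with_weights(topic):  # pair each word with its weight in the topic
        terms, weights = topic[0], topic[1]
        return [(vocabArray[terms[i]], weights[i]) for i in range(wordNumbers)]
    
    for topic_id, rendered in enumerate(topicIndices.map(topic_render_with_weights).collect()):
        print("Topic " + str(topic_id) + ":")
        for word, weight in rendered:
            print("  " + word + "\t" + str(weight))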
    

    The topics extracted from the text data given in the question are as follows:

    [screenshot of the extracted topics]
