LDA Variational EM Implementation Explained, with a Walkthrough of the C Source Code

This resource is an e-book titled 《LDA漫游指南》; its eighth chapter is devoted to the variational Expectation-Maximization (EM) implementation of Latent Dirichlet Allocation (LDA). LDA is a popular unsupervised topic model, widely used in text analysis and document clustering to uncover latent topic structure. The core of the chapter covers the following parts:

1. Review and understanding: the chapter opens by revisiting the earlier derivation of variational LDA, in particular the variational EM procedure, stressing the roles of the E-step (estimation) and the M-step (maximization). The E-step uses the current α and β parameters to estimate each document's topic distribution and the topic-word distribution, while the M-step updates α and β based on the E-step results so as to maximize the lower bound.

2. Pseudocode framework: to help readers understand and implement the algorithm, the author gives a pseudocode framework for LDA variational EM. It lays out the basic flow: initialize the parameters, then alternate E-steps and M-steps to estimate and optimize them until convergence (a minimal sketch of this loop is shown below).

3. Detailed walkthrough: the author dissects Blei's C implementation, touching on details such as the choice of data structures, performance-optimization tricks, and how initialization and the intricacies of the iterative process are handled. This part is especially valuable for developers who want to implement LDA themselves, since it offers hands-on guidance.

4. An analogy: the M-step parameter update is illustrated with the image of a juggler tossing eggs; the parameters are adjusted back and forth across steps to push up the value of the lower bound, a typical iterative optimization process.

By reading this chapter, readers can grasp the basics of LDA and also learn how to carry the theory into working code, which is essential for understanding and applying the technique. For readers encountering LDA or its implementation for the first time, the resource offers a clear learning path and practical experience that helps build their engineering skills.
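The following is a minimal Python sketch of the E-step/M-step loop described in items 1 and 2 above. It is not Blei's C implementation: for brevity it keeps α fixed, normalizes β directly from the expected counts, and omits the ELBO-based convergence check and the Newton-Raphson update of α that the chapter discusses.

import numpy as np
from scipy.special import digamma

def e_step_document(doc, alpha, log_beta, max_iter=50, tol=1e-5):
    # Variational inference for one document.
    # doc: list of (word_id, count) pairs. Returns (gamma, phi).
    K = alpha.shape[0]
    counts = np.array([c for _, c in doc], dtype=float)
    word_ids = [w for w, _ in doc]
    gamma = alpha + counts.sum() / K
    phi = np.full((len(doc), K), 1.0 / K)
    for _ in range(max_iter):
        # phi_nk is proportional to beta_{k, w_n} * exp(digamma(gamma_k))
        log_phi = log_beta[:, word_ids].T + digamma(gamma)
        log_phi -= log_phi.max(axis=1, keepdims=True)
        phi = np.exp(log_phi)
        phi /= phi.sum(axis=1, keepdims=True)
        new_gamma = alpha + counts @ phi
        if np.abs(new_gamma - gamma).sum() < tol:
            gamma = new_gamma
            break
        gamma = new_gamma
    return gamma, phi

def variational_em(docs, K, V, em_iters=30, seed=0):
    # docs: list of documents, each a list of (word_id, count) pairs.
    rng = np.random.default_rng(seed)
    alpha = np.full(K, 1.0 / K)                  # Dirichlet prior, kept fixed here
    beta = rng.dirichlet(np.ones(V), size=K)     # topic-word distributions (K x V)
    for _ in range(em_iters):
        # E-step: estimate per-document gamma/phi and accumulate expected counts.
        suff = np.zeros((K, V))
        log_beta = np.log(beta + 1e-100)
        for doc in docs:
            gamma, phi = e_step_document(doc, alpha, log_beta)
            for n, (w, c) in enumerate(doc):
                suff[:, w] += c * phi[n]
        # M-step: re-estimate beta from the sufficient statistics.
        # (Blei's code also updates alpha by Newton-Raphson; omitted in this sketch.)
        beta = (suff + 1e-100) / (suff + 1e-100).sum(axis=1, keepdims=True)
    return alpha, beta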

Question: replace the PCA in this code with LDA:

LR_grid = LogisticRegression(max_iter=1000, random_state=42)
LR_grid_search = GridSearchCV(LR_grid, param_grid=param_grid, cv=cvx, scoring=scoring, n_jobs=10, verbose=0)
LR_grid_search.fit(pca_X_train, train_y)

estimators = [
    ('lr', LR_grid_search.best_estimator_),
    ('svc', svc_grid_search.best_estimator_),
]
clf = StackingClassifier(estimators=estimators,
                         final_estimator=LinearSVC(C=5, random_state=42),
                         n_jobs=10, verbose=1)
clf.fit(pca_X_train, train_y)

estimators = [
    ('lr', LR_grid_search.best_estimator_),
    ('svc', svc_grid_search.best_estimator_),
]
param_grid = {'final_estimator': [LogisticRegression(C=0.00001), LogisticRegression(C=0.0001),
                                  LogisticRegression(C=0.001), LogisticRegression(C=0.01),
                                  LogisticRegression(C=0.1), LogisticRegression(C=1),
                                  LogisticRegression(C=10), LogisticRegression(C=100),
                                  LogisticRegression(C=1000)]}
Stacking_grid = StackingClassifier(estimators=estimators)
Stacking_grid_search = GridSearchCV(Stacking_grid, param_grid=param_grid, cv=cvx,
                                    scoring=scoring, n_jobs=10, verbose=0)
Stacking_grid_search.fit(pca_X_train, train_y)
Stacking_grid_search.best_estimator_

train_pre_y = cross_val_predict(Stacking_grid_search.best_estimator_, pca_X_train, train_y, cv=cvx)
train_res1 = get_measures_gridloo(train_y, train_pre_y)
test_pre_y = Stacking_grid_search.predict(pca_X_test)
test_res1 = get_measures_gridloo(test_y, test_pre_y)
best_pca_train_aucs.append(train_res1.loc[:, "AUC"])
best_pca_test_aucs.append(test_res1.loc[:, "AUC"])
best_pca_train_scores.append(train_res1)
best_pca_test_scores.append(test_res1)
train_aucs.append(np.max(best_pca_train_aucs))
test_aucs.append(best_pca_test_aucs[np.argmax(best_pca_train_aucs)].item())
train_scores.append(best_pca_train_scores[np.argmax(best_pca_train_aucs)])
test_scores.append(best_pca_test_scores[np.argmax(best_pca_train_aucs)])
pca_comp.append(n_components[np.argmax(best_pca_train_aucs)])
print("n_components:")
print(n_components[np.argmax(best_pca_train_aucs)])
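Note that the snippet above only consumes features that have already been reduced with PCA (pca_X_train, pca_X_test), so the swap has to happen in the preprocessing step that is not shown. Assuming "LDA" here means scikit-learn's supervised LinearDiscriminantAnalysis (rather than the LatentDirichletAllocation topic model), a minimal sketch of that replacement could look like the following; X_train, X_test and train_y are assumed to be the raw feature matrices and labels from the surrounding script:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# LDA is supervised, so the labels are required when fitting; the number of
# components is capped at n_classes - 1, unlike PCA.
lda_reducer = LinearDiscriminantAnalysis(n_components=None)
lda_X_train = lda_reducer.fit_transform(X_train, train_y)
lda_X_test = lda_reducer.transform(X_test)

# Downstream, the grid searches and the stacking classifier stay the same,
# just fed the LDA-transformed matrices instead of pca_X_train / pca_X_test:
# LR_grid_search.fit(lda_X_train, train_y)
# ...
# test_pre_y = Stacking_grid_search.predict(lda_X_test)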

Question: the following code raises an AttributeError.

n_topics = 10
lda = LatentDirichletAllocation(n_components=n_topics,
                                max_iter=50,
                                learning_method='batch',
                                learning_offset=50,
                                # doc_topic_prior=0.1,
                                # topic_word_prior=0.01,
                                random_state=0)
lda.fit(tf)

# ########### Words associated with each topic
import pandas as pd
from openpyxl import Workbook

# Get the probability distribution of words under the topics
def get_topic_word_distribution(lda, tf_feature_names):
    arr = lda.transform(tf_vectorizer.transform([' '.join(tf_feature_names)]))
    return arr[0]

# Print the probability distribution of words under the topics
def print_topic_word_distribution(lda, tf_feature_names, n_top_words):
    dist = get_topic_word_distribution(lda, tf_feature_names)
    for i in range(lda.n_topics):
        print("Topic {}: {}".format(i, ', '.join("{:.4f}".format(x) for x in dist[i])))

# Write each topic's word distribution to an Excel sheet
def output_topic_word_distribution_to_excel(lda, tf_feature_names, n_top_words, filename):
    # Create the Excel workbook and worksheet
    wb = Workbook()
    ws = wb.active
    ws.title = "Topic Word Distribution"
    # Add the header row
    ws.cell(row=1, column=1).value = "Topic"
    for j in range(n_top_words):
        ws.cell(row=1, column=j+2).value = tf_feature_names[j]
    # Add each topic's word distribution
    dist = get_topic_word_distribution(lda, tf_feature_names)
    for i in range(lda.n_topics):
        ws.cell(row=i+2, column=1).value = i
        for j in range(n_top_words):
            ws.cell(row=i+2, column=j+2).value = dist[i][j]
    # Save the Excel file
    wb.save(filename)

n_top_words = 30
tf_feature_names = tf_vectorizer.get_feature_names()
topic_word = print_topic_word_distribution(lda, tf_feature_names, n_top_words)
#print_topic_word_distribution(lda, tf_feature_names, n_top_words)
output_topic_word_distribution_to_excel(lda, tf_feature_names, n_top_words, "topic_word_distribution.xlsx")

The error reported is:

Traceback (most recent call last):
  File "D:\python\lda3\data_1.py", line 157, in <module>
    topic_word = print_topic_word_distribution(lda, tf_feature_names, n_top_words)
  File "D:\python\lda3\data_1.py", line 129, in print_topic_word_distribution
    for i in range(lda.n_topics):
AttributeError: 'LatentDirichletAllocation' object has no attribute 'n_topics'
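For reference, the AttributeError occurs because scikit-learn's LatentDirichletAllocation stores the number of topics in n_components, not n_topics, and exposes the (unnormalized) topic-word weights in components_. A minimal sketch of corrected helpers might look like the following; tf_vectorizer and tf_feature_names are assumed to come from the script above, and in recent scikit-learn versions get_feature_names() has been replaced by get_feature_names_out().

import numpy as np

# components_ has shape (n_components, vocabulary size); normalizing each
# row turns it into a probability distribution over words for that topic.
def get_topic_word_distribution(lda):
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

def print_topic_word_distribution(lda, tf_feature_names, n_top_words):
    dist = get_topic_word_distribution(lda)
    for i in range(lda.n_components):            # n_components, not n_topics
        top = np.argsort(dist[i])[::-1][:n_top_words]
        words = ", ".join("{} ({:.4f})".format(tf_feature_names[j], dist[i][j]) for j in top)
        print("Topic {}: {}".format(i, words))

# Feature names in newer scikit-learn:
# tf_feature_names = tf_vectorizer.get_feature_names_out()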
