File "D:\pythonProject\信息检索系统实践\第三次实验\textrank.py", line 102, in <module> results.extend(future.result()) File "D:\python\lib\concurrent\futures\_base.py", line 451, in result return self.__get_result() File "D:\python\lib\concurrent\futures\_base.py", line 403, in __get_result raise self._exception File "D:\python\lib\concurrent\futures\thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "D:\pythonProject\信息检索系统实践\第三次实验\textrank.py", line 83, in process_chunk tr4w.analyze(chunk, lower=True, window=2) File "D:\python\lib\site-packages\textrank4zh\TextRank4Keyword.py", line 93, in analyze self.keywords = util.sort_words(_vertex_source, _edge_source, window = window, pagerank_config = pagerank_config) File "D:\python\lib\site-packages\textrank4zh\util.py", line 160, in sort_words nx_graph = nx.from_numpy_matrix(graph) AttributeError: module 'networkx' has no attribute 'from_numpy_matrix' 进程已结束,退出代码为 1
This error is related to the `networkx` issue you mentioned earlier: `from_numpy_matrix` was removed in `networkx` 3.0, so the installed version is too new for `textrank4zh`, not too old.
You can pin `networkx` to a version that still provides the method:
```
pip install "networkx<3.0"
```
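To see which version is actually installed (a quick diagnostic, independent of `textrank4zh`):
```
import networkx as nx
print(nx.__version__)  # from_numpy_matrix exists only in versions below 3.0
```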
If you prefer to keep the current `networkx` version, you can instead edit `util.py` in the installed `textrank4zh` package (the `sort_words` function shown in the traceback). That module already imports `networkx` as:
```
import networkx as nx
```
so you only need to change the `from_numpy_matrix` call inside `sort_words` to its replacement:
```
nx_graph = nx.from_numpy_array(graph)
```
Either of these should resolve the problem you are seeing.
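If you would rather not edit the installed package at all, another option is a small compatibility shim in your own script before `textrank4zh` is imported. This is a minimal sketch, assuming `networkx` >= 3.0 (where `from_numpy_array` is the documented replacement); the sample text is only an illustration:
```
import networkx as nx

# networkx 3.0 removed from_numpy_matrix; alias it to from_numpy_array
# so that textrank4zh's util.sort_words keeps working unchanged
if not hasattr(nx, "from_numpy_matrix"):
    nx.from_numpy_matrix = nx.from_numpy_array

from textrank4zh import TextRank4Keyword  # import after the shim is in place

tr4w = TextRank4Keyword()
tr4w.analyze("这是一个用于测试的示例文本", lower=True, window=2)
print([item.word for item in tr4w.get_keywords(5, word_min_len=2)])
```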
Related questions
Traceback (most recent call last): File "D:\pythonFiles\wikiextractor-master\wikiextractor-master\wikiextractor\WikiExtractor.py", line 648, in <module> main() File "D:\pythonFiles\wikiextractor-master\wikiextractor-master\wikiextractor\WikiExtractor.py", line 642, in main process_dump(input_file, args.templates, output_path, file_size, File "D:\pythonFiles\wikiextractor-master\wikiextractor-master\wikiextractor\WikiExtractor.py", line 428, in process_dump reduce.start() File "D:\软件\python\lib\multiprocessing\process.py", line 121, in start self._popen = self._Popen(self) File "D:\软件\python\lib\multiprocessing\context.py", line 327, in _Popen return Popen(process_obj) File "D:\软件\python\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__ reduction.dump(process_obj, to_child) File "D:\软件\python\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle '_io.TextIOWrapper' object (venv) D:\pythonFiles\wikiextractor-master\wikiextractor-master>Traceback (most recent call last): File "<string>", line 1, in <module> File "D:\软件\python\lib\multiprocessing\spawn.py", line 102, in spawn_main source_process = _winapi.OpenProcess( OSError: [WinError 87] 参数错误。 怎么解决
This error usually occurs when the pickle module tries to serialize an object that contains something it cannot serialize. Here, multiprocessing is trying to hand an open `_io.TextIOWrapper` file object to a child process, and file objects cannot be pickled. Pass the output path `output_path` as a plain string rather than as a file object, and do the actual writing in the main process instead of in the child processes, so that no file object ever has to cross a process boundary. A possible solution:
``` python
from multiprocessing import Process, Queue

def worker_process(input_queue, output_queue):
    # Workers receive plain strings and return plain strings; no file
    # objects are passed to (or opened in) the child processes.
    while True:
        data = input_queue.get()
        if data is None:  # sentinel: stop this worker
            break
        output = process_data(data)  # placeholder for the actual processing
        output_queue.put(output)

def process_dump(input_file, templates, output_path, file_size, process_count):
    # queues for distributing work and collecting results
    input_queue = Queue(maxsize=process_count)
    output_queue = Queue()
    # spawn worker processes; only picklable arguments (the two queues) are passed
    processes = []
    for _ in range(process_count):
        p = Process(target=worker_process, args=(input_queue, output_queue))
        processes.append(p)
        p.start()
    # the output file is opened and written only in the main process
    with open(output_path, 'w', encoding='utf-8') as output_file:
        with open(input_file, 'r', encoding='utf-8') as f:
            for line in f:
                input_queue.put(line)
                # drain finished results while feeding input so the result queue stays small
                while not output_queue.empty():
                    output_file.write(output_queue.get())
        # send one sentinel per worker, then wait for them to finish
        for _ in range(process_count):
            input_queue.put(None)
        for p in processes:
            p.join()
        # write any results still left in the queue
        while not output_queue.empty():
            output_file.write(output_queue.get())
```
In this revised code, `output_path` stays a plain string, the output file is opened only in the main process, and all writing happens there; the worker processes exchange only picklable data through the queues, so no file object is ever pickled.
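The underlying rule can be shown in isolation: everything passed in `args` to `Process` is pickled when the child is spawned on Windows, so pass the path and open the file inside the child. A minimal sketch with a hypothetical `writer` helper (not part of WikiExtractor):
``` python
from multiprocessing import Process

def writer(path, text):
    # open the file inside the child; only the picklable path string crosses the process boundary
    with open(path, 'a', encoding='utf-8') as f:
        f.write(text)

if __name__ == '__main__':
    p = Process(target=writer, args=('out.txt', 'hello\n'))  # OK: a str is picklable
    # Passing an open file instead, e.g. args=(open('out.txt', 'a'), 'hello\n'),
    # would raise TypeError: cannot pickle '_io.TextIOWrapper' object at p.start().
    p.start()
    p.join()
```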
Traceback (most recent call last): File "D:\anaconda\lib\site-packages\IPython\core\interactiveshell.py", line 3460, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-dc57cfac1dc3>", line 1, in <module> runfile('D:\\Users\\Admin\\PycharmProjects\\pythonProject2\\用模型预测.py', wdir='D:\\Users\\Admin\\PycharmProjects\\pythonProject2') File "D:\PyCharm 2023.1.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "D:\PyCharm 2023.1.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "D:\Users\Admin\PycharmProjects\pythonProject2\用模型预测.py", line 20, in <module> print([round(x) for x in y_pred_proba]) File "D:\Users\Admin\PycharmProjects\pythonProject2\用模型预测.py", line 20, in <listcomp> print([round(x) for x in y_pred_proba]) TypeError: type numpy.ndarray doesn't define __round__ method
This error occurs because `model.predict(X)` returns a 2-D numpy array, so iterating over `y_pred_proba` yields whole rows (ndarrays) rather than scalars, and `round` cannot be applied to an ndarray. Select the column you need (for example `y_pred_proba[:, 1]`) and convert each element to the built-in `float` type before rounding. For example, to round the predicted probabilities to integers you can use code like this:
```
import numpy as np
import pandas as pd

# model and X are assumed to be defined earlier in the script
y_pred = np.argmax(model.predict(X), axis=-1)
y_pred_proba = model.predict(X)
# build a DataFrame, rounding the class-1 probability column to an integer
results = pd.DataFrame({'y_pred': y_pred,
                        'y_pred_proba': [round(float(x)) for x in y_pred_proba[:, 1]]})
# keep predictions in the 0-6 range and drop duplicate rows
results = results[(results['y_pred'] >= 0) & (results['y_pred'] <= 6)]
results.drop_duplicates(inplace=True)
# print the predictions; index a single column so each x is a scalar, not an ndarray
print([int(x) for x in y_pred])
print([round(float(x)) for x in y_pred_proba[:, 1]])
```
Here the list comprehension selects one probability column, converts each element to a built-in `float`, rounds it with `round`, and stores the result in the DataFrame.
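The same behaviour can be reproduced and checked in isolation with a toy array (the values are made up for illustration):
```
import numpy as np

proba = np.array([[0.2, 0.8], [0.9, 0.1]])  # shape (n_samples, 2), like model.predict(X)

# [round(x) for x in proba]   # fails: each x is a whole row (an ndarray) with no __round__
rounded = [round(float(x)) for x in proba[:, 1]]  # pick a column, then round Python floats
print(rounded)  # [1, 0]
```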