Explain this code: `voc_dict = sorted([_ for _ in voc_dict.items() if _[1] > min_seq], key=lambda x: x[1], reverse=True)[:top_n]`
Asked: 2023-06-19 20:05:09
This line does the following:
1. Iterates over every entry of the dict `voc_dict`, keeping only those whose count is greater than `min_seq`, producing a new list.
2. Sorts that list by count, from largest to smallest.
3. Takes the first `top_n` elements of the sorted list as the final vocabulary. (Note that the result is a list of `(word, count)` tuples, not a dict.)
Concretely, the code combines a list comprehension with Python's built-in `sorted`:
- `[_ for _ in voc_dict.items() if _[1] > min_seq]` iterates over the `(word, count)` pairs of `voc_dict` and returns a new list containing only the pairs whose count exceeds `min_seq`.
- `key=lambda x: x[1]` sorts by each pair's second element, i.e. the count.
- `reverse=True` sorts in descending order.
- `[:top_n]` takes the first `top_n` elements after sorting.
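As a quick check, here is a minimal, self-contained run of the same one-liner on a toy word-count dict (the words, counts, and thresholds are made up for illustration):

```python
# Toy word-count dict; min_seq and top_n mirror the snippet above.
voc_dict = {"the": 10, "cat": 3, "sat": 1, "on": 7, "mat": 2}
min_seq = 1   # keep words whose count is strictly greater than 1
top_n = 3     # keep the 3 most frequent survivors

voc_dict = sorted([_ for _ in voc_dict.items() if _[1] > min_seq],
                  key=lambda x: x[1], reverse=True)[:top_n]
print(voc_dict)  # [('the', 10), ('on', 7), ('cat', 3)]
```

`'sat'` is filtered out because its count (1) is not greater than `min_seq`, and `'mat'` is dropped by the `[:top_n]` slice.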
Related questions
```python
from collections import defaultdict
from fractions import Fraction

import numpy as np
from music21 import converter, instrument
from music21 import note as NoteClass, chord as ChordClass


def convert_midi(fp, _seq_len):
    notes_list = []
    stream = converter.parse(fp)
    partitions = instrument.partitionByInstrument(stream)
    # print([(part.getInstrument().instrumentName, len(part.flat.notes)) for part in partitions])
    # Collect the notes pressed at each offset (press time)
    _press_time_dict = defaultdict(list)
    partition = None
    for part_sub in partitions:
        if part_sub.getInstrument().instrumentName.lower() == 'piano' and len(part_sub.flat.notes) > 0:
            partition = part_sub
            continue
    if partition is None:
        return None, None
    for _note in partition.flat.notes:
        _duration = str(_note.duration.quarterLength)
        if isinstance(_note, NoteClass.Note):
            _press_time_dict[str(_note.offset)].append([str(_note.pitch), _duration])
            notes_list.append(_note)
        if isinstance(_note, ChordClass.Chord):
            press_list = _press_time_dict[str(_note.offset)]
            notes_list.append(_note)
            for sub_note in _note.notes:
                press_list.append([str(sub_note.pitch), _duration])
        if len(_press_time_dict) == _seq_len:
            break
    _items = list(_press_time_dict.items())
    _items = sorted(_items, key=lambda t: float(Fraction(t[0])))[:_seq_len]
    if len(_items) < _seq_len:
        return None, None
    last_step = Fraction(0, 1)
    # notes_vocab, durations_vocab and offsets_vocab are module-level vocab lists
    notes = np.zeros(shape=(_seq_len, len(notes_vocab), len(durations_vocab)), dtype=np.float32)
    steps = np.zeros(shape=(_seq_len, len(offsets_vocab)), dtype=np.float32)
    for idx, (cur_step, entities) in enumerate(_items):
        cur_step = Fraction(cur_step)
        diff_step = str(cur_step - last_step)
        if diff_step in offsets_vocab:
            steps[idx, offsets_vocab.index(diff_step)] = 1.
            last_step = cur_step
        else:
            steps[idx, offsets_vocab.index('0')] = 1.
        for pitch, quarterLen in entities:
            notes[idx, notes_vocab.index(pitch),
                  durations_vocab.index(quarterLen if quarterLen in durations_vocab else '0')] = 1.
    notes = notes.reshape((_seq_len, -1))  # fixed: the original used `seq_len`, which is undefined here
    inputs = np.concatenate([notes, steps], axis=-1)
    return inputs, notes_list
```
This code converts a MIDI file into an input tensor for a neural network; the `fp` parameter is the MIDI file path and `_seq_len` is the sequence length. It first reads the MIDI file with the `converter` module of the `music21` library, then separates the notes by instrument with the `instrument` module. It locates the piano track and collects its notes in time order into the `_press_time_dict` dict, keyed by offset. Finally, it turns `_press_time_dict` into the model input: for each time step, a one-hot encoding of the pitches and durations pressed at that step, concatenated with a one-hot encoding of the time delta since the previous step.
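The trickiest part is the one-hot encoding of the time deltas. The sketch below isolates just that step with a hypothetical `offsets_vocab` and three made-up press offsets (both are stand-ins for the globals the function relies on):

```python
from fractions import Fraction

import numpy as np

# Hypothetical vocab of offset deltas, standing in for the offsets_vocab
# global used by convert_midi above.
offsets_vocab = ['0', '1/4', '1/2', '1', '2']
# Three press times as strings, as the keys of _press_time_dict would be.
press_offsets = ['0', '1/2', '3/4']

steps = np.zeros((len(press_offsets), len(offsets_vocab)), dtype=np.float32)
last_step = Fraction(0, 1)
for idx, cur_step in enumerate(press_offsets):
    cur_step = Fraction(cur_step)
    diff_step = str(cur_step - last_step)  # time since the previous press
    if diff_step in offsets_vocab:
        steps[idx, offsets_vocab.index(diff_step)] = 1.
        last_step = cur_step
    else:
        steps[idx, offsets_vocab.index('0')] = 1.  # unknown delta falls back to '0'

print(steps.argmax(axis=1))  # [0 2 1]: deltas '0', '1/2', '1/4'
```

Using `Fraction` keeps offsets exact (music21 represents offsets as rationals), so string keys like `'3/4'` round-trip without floating-point drift.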
```
Traceback (most recent call last):
  File "D:\python\code\阈值处理20221227\robustness.py", line 7, in <module>
    data = scipy.io.loadmat('D:/Network-Data-master/01_Jazz.mat')
  File "D:\python\code\venv\lib\site-packages\scipy\io\matlab\_mio.py", line 226, in loadmat
    MR, _ = mat_reader_factory(f, **kwargs)
  File "D:\python\code\venv\lib\site-packages\scipy\io\matlab\_mio.py", line 80, in mat_reader_factory
    raise NotImplementedError('Please use HDF reader for matlab v7.3 '
NotImplementedError: Please use HDF reader for matlab v7.3 files, e.g. h5py
```
This error means you are trying to load a Matlab v7.3 file, which SciPy's `loadmat` cannot read directly (v7.3 files are HDF5 under the hood). You can use the h5py library to read this format instead. Here is one possible implementation:
```python
import h5py
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

# Load the real-world jazz network from the Matlab v7.3 file
with h5py.File('D:/Network-Data-master/01_Jazz.mat', 'r') as f:
    adjacency_matrix = np.array(f['Problem']['A'])
G = nx.from_numpy_array(adjacency_matrix)  # from_numpy_matrix was removed in NetworkX 3.0

# Rank nodes by degree, betweenness, local clustering coefficient,
# eigenvector centrality and PageRank (descending), and by Burt's
# constraint (ascending), giving one node ordering per method
degree = dict(G.degree())
degree_seq = [node for node, deg in sorted(degree.items(), key=lambda x: x[1], reverse=True)]
betweenness = nx.betweenness_centrality(G)
betweenness_seq = [node for node, btwn in sorted(betweenness.items(), key=lambda x: x[1], reverse=True)]
clustering = nx.clustering(G)
clustering_seq = [node for node, clust in sorted(clustering.items(), key=lambda x: x[1], reverse=True)]
centrality = nx.eigenvector_centrality(G)
centrality_seq = [node for node, cent in sorted(centrality.items(), key=lambda x: x[1], reverse=True)]
pagerank = nx.pagerank(G)
pagerank_seq = [node for node, pr in sorted(pagerank.items(), key=lambda x: x[1], reverse=True)]
burt = nx.constraint(G)
burt_seq = [node for node, con in sorted(burt.items(), key=lambda x: x[1])]

# Remove nodes in cumulative 10% steps; at each step record network
# efficiency and average shortest path length, guarding against the
# errors raised when the remaining network is empty or disconnected
num_nodes = len(G.nodes())
num_steps = 10
results = []
for i in range(num_steps):
    percent_deleted = (i + 1) * 0.1
    nodes_to_delete = degree_seq[:int(percent_deleted * num_nodes)]
    G_copy = G.copy()
    G_copy.remove_nodes_from(nodes_to_delete)
    if len(G_copy) > 0 and nx.is_connected(G_copy):
        results.append((percent_deleted, nx.average_shortest_path_length(G_copy), nx.global_efficiency(G_copy)))
    else:
        results.append((percent_deleted, np.nan, np.nan))

# Plot the deletion process and results to see how robustness degrades
fig, ax = plt.subplots(1, 3)
fig.set_size_inches(18, 6)
ax[0].set_title('Degree')
ax[0].plot([degree[n] for n in degree_seq])
ax[0].set_xlabel('Node rank')
ax[0].set_ylabel('Degree')
ax[1].set_title('Efficiency and path length')
ax[1].plot([r[0] for r in results], [r[1] for r in results], label='Average shortest path length')
ax[1].plot([r[0] for r in results], [r[2] for r in results], label='Global efficiency')
ax[1].set_xlabel('Percent nodes deleted')
ax[1].set_ylabel('Metric value')
ax[1].legend()
ax[2].set_title('Burt constraint')
ax[2].plot([burt[n] for n in burt_seq])
ax[2].set_xlabel('Node rank')
ax[2].set_ylabel('Burt constraint')
plt.show()
```
This code is very similar to the previous version; the main difference is how the Matlab v7.3 file is loaded and processed. It now reads the file with the h5py library and then converts the adjacency matrix into a NetworkX graph.
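If you want to verify the h5py loading path without the original `.mat` file, here is a minimal sketch that writes a small HDF5 file with the same `Problem/A` layout and reads it back (the file name `demo_jazz.mat` and the ring-graph data are made up; the dataset path is an assumption taken from the script above):

```python
import h5py
import numpy as np

# A symmetric 4-node ring graph as a stand-in adjacency matrix
ring = np.zeros((4, 4), dtype=np.float64)
for i in range(4):
    ring[i, (i + 1) % 4] = ring[(i + 1) % 4, i] = 1.0

# Write the same 'Problem/A' layout the script above expects
with h5py.File('demo_jazz.mat', 'w') as f:
    f.create_dataset('Problem/A', data=ring)

# Read it back the same way the script does
with h5py.File('demo_jazz.mat', 'r') as f:
    adjacency_matrix = np.array(f['Problem']['A'])

print(adjacency_matrix.shape)                             # (4, 4)
print(np.allclose(adjacency_matrix, adjacency_matrix.T))  # True
```

One caveat: MATLAB stores arrays column-major, so a matrix read back through h5py is the transpose of what MATLAB saved. For a symmetric adjacency matrix (an undirected network like this one) that makes no difference, but for a directed network you would transpose after loading.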