How to use sklearn.datasets
sklearn.datasets is the scikit-learn module for loading and fetching datasets. Its fetch_20newsgroups_vectorized function loads the 20 Newsgroups dataset and returns it already converted to tf-idf feature vectors, so with the default settings no separate vectorization step is needed. First, import the relevant modules:
```python
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
```
Then call fetch_20newsgroups_vectorized to load the dataset as tf-idf vectors:
```python
bunch = fetch_20newsgroups_vectorized(subset='all')  # load both the train and test portions
X, y = shuffle(bunch.data, bunch.target)             # shuffle samples and labels together
print(X.shape) # (18846, 130107)
```
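For orientation, the returned object is a scikit-learn Bunch; below is a minimal sketch (continuing from the snippet above) that inspects its documented attributes such as data, target, and target_names:
```python
# Inspect the Bunch returned above: data is a sparse tf-idf matrix,
# target holds integer class labels, target_names the 20 category names.
print(type(bunch.data))         # sparse matrix of tf-idf features
print(len(bunch.target_names))  # 20
print(bunch.target[:10])        # labels of the first 10 documents
```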
The shuffle function shuffles the samples and their labels together so the original ordering does not carry over into the split. Next, use train_test_split to divide the data into training and test sets:
```python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print(X_train.shape) # (13192, 130107)
print(X_test.shape) # (5654, 130107)
```
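If a reproducible, class-balanced split is wanted, train_test_split also accepts random_state and stratify; the following is a hedged variant of the call above rather than part of the original example:
```python
# Optional: fix the random seed and stratify by label so class
# proportions are preserved in both the training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)
```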
This completes loading the dataset with sklearn.datasets.fetch_20newsgroups_vectorized and splitting it into training and test sets. [1]
#### References
- [1][3] [sklearn基础篇(一)-- datasets数据集](https://blog.csdn.net/xq151750111/article/details/127412337)
- [2] [sklearn自带数据集的使用以及特征提取、特征预处理](https://blog.csdn.net/qq_32350719/article/details/97396547)