Format the following one-liner as proper Python code: import csv counts = {'不满意': 0, '一般': 0, '很满意': 0, '满意': 0} with open('result.txt', 'r') as a: f=a.read() key = f.split(',') for i in key: if i in counts: counts[i] += 1 with open('result.csv', 'w', encoding='gbk', newline='') as f: writer = csv.writer(f) writer.writerow(['评语', '次数']) for key, value in counts.items(): writer.writerow([key, value])
```python
import csv
counts = {'不满意': 0, '一般': 0, '很满意': 0, '满意': 0}
with open('result.txt', 'r') as a:
    f = a.read()
key = f.split(',')
for i in key:
    if i in counts:
        counts[i] += 1
with open('result.csv', 'w', encoding='gbk', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['评语', '次数'])
    for key, value in counts.items():
        writer.writerow([key, value])
```
Related questions
Compute statistics from survey results and write them to a CSV file. [Problem description] To monitor food quality, the cafeteria surveyed students, asking each to rate the day's food with one of the comments "非常满意" (very satisfied), "满意" (satisfied), "一般" (average), or "不满意" (dissatisfied). The responses are stored in the file "result.txt". Write a program that counts how many times each comment appears and saves the tallies to "result.csv" in the same directory.
Sure, here is the code implementation:
```python
import csv

# Read the survey responses, one comment per line
# (utf-8 here is an assumption; match it to how result.txt was saved)
with open('result.txt', 'r', encoding='utf-8') as f:
    data = f.readlines()

# Tally how many times each comment appears
counts = {'非常满意': 0, '满意': 0, '一般': 0, '不满意': 0}
for line in data:
    line = line.strip()
    if line in counts:
        counts[line] += 1

# Write the tallies to the CSV file; utf-8-sig adds a BOM so that
# Excel displays the Chinese text correctly
with open('result.csv', 'w', encoding='utf-8-sig', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['评语', '次数'])
    for k, v in counts.items():
        writer.writerow([k, v])
```
This code first opens and reads the file via `with open('result.txt', 'r') as f:`, then uses the dictionary `counts` to tally how often each comment appears. Finally, it writes the tallies to `result.csv` using the `csv` module.
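The same tally can also be expressed with `collections.Counter` from the standard library. This is a minimal alternative sketch, not part of the original answer, assuming the same one-comment-per-line `result.txt`:
```python
import csv
from collections import Counter

VALID = {'非常满意', '满意', '一般', '不满意'}

# Count only lines that are one of the four valid comments
with open('result.txt', 'r', encoding='utf-8') as f:
    counts = Counter(line.strip() for line in f if line.strip() in VALID)

with open('result.csv', 'w', encoding='utf-8-sig', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['评语', '次数'])
    writer.writerows(counts.items())
```
One behavioral difference: unlike the dict version above, a comment that never appears is simply absent from the output rather than written with a count of zero.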
Use Python to scrape the top ten Weibo hot searches from the past two days, write them to the file nove.txt, use jieba to do a word-frequency count, write the sorted results to result.csv, and finally generate a word-cloud image.
Sure. This is a fairly involved task, but it can be done step by step. First, we need to scrape the trending keywords from the Weibo hot-search list, which the `requests` and `BeautifulSoup` libraries can handle. Here is a code example:
```python
import requests
from bs4 import BeautifulSoup

# Fetch the hot-search summary page; a browser User-Agent header
# avoids the most basic request blocking
url = 'https://s.weibo.com/top/summary'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}
response = requests.get(url, headers=headers)

# Each hot-search entry is a link inside a table cell with class td-02
soup = BeautifulSoup(response.text, 'html.parser')
keywords = soup.select('.td-02 a')

# Write the first ten keywords to the file, one per line
with open('nove.txt', 'w', encoding='utf-8') as f:
    for i in range(10):
        keyword = keywords[i].text.strip()
        f.write(keyword + '\n')
```
This code writes the top ten trending keywords into the `nove.txt` file.
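Note that the question asks for the past two days, while the snippet above only captures the list at the moment it runs. As a sketch of one simple scheme, assuming the script is run once on each of the two days, opening the file in append mode lets the second day's keywords accumulate after the first day's:
```python
import requests
from bs4 import BeautifulSoup

url = 'https://s.weibo.com/top/summary'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
soup = BeautifulSoup(requests.get(url, headers=headers).text, 'html.parser')

# Mode 'a' appends instead of overwriting, so two daily runs
# leave both days' top ten in nove.txt
with open('nove.txt', 'a', encoding='utf-8') as f:
    for a in soup.select('.td-02 a')[:10]:
        f.write(a.text.strip() + '\n')
```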
Next, we use the `jieba` library for segmentation and word-frequency counting. Here is example code:
```python
import jieba
import csv

jieba.setLogLevel(20)                # suppress jieba's startup log output
jieba.load_userdict('userdict.txt')  # optional custom dictionary; drop this line if the file does not exist

# Read the saved keywords and segment the text into words
with open('nove.txt', 'r', encoding='utf-8') as f:
    text = f.read()
words = jieba.lcut(text)

# Tally frequencies, skipping single characters and the word '微博'
word_counts = {}
for word in words:
    if len(word) == 1:
        continue
    elif word == '微博':
        continue
    else:
        word_counts[word] = word_counts.get(word, 0) + 1

# Sort by frequency, highest first, and write to the CSV file
word_counts_items = list(word_counts.items())
word_counts_items.sort(key=lambda x: x[1], reverse=True)
with open('result.csv', 'w', encoding='utf-8-sig', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['词语', '词频'])
    for item in word_counts_items:
        writer.writerow([item[0], item[1]])
```
This code segments the text in `nove.txt`, counts word frequencies, and writes the results to `result.csv` sorted by frequency.
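If TF-IDF-weighted keywords are preferred over raw counts, jieba also ships a keyword-extraction helper. A minimal sketch, not part of the original answer:
```python
import jieba.analyse

with open('nove.txt', 'r', encoding='utf-8') as f:
    text = f.read()

# Top 20 keywords ranked by TF-IDF weight rather than raw frequency
for word, weight in jieba.analyse.extract_tags(text, topK=20, withWeight=True):
    print(word, round(weight, 3))
```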
Finally, we can use the `wordcloud` library to generate the word-cloud image. Here is example code:
```python
import wordcloud

with open('nove.txt', 'r', encoding='utf-8') as f:
    text = f.read()

# A CJK font is required or Chinese characters render as empty boxes;
# simhei.ttf ships with Windows, so adjust the path on other systems
wc = wordcloud.WordCloud(
    background_color='white',
    font_path='C:/Windows/Fonts/simhei.ttf',
    max_words=100,
    width=800,
    height=600,
    margin=2
)
wc.generate(text)
wc.to_file('wordcloud.png')
```
This code generates a word cloud from the text in `nove.txt` and saves the result to `wordcloud.png`.
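One caveat: `WordCloud.generate` tokenizes with a simple word regex that does not segment Chinese, so each hot-search phrase tends to appear in the cloud as one whole chunk. If individual words are wanted, the jieba frequencies from the previous step can be fed in directly through `generate_from_frequencies`; here is a sketch that rebuilds the same frequency dict:
```python
import jieba
import wordcloud

with open('nove.txt', 'r', encoding='utf-8') as f:
    words = jieba.lcut(f.read())

# Same filtering as the counting step: drop single characters and '微博'
word_counts = {}
for word in words:
    if len(word) > 1 and word != '微博':
        word_counts[word] = word_counts.get(word, 0) + 1

wc = wordcloud.WordCloud(
    background_color='white',
    font_path='C:/Windows/Fonts/simhei.ttf',
    width=800,
    height=600
)
wc.generate_from_frequencies(word_counts)  # size words by the jieba counts
wc.to_file('wordcloud.png')
```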
Those are the complete code examples; combining them completes the whole task.
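As one way to combine them, assuming the three snippets are saved as `fetch_hot.py`, `count_words.py`, and `make_cloud.py` (hypothetical filenames, not from the original answer), a thin runner can execute the stages in order:
```python
import runpy

# Run the three stages in sequence: scrape, count, draw
# (filenames are assumptions; rename them to match your own files)
for script in ('fetch_hot.py', 'count_words.py', 'make_cloud.py'):
    runpy.run_path(script, run_name='__main__')
```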