# Adding a TensorBoard writer: `writer = SummaryWriter("../logs_train")`
Date: 2024-10-24 17:03:19
`SummaryWriter` lets you record data for visualization while training a neural network or other machine-learning model, such as losses, metrics, and weight distributions. The call in the title is the PyTorch form (`torch.utils.tensorboard.SummaryWriter`); TensorFlow offers the analogous `tf.summary.create_file_writer`. In either case you pass it a path such as `../logs_train`, which becomes the directory where the event log files are saved.
For example, with TensorFlow you might do:
```python
import tensorflow as tf
# create a TensorBoard file writer for the log directory
tensorboard_writer = tf.summary.create_file_writer('../logs_train')

# at the end of each training step or epoch, write data points
with tensorboard_writer.as_default():
    tf.summary.scalar('loss', loss_value, step=global_step)  # log the loss value
    for w in model.weights:  # log a histogram per weight tensor
        tf.summary.histogram(w.name, w, step=global_step)

# when training is finished, close the writer
tensorboard_writer.close()
```
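The `SummaryWriter("../logs_train")` call in the title is the PyTorch form of the same idea; a minimal sketch using `torch.utils.tensorboard` (the loss values here are placeholders, not a real training loop):

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("../logs_train")  # event files are written here

for step in range(100):
    loss_value = 1.0 / (step + 1)  # placeholder; use your real training loss
    writer.add_scalar("loss", loss_value, step)  # one scalar point per step

writer.close()  # flush and close the event file
```

Run `tensorboard --logdir ../logs_train` afterwards to inspect the curves.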
Related question
`writer = SummaryWriter("../logs_max pool")` raises `NameError: name 'SummaryWriter' is not defined`
`SummaryWriter` is the log-writing class in `torch.utils.tensorboard`, PyTorch's TensorBoard integration (PyTorch Lightning wraps the same writer through its loggers). The error `NameError: name 'SummaryWriter' is not defined` means the name was never imported into your environment, usually for one of the following reasons:
1. You have not imported it from the appropriate module (e.g. `from torch.utils.tensorboard import SummaryWriter`), or the `tensorboard` package it depends on is not installed.
2. If you are not in a PyTorch project, the `SummaryWriter` you mean may belong to another library (e.g. `tensorboardX`), and you need to import it from that library instead.
To fix the error:
- Confirm the dependencies are installed (`pip install tensorboard`, plus `pip install pytorch-lightning` if you use Lightning).
- If you are using Lightning, obtain the writer through a logger, for example:
```python
from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger("path/to/logs", name="my_run")
writer = logger.experiment  # logger.experiment is already the SummaryWriter
```
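Outside Lightning, the most direct fix for the `NameError` is simply to import the class before using it. A minimal sketch (note that the directory name in the question contains a space, which is best avoided in log paths):

```python
from torch.utils.tensorboard import SummaryWriter  # the missing import

writer = SummaryWriter("../logs_maxpool")  # renamed here to drop the space
writer.add_scalar("dummy", 1.0, 0)  # write one point so an event file appears
writer.close()
```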
Convert the following code to Paddle 2.2.2:

```python
log_dir = './logs/pretrain'
if not os.path.isdir(log_dir):
    os.makedirs(log_dir)
writer = SummaryWriter(log_dir)
learning_rate = 1e-4
isp = torch.load('isp/ISP_CNN.pth').cuda()
for k, v in isp.named_parameters():
    v.requires_grad = False
predenoiser = torch.load('./predenoising/PreDenoising.pth')
for k, v in predenoiser.named_parameters():
    v.requires_grad = False
denoiser = RViDeNet(predenoiser=predenoiser).cuda()
initial_epoch = findLastCheckpoint(save_dir=save_dir)
if initial_epoch > 0:
    print('resuming by loading epoch %03d' % initial_epoch)
    denoiser = torch.load(os.path.join(save_dir, 'model_epoch%d.pth' % initial_epoch))
    initial_epoch += 1
opt = optim.Adam(denoiser.parameters(), lr=learning_rate)
# Raw data takes long time to load. Keep them in memory after loaded.
gt_raws = [None] * len(gt_paths)
iso_list = [1600, 3200, 6400, 12800, 25600]
a_list = [3.513262, 6.955588, 13.486051, 26.585953, 52.032536]
g_noise_var_list = [11.917691, 38.117816, 130.818508, 484.539790, 1819.818657]
if initial_epoch == 0:
    step = 0
else:
    step = (initial_epoch - 1) * int(len(gt_paths) / batch_size)
temporal_frames_num = 3
```
```
import os
import paddle
from paddle.optimizer import Adam
# paddle.utils.tensorboard does not exist; use tensorboardX's SummaryWriter
# (pip install tensorboardX), or VisualDL's LogWriter, for logging with Paddle
from tensorboardX import SummaryWriter
log_dir = './logs/pretrain'
if not os.path.isdir(log_dir):
    os.makedirs(log_dir)
writer = SummaryWriter(log_dir)
learning_rate = 1e-4
# paddle.load on a .pdparams file returns a state dict, not a Layer, so the
# networks must be instantiated first (ISP_CNN and PreDenoising are the model
# classes from the original RViDeNet project) before loading their weights
isp = ISP_CNN()
isp.set_state_dict(paddle.load('isp/ISP_CNN.pdparams'))
for v in isp.parameters():
    v.stop_gradient = True  # freeze the ISP network
predenoiser = PreDenoising()
predenoiser.set_state_dict(paddle.load('./predenoising/PreDenoising.pdparams'))
for v in predenoiser.parameters():
    v.stop_gradient = True  # freeze the pre-denoiser
denoiser = RViDeNet(predenoiser=predenoiser)
initial_epoch = findLastCheckpoint(save_dir=save_dir)
if initial_epoch > 0:
    print('resuming by loading epoch %03d' % initial_epoch)
    denoiser.set_state_dict(paddle.load(os.path.join(save_dir, 'model_epoch%d.pdparams' % initial_epoch)))
    initial_epoch += 1
# Paddle's Adam takes learning_rate first and parameters as a keyword argument
opt = Adam(learning_rate=learning_rate, parameters=denoiser.parameters())
# Raw data takes long time to load. Keep them in memory after loaded.
gt_raws = [None] * len(gt_paths)
iso_list = [1600, 3200, 6400, 12800, 25600]
a_list = [3.513262, 6.955588, 13.486051, 26.585953, 52.032536]
g_noise_var_list = [11.917691, 38.117816, 130.818508, 484.539790, 1819.818657]
if initial_epoch == 0:
    step = 0
else:
    step = (initial_epoch - 1) * (len(gt_paths) // batch_size)
temporal_frames_num = 3
```
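The snippet above calls `findLastCheckpoint`, a helper from the original RViDeNet training script that is not defined here. A minimal sketch of what such a helper typically does, scanning the save directory for checkpoint files matching the `model_epoch%d.pdparams` pattern used above and returning the highest epoch number (0 if none are found):

```python
import glob
import os
import re

def findLastCheckpoint(save_dir):
    """Return the largest saved epoch number, or 0 if no checkpoint exists."""
    file_list = glob.glob(os.path.join(save_dir, 'model_epoch*.pdparams'))
    epochs = []
    for f in file_list:
        m = re.search(r'model_epoch(\d+)\.pdparams', os.path.basename(f))
        if m:
            epochs.append(int(m.group(1)))
    return max(epochs) if epochs else 0
```

With this helper, resuming works as in the snippet: a nonzero return value triggers loading of the latest checkpoint and training continues from the next epoch.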