import torch

state_dict = torch.load(opts.checkpoint)
try:
    trainer.net.load_state_dict(state_dict['net_param'])
except Exception:
    # The checkpoint was saved from a DataParallel-wrapped model, so its keys
    # carry a "module." prefix; wrap the net the same way and load again.
    trainer.net = torch.nn.DataParallel(trainer.net)
    trainer.net.load_state_dict(state_dict['net_param'])
[Common PyTorch parallelization bug] This handles loading a checkpoint that was trained in parallel (i.e. saved from a DataParallel-wrapped model).
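An alternative, shown here as a minimal sketch (it reuses the same opts and trainer objects as the snippet above, and assumes the checkpoint keys carry the usual "module." prefix added by DataParallel), is to rename the keys instead of wrapping the model:

import torch
from collections import OrderedDict

state_dict = torch.load(opts.checkpoint, map_location='cpu')
net_param = state_dict['net_param']

# Strip the "module." prefix that DataParallel prepends to every parameter name,
# so the weights load into a plain, unwrapped model.
cleaned = OrderedDict(
    (k[len('module.'):] if k.startswith('module.') else k, v)
    for k, v in net_param.items()
)
trainer.net.load_state_dict(cleaned)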
try:
    out = trainer.net.forward()
except Exception:
    # A DataParallel wrapper hides the original model's attributes and methods;
    # reach the underlying model through .module instead.
    out = trainer.net.module.forward()
Similarly, the net needs to be accessed through its .module attribute to stay compatible with a model that was trained in parallel.
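A compact way to cover both cases, sketched here rather than taken from the original snippet ("inputs" is a placeholder for whatever the model actually expects), is to unwrap the model once with getattr and use the result everywhere:

# Works whether trainer.net is a plain nn.Module or a DataParallel wrapper:
# DataParallel keeps the real model in .module; a plain module has no such attribute.
core_net = getattr(trainer.net, 'module', trainer.net)
out = core_net(inputs)  # "inputs" stands in for the model's real input tensor(s)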