
【pytorch】Data conversion

Hardware devices in PyTorch

View GPU information:

nvidia-smi
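The same information can also be queried from inside PyTorch itself; a small sketch (the device index 0 is just an example):

import torch

print(torch.cuda.is_available())          # is a usable GPU present?
print(torch.cuda.device_count())          # how many GPUs are visible to this process
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of GPU 0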

Specify the device in code:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
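With only card 2 visible, PyTorch renumbers it as cuda:0 inside the process; the environment variable must be set before the first CUDA call. A minimal sketch of picking the device afterwards (the tensor shape here is arbitrary):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(3, 3).to(device)   # lands on physical GPU 2, seen here as cuda:0
print(x.device)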

cpu->gpu:
if torch.cuda.is_available():
    a = a.cuda()

gpu->cpu:
a = a.cpu()
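As a small self-contained sketch of the round trip (the tensor contents are arbitrary), .to(device) is the more general equivalent of .cuda() / .cpu():

import torch

a = torch.randn(2, 3)                 # created on the CPU
if torch.cuda.is_available():
    a = a.cuda()                      # copy to the default GPU
a = a.cpu()                           # copy back to the CPU

# equivalent, more general form
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
a = a.to(device)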

Distributed processing in PyTorch

I have seen distributed data-parallel processing in a few codebases, but since I have not had a chance to use it myself yet, I will only note the basics here and dig into it when I actually need it.
torch.distributed.init_process_group(world_size=4, init_method='...')
model = torch.nn.parallel.DistributedDataParallel(model)
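The snippet above leaves out the process setup; below is a minimal runnable sketch of the same idea, assuming two CPU processes and the gloo backend (the master address, port, toy model, and world_size are placeholder choices, not from the original):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def run(rank, world_size):
    # every process joins the same group; gloo runs on CPU, use nccl for GPUs
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = torch.nn.Linear(10, 1)
    ddp_model = DDP(model)                 # gradients are averaged across processes
    out = ddp_model(torch.randn(4, 10))
    out.sum().backward()
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    mp.spawn(run, args=(world_size,), nprocs=world_size)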

Data types in PyTorch
Data type | dtype | CPU tensor | GPU tensor
32-bit floating point | torch.float32 or torch.float | torch.FloatTensor | torch.cuda.FloatTensor
64-bit floating point | torch.float64 or torch.double | torch.DoubleTensor | torch.cuda.DoubleTensor
16-bit floating point | torch.float16 or torch.half | torch.HalfTensor | torch.cuda.HalfTensor
8-bit integer (unsigned) | torch.uint8 | torch.ByteTensor | torch.cuda.ByteTensor
8-bit integer (signed) | torch.int8 | torch.CharTensor | torch.cuda.CharTensor
16-bit integer (signed) | torch.int16 or torch.short | torch.ShortTensor | torch.cuda.ShortTensor
32-bit integer (signed) | torch.int32 or torch.int | torch.IntTensor | torch.cuda.IntTensor
64-bit integer (signed) | torch.int64 or torch.long | torch.LongTensor | torch.cuda.LongTensor
x, y = torch.from_numpy(x), torch.from_numpy(y)
x, y = x.type(torch.DoubleTensor), y.type(torch.DoubleTensor)

z=torch.zeros([2, 4], dtype=torch.int32)
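Putting the conversions above together in one self-contained sketch (the array contents are arbitrary):

import numpy as np
import torch

x = np.arange(6, dtype=np.float32).reshape(2, 3)
t = torch.from_numpy(x)              # float32 tensor sharing memory with x
t64 = t.type(torch.DoubleTensor)     # same effect as t.double() or t.to(torch.float64)
i = t.to(torch.int32)                # cast to 32-bit integers
z = torch.zeros([2, 4], dtype=torch.int32)
print(t.dtype, t64.dtype, i.dtype, z.dtype)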
