NLP | Attention is all you need, PyTorch implementation: source code walkthrough 04 - testing the model and translating

Today is the final installment of this walkthrough of the PyTorch implementation of Attention is all you need. This part is very simple, so I will move through it quickly.
Links to the previous installments:
Attention is all you need, PyTorch implementation: source code walkthrough 01 - data preprocessing and building the vocabulary - https://blog.csdn.net/weixin_42744102/article/details/87006081
Attention is all you need, PyTorch implementation: source code walkthrough 02 - training the model (1): the training code - https://blog.csdn.net/weixin_42744102/article/details/87076089
Attention is all you need, PyTorch implementation: source code walkthrough 03 - training the model (2): the transformer model code and structure - https://blog.csdn.net/weixin_42744102/article/details/87088748
First, the GitHub source: https://github.com/Eathoublu/attention-is-all-you-need-pytorch
Today we will walk through translate.py.
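For context, the script is invoked from the command line roughly like this. The flags come straight from the argument parser in the code below; the checkpoint and data file names are placeholders, so substitute the files produced by your own preprocessing and training runs:

    python translate.py -model trained.chkpt -vocab data.pt -src test_src.txt -output pred.txt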

''' Translate input text with trained model. '''
import torch
import torch.utils.data
import argparse
from tqdm import tqdm

from dataset import collate_fn, TranslationDataset
from transformer.Translator import Translator
from preprocess import read_instances_from_file, convert_instance_to_idx_seq


def main():
    '''Main Function'''
    # 1 - main starts here and parses the arguments. -model, -src and -vocab
    # are required: the path to the trained model, the path to the source
    # data, and the path to the vocabulary, respectively.
    parser = argparse.ArgumentParser(description='translate.py')

    parser.add_argument('-model', required=True,
                        help='Path to model .pt file')
    parser.add_argument('-src', required=True,
                        help='Source sequence to decode (one line per sequence)')
    parser.add_argument('-vocab', required=True,
                        help='Path to the vocabulary file produced by preprocess.py')
    parser.add_argument('-output', default='pred.txt',
                        help="""Path to output the predictions (each line will
                        be the decoded sequence)""")
    parser.add_argument('-beam_size', type=int, default=5,
                        help='Beam size')
    parser.add_argument('-batch_size', type=int, default=30,
                        help='Batch size')
    parser.add_argument('-n_best', type=int, default=1,
                        help="""If verbose is set, will output the n_best
                        decoded sentences""")
    parser.add_argument('-no_cuda', action='store_true')

    opt = parser.parse_args()
    opt.cuda = not opt.no_cuda

    # Prepare DataLoader
    preprocess_data = torch.load(opt.vocab)
    preprocess_settings = preprocess_data['settings']
    test_src_word_insts = read_instances_from_file(
        opt.src,
        preprocess_settings.max_word_seq_len,
        preprocess_settings.keep_case)
    test_src_insts = convert_instance_to_idx_seq(
        test_src_word_insts, preprocess_data['dict']['src'])

    test_loader = torch.utils.data.DataLoader(
        TranslationDataset(
            src_word2idx=preprocess_data['dict']['src'],
            tgt_word2idx=preprocess_data['dict']['tgt'],
            src_insts=test_src_insts),
        num_workers=2,
        batch_size=opt.batch_size,
        collate_fn=collate_fn)

    # 2 - The data is loaded above; now instantiate a Translator object.
    translator = Translator(opt)

    # 3 - Open the output file and start translating.
    with open(opt.output, 'w') as f:
        for batch in tqdm(test_loader, mininterval=2, desc='- (Test)', leave=False):
            # 4 - Feed each batch into translate_batch, collect the results,
            # and write them to the output file.
            all_hyp, all_scores = translator.translate_batch(*batch)
            for idx_seqs in all_hyp:
                for idx_seq in idx_seqs:
                    pred_line = ' '.join([test_loader.dataset.tgt_idx2word[idx] for idx in idx_seq])
                    f.write(pred_line + '\n')

    print('[Info] Finished.')


if __name__ == "__main__":
    main()
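To make step 4 concrete: translate_batch runs beam search with opt.beam_size beams and returns the opt.n_best highest-scoring hypotheses per source sentence, which is why all_hyp is a nested list, and each hypothesis is a sequence of target-side word indices that still has to be mapped back to words. Here is a minimal, self-contained sketch of that last mapping. The toy vocabulary and hypothesis are invented for illustration; in the real script, tgt_idx2word comes from the preprocessed vocabulary file:

    # Toy target-side vocabulary (index -> word); made up for illustration.
    tgt_idx2word = {0: '<s>', 1: '</s>', 2: 'ein', 3: 'hund', 4: 'rennt'}

    # translate_batch returns one list of hypotheses per source sentence;
    # here: one sentence, n_best=1, one hypothesis of word indices.
    all_hyp = [[[2, 3, 4, 1]]]

    for idx_seqs in all_hyp:          # iterate over source sentences
        for idx_seq in idx_seqs:      # iterate over the n_best hypotheses
            pred_line = ' '.join(tgt_idx2word[idx] for idx in idx_seq)
            print(pred_line)          # -> "ein hund rennt </s>"

In the script itself, each such pred_line is written to opt.output as one line, so the output file contains one decoded sentence per line of the source file.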
