An Empirical Exploration of Recurrent Network Architectures: Key Takeaways
The authors ran a large set of experiments comparing different RNN architectures across several tasks. Their main conclusions:
(1) GRU outperforms LSTM on every task except language modeling.
(2) LSTM with dropout performs best on language modeling, and it performs even better with a large forget-gate bias.
(3) Within the LSTM, the gates rank in importance as: forget gate > input gate > output gate.
(4) The forget gate has a very large impact on every task except language modeling.
Language modeling exhibits stronger long-range dependency effects than the other scenarios.
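Conclusions (2) and (3) both center on the forget gate. A minimal NumPy sketch of a single LSTM step shows where the large forget-gate bias enters; the gate packing order, variable names, and the `forget_bias` parameter are my own illustrative assumptions, not the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b, forget_bias=1.0):
    """One LSTM time step.

    The four gates are packed row-wise as [input, forget, cell, output].
    `forget_bias` is the extra constant added to the forget gate's
    pre-activation, i.e. the "large forget gate bias" trick.
    """
    H = h.shape[0]
    z = W @ x + U @ h + b                    # all four gate pre-activations
    i = sigmoid(z[0:H])                      # input gate
    f = sigmoid(z[H:2 * H] + forget_bias)    # forget gate, biased toward 1
    g = np.tanh(z[2 * H:3 * H])              # candidate cell update
    o = sigmoid(z[3 * H:4 * H])              # output gate
    c_new = f * c + i * g                    # cell state: keep old memory + add new
    h_new = o * np.tanh(c_new)               # hidden state
    return h_new, c_new
```

Setting `forget_bias` around 1.0 pushes the forget gate toward 1 at the start of training, so the cell state is preserved by default and gradients across long time spans survive longer before the network has learned when to forget.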