2017-01-31 | Reading | Personal notes
NIPS 2016 in Full: Detailed Topics, Frontier Papers, and Download Resources
- 《Live from Andrew Ng's NIPS 2016 Talk: How to Build AI Applications with Deep Learning?》
「nuts and bolts of building AI」
- Scale driving deep learning progress
- the rise of end-to-end learning: a purer approach, but one that needs much larger training sets
- Machine learning strategy: how to work with datasets effectively
- Avoidable bias + variance = the bias-variance trade-off
- Training error high? -> Bias: bigger model, train longer, new model architecture.
Dev error high? -> Variance: more data, regularization, new model architecture.
- Data Synthesis
- The dev set and test set should follow the same data distribution; you can also set aside part of the training set as a train-dev set
- The gap between human-level error and training error is still called bias; the gap between training error and train-dev error is "overfitting of the training set" (that is, it reflects how much of the model's performance is specific to the training set); the gap between train-dev error and dev error is the "data mismatch" (the error introduced by the two sets of data not living in the same "universe", as described above); and the gap between dev error and test error is "overfitting of the dev set" (by the same logic). This decomposition is sketched in code after the checklist below.
- Training error high? -> Bias: bigger model, train longer, new model architecture
Training-Dev error high? -> Variance: more data, regularization, new model architecture
Dev set error high? -> Train-test data mismatch: make training data more similar to test data, data synthesis (domain adaptation), new model architecture
Test set error high? -> Overfit dev set: more dev set data
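To make the decomposition concrete, here is a minimal Python sketch (all error values are hypothetical) that computes the four gaps and looks up the matching remedy from the checklist above:

```python
# Hypothetical error rates measured on each split with the same model.
human_err, train_err, train_dev_err, dev_err, test_err = 0.01, 0.05, 0.06, 0.10, 0.11

gaps = {
    "avoidable bias (human -> train)": train_err - human_err,
    "variance / train-set overfitting (train -> train-dev)": train_dev_err - train_err,
    "data mismatch (train-dev -> dev)": dev_err - train_dev_err,
    "dev-set overfitting (dev -> test)": test_err - dev_err,
}
remedies = {
    "avoidable bias (human -> train)": "bigger model, train longer, new architecture",
    "variance / train-set overfitting (train -> train-dev)": "more data, regularization",
    "data mismatch (train-dev -> dev)": "make training data more like test data, data synthesis",
    "dev-set overfitting (dev -> test)": "get more dev-set data",
}

worst = max(gaps, key=gaps.get)  # the largest gap dictates the next action
print(f"largest gap: {worst} = {gaps[worst]:.3f} -> try: {remedies[worst]}")
```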
- On the future of AI
transfer learning - AI product management
- 《Live from the Father of GANs at NIPS 2016: A Complete Look at the Principles and Future of Generative Adversarial Networks》
- … Tips and Tricks
- Feed the data labels to the GAN -> one-sided label smoothing (see the loss sketch after this list)
- Batch Norm: take a "batch" of data and normalize it (subtract the mean, divide by the standard deviation).
Problem: the examples within one batch are too similar to each other; an unsupervised GAN is easily led astray into treating them as if they were all the same. That is, the outputs of the final generative model end up mixing in features from many other examples of the same batch, which is not what we want
-> Reference Batch Norm: take one fixed batch of data as a reference set R, then normalize every new batch using R's mean and standard deviation.
Problem: if R is a poor sample, this works poorly; the model may also end up overfitting to R. In other words, the generated data may all turn out looking like R
-> Virtual Batch Norm: keep R, but whenever a new example x is normalized, add x to R to form a new virtual batch V, and use V's mean and standard deviation to normalize x. This greatly reduces the risk introduced by R (sketched in code after this list).
- Balancing G and D
Write the objective as a non-saturating game, so that G can keep learning after D has been fully trained; use label smoothing.
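A minimal PyTorch sketch of two of the tricks above, one-sided label smoothing and the non-saturating generator objective; the logits here are random stand-ins for real discriminator outputs:

```python
import torch
import torch.nn.functional as F

# Hypothetical discriminator outputs (logits), standing in for D(x) and D(G(z)).
d_logits_real = torch.randn(8, 1)
d_logits_fake = torch.randn(8, 1)

# One-sided label smoothing: soften only the *real* targets (e.g. 0.9);
# the fake targets stay at exactly 0, hence "one-sided".
real_targets = torch.full_like(d_logits_real, 0.9)
fake_targets = torch.zeros_like(d_logits_fake)
d_loss = (F.binary_cross_entropy_with_logits(d_logits_real, real_targets)
          + F.binary_cross_entropy_with_logits(d_logits_fake, fake_targets))

# Non-saturating game: instead of minimizing log(1 - D(G(z))), which saturates
# once D becomes confident, G maximizes log D(G(z)) so its gradient stays alive.
g_loss = F.binary_cross_entropy_with_logits(d_logits_fake,
                                            torch.ones_like(d_logits_fake))
print(d_loss.item(), g_loss.item())
```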
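And a NumPy sketch of virtual batch norm as described above; shapes and data are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(64, 16))  # fixed reference batch (hypothetical data)

def virtual_batch_norm(x, ref, eps=1e-5):
    # Normalize each example x_i with the statistics of ref plus x_i itself,
    # so no example is normalized purely by a (possibly unlucky) reference batch.
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        v = np.concatenate([ref, xi[None, :]], axis=0)  # the "virtual batch" V
        out[i] = (xi - v.mean(axis=0)) / (v.std(axis=0) + eps)
    return out

x = rng.normal(loc=2.0, size=(8, 16))
y = virtual_batch_norm(x, R)
# Note that x's shift relative to R survives (mean stays near 2 here),
# unlike plain per-batch normalization, which would wash it out.
print(y.mean(), y.std())
```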
- Problems
- Instability: in many cases training fails to converge (non-convergence): local optima, mode collapse -> minibatch GAN, unrolled GANs
- Evaluation
- Discrete outputs
- Connections to reinforcement learning
- Combining GANs with other methods
PPGN (Plug and Play Generative Networks) (Nguyen et al., 2016) - video, pdf
- 《Bengio and LeCun's Talks at NIPS 2016》
- 1. 《Towards biologically plausible deep learning》 by Yoshua Bengio
- 2. 《Energy-Based GANs & other Adversarial things》 by Yann LeCun
- Recommended papers
- Adversarial Evaluation of Dialogue Models
- Building Machines That Learn and Think Like People
- Understanding deep learning requires rethinking generalization
- Resources
- NIPS 2016 Annual Meeting
- Pieter Abbeel, “Tutorial: Deep Reinforcement Learning through Policy Optimization” - http://people.eecs.berkeley.edu/~pabbeel/nips-tutorial-policy-optimization-Schulman-Abbeel.pdf
- Yoshua Bengio, “Towards a Biologically Plausible Model of Deep Learning” - http://www.iro.umontreal.ca/~bengioy/talks/Brains+Bits-NIPS2016Workshop.pptx.pdf
- Mathieu Blondel, “Higher-order Factorization Machines” - http://www.mblondel.org/talks/mblondel-stair-2016-09.pdf
- Kyle Cranmer (keynote), “Machine Learning & Likelihood Free Inference in Particle Physics” - https://figshare.com/articles/NIPS_2016_Keynote_Machine_Learning_Likelihood_Free_Inference_in_Particle_Physics/4291565
- Xavier Giro, “Hierarchical Object Detection with Deep Reinforcement Learning” - http://www.slideshare.net/xavigiro/hierarchical-object-detection-with-deep-reinforcement-learning
- Ian Goodfellow, “Adversarial Approaches to Bayesian Learning and Bayesian Approaches to Adversarial Robustness” - http://www.iangoodfellow.com/slides/2016-12-10-bayes.pdf
- Ian Goodfellow, “Tutorial: Introduction to Generative Adversarial Networks” - http://www.iangoodfellow.com/slides/2016-12-9-gans.pdf
- Neil Lawrence, “Personalized Health: Challenges in Data Science”
- Yann LeCun, “Energy-Based GANs & other Adversarial things” - https://drive.google.com/file/d/0BxKBnD5y2M8NbzBUbXRwUDBZOVU/view
- Yann LeCun (keynote), “Predictive Learning” - https://drive.google.com/file/d/0BxKBnD5y2M8NREZod0tVdW5FLTQ/view
- Valerio Maggio, “Deep Learning for Rain and Lightning Nowcasting” - https://speakerdeck.com/valeriomaggio/deep-learning-for-rain-and-lightning-nowcasting-at-nips2016
- Sara Magliacane, “Joint causal inference on observational and experimental data” - http://www.slideshare.net/SaraMagliacane/talk-joint-causal-inference-on-observational-and-experimental-data-nips-2016-what-if-workshop-poster
- Andrew Ng, “Nuts and Bolts of Building Applications using Deep Learning” - https://www.dropbox.com/s/dyjdq1prjbs8pmc/NIPS2016%20-%20Pages%202-6%20(1).pdf
- John Schulman, “The Nuts and Bolts of Deep RL Research” - http://rll.berkeley.edu/deeprlcourse/docs/nuts-and-bolts.pdf
- Dustin Tran, “Tutorial: Variational Inference: Foundations and Modern Methods” - http://www.cs.columbia.edu/~blei/talks/2016_NIPS_VI_tutorial.pdf
- Jenn Wortman Vaughan, “Crowdsourcing: Beyond Label Generation” - http://www.jennwv.com/projects/crowdtutorial/crowdslides.pdf
- Reza Zadeh, “FusionNet: 3D Object Classification Using Multiple Data Representations” - http://matroid.com/papers/fusionnet_slides.pdf