2021 | Google Board Chairman John Hennessy: AI progress is slowing and we are in a semiconductor winter | 钛媒体 T-EDGE (Part 4)


We also face another problem: so-called Dennard scaling. Dennard scaling is an observation named after Robert Dennard, the inventor of DRAM. He observed that as device dimensions shrink, voltage and other device parameters shrink along with them, with the result that the power per square millimeter of silicon stays nearly constant. Because the number of transistors in each square millimeter rises sharply from one generation to the next, the power needed per computation actually falls very quickly. This broke down around 2007: power consumption, which had been creeping up slowly between 2000 and 2007, began to surge. Power is therefore the critical constraint, and as these technologies advance, figuring out how to get better energy efficiency becomes more and more important.
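To make the arithmetic behind that observation concrete, here is a minimal sketch (not from the talk; the baseline device values are illustrative assumptions) of classical Dennard scaling: if capacitance and voltage shrink by a factor k while frequency rises by k, power per transistor falls as 1/k^2 while transistor density rises as k^2, so power per square millimeter stays constant.

# Illustrative sketch of classical Dennard scaling (assumed baseline values).
# Dynamic power per transistor: P = C * V^2 * f
# Ideal scaling by factor k: C -> C/k, V -> V/k, f -> f*k, density -> density*k^2.

def dennard_generation(c_farads, v_volts, f_hz, density_per_mm2, k=1.4):
    """Return the next generation's parameters under ideal Dennard scaling."""
    return c_farads / k, v_volts / k, f_hz * k, density_per_mm2 * k * k

c, v, f, density = 1e-15, 1.2, 2e9, 1e6   # assumed, illustrative baseline
for gen in range(4):
    p_transistor = c * v**2 * f            # dynamic power of one transistor
    p_per_mm2 = p_transistor * density     # power density of the die
    print(f"gen {gen}: {p_transistor:.2e} W/transistor, "
          f"{p_per_mm2:.3f} W/mm^2, {density:.2e} transistors/mm^2")
    c, v, f, density = dennard_generation(c, v, f, density)

Once voltage can no longer be scaled down (because of leakage), the V/k term disappears and, under the same simple model, power density grows roughly as k^2 per generation, which is the post-2007 squeeze described above.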
The combined result of all this is that we have seen a leveling off of uniprocessor performance, single-core performance, after going through rapid growth of roughly 25% a year in the early period of the industry, then a remarkable period of over 50% a year with the introduction of RISC technologies and instruction-level parallelism, and then a slower period that focused very much on multicore and building on these technologies.
In the last two years, performance has improved by less than 5% per year. Even if you look at multicore designs, with the inefficiencies they bring, you see that they do not significantly improve the picture.
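As a back-of-the-envelope check on what those growth rates mean (the rates are the ones quoted above; the doubling-time calculation is mine, not from the talk), compound annual improvements translate into very different doubling times:

# Doubling time implied by a compound annual performance improvement rate.
import math

def doubling_time_years(annual_rate):
    """Years needed to double performance at a given annual improvement rate."""
    return math.log(2) / math.log(1 + annual_rate)

for label, rate in [("early industry", 0.25),
                    ("RISC / ILP era", 0.50),
                    ("recent years", 0.05)]:
    print(f"{label}: {rate:.0%} per year -> doubles every "
          f"{doubling_time_years(rate):.1f} years")

At 50% a year performance doubles in under two years; at 5% a year it takes roughly fourteen.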
And indeed we are in the era of dark silicon, where multicore processors often have to slow down or shut off cores to prevent overheating, and that overheating comes from power consumption.
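A crude way to see why cores go dark (a sketch; the core count, per-core wattage, and power budget are assumptions, not figures from the talk): when the combined full-speed power of all cores exceeds the chip's thermal budget, some fraction of the silicon must be throttled or switched off.

# Rough "dark silicon" estimate: how many cores fit in a fixed power budget.
def active_core_fraction(num_cores, watts_per_core, power_budget_watts):
    """Fraction of cores that can run at full speed within the power budget."""
    max_active = power_budget_watts / watts_per_core
    return min(1.0, max_active / num_cores)

# Assumed, illustrative figures for a hypothetical many-core chip.
cores, w_per_core, budget = 64, 4.0, 125.0
frac = active_core_fraction(cores, w_per_core, budget)
print(f"{frac:.0%} of cores active at full speed; "
      f"{1 - frac:.0%} must be dark or throttled")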
So what are we going to do? We are caught in a dilemma: we have a new technology, deep learning, which seems able to solve problems we never thought we could handle effectively, but it requires massive amounts of computing power to keep advancing. At the same time, the slowing of Moore's law and the end of Dennard scaling are squeezing what the industry has relied on for many years, namely that the next generation of semiconductor technology simply makes everything faster.
So we have to think about a new solution. There are three possible directions to go.
First, software-centric mechanisms, where we look at improving the efficiency of our software so that it makes more efficient use of the hardware. Think in particular of the move to scripting languages such as Python, which are dynamically typed: they make programming very easy, but they are not terribly efficient, as you will see in just a second.
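To illustrate the software-efficiency point (a minimal sketch, not a benchmark from the talk; matmul_pure_python is a hypothetical helper, and NumPy stands in for any optimized, compiled library), compare a straightforward interpreted triple loop with the same matrix multiply handed to an optimized routine; on typical machines the gap is orders of magnitude.

# Naive interpreted matrix multiply vs. an optimized library routine.
import time
import numpy as np

def matmul_pure_python(a, b):
    """Triple-loop matrix multiply on plain Python lists."""
    n, m, p = len(a), len(b), len(b[0])
    out = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            for j in range(p):
                out[i][j] += aik * b[k][j]
    return out

n = 200  # small size so the pure-Python version finishes quickly
x = np.random.rand(n, n)
y = np.random.rand(n, n)
xa, ya = x.tolist(), y.tolist()

t0 = time.perf_counter()
matmul_pure_python(xa, ya)
t_python = time.perf_counter() - t0

t0 = time.perf_counter()
_ = x @ y
t_numpy = time.perf_counter() - t0

print(f"pure Python: {t_python:.3f} s, NumPy: {t_numpy:.5f} s, "
      f"speedup ~{t_python / t_numpy:.0f}x")

The point is not NumPy itself, but that restructuring software to exploit the hardware (vector units, cache blocking, compiled inner loops) recovers performance that the interpreter leaves on the table.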
Second, hardware-centric approaches. Can we change the way we think about the architecture of these machines to make them much more efficient? This approach is called domain-specific architectures, or domain-specific accelerators. The idea is to do just a few tasks, but to tune the hardware to do those tasks extremely well. We have already seen examples of this in graphics, for example, or in the modem inside your cell phone. Those are special-purpose architectures that use intensive computational techniques but are not general purpose. They are not programmed for arbitrary things; they are designed only to handle a range of graphics operations, or the operations required by a modem.
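The sketch below is a conceptual model of that trade-off, not the programming model of any real accelerator; ToyMatmulAccelerator and its narrow int8-only interface are hypothetical. The point it illustrates is that a domain-specific design accepts only a few fixed operations at fixed precision, and refuses everything else, precisely so the implementation underneath can be specialized aggressively.

# Conceptual model of a domain-specific accelerator: a narrow interface that
# accepts only one fixed operation (int8 matrix multiply), so the layer below
# can be specialized. Hypothetical names; not any real device's API.
import numpy as np

class ToyMatmulAccelerator:
    SUPPORTED_DTYPE = np.int8

    def matmul(self, a, b):
        if a.dtype != self.SUPPORTED_DTYPE or b.dtype != self.SUPPORTED_DTYPE:
            raise TypeError("accelerator only accepts int8 operands")
        # Stand-in for fixed-function hardware: accumulate in 32 bits,
        # as low-precision matrix engines typically do.
        return a.astype(np.int32) @ b.astype(np.int32)

acc = ToyMatmulAccelerator()
a = np.random.randint(-128, 127, size=(4, 8), dtype=np.int8)
b = np.random.randint(-128, 127, size=(8, 3), dtype=np.int8)
print(acc.matmul(a, b))        # works: the one task the "hardware" is tuned for
# acc.matmul(a.astype(np.float64), b)  # would raise: outside the domain

The interface gives up generality, no arbitrary programs and no arbitrary datatypes, in exchange for hardware that can devote all of its silicon to the handful of operations the domain actually needs.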
