Two implications of the end of Moore's Law

What do we do when computers stop getting more powerful every year?

[Image: /img/moore-s-law.jpg]

The MIT Technology Review just wrote that Moore’s Law fueled the prosperity of the last 50 years, but that the end is now in sight.

Fifty-five years ago, Gordon Moore forecast that the number of components on an integrated circuit would double every year until it reached an astonishing 65,000 by 1975, and that this would lead society to the promised land, namely “such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment”. And almost all of MIT Technology Review’s “most important breakthrough technologies of the year” since 2001 have been possible “only because of the computation advances described by Moore’s Law”.
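It is worth pausing on how aggressive "doubling every year" is. As a back-of-the-envelope sketch (my own illustration, assuming a starting point of roughly 64 components per chip in 1965, which is my assumption, not a figure from the article): ten doublings multiply the count by 2^10 = 1,024, landing almost exactly on that 65,000 figure.

```python
# Back-of-the-envelope check of Moore's 1965 forecast.
# Assumption (mine, not the article's): ~64 components per chip in 1965.
components = 64
for year in range(1965, 1976):
    print(f"{year}: {components:,} components")
    components *= 2  # the forecast: doubling every year
# Final line printed: "1975: 65,536 components" -- roughly Moore's 65,000.
```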

What does this mean NOW?

The article describes how and why Moore’s Law may finally be coming to an end. Two effects of this scenario have particularly interesting implications (for me, at least):

1: “those developing AI and other applications will miss the decreases in cost and increases in performance delivered by Moore’s Law… really smart people in AI who aren’t aware of the hardware constraints facing long-term advances in computing”

My translation: if you are investing in AI stocks, please seriously reconsider the wisdom of such an investment.

2: “Wanted: A Marshall Plan for chips, because general economic growth cannot continue without improvements of microchips”

My translation (at least for Europe): the time for the European microprocessors and FPGAs that I proposed back in 2012 is coming!

Image source: “Of Memes and Memory and Moore’s Law”