
(Ars Technica)   It's August, time for the annual "Is Moore's Law going to run into the laws of physics" article. All the usual suspects are mentioned, but subby is going to stick with Gordon Moore and his 49-year winning streak   (arstechnica.com)
    More: Obvious, physics, diffraction, computer performance, performance measurement, argon, Ars Technica, diminishing returns, quantum systems  

4766 clicks; posted to Main » on 15 Aug 2014 at 4:16 PM (46 weeks ago)



65 Comments
   
 
2014-08-15 11:41:35 AM  
Don't tell an engineer something is limited or impossible. That just makes them wet.
 
2014-08-15 11:48:19 AM  
Short answer: yes. Long answer: yyyyyeeeeeeeeeeesssssssssss

It was an off-the-cuff prediction that some dumbass decided to call a law, not a law that stood up to rigorous scientific investigation like, say, the law of gravity.
 
2014-08-15 12:04:15 PM  
This is like "we are going to run out of oil"?
Still 10-20 years left?
BAH
Why are we even talking about this, other than that some of the research is super cool?
 
2014-08-15 12:17:40 PM  

cretinbob: Short answer: yes. Long answer: yyyyyeeeeeeeeeeesssssssssss

It was an off-the-cuff prediction that some dumbass decided to call a law, not a law that stood up to rigorous scientific investigation like, say, the law of gravity.


[www.brainpickings.org image 640x625]
The "law of gravity"?
 
2014-08-15 12:24:23 PM  

mr_a: cretinbob: Short answer: yes. Long answer: yyyyyeeeeeeeeeeesssssssssss

It was an off-the-cuff prediction that some dumbass decided to call a law, not a law that stood up to rigorous scientific investigation like, say, the law of gravity.

[www.brainpickings.org image 640x625]
The "law of gravity"?


You know what I mean.

I just woke up damnit

Newton's laws of thermodynamics. That's probably where the short circuit happened
 
vpb [TotalFark]
2014-08-15 03:15:14 PM  
The reason that Moore's "law" has held is that it's a goal rather than a prediction.  Processing power doubles every 18 months because the management of processor-making companies shoot for that as a target and allocate R&D funds accordingly.
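For scale, the doubling cadence described here compounds very fast; a quick illustrative sketch (the function name and numbers are hypothetical, chosen only to show the arithmetic):

```python
# Illustrative sketch of an "18-month doubling" target.
# All names and numbers here are hypothetical, for arithmetic only.
def transistors_after(years, start=1, doubling_months=18):
    """Relative transistor count after `years`, doubling every `doubling_months`."""
    doublings = years * 12 / doubling_months
    return start * 2 ** doublings

# Ten years of 18-month doublings is ~6.7 doublings, i.e. roughly 100x growth.
print(round(transistors_after(10)))  # 102
```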
 
2014-08-15 04:17:45 PM  
that's because subby is ignoring power, and the physical limits of transistor size (and their associated leakage currents)
 
2014-08-15 04:19:20 PM  
The solution probably lies in a series of boxes and a series of cats.
 
2014-08-15 04:20:06 PM  

namatad: This is like "we are going to run out of oil"?



No, it's nothing like that.
 
2014-08-15 04:22:51 PM  

mr_a: cretinbob: Short answer: yes. Long answer: yyyyyeeeeeeeeeeesssssssssss

It was an off-the-cuff prediction that some dumbass decided to call a law, not a law that stood up to rigorous scientific investigation like, say, the law of gravity.

[www.brainpickings.org image 640x625]
The "law of gravity"?


What goes up, must come down
Spinning wheel, got to go round
 
2014-08-15 04:28:02 PM  
Is there a name for the law that dictates software bloat will outpace hardware progress?
 
2014-08-15 04:29:37 PM  
Aside from the limits of technology, I'm wondering about the market limit. At some point the extra utility starts to drop off, and eventually a very small group of people remains who might be in need of atom-scale transistors.
 
2014-08-15 04:32:07 PM  
Roger Moore fought the laws of physics for years.
[paulgeorgedaniel.files.wordpress.com image 611x338]
 
2014-08-15 04:32:19 PM  

Elemental79: Is there a name for the law that dictates software bloat will outpace hardware progress?


Wirth's Law
 
2014-08-15 04:32:29 PM  
[www.maglev.net image]
Impossible!
 
2014-08-15 04:32:54 PM  

Elemental79: Is there a name for the law that dictates software bloat will outpace hardware progress?


Norton's Law.

/Or Samsung's
 
2014-08-15 04:33:45 PM  

vpb: The reason that Moore's "law" has held is that it's a goal rather than a prediction.  Processing power doubles every 18 months because the management of processor-making companies shoot for that as a target and allocate R&D funds accordingly.


Not really.
On the consumer CPU side Intel has been getting a 5-10% bump in performance each year. Nowhere near doubling every 18 months.
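The gap between those two rates compounds quickly; a small illustrative comparison (the 7% figure is just the midpoint of the 5-10% range above):

```python
# Comparing "doubling every 18 months" against a 7% annual bump over six years.
# Purely illustrative arithmetic.
years = 6
moore = 2 ** (years * 12 / 18)   # doubling every 18 months: 2^4 = 16x
incremental = 1.07 ** years      # 7% per year: about 1.5x
print(moore, round(incremental, 2))  # 16.0 1.5
```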
 
2014-08-15 04:36:25 PM  
I honestly haven't noticed much change in PC processing power in nearly 10 years. Clock speeds have stabilized, with most stuff still selling around 3-5 GHz; there have been some minor optimizations, but most of the growth has been in putting more cores on a single die.

All the impressive stuff has been happening with low power (as in wattage) processors in phones and tablets.
 
2014-08-15 04:41:09 PM  
What about Thurston Moore's law?  The guitar neck becomes increasingly crowded with screwdrivers, nail clippers, and hacksaw blades, such that by 2026 the guitar will need to be played by one of those surgery-performing robots.
 
2014-08-15 04:42:34 PM  
Is this like the periodic article stating that we have run out of things to discover??
We've solved EVERYTHING!!
 
2014-08-15 04:51:32 PM  

madgonad: vpb: The reason that Moore's "law" has held is that it's a goal rather than a prediction.  Processing power doubles every 18 months because the management of processor-making companies shoot for that as a target and allocate R&D funds accordingly.

Not really.
On the consumer CPU side Intel has been getting a 5-10% bump in performance each year. Nowhere near doubling every 18 months.


I for one am glad to not have to upgrade my computer every year.
It was ... fun and tiring back in the day
 
2014-08-15 04:58:59 PM  
I don't know about an exact timeframe, but there's a legitimate concern that Moore's Law is coming to an end with respect to contemporary CMOS fabrication processes.

The reason? Yes, physics... but more importantly, SIZE. The individual transistors on computer chips are just getting too dang small, and this presents some problems.

1) Electrical efficiency. Think of what happens when you run too many electrical appliances on a thin extension cord. The cord gets hot, and if that heat isn't removed it could potentially heat too much and catch fire. What's the solution? Use a bigger cord- a thicker conductor can carry more electricity while heating up less.

The same thing happens in processors. When you run current through the circuits on the chip, some of that electricity is lost as waste heat. If you keep the amount of energy running through a processor constant, but continue to shrink the physical size of transistors and conductive materials inside a processor, they will continue to generate more and more waste heat. At a certain point, you can't go any smaller without burning up chips.

Intel has figured out some stopgap measures around this problem, but it's not perfect. For one, processors are using less energy these days and at lower voltages, which means less heat, but it also means that the clock rate isn't getting any faster. As the transistors continue to shrink, they will have to go slower and slower, so there's a tradeoff right now between number of transistors and speed. They've also experimented with different materials that have better electrical properties to push this as far as they can.


2) The physical size of atoms. You wouldn't think it, but this is actually a problem that Intel is having. Currently, their process creates a product that can be measured as being a few dozen atoms thick. At this level, normal electro-mechanical properties start to break down and electricity can start to behave very weirdly. There are quantum-type effects that might put a fundamental limit on how small we can actually go.

Even if there aren't, most people don't realize how close we are to the fundamental limits of reality. Right now Intel's latest and greatest manufacturing process is 22 nanometers. Roughly speaking, that means the smallest features in their Ivy Bridge processor line measure about 22nm on a side.

How small is that exactly? Well, a silicon atom has a "width" of about 250 picometers, or 0.25 nm. That means that each transistor in the 22nm process has 88 atoms to a side. Simple common sense tells you that at this point, traditional CMOS technology can't scale much further. Even if you were magically able to shrink an entire transistor into a single atom, and somehow have that function properly to make a computer, you can only get about 88 times smaller (linearly) than we are now.

For those of you keeping score at home, 2^6=64 and 2^7=128. That means that even under the most unrealistically optimistic assumption possible, Moore's Law can only hold out for another six to seven generations of product, or on the 18-month timescale, nine or ten years. Under more practical assumptions, the 22nm process might be the fundamental limit to CMOS already, so we've already hit the limit and just don't know it yet until everyone's 14nm research fails.
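The arithmetic in the last two paragraphs can be checked in a few lines, mirroring the comment's assumption that each product generation halves the linear feature size (the figures are the ones quoted above):

```python
import math

# Figures quoted in the comment above: 22nm process, ~0.25nm silicon atom "width".
feature_nm = 22.0
atom_nm = 0.25

atoms_per_side = feature_nm / atom_nm   # 88 atoms to a side
print(atoms_per_side)                   # 88.0

# If each generation halves the linear feature size, 88x of linear headroom
# allows log2(88) ~ 6.5 halvings -- the "six to seven generations" above.
print(math.log2(atoms_per_side))
```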

So... I don't know whether my next computer will use CMOS or something else, but I guarantee you that the one after that won't be CMOS anymore.
 
2014-08-15 05:07:45 PM  

JesseL: I honestly haven't noticed much change in PC processing power in nearly 10 years. Clock speeds have stabilized with most stuff still selling around 3-5 GHz, there have been some minor optimizations, but most of the growth has been in putting more cores on a single die.

More cores, and techniques like hyperthreading. Most people won't really get that much advantage from them on a single-user machine.
 
2014-08-15 05:07:53 PM  

madgonad: vpb: The reason that Moore's "law" has held is that it's a goal rather than a prediction.  Processing power doubles every 18 months because the management of processor-making companies shoot for that as a target and allocate R&D funds accordingly.

Not really.
On the consumer CPU side Intel has been getting a 5-10% bump in performance each year. Nowhere near doubling every 18 months.


Moore's law is a prediction about transistor counts, not about processor performance. It actually has held true, but around the year 2000 the processor makers became less and less able to leverage their increasing number of transistors into additional speed, due to the "power wall": as transistor sizes shrank, leakage and power density climbed so sharply that higher clock speeds became impractical.

As a result, the trend has been to make multi-core processors and use enormous on-chip caches to improve performance, but this hasn't led to the doubling of performance like it used to.

[img.fark.net image]
 
2014-08-15 05:11:47 PM  

ImpendingCynic: JesseL: I honestly haven't noticed much change in PC processing power in nearly 10 years. Clock speeds have stabilized with most stuff still selling around 3-5 GHz, there have been some minor optimizations, but most of the growth has been in putting more cores on a single die.
More cores, and techniques like hyperthreading. Most people won't really get that much advantage from them on a single-user machine.


Yep. Most people just don't run programs capable of exploiting multi-core processors. The following graph shows the performance benefit (speedup) of running a program with a given percentage of parallelizable code. Honestly, programs that are 50% parallelizable are still a stretch these days, never mind the higher lines.

[img.fark.net image 648x486]
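The relationship behind that graph is Amdahl's law: overall speedup is capped by the serial fraction of the program. A minimal sketch (function name is mine):

```python
# Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / cores).
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A 50%-parallelizable program never exceeds 2x, no matter the core count.
for cores in (2, 4, 16, 1024):
    print(cores, round(amdahl_speedup(0.5, cores), 3))
```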
 
2014-08-15 05:18:25 PM  
The paper this article was written about is by the University of Michigan's Igor Markov. What I haven't been able to figure out is whether there's a relation to Andrey Markov of "Markov chain" fame. The ages are about right for Andrey -> Andrei -> Igor.

http://en.wikipedia.org/wiki/Markov_chain
 
2014-08-15 05:25:59 PM  
As someone who is working on glass manufacturing to support the next generation(s) of lithography, I laugh at these articles.
 
2014-08-15 05:33:51 PM  
whatever can go wrong will go twice as wrong in 18-months
 
2014-08-15 05:37:32 PM  
As others have pointed out, we're already in the post-Moore era. The age of 18-month doublings has come and gone.

And yes, subby: the laws of physics do put a strict limit on how small you can make transistors. Do you think that there's some sort of magical sub-atomic realm that we're suddenly going to be able to access to build them, just because Moore (very astutely) noticed a trend?
 
2014-08-15 05:37:54 PM  

madgonad: On the consumer CPU side Intel has been getting a 5-10% bump in performance each year. Nowhere near doubling every 18 months.


Indeed, and despite what people tell me about having many cores and that "we just don't use multi-core well", the hardware doesn't support it well either. I mean, some of the problems are software-based (it seems that 2-4 gigs of RAM per processor currently feels right for most non-trivial applications, including server applications). You can buy 64 gigs of RAM for a single box, but it's a lot simpler and less risky just to have separate boxes that can be swapped out rather than laying out all that cash at once (or throw it on AWS).

Intel was excitedly selling us the idea of hundred-core machines when it was clear they had run out of headroom in GHz, and all the promises seem to have melted away.
 
2014-08-15 05:40:06 PM  

Victoly: Elemental79: Is there a name for the law that dictates software bloat will outpace hardware progress?

Wirth's Law


You know this is actually where we need to be moving next.  Once we started hitting physical limits on circuit width we shifted to architecture optimization.  Now that we (may be) hitting limits on architecture optimization we need to go back and start cleaning up our damned codebase.
 
2014-08-15 05:40:17 PM  
So the biggest innovation will end up being... forcing programmers to compile things in multi-threaded manners?
 
2014-08-15 05:40:45 PM  
I think the biggest innovation will be getting programmers to write better programs.
 
2014-08-15 05:42:04 PM  
The most demanding thing I do with my PC is gaming; 4 cores at 4.5GHz is plenty and likely will be plenty for a long time on that front for me.  Now at work, though, feed me as many cores/CPU as you can and stuff as many of those into a server as you can, because ESX hosts are glorious.  AMD need not apply; I ain't got time for those underperforming paperweights.
 
2014-08-15 05:42:12 PM  
I've been working at Intel for over 16 years now and every time we hear that we've hit the wall our teams of amazing engineers find a new way to keep going.

Right now, as it has been for a while, the limitation is lithography.

Defects also become a huge problem. Imagine counting the number of particles on an 18" diameter dinner plate that are smaller than 1 micron (one millionth of a meter). Then figure out where they are coming from and how to get rid of them.

That's the world I work in, and it's as boring as it sounds.
 
2014-08-15 05:43:58 PM  

Fubini: I don't know about an exact timeframe, but there's a legitimate concern that Moore's Law is coming to an end with respect to contemporary CMOS fabrication processes.

The reason? Yes, physics... but more importantly, SIZE. The individual transistors on computer chips are just getting too dang small, and this presents some problems.

1) Electrical efficiency. Think of what happens when you run too many electrical appliances on a thin extension cord. The cord gets hot, and if that heat isn't removed it could potentially heat too much and catch fire. What's the solution? Use a bigger cord- a thicker conductor can carry more electricity while heating up less.

The same thing happens in processors. When you run current through the circuits on the chip, some of that electricity is lost as waste heat. If you keep the amount of energy running through a processor constant, but continue to shrink the physical size of transistors and conductive materials inside a processor, they will continue to generate more and more waste heat. At a certain point, you can't go any smaller without burning up chips.

Intel has figured out some stopgap measures around this problem, but it's not perfect. For one, processors are using less energy these days and at lower voltages, which means less heat, but it also means that the clock rate isn't getting any faster. As the transistors continue to shrink, they will have to go slower and slower, so there's a tradeoff right now between number of transistors and speed. They've also experimented with different materials that have better electrical properties to push this as far as they can.


2) The physical size of atoms. You wouldn't think it, but this is actually a problem that Intel is having. Currently, their process creates a product that can be measured as being a few dozen atoms thick. At this level, normal electro-mechanical properties start to break down and electricity can start to behave very weirdly. There are quantum-type effects that migh ...


Um, Intel just announced 14nm chips shipping later this year. Codename Broadwell; it will probably be called 5th Generation.
 
2014-08-15 05:44:19 PM  

cretinbob: Short answer: yes. Long answer: yyyyyeeeeeeeeeeesssssssssss

It was an off-the-cuff prediction that some dumbass decided to call a law, not a law that stood up to rigorous scientific investigation like, say, the law of gravity.


Fubini: ImpendingCynic: JesseL: I honestly haven't noticed much change in PC processing power in nearly 10 years. Clock speeds have stabilized with most stuff still selling around 3-5 GHz, there have been some minor optimizations, but most of the growth has been in putting more cores on a single die.
More cores, and techniques like hyperthreading. Most people won't really get that much advantage from them on a single-user machine.

Yep. Most people just don't run programs capable of exploiting multi-core processors. The following graph shows the performance benefit (speedup) of running a program with a given percentage content of parallelizable code. Honestly, programs that are 50% parallelizable are still a stretch these days, nevermind the higher lines.

[img.fark.net image 648x486]


I'll see your Amdahl and raise you a Gustafson plus big data.
 
2014-08-15 05:49:35 PM  

TDBoedy: So the biggest innovation will end up being,,,forcing programmers to compile things in multi-threaded manners?


And that's if the application supports multi-threading in the first place. In some cases, the system will have to be redesigned from the ground up to take parallel processes into account. Asynchronous code execution leads to some very fun coding puzzles.
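A tiny example of the kind of puzzle meant here: two threads updating shared state race unless something serializes them. A minimal sketch in Python (all names are mine):

```python
import threading

# Two threads bump a shared counter; the lock makes the result deterministic.
# Drop the lock and `counter += 1` (a read-modify-write) can lose updates.
counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 -- guaranteed only because of the lock
```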
 
2014-08-15 06:00:00 PM  

saintstryfe: Fubini: I don't know about an exact timeframe, but there's a legitimate concern that Moore's Law is coming to an end with respect to contemporary CMOS fabrication processes.

The reason? Yes, physics... but more importantly, SIZE. The individual transistors on computer chips are just getting too dang small, and this presents some problems.

1) Electrical efficiency. Think of what happens when you run too many electrical appliances on a thin extension cord. The cord gets hot, and if that heat isn't removed it could potentially heat too much and catch fire. What's the solution? Use a bigger cord- a thicker conductor can carry more electricity while heating up less.

The same thing happens in processors. When you run current through the circuits on the chip, some of that electricity is lost as waste heat. If you keep the amount of energy running through a processor constant, but continue to shrink the physical size of transistors and conductive materials inside a processor, they will continue to generate more and more waste heat. At a certain point, you can't go any smaller without burning up chips.

Intel has figured out some stopgap measures around this problem, but it's not perfect. For one, processors are using less energy these days and at lower voltages, which means less heat, but it also means that the clock rate isn't getting any faster. As the transistors continue to shrink, they will have to go slower and slower, so there's a tradeoff right now between number of transistors and speed. They've also experimented with different materials that have better electrical properties to push this as far as they can.


2) The physical size of atoms. You wouldn't think it, but this is actually a problem that Intel is having. Currently, their process creates a product that can be measured as being a few dozen atoms thick. At this level, normal electro-mechanical properties start to break down and electricity can start to behave very weirdly. There are quantum-type effects t ...


That's still 140 angstroms.
IBM ages ago made a transistor that was 7A (I think), but something like that is impossible for consumer goods.
We WILL hit a minimum transistor size, where anything smaller costs so much more than it's worth and consumers won't pay for it.
 
2014-08-15 06:00:35 PM  
damnit...
i meant to quote:

 "Um Intel just announced 14nm chips shipping later this year. Codename Broadwell, will probably be called 5th Generation."
 
2014-08-15 06:01:10 PM  
The problem isn't just hitting some laws-of-physics wall at a certain geometry. It's that the transistors are getting more expensive than the geometry shrink is worth. That, and the leakage currents are going up, up, up.

Not to mention the masking costs are in the millions these days. This makes it very difficult for small fabless concerns to use bleeding-edge processes.
 
2014-08-15 06:03:05 PM  

what the cat dragged in: The problem isn't just hitting some laws-of-physics wall at a certain geometry. It's that the transistors are getting more expensive than the geometry shrink is worth. That, and the leakage currents are going up, up, up.

Not to mention the masking costs are in the millions these days. This makes it very difficult for small fabless concerns to use bleeding-edge processes.


masking costs have always  been that high, and I'd imagine the onus is more on TSMC to get a new fab process right than the system designers. Besides, ironing out those issues with a new process is what validation and bringup are for.
 
2014-08-15 06:04:58 PM  

Slypork: Roger Moore fought the laws of physics for years.
[paulgeorgedaniel.files.wordpress.com image 611x338]


[4.bp.blogspot.com image]

So long as the lupins hold out, my money's on Dennis Moore.
 
2014-08-15 06:08:41 PM  

bmayer: The paper this article was published about is by University of Michigan's Igor Markov. What I have not been able to figure out is if there is a relation to Andrey Markov of "Markov Chain" fame. The ages are about right for Andrey -> Andrei -> Igor.

http://en.wikipedia.org/wiki/Markov_chain


I've often thought that if Andrey Markov had only teamed up with Jesus and Mary, they would have made the best band ever.
 
2014-08-15 06:10:14 PM  
10nm processes are nearing the point of initial production, and 7nm process development is coming along smoothly. As someone stated earlier, Moore's Law doesn't discuss performance - it merely discusses transistor density. Yes, the physical limit of density is being approached, but in the end we're more concerned about parallelism and energy usage, as seen in IBM's development of neuron-like chips. There are novel ways to move forward, and that's what we'll pursue.
 
2014-08-15 06:15:10 PM  

Uchiha_Cycliste: what the cat dragged in: The problem isn't just hitting some laws-of-physics wall at a certain geometry. It's that the transistors are getting more expensive than the geometry shrink is worth. That, and the leakage currents are going up, up, up.

Not to mention the masking costs are in the millions these days. This makes it very difficult for small fabless concerns to use bleeding-edge processes.

masking costs have always  been that high, and I'd imagine the onus is more on TSMC to get a new fab process right than the system designers. Besides, ironing out those issues with a new process is what validation and bringup are for.


Deep-submicron processes require more masks, and each mask is more expensive. That, and I suppose the number of metal layers is going up as well.

Another challenge is that mixed-signal IP doesn't scale as readily as logic. So while the logic is getting smaller, the padrings are not (at least, not at the same pace). So you're paying top dollar for that process but you're not able to get the die-per-wafer benefit that used to justify its cost.
 
2014-08-15 06:25:09 PM  

what the cat dragged in: Uchiha_Cycliste: what the cat dragged in: The problem isn't just hitting some laws-of-physics wall at a certain geometry. It's that the transistors are getting more expensive than the geometry shrink is worth. That, and the leakage currents are going up, up, up.

Not to mention the masking costs are in the millions these days. This makes it very difficult for small fabless concerns to use bleeding-edge processes.

masking costs have always  been that high, and I'd imagine the onus is more on TSMC to get a new fab process right than the system designers. Besides, ironing out those issues with a new process is what validation and bringup are for.

Deep-submicron processes require more masks, and each mask is more expensive. That, and I suppose the number of metal layers is going up as well.

Another challenge is that mixed-signal IP doesn't scale as readily as logic. So while the logic is getting smaller, the padrings are not (at least, not at the same pace.) So  you're paying top dollar for that process but you're not able to get the die-per-wafer benefit that used to justify its cost.


I'll grant you more and more complex masks. I also think most of the new goodies being created are enterprise-targeted. Frankly, most consumers are morons and their computers are media and internet boxes. I suspect dev costs have scaled with enterprise charges and it costs about the same, proportionally, to develop and debug the chip.
Mixed-signal IP doesn't scale, but I bet it's a temporary problem: once the power and size walls are hit, those will be done in-house to speed and clean things up a bit. Then we can all stop sucking Intel's dick to get sub-par memory controller IP.
 
2014-08-15 06:31:07 PM  

cretinbob: Short answer: yes long aswer: yyyyyeeeeeeeeeeesssssssssss

It was an off the cuff prediction that some dumbass decided to call a law, not a law that stood up to vigorous scientific investigation like, say, the law of gravity.


It's not even a  prediction so much as an unofficial industry goal.  It's the benchmark by which we set the threshold where miniaturization advancements are 'sufficient'... it's important not because Moore is Nostradamus, it's important because if the interconnect people, the lithography people, the design people, and the semiconductor people aren't all on the same page as to what dimensions the next generation of tech will use,  the parts won't fit together.

There would be no point in inventing a transistor capable of hitting the 5 nm node if no one's spent the corresponding amount of money and time to push interconnect beyond the 22 nm node scalings, for instance.  You'd just have wasted a few billion dollars and gotten steam-rolled by your competition that kept the parts more in sync.

// That's as layman's-terms as I can get with the explanation, I think.

// We are actually running into the issue of the physics changing at small scales already, most notably the big shift in the early 2000s where the choke point changed from direct response delay to cyclic R-C delays in the interconnect since the wires are higher aspect ratio and closer together.

// Honestly, given that I'm being paid for it, I don't mind the tech focus being mostly on low-k materials and interconnect issues. The best solution is the engineering solution that's been around since literally the mid-80s: design your programs to be able to farm out sub-processes in parallel and thus dodge the issue almost entirely. Add in off-device computing, where you send some calculations to a different device and your comp's just a terminal, and there you go: as much processing 'on your computer' as you like, from the end user's perspective.
 
2014-08-15 06:37:29 PM  

Jim_Callahan: // We are actually running into the issue of the physics changing at small scales already, most notably the big shift in the early 2000s where the choke point changed from direct response delay to cyclic R-C delays in the interconnect since the wires are higher aspect ratio and closer together.


memory hot spots have become a LOT more interesting now that things are so small, too.
 
2014-08-15 07:16:41 PM  
Who knew Fark was 50% chip designers?
 
Displayed 50 of 65 comments



This thread is closed to new comments.
