(Geek.com)   Fancy 90 Gigaflops in a package the size of a credit card that runs Linux for $99?   (geek.com)
    More: Cool, Raspberry Pi, linux, microSD, parallel processing  

8170 clicks; posted to Geek » on 01 Dec 2013 at 10:53 AM



107 Comments
   
 
2013-12-01 06:27:53 AM  
Sweet, I want one.
 
2013-12-01 06:36:29 AM  
What would I do with that much computation?
 
2013-12-01 07:11:47 AM  

doglover: What would I do with that much computation?


Compute faster.
 
2013-12-01 07:18:33 AM  
In 1981 you could buy an 80 MFLOP Cray I for about $5 million, and it had up to 8 megabytes of memory. This thing could be 1000 times faster.
 
2013-12-01 08:07:40 AM  

Slaxl: doglover: What would I do with that much computation?

Surf porn Compute faster.


FTFY
 
2013-12-01 08:46:56 AM  

Demetrius: Slaxl: doglover: What would I do with that much computation?

Surf porn Compute faster.

FTFY


[image: i.imgur.com]
 
2013-12-01 09:04:03 AM  

Slaxl: doglover: What would I do with that much computation?

Compute faster.


Compute what? I don't do weather simulations or render Pixar animations. The only thing I could possibly see this being used for in my life is creating one of those late 80's early 90's cyberpunk brute-force hacking gadgets from movies, the kind you plugged into things with a giant ribbon cable and watched the numbers match up. And even then, it would only be a curio because nothing accepts ribbon cables anymore.
 
2013-12-01 10:28:03 AM  
I'm trying to imagine porn in 15360 x 8640 resolution.
 
2013-12-01 10:31:53 AM  

doglover: What would I do with that much computation?


Since all your shiat won't fit on the smallish SSDs, you would still wait on your 7200rpm hard drive to deliver data just like you do now.
 
2013-12-01 10:43:04 AM  

doglover: What would I do with that much computation?


I dunno but I want one anyways
 
2013-12-01 10:56:21 AM  
Genomics. To answer your question.
 
2013-12-01 10:59:22 AM  
Awesome. All that processing power is spread over 64 cores. Which equals out to 700 MHz a core.

How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?

/crickets
//You linux farks would be far better off getting rid of the X window system.
///This is the year of desktop linux! The year 1984 to 2014!
 
2013-12-01 11:00:03 AM  

doglover: What would I do with that much computation?


Run linux.

Cause when you run linux, that's pretty much all you can do.
 
2013-12-01 11:01:13 AM  

doglover: What would I do with that much computation?


Celebrate winning the SETI with your basement full of mined Bitcoins!
 
2013-12-01 11:04:12 AM  

fluffy2097: Awesome. All that processing power is spread over 64 cores. Which equals out to 700 mhz a core.

How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?

/crickets
//You linux farks would be far better off getting rid of the X window system.
///This is the year of desktop linux! The year 1984 to 2014!


My GPU has the same processing power and runs multiple cores. Besides 3D rendering, scientific research, and bitcoin mining, there are few applications for this board. Perhaps it can be used for low-price robotics research and development. The reduced size and power consumption compared to other options does offer some serious possibilities.
 
2013-12-01 11:05:33 AM  

fluffy2097: Awesome. All that processing power is spread over 64 cores. Which equals out to 700 mhz a core.

How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?

/crickets
//You linux farks would be far better off getting rid of the X window system.
///This is the year of desktop linux! The year 1984 to 2014!


We're working on it!

/X is a flaming turd.
http://wayland.freedesktop.org/
 
2013-12-01 11:07:28 AM  

fluffy2097: How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?


[image: imagineeringnow.com]
 
2013-12-01 11:09:38 AM  

fluffy2097: How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?


"This product which is clearly not a consumer product is pointless because it is not a consumer product."

At $100 a pop, it has lots of applications in scientific computing. It'd be a great platform to use for Erlang applications.
 
2013-12-01 11:18:19 AM  
This thread has everything.

1) Windows weenies bashing linux.
2) Some guy claiming his GPU is faster.
3) Some guy who completely misunderstood the purpose of the project (and didn't read the article.)
4) Bitcoin
5) Porn

A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.
 
2013-12-01 11:28:41 AM  
Apparently the $99 doesn't get you the 90 gigaflop version...
 
2013-12-01 11:30:15 AM  
Wonder how many hashes a second it can do...
 
2013-12-01 11:32:53 AM  
I'd run Crysis on medium settings.
 
2013-12-01 11:36:32 AM  

fluffy2097: How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?


While it's true that most people don't currently have a need for it... if it's cheap and it's there, we could just write programs differently to utilize having a bajillion extra cores. Most programs do not need to be single threaded.
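For example, here's a minimal sketch in C (assuming POSIX threads; the thread count and array size are made up purely for illustration) of splitting a trivially parallel loop across a few cores. Build with: gcc sum.c -pthread

#include <pthread.h>
#include <stdio.h>

/* Hypothetical sizes, purely for illustration. */
#define N_THREADS 4
#define N_ITEMS   1000000

static double data[N_ITEMS];
static double partial[N_THREADS];

/* Each thread sums its own slice of the array. */
static void *sum_slice(void *arg) {
    long t = (long)arg;
    long chunk = N_ITEMS / N_THREADS;
    double s = 0.0;
    for (long i = t * chunk; i < (t + 1) * chunk; i++)
        s += data[i];
    partial[t] = s;
    return NULL;
}

int main(void) {
    pthread_t tid[N_THREADS];
    double total = 0.0;

    for (long i = 0; i < N_ITEMS; i++)
        data[i] = 1.0;
    for (long t = 0; t < N_THREADS; t++)
        pthread_create(&tid[t], NULL, sum_slice, (void *)t);
    for (long t = 0; t < N_THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %.0f\n", total);  /* expect 1000000 */
    return 0;
}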
 
2013-12-01 11:49:23 AM  

Alonjar: we could just write programs differently to utilize having a bajillion extra cores.


We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.


Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of a quad core ARM processor any day of the week, because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely separate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!
 
2013-12-01 11:55:57 AM  

fluffy2097: Lemme give you a hint. Heat output and power use are proportional to how much work gets done.


If every architecture were identical, maybe.
 
2013-12-01 12:12:44 PM  

fluffy2097: Alonjar: we could just write programs differently to utilize having a bajillion extra cores.

We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.

Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of an quad core ARM processor any day of the week because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!


Damn right. Also, what speed is the RAM and what is the transfer speed with the CPU? ARM9 only supports DDR2, if I remember correctly.

A $99 card isn't going to beat out a higher-tech system. What is nice about this is the low wattage for specialty applications, mainly mobile.
 
2013-12-01 12:13:03 PM  

fluffy2097: Alonjar: we could just write programs differently to utilize having a bajillion extra cores.

We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.

Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of an quad core ARM processor any day of the week because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!


While I agree that computation and heat dissipation are proportional, you are neglecting the far bigger factor: transistor technology. As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

100 Core 2 Duos at 2 GHz (2 cores each) vs. 50 i5s downclocked to 2 GHz (4 cores each) will not have even close to the same dissipation.

/won't even go into what's wrong with your assumption on single threaded apps...
 
2013-12-01 12:13:52 PM  
So sell on bitcoin? Seems like a miner's dream.
 
2013-12-01 12:18:25 PM  

fluffy2097: Very few programs in fact lend themselves to massively parallel processing.


That's not really true. Lots of problems can be solved in parallel. The problem is that parallel programming is hard. There's a lot more emphasis on building parallel code these days, and it's getting a lot easier. Enterprise languages, like .NET, are adopting some really clever approaches to simplifying it (async/await, parallel.for, etc). Functional languages, like Haskell, already parallelize naturally. Erlang forces you to write parallel code.

fluffy2097: Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.


That's because Epiphany has its own memory. Not a lot, but that's how it's designed. Remember, their primary business case is to have a massively parallel chip suitable for embedded applications, like facial recognition for cellphones.

That said, yes, these Parallella boards are being massively oversold for what they are. At the $100 price point, these controllers definitely have applications, but it's not going to be the sort of thing you're putting on your desktop.
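For a taste of the "getting easier" part: the same split-a-loop pattern as a parallel.for-style one-liner, sketched here in C with OpenMP (assumes an OpenMP-capable compiler; build with gcc omp_sum.c -fopenmp):

#include <omp.h>
#include <stdio.h>

#define N 10000000

int main(void) {
    static double x[N];
    double sum = 0.0;

    /* One pragma splits the loop across all available cores;
       reduction(+:sum) combines the per-thread partial sums safely. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++) {
        x[i] = 0.5 * (double)i;
        sum += x[i];
    }

    printf("max threads: %d, sum = %g\n", omp_get_max_threads(), sum);
    return 0;
}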
 
2013-12-01 12:23:33 PM  

FarkGrudge: As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"


Well, they say it's an ARM A9 dual core, and it's funded by Kickstarter.

/I'm about as confident this is not a super computer as I was that OUYA would be the turd it is.
 
2013-12-01 12:32:27 PM  
FTA: For comparison, that amount of GFLOPS is equivalent to a 45GHz processor.

That statement is retarded. This is assuming 2 floating point instructions per clock. AMD's original Athlon was capable of 3 per clock using x87. Of course, this is the theoretical peak.

For actual performance, a 3.4GHz quad-core Sandy Bridge can reach 87GFLOPS in Linpack (Link). This chip came out almost 3 years ago.

Sure, it costs twice as much (and isn't a complete SOC), but it also doesn't require highly customized software to approach those speeds - all that is needed is a good compiler.
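For anyone who wants to check the arithmetic behind these peak numbers: it's just cores x clock x FLOPs-per-clock. A quick sketch in C using the figures quoted in this thread (all theoretical peaks, not measured throughput):

#include <stdio.h>

/* Theoretical peak GFLOPS = cores * clock (GHz) * FLOPs per clock. */
static double peak_gflops(int cores, double ghz, int flops_per_clock) {
    return cores * ghz * flops_per_clock;
}

int main(void) {
    /* Epiphany-64: 64 cores at 700 MHz, 2 single-precision ops per clock. */
    printf("Epiphany-64:   %5.1f GFLOPS\n", peak_gflops(64, 0.7, 2));
    /* Quad-core 3.4 GHz Sandy Bridge, 8 ops per clock per core via AVX. */
    printf("Sandy Bridge:  %5.1f GFLOPS\n", peak_gflops(4, 3.4, 8));
    /* The "45 GHz processor" headline: 90 GFLOPS / 2 ops per clock. */
    printf("Headline math: %5.1f \"GHz\"\n", 90.0 / 2.0);
    return 0;
}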
 
2013-12-01 12:34:08 PM  

ancker: This thread has everything.

1) Windows weenies bashing linux.
2) Some guy claiming his GPU is faster.
3) Some guy who completely misunderstood the purpose of the project (and didn't read the article.)
4) Bitcoin
5) Porn

A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.


6. Guy arguing 'his computers are fast enough so why bother improving them.'
 
2013-12-01 12:35:33 PM  

LasersHurt: fluffy2097: Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

If every architecture were identical, maybe.


This. It's not just about how much work gets done, but also how much work is wasted. An i7 and a PowerPC are pretty far apart in terms of pure performance, but if all you needed to do was run math computations, the PowerPC architecture is vastly superior. It won't handle very complex instructions, but it's not made to do that. The transition between RISC and CISC can be spanned in software and firmware, and with the price point of smaller chips, you can easily get into a situation where throwing more cores into a CPU unit can snowball into a more powerful CPU than anything out there now. Ever wonder why Macs and musicians go together? The PowerPC was the best platform for crunching the math computations for encoding.

I was just thinking last night about how we have locked ourselves into thinking that the 386 architecture is the way to go, but it will eventually get replaced with newer and better CPUs. Just look at AMD's GPU-on-a-CPU hybrid chips. The way I see it, this could be a tipping point where we start seriously looking at designing massively parallel computing as a serious competitor to the current architecture and computing paradigms. Quantum computers need extremely low temperatures to work, and making a massive CISC core running quantum computations would be difficult to keep cool. A smaller core's cooling envelope is much easier to handle because of its smaller footprint. I think that this massively parallel technology will be a natural stepping stone towards processors that really are futuristic, cheaper, and wildly more powerful than anything we can currently imagine.

/Yes, I want one.
 
2013-12-01 12:36:08 PM  

t3knomanser: fluffy2097: Very few programs in fact lend themselves to massively parallel processing.

That's not really true. Lots of problems can be solved in parallel. The problem is that parallel programming is  hard. There's a lot more emphasis on building parallel code these days, and it's getting a lot easier. Enterprise languages, like .NET, are adopting some really clever approaches to simplifying (async/await, parallel.for, etc). Functional languages, like Haskell, already parallelize naturally. Erlang  forces you to write parallel code.

fluffy2097: Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.

That's because Epiphany has its own memory. Not a lot, but that's how it's designed. Remember, their primary business case is to have a massively parallel chip suitable for embedded applications- like facial recognition for cellphones.

That said, yes- these Parallela boards are being massively oversold for what they are. At the $100 price point, these controllers definitely have applications, but it's not going to be the sort of thing you're putting on your desktop.


I think it could spur some innovation on developing uses for the parallel processor. I already have a few projects in mind that it would be pretty well suited for that would be infeasible without the small size and low power consumption.

Now I just need a workshop and the time.
 
2013-12-01 12:36:20 PM  

fluffy2097: FarkGrudge: As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

Well. They say its an ARM A9 Dual core, and it's funded by kickstarter.

/I'm about as confident this is not a super computer as I was that OUYA would be the turd it is.


Heh, wasn't that supposed to be out by now? I remember tons of games were being hyped as "available for OUYA in late 2013." Well, we're in December. Fess up! You have nothing, guys.
 
2013-12-01 12:41:57 PM  

FarkGrudge: fluffy2097: Alonjar: we could just write programs differently to utilize having a bajillion extra cores.

We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.

Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of an quad core ARM processor any day of the week because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!

While I agree that computation and heat dissipation are proportional, you are neglecting the far bigger factor: transistor technology. As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

100 core2 duos at 2gz (2xcores each) vs. 50 i5 down clocked to 2gz (4xcores each) will not have even close to the same dissipation.

/won't even go into what's wrong with your assumption on single threaded apps...


Looks to be 65-28nm.
 
2013-12-01 12:42:01 PM  
If only they had called it "The Sinclair"
 
2013-12-01 12:46:29 PM  

rocky_howard: Heh, wasn't that suppposed to be out by now?


It is out. It's been out since March. It's also been a complete flop, because, seriously? What were they thinking?
 
2013-12-01 12:49:14 PM  

Stibium: LasersHurt: fluffy2097: Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

If every architecture were identical, maybe.

This. It's not just about how much work gets done, but also how much work is wasted. An i7 and a PowerPC are pretty far apart in terms of pure performance, but if all you needed to do was run math computations the PowerPC architecture is vastly superior. It won't handle very complex instructions, but it's not made to do it. The transition between RISC and CISC can be spanned in software and firmware, and with the price point of smaller chips, you can easily get into a situation where throwing more cores into a CPU unit can snowball into a more powerful CPU than anything out there now. Ever wonder why Mac's and musicians go together? The PowerPC was the best platform for crunching the math computations for encoding.

I was just thinking last night about how we have locked ourselves into thinking that the 386 architecture is the way to go, but it will eventually get replaced with newer and better CPUs. Just look at AMD's GPU-on-a-CPU hybrid chips. The way I see it, this could be a tipping point where we start seriously looking at designing massively parallel computing as a serious competitor to the current architecture and computing paradigms. Quantum computers need extremely low temperatures to work, and making a massive CISC core running quantum computations would be difficult to keep cool. Smaller cores' cooling envelope can much more easily be handled because of their smaller footprint. I think that this massively parallel technology will be a natural stepping stone towards processors that really are futuristic, cheaper, and wildly more powerful than anything we can currently imagine.

/Yes, I want one.


The second gen core I chips from Intel are of a similar architecture to the AMD apu and the subsequent FX line of CPUs.

Pairing the complex x64 cores (x86 went away a while ago; most OSs just didn't support the full features of the new cores. Now they all do) with a GPU parallel processor lets the main cores do what they are good at (moving big blocks, sorts, matching numbers, etc.) and the GPU solve things like square roots and other problems that are really slow on an x64 core.

The use of Apples by musicians and publishers had more to do with exclusive and really good applications on the platform along with excellent peripherals. Apple knew that these were weak on the PC and filled the void.

The PowerPC architecture is actually very similar to the current AMD setup. Intel has started to go its own way again with memory access and control, but it has pushed up what DDR3 can do.
 
2013-12-01 12:51:21 PM  

Your Hind Brain: FarkGrudge: fluffy2097: Alonjar: we could just write programs differently to utilize having a bajillion extra cores.

We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.

Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of an quad core ARM processor any day of the week because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!

While I agree that computation and heat dissipation are proportional, you are neglecting the far bigger factor: transistor technology. As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

100 core2 duos at 2gz (2xcores each) vs. 50 i5 down clocked to 2gz (4xcores each) will not have even close to the same dissipation.

/won't even go into what's wrong with your assumption on single threaded apps...

Looks to be 65-28nm.


My GPU is 28nm fab. Gives you way more processing per Watt.
 
2013-12-01 12:51:24 PM  

doglover: Slaxl: doglover: What would I do with that much computation?

Compute faster.

Compute what? I don't do weather simulations or render Pixar animations. The only thing I could possible see this being used for in my life is create one of those late 80's early 90's cyberpunk brute force hacking gadgets from movies you plugged into things with a giant ribbon cable and watched the numbers match up. And even then, it would only be a curio because nothing accepts ribbon cables anymore.


You know, I have this weird feeling that maybe they didn't make it for you.
 
2013-12-01 12:53:14 PM  
It's not *always* true that computation = power consumption... That's a CMOS thing. Crays and high-speed RTOS stuff like sonar/radar etc. were built using ECL, which ran just as hot sitting there idle as computing furiously. That's *why* they were so speedy (for the time...): they didn't go into saturation.

Me, I would LOVE to have a few of these to play with. I don't have a serious use for them, but I do have a hobbyist interest involving rendering, prime # calculation etcetera. And I'm pretty good at multithreaded programming.
 
2013-12-01 12:57:03 PM  
Is 'fancy' supposed to be a verb?  Because on this side of the pond it's an adjective mostly used by certain men for things like curtains.
 
2013-12-01 12:58:26 PM  
t3knomanser:
That said, yes- these Parallela boards are being massively oversold for what they are. At the $100 price point, these controllers definitely have applications, but it's not going to be the sort of thing you're putting on your desktop.

This is exactly the kind of thing that should go into a quad-copter.  Self-controlled flight is awesome.

Cheers

//To the workshop!
 
2013-12-01 01:01:46 PM  

drumhellar: FTA: For comparison, that amount of GFLOPS is equivalent to a 45GHz processor.

That statement is retarded. This is assuming 2 floating point instructions per clock. AMD's original Athlon was capable of of 3 per clock using x87. Of course, this is the theoretical peak.

For actual performance, a 3.4Ghz quad-core Sandy Bridge can reach 87GFLOPS in Linpack (Link). This chip came out almost 3 years ago.

Sure, it costs twice as much (and isn't a complete SOC), but it also doesn't require highly customized software to approach those speeds - all that is needed is a good compiler.


A Linux kernel isn't what I'd call a piece of "highly customized software."

As well, each of the accelerator cores can only run one FLOP per clock cycle, but with 64 of them running at 700 MHz you get 44.8 GFLOPs. That's half the performance of Sandy Bridge, but it's running nearly 5 times slower. You need to keep in mind that these are NOT designed for floating point performance, but rather for simpler integer arithmetic, which is just as useful. FLOPs is the real misnomer here; punching a few numbers into a calculator to get a number you can compare isn't quite how things work when you are comparing apples to oranges.
 
2013-12-01 01:05:26 PM  

gozar_the_destroyer: Your Hind Brain: FarkGrudge: fluffy2097: Alonjar: we could just write programs differently to utilize having a bajillion extra cores.

We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.

Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of an quad core ARM processor any day of the week because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!

While I agree that computation and heat dissipation are proportional, you are neglecting the far bigger factor: transistor technology. As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

100 core2 duos at 2gz (2xcores each) vs. 50 i5 down clocked to 2gz (4xcores each) will not have even close to the same dissipation.

/won't even go into what's wrong with your assumption on single threaded apps...

Looks to be 65-28nm.

My GPU is 28nm fab. Gives you way more processing per Watt.


That's the idea of smaller dimensions. Cooler temp-wise too. But of course, you're talking about a GPU. Water cooled?
 
2013-12-01 01:07:25 PM  

syrynxx: Is 'fancy' supposed to be a verb?  Because on this side of the pond it's an adjective mostly used by certain men for things like curtains.


Would you fancy getting out more?

Newsflash: "fancy" has been equated with "desire" or "like" for quite some time, even in the States.
 
2013-12-01 01:08:43 PM  

doglover: What would I do with that much computation?


Go figure.
 
2013-12-01 01:12:30 PM  

BigLuca: I'd run Crysis on medium settings.


Winner
 
2013-12-01 01:17:08 PM  
I'd be interested to see how they tweaked the kernel to fully utilize 64 cores and why they chose Ubuntu.
 
2013-12-01 01:19:22 PM  
Also, since we seem to have a gathered gaggle of geeks here, Richard Feynman did some lectures on the theory of computation that fascinated the hell out of me. But then I'm easily impressed. I wish I had a link to it.
 
2013-12-01 01:19:24 PM  

gozar_the_destroyer: The use of Apples by musicians and publishers had more to do with exclusive and really good applications on the platform along with excellent peripherals. Apple knew that these were weak on the PC and filled the void.


I meant it more for Macs when they were PowerPC-based. At the time the processor kicked major ass at running numbers, which is why companies developed that kind of kick-ass software for Macs... the same performance simply couldn't be had with the x86 architecture of the time. The PowerPC pipelined better and crunched numbers faster for those particular applications (especially FFTs); it was simply better for that application, so that's where developers put it. (See also OS/2's history, where devs went flocking towards Windows because it was an easier platform to work with, even though the same programs ran just as well, if not better, under OS/2.)

Granted, today there is no contest between the better processors, but you are still left with the question of "what is the better way to do things," and it's still not a question that can be completely answered, because we always find better ways to do things. RISC versus CISC is an open-ended question that is answered whichever way you want to answer it. IBM answered it one way with the PowerPC and Cell; Intel and AMD answered it another with the 386 and later generations of processors. As to which is the better way of doing things, I'd say the answer is closer to which paradigm has more potential for success. Smaller, cheaper cores simply scale better in every way, and software tweaks can iron out the issues where you have to emulate or work around the simpler instruction set. In the end you are left with processors that have equivalent FLOP performance, but far better integer crunching performance.
 
2013-12-01 01:23:33 PM  

t3knomanser: rocky_howard: Heh, wasn't that suppposed to be out by now?

It is out. It's been out since March. It's also been a complete flop, because, seriously? What were they thinking?


Oh wow. Talk about failure. Thanks for the info.
 
2013-12-01 01:38:01 PM  

Your Hind Brain: I'd be interested to see how they tweaked the kernel to fully utilize 64 cores and why they choses Ubuntu.


Meth.
 
2013-12-01 01:46:49 PM  

Your Hind Brain: gozar_the_destroyer: Your Hind Brain: FarkGrudge: fluffy2097: Alonjar: we could just write programs differently to utilize having a bajillion extra cores.

We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.

Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of an quad core ARM processor any day of the week because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!

While I agree that computation and heat dissipation are proportional, you are neglecting the far bigger factor: transistor technology. As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

100 core2 duos at 2gz (2xcores each) vs. 50 i5 down clocked to 2gz (4xcores each) will not have even close to the same dissipation.

/won't even go into what's wrong with your assumption on single threaded apps...

Looks to be 65-28nm.

My GPU is 28nm fab. Gives you way more processing per Watt.

That's the idea of smaller dimensions. Cooler temp-wise too. But of course, your talking about a GPU. Water cooled?


Air.

Sapphire Radeon HD 7950 with boost to 925 MHz. Card is still pretty good, but the latest line from ATI is a leap forward.

I have it at 1.1 GHz on the overclock and the memory is maxed out. Water cooling would be a little more silent and give me another 100 MHz, according to those with such a rig. Temp would come down too, but the chip is designed to run at 70 °C and barely reaches that. I lucked out and got a card that overclocks really well.
 
2013-12-01 01:50:36 PM  

Stibium: gozar_the_destroyer: The use of Apples by musicians and publishers had more to do with exclusive and really good applications on the platform along with excellent peripherals. Apple knew that these were weak on the PC and filled the void.

I meant it more for Macs when they were PowerPC-based....

Granted today there is no contest between the better processor, but you still are left with the question of "what is the better way to do things" and it's still not a question that can be completely answered because we always find better ways to do things. RISC versus CISC is an open-ended question that is answered whatever way you want to answer. ...


I get what you're trying to say, but you have history incorrect. Apple dropped the PowerPC simply because Motorola could never get its act together in delivering a chip design that could be produced in sufficient quantities.
 
2013-12-01 01:56:49 PM  

Stibium: A Linux kernel isn't what I'd call a piece of "highly customized software."

As well, each of the accelerator cores can only run one FLOP per clock cycle, but with 64 of them running at 700 MHz you get 44.8 GFLOPs. That's half the performance of Sandy Bridge, but it's running nearly 5 times slower. You need to keep in mind that these are NOT designed for floating point performance, but rather for simpler integer arithmetic, which is just as useful. FLOPs is the real misnomer here; punching a few numbers into a calculator to get a number you can compare isn't quite how things work when you are comparing apples to oranges.


Nothing you said describes reality.

First, the Linux kernel isn't running on those 64 cores, which are doing the heavy work (and are the basis for the 90GFLOPS claim). A Linux kernel would run on the dual-core ARM chip, but not the Epiphany accelerator. The cores in that part use a custom instruction set. The accelerator chip has its own memory (32KB per core). It can't access system memory directly, though, and is limited to the 32KB per core the chip has available. That memory space is a single memory space in the eyes of the accelerator chip, but it is a separate pool from the 1GB system memory. So, you'd have to design your software to run without help from POSIX interfaces or generic libraries, and fit within 2MB of memory. That sounds fairly custom to me.

Also, saying "90GFLOPS is equal to a 45GHz chip" is retarded (not so retarded that it insults pants, however) because different cores can do different numbers of floating point operations per clock. As I said, the original Athlon could do 3 per clock. A modern x86-64 processor with a single AVX unit can do 8 per clock per core.

Also, these cores are designed for floating point operation - that is what a FLOP is. These cores are capable of two single-precision floating point ops per clock cycle. At 700MHz and 64 cores, this equals 89.6 GFLOPS theoretical.
Seeing as how a Sandy Bridge core can do 8 floating point ops per clock, a quad-core Sandy Bridge at 3.4GHz can do 108.8GLFOPS, theoretically.

Stibium: Smaller, cheaper cores simply scale better in every way


This is simply not true. Not every task can be parallelized, or parallelized enough to where dumping cores at the problem helps. Also, as core counts increase, the fabric necessary to interconnect them becomes more and more complex. Extra cores also don't help with branch-heavy code, where latency becomes more of a limiting factor.
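The ceiling being described here is Amdahl's law: if only a fraction p of a program parallelizes, n cores can speed it up by at most 1/((1 - p) + p/n). A quick sketch in C (the fractions are made-up examples, not measurements):

#include <stdio.h>

/* Amdahl's law: best-case speedup on n cores when a fraction p
   of the work can run in parallel. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    int cores[] = {2, 16, 64};
    for (int i = 0; i < 3; i++) {
        int n = cores[i];
        /* Even 95% parallel code gets nowhere near 64x on 64 cores. */
        printf("n = %2d   p = 0.50: %5.2fx   p = 0.95: %6.2fx\n",
               n, amdahl(0.50, n), amdahl(0.95, n));
    }
    return 0;
}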
 
2013-12-01 02:06:25 PM  

rocky_howard: t3knomanser: rocky_howard: Heh, wasn't that suppposed to be out by now?

It is out. It's been out since March. It's also been a complete flop, because, seriously? What were they thinking?

Oh wow. Talk about failure. Thanks for the info.


My local Target sells them, but they are probably being outsold by the Sega Genesis emulators with 80 games packed in.
 
2013-12-01 02:26:51 PM  
Fastest. Credit card. Ever.
 
2013-12-01 02:51:12 PM  
20 years ago the GF-11 supercomputer was used to calculate the mass of the proton. 11 peak gigaflops. I wonder if you could run that code today?
 
2013-12-01 03:04:41 PM  

Marcus Aurelius: I'm trying to imagine porn in 15360 x 8640 resolution.


I'm trying *not* to.
 
2013-12-01 03:09:34 PM  

Doctor Funfrock: So sell on bitcoin? Seems like a miners dream.


First thing that came to mind. *nods*
 
2013-12-01 03:21:50 PM  

drumhellar: This is simply not true. Not every task is can be parallelized, or parallelized enough to where dumping cores at the problem helps. Also, as core counts increase, the fabric necessary to interconnect them becomes more and more complex. Extra cores also doesn't help with branch-heavy code, either, where latency becomes more of a limiting factor.


It's funny, but when I read this what I heard in my head was "you can't make a baby in a month just because you have nine women".
 
2013-12-01 03:25:08 PM  

kasmel: It's funny, but when I read this what I heard in my head was "you can't make a baby in a month just because you have nine women".


I love it.
 
2013-12-01 03:41:52 PM  

Your Hind Brain: I'd be interested to see how they tweaked the kernel to fully utilize 64 cores and why they choses Ubuntu.


Because Ubuntu is Linux for retards?  At this point Canonical has pretty much pissed off everyone in the Ubuntu community who actually understands what's happening; all those people have jumped to Mint or gone to Debian.

I'm still wondering what the point of this thing in the article is, though. Beyond the nerd factor, the Pi is so popular not because it's powerful (it really isn't) but because its GPU can do h264 decoding, which makes it popular with XBMC people (like me). So neither it nor the Pi would be competing with the other.
 
2013-12-01 03:47:26 PM  

La Maudite: Marcus Aurelius: I'm trying to imagine porn in 15360 x 8640 resolution.

I'm trying *not* to.


I'd love it, personally. That resolution squeezed onto a normal-sized monitor would enable zooming on the parts that you're interested in without it becoming a blurry mess. Porn can have some really crappy camera work. Make bad home-movie people look like skilled Hollywood directors.

That, in addition to alternate angles (that actually worked seamlessly), and I'll be in my bunk.

Or you could enhance with CGI and multiple cameras to make it a 3D experience (i.e. move the viewing angle and zoom, not stereoscopic vision).
 
2013-12-01 03:49:58 PM  

doglover: What would I do with that much computation?


THIS

Back in my day, we had a 16MHz 80386SX and we liked it.
 
2013-12-01 04:04:55 PM  
Who wants to benchmark VASP on one of these for me?
 
2013-12-01 04:24:35 PM  

t3knomanser: rocky_howard: Heh, wasn't that suppposed to be out by now?

It is out. It's been out since March. It's also been a complete flop, because, seriously? What were they thinking?


Ashens found an Ouya in a pawn shop before it was even released.
 
2013-12-01 04:31:29 PM  

rdnjr1234: fluffy2097: Awesome. All that processing power is spread over 64 cores. Which equals out to 700 mhz a core.

How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?

/crickets
//You linux farks would be far better off getting rid of the X window system.
///This is the year of desktop linux! The year 1984 to 2014!

We're working on it!

/X is a flaming turd.
http://wayland.freedesktop.org/


There's also Ubuntu's Mir thing that is being built for the Unity DE.
 
2013-12-01 04:35:52 PM  

omeganuepsilon: La Maudite: Marcus Aurelius: I'm trying to imagine porn in 15360 x 8640 resolution.

I'm trying *not* to.

I'd love it, personally.  That resolution squeezed onto a normal sized monitor would enable zooming on the parts that you're interested in without it becoming a blurry mess.  Porn can have some really crappy camera work.  Make bad home movie people look like skilled Hollywood directors.


I find that I can generally spot the scars from boob jobs even when I'm not actually looking for them, so I'm pretty sure I don't want to get any closer than that. Have you got a fetish for ingrown pubes or something? The occasional lone chest hair next to a nipple? Do you go over your sex partners with a high-powered magnifying glass, looking for enlarged pores and individual freckles?
 
2013-12-01 04:48:04 PM  

rocky_howard: t3knomanser: rocky_howard: Heh, wasn't that suppposed to be out by now?

It is out. It's been out since March. It's also been a complete flop, because, seriously? What were they thinking?

Oh wow. Talk about failure. Thanks for the info.


Not really, actually.  "Flop" in terms of selling millions of PS4s?  Sure.
Is the company losing money?  Unknown.  The general consensus is No.
Do they have 500+ games (of varying quality)?  Yes
Are any of them actually "good"?  Reportedly yes, and fun
Does it run emulators up to PS1 and N64 "OK" and getting better?  Yes
Does it run XBMC and Netflix and Hulu?  Yes and those companies are reportedly working to make the experience even better.
Does the list of games that were announced "back then" but haven't arrived exist? Yes, but it's like... two of them, Minecraft being one.
They are releasing new models, new versions.  It absolutely blows away what little competition this niche has in it.  Gamestick what?

Yes I have one; yes, two of my friends with kids have one. The little crap boxes get lots of use. So... flop? Maybe to you, but it's been worth every single penny spent. And there are many enjoyable games that cost like... $4. Which costs less than the beer that beer snobs drink. And kids under 12? Do you think they give a single crap about how many polygons you can render at 120fps? No, they just want to play and laugh. Plus, IF I want to, I can install Debian on it. Which I will eventually when I replace it. So...

So.. not sure what "talk about failure" means.
 
2013-12-01 05:01:30 PM  

Eddie Adams from Torrance: doglover: What would I do with that much computation?

THIS

Back in my day, we had a 16Mhz 80386sx and we liked it.


Mine was on an Amiga board.

[image: www.heiko-pruessing.de]
 
2013-12-01 05:37:12 PM  

La Maudite: omeganuepsilon: La Maudite: Marcus Aurelius: I'm trying to imagine porn in 15360 x 8640 resolution.

I'm trying *not* to.

I'd love it, personally.  That resolution squeezed onto a normal sized monitor would enable zooming on the parts that you're interested in without it becoming a blurry mess.  Porn can have some really crappy camera work.  Make bad home movie people look like skilled Hollywood directors.

I find that I can generally spot the scars from boob jobs even when I'm not actually looking for them, so I'm pretty sure I don't want to get any closer than that. Have you got a fetish for ingrown pubes or something? The occasional lone chest hair next to a nipple? Do you go over your sex partners with a high-powered magnifying glass, looking for enlarged pores and individual freckles?


I happen to like freckles and enjoy the ones on Mrs. Destroyer very much.
 
2013-12-01 05:37:23 PM  

Vaneshi: Your Hind Brain: I'd be interested to see how they tweaked the kernel to fully utilize 64 cores and why they choses Ubuntu.

Because Ubuntu is Linux for retards?  At this point Canonical has pretty much pissed off everyone in the Ubuntu community who actually understands what's happening, all those people have jumped to Mint or gone to Debian.

I'm still wondering what the point of this thing in the article is though, beyond the nerd factor the Pi is so popular not because it's powerful (it really isn't) but the GPU can do h264 decoding which makes it popular with XBMC people (like me).  So neither it or the Pi would be competing with each other.


I wouldn't go so far as to say that only retards use Ubuntu; it's just a weird choice. Take into consideration that the guy spearheading the project is an IC designer foremost, and maybe whichever Linux distro he would use is irrelevant. TFA didn't really get into why he used Ubuntu. Did he really focus on the architecture and then just slap on Ubuntu because it was easy? With all the smart guys involved with this project, you'd think that somebody would take the time to compile their own kernel.
 
2013-12-01 05:53:17 PM  

La Maudite: omeganuepsilon: La Maudite: Marcus Aurelius: I'm trying to imagine porn in 15360 x 8640 resolution.

I'm trying *not* to.

I'd love it, personally.  That resolution squeezed onto a normal sized monitor would enable zooming on the parts that you're interested in without it becoming a blurry mess.  Porn can have some really crappy camera work.  Make bad home movie people look like skilled Hollywood directors.

I find that I can generally spot the scars from boob jobs even when I'm not actually looking for them, so I'm pretty sure I don't want to get any closer than that. Have you got a fetish for ingrown pubes or something? The occasional lone chest hair next to a nipple? Do you go over your sex partners with a high-powered magnifying glass, looking for enlarged pores and individual freckles?


You're not very bright.

I mentioned zooming, which would be but a sample of that resolution, and not necessarily deliver the detail that you pretend grosses you out so much. Thou dost protest too much.

Here's a hint: we have digital photos with similar high resolutions now, but you can only see a portion of them at a time on most screens. They're not necessarily taken closer, but the frame size is larger than your monitor. It's not more detail, it's more real estate.

Also, there is enough porn out there where you can, and brace yourself, this is a shocker, find something else if you don't like what you've found.

Crazy, I know.

/porn had plenty of ugly and hideous things before HD came around; most people with a modicum of taste simply find something else

That's the whole idea behind porn in the first place, to find something that you like.  What are you, some sort of masochist?
 
2013-12-01 06:12:14 PM  
omeganuepsilon:

That's the whole idea behind porn in the first place, to find something that you like.  What are you, some sort of masochist?

[img.fark.net image 190x94]

/ roast beef
 
2013-12-01 06:31:19 PM  

t3knomanser: fluffy2097: Very few programs in fact lend themselves to massively parallel processing.

That's not really true. Lots of problems can be solved in parallel. The problem is that parallel programming is  hard. There's a lot more emphasis on building parallel code these days, and it's getting a lot easier. Enterprise languages, like .NET, are adopting some really clever approaches to simplifying (async/await, parallel.for, etc). Functional languages, like Haskell, already parallelize naturally. Erlang  forces you to write parallel code.

fluffy2097: Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.

That's because Epiphany has its own memory. Not a lot, but that's how it's designed. Remember, their primary business case is to have a massively parallel chip suitable for embedded applications- like facial recognition for cellphones.

That said, yes- these Parallela boards are being massively oversold for what they are. At the $100 price point, these controllers definitely have applications, but it's not going to be the sort of thing you're putting on your desktop.


One area where I have definitely heard some buzz re these boards (outside of the scientific community)--these would actually be quite good as a cheap, powerful controller for SDR kits (like the Softrock series or UHFSDR kits).  Parallel processing is good for stuff like this, and it makes it easier to build an "all in one" SDR rig. :D

For folks who don't speak ham radio, this means in essence you could build a ham radio for $300 or so (and realistically, probably more like $250) that has the capabilities of a $1500 store-bought rig, and is also rather more expandable (and has the bonus of knowing how it is put together so it's fixable if it breaks). :D

/yes, I would be one of those who's been following this little board primarily for the SDR applications thereof
 
2013-12-01 06:35:13 PM  

Your Hind Brain: omeganuepsilon:

That's the whole idea behind porn in the first place, to find something that you like.  What are you, some sort of masochist?

[img.fark.net image 190x94]

/ roast beef


See, a small portion of a larger picture, it's all blurry and indistinct.  With what I'm talking about, you wouldn't have that.  You'd be able to tell for certain that it was moist meat.  Well...wait a minute....
 
2013-12-01 06:59:02 PM  

Great_Milenko: doglover: What would I do with that much computation?

Run linux.

Cause when you run linux, that's pretty much all you can do.


Or run your computer to play games, read email and surf the web.
Or run your phone.
Or run your car.
Or run your router.
Or run your thermostat.
Or run manufacturing machinery.
...
 
2013-12-01 07:06:42 PM  

Stibium: I meant it more for Macs when they were PowerPC-based. At the time the processor kicked major ass at running numbers, which is why companies developed that kind of kick-ass software for Macs... the same performance simply couldn't be had with the x86 architecture of the time. The PowerPC pipelined better and crunched numbers faster for those particular applications (especially FFTs), it was simply better for that application so that's where developers put it. (See also OS/2's history where devs went flocking towards Windows because it was an easier platform to work with and could the same programs ran just as well, if not better, under OS/2.)


Pipelined better? The Pentium 4 initially had a 20-stage pipeline, compared to the PowerPC 970's 11/16. The PowerPC architecture was faster for a time in floating point benchmarks, but Intel processors had caught up at this point. Apple switched over at just the right time.
 
2013-12-01 07:19:02 PM  

The_Time_Master: Or run your computer to play a vastly lower number of outdated games or struggle with wine to kind of gimp along with a modern game.


FTFY

No Windows fan myself, but gaming is never an apt argument for Linux. You argue like a dirty conservative politician that is denying he was caught doing coke and farking the bellboy on the government's dime.

It always strikes me as funny (in a very sad way), because it's a null argument from a technological standpoint anyways; it's all business deals that decide which games get to which platform, not superior code or what have you.
 
2013-12-01 07:48:20 PM  

omeganuepsilon: The_Time_Master: Or run your computer to play a vastly lower number of outdated games or struggle with wine to kind of gimp along with a modern game.

FTFY

No window's fan myself, but gaming is never an apt argument for linux.  You argue like a dirty conservative politician that is denying he was caught doing coke and farking the bellboy on the government's dime.

It always strikes me as funny(in a very sad way), because it's a null argument from a technological standpoint anyways, it's all business deal what games get to which platform, not superior code or what have you.


Hear Hear!
 
2013-12-01 07:53:20 PM  

Great_Milenko: doglover: What would I do with that much computation?

Run linux.

Cause when you run linux, that's pretty much all you can do.


Well... unless you're willing to get your hands dirty. Some of us actually like digging through a system to configure it properly; I'll typically lose interest in a fully-functional system and start trying out different OS's just to play with them.

So obviously, Linux is a great system for myself and many others, but not for everyone... just like not everyone who uses cars knows anything about what's under the hood, other than "Oil go here" and "Gas go here". :)
 
2013-12-01 08:02:00 PM  

omeganuepsilon: The_Time_Master: Or run your computer to play a vastly lower number of outdated games or struggle with wine to kind of gimp along with a modern game.

FTFY

No window's fan myself, but gaming is never an apt argument for linux.  You argue like a dirty conservative politician that is denying he was caught doing coke and farking the bellboy on the government's dime.

It always strikes me as funny(in a very sad way), because it's a null argument from a technological standpoint anyways, it's all business deal what games get to which platform, not superior code or what have you.


So, what are your thoughts on SteamOS then?
 
2013-12-01 08:17:51 PM  

gozar_the_destroyer: omeganuepsilon: The_Time_Master: Or run your computer to play a vastly lower number of outdated games or struggle with wine to kind of gimp along with a modern game.

FTFY

No window's fan myself, but gaming is never an apt argument for linux.  You argue like a dirty conservative politician that is denying he was caught doing coke and farking the bellboy on the government's dime.

It always strikes me as funny(in a very sad way), because it's a null argument from a technological standpoint anyways, it's all business deal what games get to which platform, not superior code or what have you.

So, what are your thoughts on SteamOS then?


Not speaking for Omegan.

Licensing
 
2013-12-01 08:17:53 PM  

gozar_the_destroyer: So, what are your thoughts on SteamOS then?


That largely depends on how well Steam can support games that were designed for Windows (because I kind of doubt all developers will go out of their way when they've already got the other markets cornered), IF they can at all.

As it stands now, Steam's Linux game library is... underdeveloped.

If they can manage a good emulator (if that's the right term) so that "Windows" games can be played with ease and not be gimped in any way, good on them.

Sure, you'll get a few devs that will work to make sure their product plays on a Steam box, but not many.

Most are very happy writing for console(s), and then porting to PC today, with Steam already right here doing its thing.

IE, I don't think the Steambox will make any headway in that department unless it comes from their own studios.
 
2013-12-01 08:44:27 PM  

omeganuepsilon: gozar_the_destroyer: So, what are your thoughts on SteamOS then?

That largely depends on how well steam can support games that were designed for windows(because I kind of doubt all developers going out of their way when they've already got the other market's cornered), IF they can at all.

As it stands now, Steam's linux game library is...underdeveloped.

If they can manage a good emulator(if that's the right term) so that "Window's" games can be played with ease and not be gimped in any way, good on them.

Sure, you'll get a few dev's that will work to make sure their product plays on a steam box, but not many.

Most are very happy writing for console(s), and then porting to PC today, with steam already right here doing it's thing.

IE I don't think the steambox will make any headway in that department unless it comes from their own studios.


I was talking about the free OS, not the hardware. Keep in mind that any game put out on PS4 is going to be coded in OpenGL. That means that porting it to PC is more of a beta test to get it to work on varied hardware. Also, unless the game is an XBone exclusive, they will have to do an OpenGL version to get it on Sony hardware. At that point, it goes into the beta mode for PC launch.

They all share the same CPU architecture, so as long as the devs code in a neutral language, recompiling per OS is easy. Then again, I suspect that if you want to publish for XBone, it will have to be written in DotNet 4.0 or above.
 
2013-12-01 08:51:17 PM  
I'd be happy to have a bank of these crunching video...

/subby
 
2013-12-01 09:25:15 PM  

gozar_the_destroyer: Then again, I suspect that if you want to publish for XBone, it will have to be written in DotNet 4.0 or above.


MS is known for being stubborn about sharing and interoperability, so I wouldn't put anything past them.

Most of your post was above my paygrade. I don't know too much about consoles or the capability of what Linux can do, just that historically, it's been deemed not worth the effort. (Much to my dismay; I would love to be able to switch over to Linux.)

So I really don't have an answer. (Unless you were being coy: yes, I know SteamOS is going to be Linux-based, and as it stands, they'll need a helping hand from devs unless they come up with something revolutionary to fill in for how other games reference Windows resources. That's been the roadblock, and why things like WINE's stability are questionable, or worse, IIRC, subscription/fee-based and still not 100%.)
 
2013-12-01 09:52:01 PM  

DON.MAC: in 1981 you could buy a 80 MFLOP Cray I for about $5 million and it had up to 8 megabytes of memory.  This thing could be 1000 times faster.


Boy, I bet those early adopters feel dumb.
 
2013-12-01 09:59:24 PM  

beer4breakfast: Stibium: I meant it more for Macs when they were PowerPC-based. At the time the processor kicked major ass at running numbers, which is why companies developed that kind of kick-ass software for Macs... the same performance simply couldn't be had with the x86 architecture of the time. The PowerPC pipelined better and crunched numbers faster for those particular applications (especially FFTs), so it was simply better for that application, and that's where developers put it. (See also OS/2's history, where devs went flocking towards Windows because it was an easier platform to work with, even though the same programs ran just as well, if not better, under OS/2.)

Pipelined better? The Pentium 4 initially had a 20-stage pipeline compared to the PowerPC 970's 11/16. The PowerPC architecture was faster for a time in floating-point benchmarks, but Intel processors had caught up at this point. Apple switched over at just the right time.


I'd also throw in that, in my experience, Intel chips from that era have held up better multimedia-wise than PowerPC chips that came out around the same time. The same video files that played fine on my Latitude D400 would skip/stutter on my PowerBook G4 (with maxed-out RAM and the fastest CPU). Some of that is likely due to waning support for the PowerPC line, but it always seemed a pretty large disparity to me, at least in the last few years. Same kind of performance hit I noticed on my Power Mac G5 versus some OptiPlex 280s.
 
2013-12-01 10:18:03 PM  

Marcus Aurelius: I'm trying to imagine porn in 15360 x 8640 resolution.


Wrong direction with this. Think 90 simultaneously open tabs... you might just finally get to that girl on the thumbnail page.
 
2013-12-01 10:27:38 PM  

doglover: What would I do with that much computation?


If you need to ask, then probably nothing. Personally, I could make use of this for some image processing algorithms I have that can barely keep up with a 30 Hz frame rate at standard-definition resolution. This could handle them at high-def resolutions without breaking a sweat.
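For a sense of the jump being described, a back-of-the-envelope pixel-rate calculation, assuming 640x480 for SD and 1920x1080 for HD at 30 fps (generic figures, not the commenter's actual camera specs):

#include <stdio.h>

int main(void) {
    const double fps = 30.0;
    const double sd  = 640.0 * 480.0;    /* ~0.31 Mpixel/frame */
    const double hd  = 1920.0 * 1080.0;  /* ~2.07 Mpixel/frame */
    printf("SD: %.1f Mpixel/s\n", sd * fps / 1e6);  /* ~9.2  */
    printf("HD: %.1f Mpixel/s\n", hd * fps / 1e6);  /* ~62.2 */
    printf("HD/SD ratio: %.2fx\n", hd / sd);        /* 6.75x */
    return 0;
}

So going HD at the same frame rate means roughly 6.75x the per-pixel work, before any increase in frame rate.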
 
2013-12-01 11:37:14 PM  

jjorsett: doglover: What would I do with that much computation?

If you need to ask, then probably nothing. Personally, I could make use of this for some image processing algorithms I have that can barely keep up with a 30 Hz frame rate at standard-definition resolution. This could handle them at high-def resolutions without breaking a sweat.


Using OpenCL?  The ATI 7970 has 3.8 TFLOPS; it is a beast. The 7990 has 8.2 TFLOPS. I use the 7970 for optical coherence tomography processing and anisotropic filtering and, yeah, it is fast.
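Rough arithmetic on what those peak numbers buy you, taking the commenter's 3.8 TFLOPS figure at face value and assuming 1080p at 30 fps; a budget estimate, not a benchmark:

#include <stdio.h>

int main(void) {
    const double peak   = 3.8e12;            /* 7970 peak FLOPS (claimed) */
    const double pixels = 1920.0 * 1080.0;   /* one 1080p frame */
    const double fps    = 30.0;
    /* peak FLOPs available per pixel per frame, ignoring memory stalls */
    printf("%.0f FLOPs/pixel/frame\n", peak / (pixels * fps)); /* ~61000 */
    return 0;
}

Even at a fraction of peak, that leaves tens of thousands of operations per pixel per frame, which is why a desktop GPU shrugs at this kind of workload; the catch, as the thread notes below, is fitting one into a vehicle's power and space budget.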
 
2013-12-02 12:34:00 AM  

doglover: What would I do with that much computation?


[image: media.sfx.co.uk]
Ziggy says there's a 73% chance you'd...build Ziggy!
 
2013-12-02 02:20:49 AM  

Brontes: jjorsett: doglover: What would I do with that much computation?

If you need to ask, then probably nothing. Personally, I could make use of this for some image processing algorithms I have that can barely keep up with a 30 Hz frame rate at standard-definition resolution. This could handle them at high-def resolutions without breaking a sweat.

Using OpenCL?  The ATI 7970 has 3.8 TFLOPS; it is a beast. The 7990 has 8.2 TFLOPS. I use the 7970 for optical coherence tomography processing and anisotropic filtering and, yeah, it is fast.


My need is for something compact and low power to mount in a vehicle. The current processor uses a DSP chip, is credit-card sized like this device is, and I implement the algorithms directly in C. When we have to go to HD cameras, the current processor isn't going to cut it. We'd also like to jack up the frame rate to cut down on the latency. That 7970 sounds sweet, but I don't have the power or space for something like that.
 
2013-12-02 04:12:36 AM  
These stories always remind me of my first PC: 4.77 MHz, 256 KB of RAM ($1000 for the optional 1 MB board), and no hard drive or modem, all for the amazingly low price of $3000.

/off my lawn.
 
2013-12-02 04:13:49 AM  

Slaxl: doglover: What would I do with that much computation?

Compute

 Fap faster.

FTFY
 
2013-12-02 04:14:35 AM  

Demetrius: Slaxl: doglover: What would I do with that much computation?

Surf porn Compute faster.

FTFY


Dammit.
 
2013-12-02 08:03:58 AM  

doglover: What would I do with that much computation?


This could be a game changer for big data, if it can hold more than 1 GB of RAM.
 
2013-12-02 09:18:09 AM  

BigLuca: I'd run Crysis on medium settings.


Funny, I just got my new Alienware yesterday.  Crysis 2 was my "test" to see how well it runs.

IT'S FREAKING AWESOME!   32 GB of RAM, dual video cards. Beast of a laptop, but I didn't buy it so I could lug it around the world.  Just from room to room in my house.

/I actually got it a week ago, but yesterday was the first day I had time to test it out since I was on holiday.
 
2013-12-02 09:36:16 AM  

jjorsett: If you need to ask, then probably nothing. Personally, I could make use of this for some image processing algorithms I have that can barely keep up with a 30 Hz frame rate at standard-definition resolution. This could handle them at high-def resolutions without breaking a sweat.


You'd probably slam into some bottleneck somewhere else, like I/O. You get 2.4 GB/s between the main processor and the FPGA that attaches to the Epiphany. The link between the FPGA and the Epiphany itself is only 1.4 GB/s.

You're also working with DDR2 memory, and all your video data is going to have to go over the gigabit Ethernet link, a USB 2.0 connection, or an SD card.

Oh, and Epiphany is programmed in C/C#.
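To put those link speeds against actual video: raw 1080p30 bandwidth under the usual bytes-per-pixel assumptions (the board figures are the commenter's; the video math below is generic):

#include <stdio.h>

int main(void) {
    const double w = 1920.0, h = 1080.0, fps = 30.0;
    const double rgb24  = w * h * 3.0 * fps / 1e6; /* ~186.6 MB/s */
    const double yuv420 = w * h * 1.5 * fps / 1e6; /*  ~93.3 MB/s */
    printf("1080p30 RGB24 : %.0f MB/s\n", rgb24);
    printf("1080p30 YUV420: %.0f MB/s\n", yuv420);
    /* Gigabit Ethernet tops out near 125 MB/s and USB 2.0 near 60 MB/s
     * (theoretical), so raw RGB fits neither; 4:2:0 squeezes through
     * the Ethernet link with little headroom. The 1.4 GB/s
     * FPGA-to-Epiphany link itself is not the problem. */
    return 0;
}

In other words, getting frames onto the board, not the Epiphany link, is where an HD pipeline would pinch first.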
 
2013-12-02 12:42:05 PM  

beer4breakfast: The Pentium 4 initially had a 20-stage pipeline compared to the PowerPC 970's 11/16.


"Number of pipeline stages" is not a particularly illuminating metric, especially when comparing RISC and CISC designs.
 
2013-12-02 02:19:33 PM  

fluffy2097: jjorsett: If you need to ask, then probably nothing. Personally, I could make use of this for some image processing algorithms I have that can barely keep up with a 30 Hz frame rate at standard-definition resolution. This could handle them at high-def resolutions without breaking a sweat.

You'd probably slam into some bottleneck somewhere else, like I/O. You get 2.4 GB/s between the main processor and the FPGA that attaches to the Epiphany. The link between the FPGA and the Epiphany itself is only 1.4 GB/s.

You're also working with DDR2 memory, and all your video data is going to have to go over the gigabit Ethernet link, a USB 2.0 connection, or an SD card.

Oh, and Epiphany is programmed in C/C#.


And instead of having to attach a PC in a NEMA enclosure to your camera for that processing, you can now do it in the same case the camera is in and save upwards of $12,000 on your face-recognition security camera.  Who says it has to run Linux?
 
2013-12-02 02:20:19 PM  

Demetrius: Slaxl: doglover: What would I do with that much computation?

Surf porn Compute faster.

FTFY


[image: i2.photobucket.com]
 
2013-12-02 07:03:27 PM  
Why would I want to simulate anything? I'm just gonna sit here with my Oculus Rift and my omnidirectional treadmill waiting for the killer app, why would I need to simulate fluids or physics or light reflections? I simply cannot imagine why a home computer would need the capability to model realistic environments for any purpose. Mario's world isn't realistic, why would he need such a chip? It's not like anyone has to do vertex calculations or simulate collisions between objects on a HOME COMPUTER anyway. That stuff's all done at the University level now.
 