
(Geek.com)   Fancy 90 Gigaflops in a package the size of a credit card that runs Linux for $99?   (geek.com)
    More: Cool, Raspberry Pi, linux, microSD, parallel processing

8167 clicks; posted to Geek » on 01 Dec 2013 at 10:53 AM



107 Comments
   
 
2013-12-01 06:27:53 AM
Sweet, I want one.
 
2013-12-01 06:36:29 AM
What would I do with that much computation?
 
2013-12-01 07:11:47 AM

doglover: What would I do with that much computation?


Compute faster.
 
2013-12-01 07:18:33 AM
In 1981 you could buy an 80 MFLOP Cray-1 for about $5 million, and it had up to 8 megabytes of memory. This thing could be 1000 times faster.
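As a back-of-the-envelope check on that claim (using the article's 90 GFLOPS figure and the 80 MFLOPS Cray-1 number above; both are theoretical peaks, so this is only a rough comparison):

```python
# Rough comparison of claimed peak throughput and price.
cray_flops = 80e6       # Cray-1 (1981): ~80 MFLOPS peak
cray_price = 5e6        # ~$5 million
board_flops = 90e9      # claimed 90 GFLOPS peak
board_price = 99.0

speedup = board_flops / cray_flops      # 1125.0, i.e. "1000 times faster"
price_ratio = cray_price / board_price  # ~50,505x cheaper
print(speedup, round(price_ratio))
```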
 
2013-12-01 08:07:40 AM

Slaxl: doglover: What would I do with that much computation?

Surf porn Compute faster.


FTFY
 
2013-12-01 08:46:56 AM

Demetrius: Slaxl: doglover: What would I do with that much computation?

Surf porn Compute faster.

FTFY


[image hosted at i.imgur.com]
 
2013-12-01 09:04:03 AM

Slaxl: doglover: What would I do with that much computation?

Compute faster.


Compute what? I don't do weather simulations or render Pixar animations. The only thing I could possibly see this being used for in my life is creating one of those late-80s/early-90s cyberpunk brute-force hacking gadgets from movies, the kind you plugged into things with a giant ribbon cable and watched the numbers match up. And even then, it would only be a curio because nothing accepts ribbon cables anymore.
 
2013-12-01 10:28:03 AM
I'm trying to imagine porn in 15360 x 8640 resolution.
 
2013-12-01 10:31:53 AM

doglover: What would I do with that much computation?


Since all your shiat won't fit on the smallish SSDs, you would still wait on your 7200 RPM hard drive to deliver data, just like you do now.
 
2013-12-01 10:43:04 AM

doglover: What would I do with that much computation?


I dunno but I want one anyways
 
2013-12-01 10:56:21 AM
Genomics, to answer your question.
 
2013-12-01 10:59:22 AM
Awesome. All that processing power is spread over 64 cores. Which works out to 700 MHz a core.

How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?

/crickets
//You linux farks would be far better off getting rid of the X window system.
///This is the year of desktop linux! The year 1984 to 2014!
 
2013-12-01 11:00:03 AM

doglover: What would I do with that much computation?


Run linux.

Cause when you run linux, that's pretty much all you can do.
 
2013-12-01 11:01:13 AM

doglover: What would I do with that much computation?


Celebrate winning the SETI with your basement full of mined Bitcoins!
 
2013-12-01 11:04:12 AM

fluffy2097: Awesome. All that processing power is spread over 64 cores. Which equals out to 700 mhz a core.

How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?

/crickets
//You linux farks would be far better off getting rid of the X window system.
///This is the year of desktop linux! The year 1984 to 2014!


My GPU has the same processing power and runs multiple cores. Besides 3D rendering, scientific research, and Bitcoin mining, there are few applications for this board. Perhaps it can be used for low-price robotics research and development. The reduced size and power consumption compared to other options does offer some serious possibilities.
 
2013-12-01 11:05:33 AM

fluffy2097: Awesome. All that processing power is spread over 64 cores. Which equals out to 700 mhz a core.

How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?

/crickets
//You linux farks would be far better off getting rid of the X window system.
///This is the year of desktop linux! The year 1984 to 2014!


We're working on it!

/X is a flaming turd.
http://wayland.freedesktop.org/
 
2013-12-01 11:07:28 AM

fluffy2097: How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?


[image hosted at imagineeringnow.com]
 
2013-12-01 11:09:38 AM

fluffy2097: How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?


"This product which is clearly not a consumer product is pointless because it is not a consumer product."

At $100 a pop, it has lots of applications in scientific computing. It'd be a great platform to use for Erlang applications.
 
2013-12-01 11:18:19 AM
This thread has everything.

1) Windows weenies bashing linux.
2) Some guy claiming his GPU is faster.
3) Some guy who completely misunderstood the purpose of the project (and didn't read the article.)
4) Bitcoin
5) Porn

A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.
 
2013-12-01 11:28:41 AM
Apparently the $99 doesn't get you the 90 gigaflop version...
 
2013-12-01 11:30:15 AM
Wonder how many hashes a second it can do.
 
2013-12-01 11:32:53 AM
I'd run Crysis on medium settings.
 
2013-12-01 11:36:32 AM

fluffy2097: How many consumer applications support 64 cores? What use does a consumer have for massively parallel computing?


While it's true that most people don't currently have a need for it... if it's cheap and it's there, we could just write programs differently to utilize having a bajillion extra cores. Most programs do not need to be single-threaded.
 
2013-12-01 11:49:23 AM

Alonjar: we could just write programs differently to utilize having a bajillion extra cores.


We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.


Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of a quad core ARM processor any day of the week, because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely separate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!
 
2013-12-01 11:55:57 AM

fluffy2097: Lemme give you a hint. Heat output and power use are proportional to how much work gets done.


If every architecture were identical, maybe.
 
2013-12-01 12:12:44 PM

fluffy2097: Alonjar: we could just write programs differently to utilize having a bajillion extra cores.

We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.

Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of an quad core ARM processor any day of the week because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!


Damn right. Also, what speed is the RAM and what is the transfer speed with the CPU? The ARM A9 only supports DDR2, if I remember.

A $99 card isn't going to beat out a higher-tech system. What is nice about this is the low wattage for specialty applications, mainly mobile.
 
2013-12-01 12:13:03 PM

fluffy2097: Alonjar: we could just write programs differently to utilize having a bajillion extra cores.

We've had multiple cores for many years now and most programs are still single threaded.

Very few programs in fact lend themselves to massively parallel processing.

ancker: A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.

Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

/Kinda like how a quad core i5 will beat the everloving shiat out of an quad core ARM processor any day of the week because it's running 75 watts of power vs 15.
//Oh yeah, it IS just running an ARM processor. Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.
///Oh shiat! Someone forgot to look at the block diagram!


While I agree that computation and heat dissipation are proportional, you are neglecting the far bigger factor: transistor technology. As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

100 Core 2 Duos at 2 GHz (2 cores each) vs. 50 i5s downclocked to 2 GHz (4 cores each) will not have even close to the same dissipation.

/won't even go into what's wrong with your assumption on single threaded apps...
 
2013-12-01 12:13:52 PM
So sell on Bitcoin? Seems like a miner's dream.
 
2013-12-01 12:18:25 PM

fluffy2097: Very few programs in fact lend themselves to massively parallel processing.


That's not really true. Lots of problems can be solved in parallel. The problem is that parallel programming is hard. There's a lot more emphasis on building parallel code these days, and it's getting a lot easier. Enterprise platforms, like .NET, are adopting some really clever approaches to simplifying it (async/await, Parallel.For, etc.). Functional languages, like Haskell, already parallelize naturally. Erlang forces you to write parallel code.

fluffy2097: Epiphany is an entirely seperate chip that doesn't even have direct access to RAM.


That's because Epiphany has its own memory. Not a lot, but that's how it's designed. Remember, their primary business case is to have a massively parallel chip suitable for embedded applications- like facial recognition for cellphones.

That said, yes- these Parallela boards are being massively oversold for what they are. At the $100 price point, these controllers definitely have applications, but it's not going to be the sort of thing you're putting on your desktop.
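The point above about parallelizable problems versus hard tooling can be sketched generically (this is an illustration in Python, not Parallella- or Erlang-specific code; `work` is a made-up stand-in task):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Made-up stand-in for an independent chunk of a larger job
    # (a tile of a render, a chromosome of a genome scan, etc.).
    return sum(i * i for i in range(n))

inputs = [1000, 2000, 3000, 4000]

# Serial version: one chunk after another.
serial = [work(n) for n in inputs]

# Parallel version: same logic, chunks dispatched to a pool.
# The program's *structure* barely changes; the hard part in real
# code is making the chunks truly independent.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, inputs))

assert parallel == serial
```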
 
2013-12-01 12:23:33 PM

FarkGrudge: As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"


Well, they say it's an ARM A9 dual core, and it's funded by Kickstarter.

/I'm about as confident this is not a super computer as I was that OUYA would be the turd it is.
 
2013-12-01 12:32:27 PM
FTA: For comparison, that amount of GFLOPS is equivalent to a 45GHz processor.

That statement is ridiculous. It assumes 2 floating-point instructions per clock. AMD's original Athlon was capable of 3 per clock using x87. Of course, this is the theoretical peak.

For actual performance, a 3.4 GHz quad-core Sandy Bridge can reach 87 GFLOPS in Linpack (link). This chip came out almost 3 years ago.

Sure, it costs twice as much (and isn't a complete SOC), but it also doesn't require highly customized software to approach those speeds - all that is needed is a good compiler.
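Spelling out the arithmetic behind that objection (the GHz figures here are just the article's peak-FLOPS division, not real clock speeds):

```python
peak_gflops = 90.0

# The article's "45GHz" assumes 2 floating-point ops per clock:
article_equiv_ghz = peak_gflops / 2   # 45.0
# With the original Athlon's 3 x87 ops per clock, the same
# bogus conversion yields a smaller number:
athlon_equiv_ghz = peak_gflops / 3    # 30.0
print(article_equiv_ghz, athlon_equiv_ghz)
```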
 
2013-12-01 12:34:08 PM

ancker: This thread has everything.

1) Windows weenies bashing linux.
2) Some guy claiming his GPU is faster.
3) Some guy who completely misunderstood the purpose of the project (and didn't read the article.)
4) Bitcoin
5) Porn

A researcher where I work has a cluster with a few thousand cores and 2ish petabytes of data. They spend tons of money on powering/cooling the cluster alone. IF (big IF) something like this could take over the processing duties, they could cut costs in power/cooling drastically and then purchase more CPU/disk.


6. Guy arguing 'his computers are fast enough so why bother improving them.'
 
2013-12-01 12:35:33 PM

LasersHurt: fluffy2097: Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

If every architecture were identical, maybe.


This. It's not just about how much work gets done, but also how much work is wasted. An i7 and a PowerPC are pretty far apart in terms of pure performance, but if all you needed to do was run math computations, the PowerPC architecture was vastly superior. It won't handle very complex instructions, but it's not made to. The transition between RISC and CISC can be spanned in software and firmware, and with the price point of smaller chips, you can easily get into a situation where throwing more cores into a CPU can snowball into a more powerful CPU than anything out there now. Ever wonder why Macs and musicians go together? The PowerPC was the best platform for crunching the math computations for encoding.

I was just thinking last night about how we have locked ourselves into thinking that the 386 architecture is the way to go, but it will eventually get replaced with newer and better CPUs. Just look at AMD's GPU-on-a-CPU hybrid chips. The way I see it, this could be a tipping point where we start seriously looking at designing massively parallel computing as a serious competitor to the current architecture and computing paradigms. Quantum computers need extremely low temperatures to work, and making a massive CISC core running quantum computations would be difficult to keep cool. Smaller cores' cooling envelope can much more easily be handled because of their smaller footprint. I think that this massively parallel technology will be a natural stepping stone towards processors that really are futuristic, cheaper, and wildly more powerful than anything we can currently imagine.

/Yes, I want one.
 
2013-12-01 12:36:08 PM

t3knomanser: That said, yes- these Parallela boards are being massively oversold for what they are. At the $100 price point, these controllers definitely have applications, but it's not going to be the sort of thing you're putting on your desktop.


I think it could spur some innovation on developing uses for the parallel processor. I already have a few projects in mind that it would be pretty well suited for, projects that would be infeasible without the small size and low power consumption.

Now I just need a workshop and the time.
 
2013-12-01 12:36:20 PM

fluffy2097: FarkGrudge: As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

Well. They say its an ARM A9 Dual core, and it's funded by kickstarter.

/I'm about as confident this is not a super computer as I was that OUYA would be the turd it is.


Heh, wasn't that supposed to be out by now? I remember tons of games being hyped as "available for OUYA in late 2013." Well, we're in December. Fess up! You have nothing, guys.
 
2013-12-01 12:41:57 PM

FarkGrudge: As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"


Looks to be 65-28nm.
 
2013-12-01 12:42:01 PM
If they only called it "The Sinclair"
 
2013-12-01 12:46:29 PM

rocky_howard: Heh, wasn't that supposed to be out by now?


It is out. It's been out since March. It's also been a complete flop, because, seriously? What were they thinking?
 
2013-12-01 12:49:14 PM

Stibium: LasersHurt: fluffy2097: Lemme give you a hint. Heat output and power use are proportional to how much work gets done.

If every architecture were identical, maybe.

This. It's not just about how much work gets done, but also how much work is wasted. An i7 and a PowerPC are pretty far apart in terms of pure performance, but if all you needed to do was run math computations the PowerPC architecture is vastly superior. It won't handle very complex instructions, but it's not made to do it. The transition between RISC and CISC can be spanned in software and firmware, and with the price point of smaller chips, you can easily get into a situation where throwing more cores into a CPU unit can snowball into a more powerful CPU than anything out there now. Ever wonder why Mac's and musicians go together? The PowerPC was the best platform for crunching the math computations for encoding.

I was just thinking last night about how we have locked ourselves into thinking that the 386 architecture is the way to go, but it will eventually get replaced with newer and better CPUs. Just look at AMD's GPU-on-a-CPU hybrid chips. The way I see it, this could be a tipping point where we start seriously looking at designing massively parallel computing as a serious competitor to the current architecture and computing paradigms. Quantum computers need extremely low temperatures to work, and making a massive CISC core running quantum computations would be difficult to keep cool. Smaller cores' cooling envelope can much more easily be handled because of their smaller footprint. I think that this massively parallel technology will be a natural stepping stone towards processors that really are futuristic, cheaper, and wildly more powerful than anything we can currently imagine.

/Yes, I want one.


The second-gen Core i chips from Intel are of a similar architecture to the AMD APU and the subsequent FX line of CPUs.

Pairing the complex x64 cores (x86 went away a while ago; most OSs just didn't support the full features of the new cores. Now they all do) with a GPU parallel processor lets the main cores do what they are good at (moving big blocks, sorts, matching numbers, etc.) while the GPU solves things like square roots and other problems that are really slow on an x64 core.

The use of Apples by musicians and publishers had more to do with exclusive and really good applications on the platform, along with excellent peripherals. Apple knew these were weak on the PC and filled the void.

The PowerPC architecture is actually very similar to the current AMD setup. Intel has started to go its own way again with memory access and control, but it has pushed up what DDR3 can do.
 
2013-12-01 12:51:21 PM

Your Hind Brain: FarkGrudge: As the fab technology they are using was not mentioned, how could you be so confident in your "insight?"

Looks to be 65-28nm.


My GPU is 28nm fab. Gives you way more processing per Watt.
 
2013-12-01 12:51:24 PM

doglover: Slaxl: doglover: What would I do with that much computation?

Compute faster.

Compute what? I don't do weather simulations or render Pixar animations. The only thing I could possible see this being used for in my life is create one of those late 80's early 90's cyberpunk brute force hacking gadgets from movies you plugged into things with a giant ribbon cable and watched the numbers match up. And even then, it would only be a curio because nothing accepts ribbon cables anymore.


You know, I have this weird feeling that maybe they didn't make it for you.
 
2013-12-01 12:53:14 PM
It's not *always* true that computation = power consumption... that's a CMOS thing. Crays and high-speed RTOS stuff like sonar/radar were built using ECL, which ran just as hot sitting there idle as computing furiously. That's *why* they were so speedy (for the time): they didn't go into saturation.

Me, I would LOVE to have a few of these to play with. I don't have a serious use for them, but I do have a hobbyist interest involving rendering, prime number calculation, etcetera. And I'm pretty good at multithreaded programming.
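The CMOS point above rests on the standard dynamic-power model, P ≈ α·C·V²·f, where the activity factor α is what makes an idle CMOS chip cheap to run. A toy calculation (all component values here are invented for illustration):

```python
def cmos_dynamic_power(activity, capacitance, voltage, frequency):
    # Classic CMOS switching-power approximation: P ~ a * C * V^2 * f.
    # Power tracks switching activity, so a mostly-idle CMOS chip
    # draws little. ECL logic, by contrast, draws roughly constant
    # current whether the gates are switching or not.
    return activity * capacitance * voltage ** 2 * frequency

# Invented example values: 1 nF effective capacitance, 1 V, 700 MHz.
busy = cmos_dynamic_power(0.20, 1e-9, 1.0, 700e6)  # heavy switching
idle = cmos_dynamic_power(0.01, 1e-9, 1.0, 700e6)  # mostly quiescent
print(busy, idle)  # power scales with activity, ~20x apart
```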
 
2013-12-01 12:57:03 PM
Is 'fancy' supposed to be a verb?  Because on this side of the pond it's an adjective mostly used by certain men for things like curtains.
 
2013-12-01 12:58:26 PM
t3knomanser:
That said, yes- these Parallela boards are being massively oversold for what they are. At the $100 price point, these controllers definitely have applications, but it's not going to be the sort of thing you're putting on your desktop.

This is exactly the kind of thing that should go into a quad-copter.  Self-controlled flight is awesome.

Cheers

//To the workshop!
 
2013-12-01 01:01:46 PM

drumhellar: FTA: For comparison, that amount of GFLOPS is equivalent to a 45GHz processor.

That statement is retarded. This is assuming 2 floating point instructions per clock. AMD's original Athlon was capable of of 3 per clock using x87. Of course, this is the theoretical peak.

For actual performance, a 3.4Ghz quad-core Sandy Bridge can reach 87GFLOPS in Linpack (Link). This chip came out almost 3 years ago.

Sure, it costs twice as much (and isn't a complete SOC), but it also doesn't require highly customized software to approach those speeds - all that is needed is a good compiler.


A Linux kernel isn't what I'd call a piece of "highly customized software."

As well, each of the accelerator cores can only run one FLOP per clock cycle, but with 64 of them running at 700 MHz you get 44.8 GFLOPS. That's half the performance of Sandy Bridge, but at nearly one-fifth the clock. You need to keep in mind that these are NOT designed for floating-point performance, but rather for simpler integer arithmetic, which is just as useful. FLOPS is the real misnomer here; punching a few numbers into a calculator to get a number you can compare isn't quite how things work when you are comparing apples to oranges.
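That peak-throughput arithmetic, spelled out with the figures quoted in the thread (64 cores at 700 MHz; one op per core per cycle is this comment's assumption):

```python
cores = 64
clock_hz = 700e6
flops_per_cycle = 1    # this comment's assumption for Epiphany

peak_gflops = cores * clock_hz * flops_per_cycle / 1e9
print(peak_gflops)     # 44.8

# At 2 ops per cycle (e.g. a fused multiply-add) the same cores
# would land on the headline number:
print(cores * clock_hz * 2 / 1e9)  # 89.6, i.e. "90 GFLOPS"
```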
 
2013-12-01 01:05:26 PM

gozar_the_destroyer: Your Hind Brain: Looks to be 65-28nm.

My GPU is 28nm fab. Gives you way more processing per Watt.


That's the idea of smaller dimensions. Cooler temp-wise, too. But of course, you're talking about a GPU. Water cooled?
 
2013-12-01 01:07:25 PM

syrynxx: Is 'fancy' supposed to be a verb?  Because on this side of the pond it's an adjective mostly used by certain men for things like curtains.


Would you fancy getting out more?

Newsflash: "fancy" has been equated with "desire" or "like" for quite some time, even in the States.
 
2013-12-01 01:08:43 PM

doglover: What would I do with that much computation?


Go figure.
 
2013-12-01 01:12:30 PM

BigLuca: I'd run Crysis on medium settings.


Winner
 
2013-12-01 01:17:08 PM
I'd be interested to see how they tweaked the kernel to fully utilize 64 cores, and why they chose Ubuntu.
 
Displayed 50 of 107 comments

This thread is closed to new comments.