
(Ars Technica)   Now you can add 'Hard Drive' to the list of things you can't upgrade in a new iMac   (arstechnica.com)
    More: Fail, iMacs, iFixit, metal spinning, SATA, secondary markets, Apple SSD, connectedness, library  

6064 clicks; posted to Geek » on 04 Dec 2012 at 2:26 PM



Archived thread
 
2012-12-05 02:24:36 AM
So buy one with 16 GB of RAM and use an external HDD if the drive fails, and stfu.

Do you idiots know that it can also be used as a Thunderbolt Display? Add a Mac mini in 5 years for $500 and there's your upgrade.

Get a tower if you want to diddlefark around.

//still pissed at the lack of Mac Pro refresh
 
2012-12-05 04:24:22 AM

gadian: First thing I ever learned how to replace on my computer was the memory. The second was the hard drive. These are the basics, like being able to change your own tires. If you can't change a hard drive or memory yourself, you don't need to own a computer.


2/10

Too obvious. How many people do you know who change their own tires?
 
2012-12-05 06:08:58 AM

American Decency Association: BumpInTheNight: finnished: BraveNewCheneyWorld:
If you backup that often, then you'll only be out of action for some time while the data is transferred. If you went with a raid 5 array, then you'd have no downtime when a drive fails. You'd just want to be sure to replace the failed drive as soon as possible. You'd need a raid 5 controller if your motherboard doesn't support it, and another drive to make it work.

With 1 TB SATA drives, and RAID 5, you are pretty much guaranteed data loss. Never use RAID 5. RAID-1 for two drives or RAID-10 for more is the gold standard.

I am curious as to where that opinion comes from? Got any links about that?

it is anecdotal. i can confirm though, ppl in my circle will never ever use 5 again after bad experiences.


Ahh kk, I'd just never heard of the technology being genuinely flawed in any way like that. I can accept bum controllers or, worse, badly written software RAID doing stupid things (or the CPU they rely upon doing it on their behalf), and I can accept catastrophic failure like a PSU spiking your entire array through the goal posts of life, but if the parity algorithm vs drives larger than 1 TB were a documented problem, I'd be forced to make some very expensive changes around work and home :P
 
2012-12-05 07:03:34 AM

Tourney3p0: It's not like teenage girls are known for upgrading their computers anyway.


Don't be like that. Not all Macs are owned by teenage girls; some are owned by gay people, and some are work computers or college computers.
 
2012-12-05 07:09:38 AM

BumpInTheNight: American Decency Association: BumpInTheNight: finnished: BraveNewCheneyWorld:
If you backup that often, then you'll only be out of action for some time while the data is transferred. If you went with a raid 5 array, then you'd have no downtime when a drive fails. You'd just want to be sure to replace the failed drive as soon as possible. You'd need a raid 5 controller if your motherboard doesn't support it, and another drive to make it work.

With 1 TB SATA drives, and RAID 5, you are pretty much guaranteed data loss. Never use RAID 5. RAID-1 for two drives or RAID-10 for more is the gold standard.

I am curious as to where that opinion comes from? Got any links about that?

it is anecdotal. i can confirm though, ppl in my circle will never ever use 5 again after bad experiences.

Ahh kk, I'd just never heard of the technology being genuinely flawed in any way like that. I can accept bum controllers or worse badly written software raid doing stupid things or the CPU they rely upon doing it upon their behalf, and I can accept catastrophic failure like a PSU spikes your entire array through the goal posts of life, but if the parity algorithm vs drives larger then 1TB was a documented problem I'd be forced to make some very expensive changes around work and home :P


I have a RAID 5 on six 1 TB drives, and have had it running for years without a single byte of lost data. Just adding to the 'anecdotal' data.

Of course, I picked drives that work well with RAID and don't go into sleep or low-power modes or any stuff like that.

Thinking of going to 2 TB and RAID 10 though, to get rid of the parity overhead and maybe get some more speed for the video editing. Do that with your 'no upgrades or changes after purchase' machines...
 
2012-12-05 08:01:15 AM

BumpInTheNight: k, I'd just never heard of the technology being genuinely flawed in any way like that. I can accept bum controllers or worse badly written software raid doing stupid things or the CPU they rely upon doing it upon their behalf, and I can accept catastrophic failure like a PSU spikes your entire array through the goal posts of life, but if the parity algorithm vs drives larger then 1TB was a documented problem I'd be forced to make some very expensive changes around work and home :P


It's not necessarily flawed, it's just that it doesn't necessarily give the kind of protection you think it does. That, in addition to its downsides like slow writes due to the parity calculation, makes RAID-1/RAID-10 the better option. Especially now that big hard drives are cheap.

RAID-5 was great years ago, when hard drives were small but redundancy and space were still needed. It was a compromise.

The quick rundown is this: besides failing catastrophically, hard drives can experience read errors. In fact, this is more likely than a hard drive completely dying. So, when a RAID-5 array loses a drive, and starts rebuilding it, if it encounters a read error on the remaining drives, the entire array is lost.

Now you might say, "but the chances of the read error happening must be very small." And it is, kind of. Looking at the datasheets, it can be very small, to the tune of 1 in 10^14 bits for SATA. But remember that the rebuild operation needs to read the entire remaining disks. And when disk sizes are in the TB range, all of a sudden we start reaching probabilities that are actually probable. Not to mention the long rebuild time required.
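(To put rough numbers on that, here's a minimal back-of-the-envelope sketch; it isn't from anyone in the thread, and it assumes the 1-in-10^14-bits datasheet figure above plus statistically independent reads, which real drives only approximate.)

import math

URE_RATE = 1e-14  # unrecoverable read errors per bit read (consumer SATA datasheet figure)

def p_rebuild_hits_ure(terabytes_read):
    """Chance of at least one URE while reading back `terabytes_read` TB during a rebuild."""
    bits = terabytes_read * 1e12 * 8          # TB -> bytes -> bits
    return 1.0 - math.exp(-URE_RATE * bits)   # Poisson approximation of 1 - (1 - rate)^bits

for tb in (2, 6, 12):
    print(f"{tb:>2} TB read: ~{p_rebuild_hits_ure(tb):.0%} chance of hitting a URE")

Under those assumptions, reading 2 TB back during a rebuild works out to roughly a 15% chance of hitting a URE, and about 60% by 12 TB.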

But regardless of all this, what if you think, "Well, I'm just a home user, I don't need the hardcore redundancy"? OK, say you want to create a 2 TB array. With RAID-5 you could do three 1 TB WD Black drives, $119.99 each at Newegg. Or do two 2 TB drives in RAID-1 for $179.99 each. The RAID-5 array ends up costing you $0.01 less. What's the benefit of saving that $0.01? What's the downside?
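(For the record, the arithmetic behind that penny: three 1 TB drives at $119.99 come to $359.97 for the RAID-5 set, versus two 2 TB drives at $179.99, or $359.98, for the mirror.)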

Why RAID 5 stops working in 2009
Data Storage: The Myth of Redundancy
 
2012-12-05 08:22:51 AM

Kazan: Darth_Lukecash: I think most computer problems stem from stupidity of do it yourself people.

no


This. My machine is going on 5 years old and still runs like a new one. Then again, I hand-picked the parts and built it to my specification. Your home-build crew are the ones that take the time to research their parts and only buy decent equipment. Your PC hardware problems come mainly from your Ma and Pa Kettles that buy the cheapest $250 eMachines off the shelf at Wal-Mart because the kids said they need a computer. Now, with the parts I buy, I can build a starter rig for about $450 that can handle even some 3D games without too much trouble (not Skyrim on full detail at 1600x900, but it'll handle most games at a playable framerate). That machine will last 5-8 years on average. I have some machines I built that are going on 12 years with no failures.

That $300 Wal-Mart special is using the cheapest, slowest RAM that can be bought, hard disks usually from IBM or Hitachi which are cheaper but have proven to be far less reliable, and most often sub-standard power supplies that fail in just over a year most of the time and can take down a motherboard once they start to flake out.

The problem is that probably 70% of home users are the ones buying the crap systems off the shelf, and that's what gives PCs such a stigma of being poor hardware choices. It's also why Apple users think Apple's hardware is so superior, even though now it's the exact same components that any hobby system builder is already building machines with, and they're usually better-quality components than Apple's.
 
2012-12-05 08:59:44 AM

finnished: BumpInTheNight: k, I'd just never heard of the technology being genuinely flawed in any way like that. I can accept bum controllers or worse badly written software raid doing stupid things or the CPU they rely upon doing it upon their behalf, and I can accept catastrophic failure like a PSU spikes your entire array through the goal posts of life, but if the parity algorithm vs drives larger then 1TB was a documented problem I'd be forced to make some very expensive changes around work and home :P

It's not necessarily flawed, it's just that it doesn't necessarily give the kind of protection you think it does. That in addition to its downsides, like slow writing due to parity calculation, makes RAID-1/RAID-10 the better option. Especially considering, and also due to, big hard drives being cheap.

RAID-5 was great years ago, when hard drives were small, but still redundancy and space was needed. It was a compromise.

The quick rundown is this: besides failing catastrophically, hard drives can experience read errors. In fact, this is more likely than a hard drive completely dying. So, when a RAID-5 array loses a drive, and starts rebuilding it, if it encounters a read error on the remaining drives, the entire array is lost.

Now you might say "but the chances of the read error happening must be very small". And it is, kind of. Looking at the datasheets, it can be very small. Like to the tune of 1:10^14 bits for SATA. But remember that the rebuild operation needs to read the entire disks. And when the disk sizes are in the TB range then all of a sudden we start reaching probabilities that are actually probable. Not to mention the long rebuild time required.

But regardless of all this, what if you think "Well, I'm just a home user, I don't need the hardcore redundancy." Ok, say you want to create a 2 TB array. With RAID-5 you could do 3 WD Black Drives, $119.99 each at Newegg. Or, do 2 2TB drives with RAID-1 for $179.99 each. The RAID-5 array ends up costing you $0.01 ...


Interesting set of articles. The one from 2007 seems to have several doomsayer parrots, but then also several that explain away the false assumptions that went into the statistics he used to come up with his theory, the biggest one being that a RAID controller would decide to nuke an entire rebuild over one lost bit rather than just flag that set corrupt and move on with its life.

2 TB array? No, 12 TB arrays are your magic mark at this point, my friend. Btw, RAIDing with Black drives? Heh, I guess you don't know about their little feature called TLER and why not having it on Black drives is somewhat of a problem?

Link to WD's big fat warning about using black drives in raids

If that wasn't on your radar, I really suggest you reconsider.
 
2012-12-05 09:15:59 AM
I build every computer I buy. No reliability issues requiring the replacement of the entire computer, ever. One motherboard failure in the last 12 years, 2 video cards, and 1 HDD. I have five computers for four family members (one file server) and all are up and running.

I had ONE iMac; it failed after 3 years. Had a MacBook; it failed after 3.5 years. Not going down the Apple path after I gave them two (2) chances to build a better PC than I can.
 
2012-12-05 09:38:02 AM

pxlboy: Personally, I can't justify the cost of a Mac Pro.


Don't buy a Mac Pro, man. Just buy a spec'ed out Mini. Mac Pros are for people who need to be convinced that a computer is "industrial-strength" by looking at it. It's one of the worst desktop Macs ever designed. Everyone I know who has bought one is unhappy with it. I'm not surprised, you can tell Apple hates the product and regrets making it by the way they treat it. It's going the way of the X-Serve, which had way more reason to exist than the Mac Pro ever has.

psy5ive: //still pissed at the lack of Mac Pro refresh


See? Look at this guy. Don't be this guy (sorry)

spqr_ca: Solaris


My condolences.

ChaoticLimbs: Not going down the apple path after I gave them two (2) chances to build a better PC than I can.


This happens. I used to work for Apple during the 1990s, and they have had intermittent QC issues the whole time they have been around. Once a customer found grapes in the box with their Quadra.
 
2012-12-05 09:42:39 AM

BumpInTheNight: Interesting set of articles, the one from 2007 seems to have several doomsayer parrots but then also several that explain away the false assumptions that went into the statistics he used to come up with his theory, the biggest one being that a raid controller would decide to nuke an entire rebuild over one lost bit rather then just flag that set corrupt and move on with its life.

2TB array? No, 12TB arrays are your magic mark at this point my friend. Btw, raiding with black drives? Heh, I guess you don't know about their little feature called TLER and why not having it on black drives is some what of a problem?

Link to WD's big fat warning about using black drives in raids

I'd say if that wasn't on your radar I really suggest you reconsider.


Yes, with RAID-5 that's exactly what's going to happen. The controller will drop the entire array when the second drive is unreadable. Ironically, the array is

And about the drives, that's exactly what I mean. People will go ahead and buy whatever drives, WD Greens even, put them in RAID-5 thinking that now they're covered against data loss. When they're not.

But what it boils down to is what do you gain by using RAID-5 instead of RAID-1 (Or -10)? Again, RAID-1/10 is the gold standard of RAID.
 
2012-12-05 10:03:26 AM

Z-clipped: Don't kid yourself. The ONLY reason Apple makes things like this difficult is money. They want to capture as much of the money that you spend on computing as possible. Period.


So why don't you have to unglue the screen glass to get to the RAM slots in the 27" iMac model, too?
 
2012-12-05 10:10:20 AM

poot_rootbeer: Z-clipped: Don't kid yourself. The ONLY reason Apple makes things like this difficult is money. They want to capture as much of the money that you spend on computing as possible. Period.

So why don't you have to unglue the screen glass to get to the RAM slots in the 27" iMac model, too?


Wait until the next version.

/at first, it was only the MacBook Air that had a sealed battery
//now they all do
 
2012-12-05 10:27:57 AM

Surool: PsyLord: Obligatory Apple Fanboi retort: Why would you need to upgrade something that is already perfect?

Above: Obligatory unhinged iHater post. Nobody says that but you guys.


I actually own a few iProducts. I just wish Apple would make them friendlier to upgrades and connectivity, such as a microSD slot, a non-proprietary power/sync port, etc. Just take cell phones, for instance. I can charge/sync my Samsung S3 using any micro USB cable. Motorola and HTC also use micro USB for power/data transfers.
 
2012-12-05 10:47:24 AM

t3knomanser: Does anyone buy an all-in-one computer because they expect it to be upgradeable? I will never, ever understand why anyone gives a shiat about the fact that products obviously designed around a certain form-factor aren't user-serviceable.

I really don't understand why anybody cares about this, or why anybody pretends to be surprised.


But built-in obsolescence fills landfills.

Why does everyone hate landfills?
 
2012-12-05 10:51:10 AM

HotWingConspiracy: Pincy: Isn't that one of the benefits of buying a mac? That you don't have to know anything about computers other than how to use the interface?

Yeah. In an odd way, not being able to do anything with it justifies the premium pricing.

If you aren't that type of consumer it will never make sense.


 
2012-12-05 10:54:52 AM

StoPPeRmobile: t3knomanser: Does anyone buy an all-in-one computer because they expect it to be upgradeable? I will never, ever understand why anyone gives a shiat about the fact that products obviously designed around a certain form-factor aren't user-serviceable.

I really don't understand why anybody cares about this, or why anybody pretends to be surprised.

But built-in obsolescence fills landfills.

Why does everyone hate landfills?


RIP Landfill. 

 
2012-12-05 11:57:16 AM

BumpInTheNight: Btw, raiding with black drives? Heh, I guess you don't know about their little feature called TLER and why not having it on black drives is some what of a problem?


The amount of time that a WD drive spends trying to recover a bad block can be changed using a disk utility tool (versions exist for both Windows and Linux). So you could set that time to a sane value for RAID if you want. The only catch is that you're changing the value in volatile memory, not in ROM, so you have to reset it each time the drive powers up from a cold boot.

So you could use Black and Green drives for RAID in PC-based systems if you could get that value changed very early during bootup. You wouldn't be able to use them in stand-alone RAID boxes unless their firmware could make the same change to your drives.
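(As an illustration of that boot-time idea, here's a hypothetical sketch, not something posted in the thread: it assumes smartmontools is installed and that the drive actually accepts the SCT Error Recovery Control command, which varies by model and firmware; the device names are made up.)

import subprocess

RAID_MEMBERS = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # hypothetical device names

def set_error_recovery_timeout(device, tenths_of_a_second=70):
    # Caps how long the drive retries a bad sector (here 7.0 s) via SCT Error
    # Recovery Control. The setting is volatile, so it has to be reapplied on
    # every cold boot, e.g. from a startup script, as described above.
    subprocess.run(
        ["smartctl", "-l", f"scterc,{tenths_of_a_second},{tenths_of_a_second}", device],
        check=True,
    )

for dev in RAID_MEMBERS:
    set_error_recovery_timeout(dev)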


/just went with WD Red 3TB drives instead of messing with hacks
//drives are whisper quiet, which is great since they're in my HTPC/NAS box in the living room
 
2012-12-05 01:38:14 PM
So you aren't allowed to use Green, Blue, or Black drives for RAID?

Before Red drives came out, what were you supposed to use for RAID (that the average person would know existed)?
 
2012-12-05 02:01:47 PM

meyerkev: Before Red drives came out, what were you supposed to use for RAID (that the average person would know existed)?


Enterprise drives
 
2012-12-05 02:07:32 PM

meyerkev: Before Red drives came out, what were you supposed to use for RAID


WD RE, SE or VelociRaptor
Seagate ES or Ns
Hitachi Ultrastar

In short, their enterprise SATA series of drives.
 
2012-12-05 06:22:26 PM

Surool: lilbjorn: What 99% of Fark Mac threads amount to

That figure is a little low. Ever notice that the Samsung worker treatment stories don't even get greenlit on Fark?


We get it, you love Mac.
 
2012-12-05 09:57:35 PM

machodonkeywrestler: Surool: lilbjorn: What 99% of Fark Mac threads amount to

That figure is a little low. Ever notice that the Samsung worker treatment stories don't even get greenlit on Fark?

We get it, you love Mac.


lol, nope.
 
2012-12-06 10:29:46 AM
I think that the glued-together computers are an environmental disaster. There's no reason to attach a screen inextricably to a computer that will be nonfunctional in half the lifetime of the screen. If they're going to do it, at the very least they should warranty the machine for 5 years from the date of purchase at no additional charge. After all, with no moving parts, what do they have to lose? It should NEVER fail unless it encounters liquids or drops (for laptops). I think we need a right-to-repair law for computers like there is for cars.
 
2012-12-07 12:02:37 AM

t3knomanser: downstairs: Because hard drives and memory never fail?

The issue is: what's the MTBF. For memory, it's already pretty high. And with SSDs, you're getting into that neighborhood.

The only time I've ever had a RAM stick fail was when I gave it a good static shocking. It's been a long time since I've had a HDD failure of any stripe. Just going on raw probabilities: the chances of these parts failing when the product is outside of warranty and isn't due for replacement in some fashion is pretty slim.

Some of us buy a tower and keep it for decades, gradually upgrading parts like Theseus's ship. Most of us change over computers entirely every 2-5 years. I keep myself on a 3-ish year upgrade cycle. The MTBF for most parts is much larger than that.


As long as you have a decent case (helps if it's a full tower), your upgrade costs over time are almost negligible. I pay about a hundred bucks for an 18-month-old 'best video card on the market' every couple of years. Whenever I feel the need to reinstall the OS I put in a new HDD, but I store all my important stuff in a Drobo with Carbonite running on it. My chip is one of the first-gen 2.4 quad cores from an off-the-shelf HP I bought after my first big HDD crash; I put it in a new motherboard at some point, maybe to get SATA or 64-bit, I don't remember. My optical drives are old; they maybe cost $20 apiece. Parts almost never break, so upgrading is just a question of what I want to do. I think the most expensive thing I ever did was the upgrade to 64-bit with the 8 gigs of RAM.

The point here isn't to say I'm good with tech, because I'm not. The point is that a relative dunderhead like me can continually upgrade his computer for less than a quarter of the cost of matching the capability in Macintosh parts.

I bailed out on Mac when they licensed the clones. I bought one thinking it was all the advantages of a Mac with the expandability of the PC world. I was so wrong. As long as I didn't have anything to compare it to, it was fine... but after I used a PC at work I realized it wasn't actually necessary to sit and wait for a computer to do things. That's when I realized that my entire computing life I had been making excuses for the limitations of the Macintosh line. It's like being an abused spouse; you make excuses for your fear of change.
 
2012-12-07 12:08:05 AM

FinFangFark: t3knomanser: downstairs: Because hard drives and memory never fail?

The issue is: what's the MTBF. For memory, it's already pretty high. And with SSDs, you're getting into that neighborhood.

The only time I've ever had a RAM stick fail was when I gave it a good static shocking. It's been a long time since I've had a HDD failure of any stripe. Just going on raw probabilities: the chances of these parts failing when the product is outside of warranty and isn't due for replacement in some fashion is pretty slim.

Some of us buy a tower and keep it for decades, gradually upgrading parts like Theseus's ship. Most of us change over computers entirely every 2-5 years. I keep myself on a 3-ish year upgrade cycle. The MTBF for most parts is much larger than that.

So you've never experienced a HDD failure in all those years?


My last bad HDD fail was a month before I bought my Drobo. Lost everything. I'm reasonably paranoid about it now, with one of those uploader backups and a Drobo. I would not trust a laptop with anything important. Cloud computing, in my mind, is just an admission that you're OK with someone else owning all your crap.
 
2012-12-07 04:51:33 PM

finnished: BumpInTheNight: Interesting set of articles, the one from 2007 seems to have several doomsayer parrots but then also several that explain away the false assumptions that went into the statistics he used to come up with his theory, the biggest one being that a raid controller would decide to nuke an entire rebuild over one lost bit rather then just flag that set corrupt and move on with its life.

2TB array? No, 12TB arrays are your magic mark at this point my friend. Btw, raiding with black drives? Heh, I guess you don't know about their little feature called TLER and why not having it on black drives is some what of a problem?

Link to WD's big fat warning about using black drives in raids

I'd say if that wasn't on your radar I really suggest you reconsider.

Yes, with RAID-5 that's exactly what's going to happen. The controller will drop the entire array when the second drive is unreadable. Ironically, the array is

And about the drives, that's exactly what I mean. People will go ahead and buy whatever drives, WD Greens even, put them in RAID-5 thinking that now they're covered against data loss. When they're not.

But what it boils down to is what do you gain by using RAID-5 instead of RAID-1 (Or -10)? Again, RAID-1/10 is the gold standard of RAID.


We both agree that RAID is not a substitute for backups, but what you're arguing is like saying that the hammer is the gold-standard tool of a tradesman; every tool has its purpose.
 
2012-12-07 07:16:49 PM

BumpInTheNight: We both agree that raids are not a substitute for backups, but what you're arguing for is like saying that the hammer is the gold-standard tool of a tradesman, every tool has its purpose.


No, that's not what I'm saying. As far as tools go, RAID-5 is more like the tool made for cutting holes in floppy disks so you can use the reverse side. At one point it might have been very useful, but not today.

There is no situation where RAID-5 would be a better choice than RAID-1/-10.
 
2012-12-07 07:20:48 PM

finnished: BumpInTheNight: We both agree that raids are not a substitute for backups, but what you're arguing for is like saying that the hammer is the gold-standard tool of a tradesman, every tool has its purpose.

No, that's not what I'm saying. As far as tools go, RAID-5 is more like the tool made for cutting holes in floppy disks so you can use the reverse side. At one point it might have been very useful, but not today.

There is no situation where RAID-5 would be a better choice than RAID-1/-10.


Speed, and a higher ratio of usable storage capacity to redundancy overhead?
 
2012-12-07 07:43:19 PM
No, RAID-5 has an expensive parity calculation that slows it down compared to straight mirroring.

Less lost disk space was certainly a factor in the past, and that's why it was popular. But with today's hard drive prices, it's not a reason any more. And actually, the cheap large drives are a reason NOT to use RAID-5.
 
2012-12-07 07:50:45 PM

finnished: No, RAID-5 has an expensive parity calculation that slows it down compared to straight mirroring.

Less lost disk space was certainly a factor in the past, and that's why it was popular. But with today's hard drive prices, it's not a reason any more. And actually, the cheap large drives are a reason NOT to use RAID-5.


Expensive is true, but which do you tend to run out of first: disk bandwidth or processing power? With today's quad-CPU hex-core boxes, let alone dedicated RAID controllers' abilities, it's really not a problem to dial up the calculations and still maintain full-bore write speeds.
 
2012-12-07 08:05:45 PM

BumpInTheNight: Expensive is true, but which do you tend to run out of first: Disk bandwidth or processing power? With today's quad CPU hex cores let alone dedicated raid controller's abilities its really not a problem to dial up the calculations and still maintain full borne write speeds.


Of course it depends on the amount of read/write, and if you have enough activity, it'll bog down any system. RAID-5 will bog down earlier. This will be especially apparent during a rebuild, which will take hours longer to complete, while leaving your system unprotected.

But now you're just trying to figure out ways to make RAID-5 as good as RAID-1/-10. Why not use RAID-1/-10 to begin with?
 
2012-12-07 08:11:50 PM

finnished: No, RAID-5 has an expensive parity calculation that slows it down compared to straight mirroring.


Doesn't RAID-5 just use an XOR? With a physical controller, aren't you just talking about a few gate delays here and there? (A 4096-bit XOR in discrete ICs has a settle time of what, 24 ns? Even at 6 Gbps, you'd only receive 19 bytes, or 1/26 of a block, and an ASIC is going to beat cascaded discretes.) Call me crazy, but it doesn't seem like you'd have to worry that much about the parity calculation being the bottleneck. Maybe if SATA were running higher than 150 Gbps...

/My math could very well be bad
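(For what it's worth, the XOR itself really is that simple. A toy sketch, with made-up data blocks, of single-parity write and rebuild:)

from functools import reduce

def xor_blocks(*blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d0 = b"hello world, block 0"
d1 = b"another data block 1"
parity = xor_blocks(d0, d1)        # what a 3-disk RAID-5 would write to the parity stripe

# If the drive holding d0 dies, rebuild it from the parity and the surviving block.
assert xor_blocks(parity, d1) == d0

The expensive part in practice tends to be the read-modify-write cycle on small writes (read old data and old parity, XOR, write both back) rather than the XOR gates themselves.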
 
2012-12-07 08:19:48 PM

finnished: BumpInTheNight: Expensive is true, but which do you tend to run out of first: Disk bandwidth or processing power? With today's quad CPU hex cores let alone dedicated raid controller's abilities its really not a problem to dial up the calculations and still maintain full borne write speeds.

Of course it depends on the amount of read/write, and if you have enough activity, it'll bog down any system. RAID-5 will bog down earlier. This will be especially apparent during a rebuild, which will take hours longer to complete, while leaving your system unprotected.

But now you're just trying to figure out ways to make RAID-5 as good as RAID-1/-10. Why not use RAID-1/-10 to begin with?


Not trying to figure out ways, my friend; trying to explain where I use RAID-5s to leverage surplus processing to increase write speed vs using mirrors. Besides, the unprotected status only lasts until the processes are shifted to a different server (usually a few seconds), and then the one with the dead drive is tasked to rebuild with the hot spare before taking the reins again. I'll admit that URE thing is something I'm very curious about, and it'll shift my opinion about what I do at home, but my gut is still thinking that where it'd truly come into play (disk A dies and then during rebuild disk B errors out too), either the controller can handle it or whatever knocked out disk A likely slew disks B, C & D etc. as well.
 
2012-12-07 08:21:49 PM
(Sorry, "The controller can handle it" meaning that it'll mark the sector bad and move on rather then spoil the whole rebuild, so far from random searching the 'spoil the rebuild' bug was exterminated many years ago)
 
2012-12-07 08:55:33 PM

ProfessorOhki: Doesn't RAID-5 just use a XOR? With a physical controller, aren't you just talking about a few gate delays here and there?


The real-world difference depends hugely on the implementation, so there are no hard and fast numbers to give there. But even a small added delay obviously gets multiplied, especially during a rebuild, or if the array is in use.

The rebuild part is especially problematic, since with RAID-5/RAID-1, if you lose another disk during the rebuild, you're dead in the water. With RAID-10, not necessarily so.
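(A minimal sketch of that last point, assuming the second failure strikes one of the surviving disks uniformly at random; again, not from the thread itself.)

def p_second_failure_is_fatal(level, total_disks):
    # Chance that a second, random disk failure during the rebuild kills the array.
    survivors = total_disks - 1
    if level == "raid5":
        return 1.0                 # losing any second disk destroys the parity set
    if level == "raid10":
        return 1.0 / survivors     # only the dead disk's mirror partner is fatal
    raise ValueError(level)

for disks in (4, 6, 8):
    print(f"{disks} disks: RAID-5 {p_second_failure_is_fatal('raid5', disks):.0%}, "
          f"RAID-10 {p_second_failure_is_fatal('raid10', disks):.0%}")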
 
2012-12-07 09:07:34 PM

BumpInTheNight: Not trying to figure out my friend, trying to explain where I use raid5s to leverage surplus processing to increase write speed vs using mirrors. Besides the unprotected status only lasts until the processes are shifted to a different server (usually a few seconds) and then the one with the dead drive is tasked to rebuild with the hot spare before taking the reigns again. I'll admit that URE thing is something I'm very curious about and that'll shift my opinion about what I do at home, but I my gut is still thinking that where it'd truly come into play (disk A dies and then during rebuild disk B errors out too) the controller can handle it or whatever knocked out disk A likely slew disks B,C & D etc as well.


So now we're up to enterprise applications with virtualization then? Ok, so say you have a server that has a hard drive that is about to fail. The server has a RAID-5 with a hot spare. Since it's a hot spare, how does the server process get moved to another VM host? The storage might not even be on the same host, even if you're not using a SAN or something. The host has no idea that the rebuild has started.

So, anyway. The operator gets notified, but he doesn't need to respond since the rebuild starts automatically with the hot spare. It churns for a while, everything looks good, the server is still available. But uh-oh, now there's a problem. You get a read error on one of the disks. The storage array disables the volume, the server is down.

Operator gets notified, only to find out the array is down. Bad news. But that's OK, there's a good backup from earlier today. But the backup was made 8 hours ago. You just lost 8 hours of data.

Compare this to a scenario WITHOUT a hot spare.

The hard drive fails. No hot spare. Operator gets notified. The server and data are still available, though. The operator then can either a) make a backup or b) move the VM to another array completely, or both. Rebuild still fails, but it doesn't matter since the data was moved off. No data lost. Life goes on.
 
2012-12-07 09:09:41 PM
Since the thread is going to close soon: anyone who's actually interested in continuing the conversation can find plenty of professionals at Spiceworks' Storage forum. I'll probably be here till then, though.
 
2012-12-07 09:15:23 PM

finnished: ProfessorOhki: Doesn't RAID-5 just use a XOR? With a physical controller, aren't you just talking about a few gate delays here and there?

The real world difference depends hugely on the implementation, so there's no hard and fast numbers to give there. But even a small added delay gets obviously multiplied especially during a rebuild. Or if the array is in use.

The rebuild part is especially problematic, since with RAID-5/RAID-1, if you lose another disk during the rebuild, you're dead in the water. With RAID-10, not necessarily so.


Yeah, RAID-10 wins if you can swing the cost/GB. I just didn't think the parity calculation for RAID-5 was as massive a penalty as suggested.

Of course, then you have my use case. My chassis had spots for 4 drives. One is an independent SSD for the OS. The other 3 are an array; can't run RAID-10 on that :P
 
2012-12-07 09:22:37 PM

ProfessorOhki: Of course, then you have my use case. My chassis had spots for 4 drives. One is an independent SSD for the OS. The other 3 are an array; can't run RAID-10 on that :P


Could do two RAID-1s. Or if it's a server, install a hypervisor on a flash drive, and create a big RAID-10 pool shared for the operating system and data.
 
2012-12-07 09:25:14 PM

finnished: BumpInTheNight: Not trying to figure out my friend, trying to explain where I use raid5s to leverage surplus processing to increase write speed vs using mirrors. Besides the unprotected status only lasts until the processes are shifted to a different server (usually a few seconds) and then the one with the dead drive is tasked to rebuild with the hot spare before taking the reigns again. I'll admit that URE thing is something I'm very curious about and that'll shift my opinion about what I do at home, but I my gut is still thinking that where it'd truly come into play (disk A dies and then during rebuild disk B errors out too) the controller can handle it or whatever knocked out disk A likely slew disks B,C & D etc as well.

So now we're up to enterprise applications with virtualization then? Ok, so say you have a server that has a hard drive that is about to fail. The server has a RAID-5 with a hot spare. Since it's a hot spare, how does the server process get moved to another VM host? The storage might not even be on the same host, even if you're not using a SAN or something. The host has no idea that the rebuild has started.

So, anyway. The operator gets notified, but he doesn't need to respond since the rebuild starts automatically with the hot spare. It churns for a while, everything looks good, the server is still available. But uh-oh, now there's a problem. You get a read error on one of the disks. The storage array disables the volume, the server is down.

Operator gets notified, only to find out the array is down. Bad news. But that's OK, there's a good backup from earlier today. But the backup was made 8 hours ago. You just lost 8 hours of data.

Compare this to a scenario WITHOUT a hot spare.

The hard drive fails. No hot spare. Operator gets notified. The server and data are still available, though. The operator then can either a) make a backup or b) move the VM to another array completely, or both. Rebuild still fails, but it doesn't matter since the ...


What is the vSphere API plus competent scripting, Alex?
 
2012-12-07 09:27:56 PM

ProfessorOhki: Yeah, RAID-10 wins if you can swing the cost/GB. I just didn't think the parity calculation for RAID-5 was as massive a penalty as suggested.


The only real-world cost is write speed, and since most users don't write nearly as often as they read, that's beyond acceptable for the more typical user. I only made the suggestion because the original person I was responding to had RAID 0 for everything, which I assumed was for cost efficiency per GB. I probably should have clarified from the start exactly why I made that suggestion.
 
2012-12-07 09:28:06 PM

BumpInTheNight: What is the Vsphere API plus competent scripting, Alex?


Competent scripting isn't going to save you from incompetent storage decisions.
 
2012-12-07 09:45:07 PM

BraveNewCheneyWorld: ProfessorOhki: Yeah, RAID-10 wins if you can swing the cost/GB. I just didn't think the parity calculation for RAID-5 was as massive a penalty as suggested.

The only cost in the real world is write speed, because most users don't write nearly as often as they read, which for the more typical user is beyond acceptable. I only made the suggestion because the original person I was responding to had a raid 0 for everything, which I assumed was for cost efficiency per gb. I probably should have clarified from the start exactly why I made that suggestion.


Nah, not a server, just a desktop. Only reason I even bothered with an array is because I occasionally toss around uncompressed video files and didn't want to get caught with having to fragment something near the ends. For my purposes, I might as well have gone 0, but the controller could do 5 and a bit of redundancy for the overhead seemed like a reasonable trade off. Thanks for the suggestion though.

Depends on what you're working with, I'd think. If my guess about controller implementation holds up, a massive sequential write would have the same latency penalty as a one-block write. If you were handling discretely large data, maybe something like a render farm, you'd be talking about nanosecond-scale latency on a multi-terabyte read/write. If you were talking about tons of small writes, then it would definitely get multiplied.

/Not an IT guy
//Closer to a chip guy, hence the curiosity
///RAID-0
////More like AID-0.
//Never again
 
2012-12-07 09:57:46 PM

ProfessorOhki: If my guess about controller implementation holds up, a massive sequential write would have the same latency penalty as a 1 block write.


I'm pretty sure the world is not going to run out of bad implementations anytime soon! :)
 
Displayed 45 of 195 comments

This thread is archived, and closed to new comments.