74GB WD Raptors in RAID 0 Questions.

xguntherc

I Click Home To Much
Vista Guru
Hey everyone. It's been a while since I've posted here, I know, but I have a few questions, and I was hoping some people who know this area better could answer them. RAID is definitely NOT my area of expertise.

Basically, I currently have a single 500GB hard drive and a 500GB external hard drive.

I have a friend who can and will give me a great deal on these WD 74GB 10,000 RPM Raptor hard drives. I've read that they're great, very fast drives, and that putting two of them in RAID 0 and loading my OS and all my games onto the array would make everything much faster, from start-up to everyday tasks to load times in games.

Newegg.com - Western Digital Raptor WD740ADFD 74GB 10000 RPM 16MB Cache SATA 1.5Gb/s Hard Drive - Internal Hard Drives

They're $139.00 each at Newegg.com, and he's going to give me two of these drives for a total of $95 shipped. That's a pretty darn good deal, IMO.

I'm wondering how worthwhile this is, and whether having my OS and games on these would make that noticeable a difference versus my standard 500GB Seagate 'Cuda. Also, is it worth the hassle of reinstalling my OS and all my games on the new drives in RAID 0 and backing everything up to my current 500GB? Oh, and he's also throwing in a RAID card.

I'd love to hear from SCSIRAID on this one, as he's the smart one on RAID stuff. I don't know anything about it, or even how to set it up, so some info on setting up RAID, performance, and so on would be very much appreciated.

Thanks!
 

My Computer

System One

  • CPU
    Q9650 E0 4.0 GHz @1.304v
    Motherboard
    eVGA 750i FTW
    Memory
    2x2GB Corsair Dominator PC2-8500C5D
    Graphics Card(s)
    eVGA/MSI GTX 260 SLI
    Sound Card
    X-Fi XtremeGamer
    Monitor(s) Displays
    Samsung T240 & 226BW
    Screen Resolution
    1920x1200 & 1680x1050
    Hard Drives
    Seagate Cuda 500GB 32mb Cache SATA 7200.(11) + 500GB Seagate Cuda External eSATA, USB, FW400
    PSU
    PC P&C 750w Silencer PSU
    Case
    CoolerMaster HAF 932 (Water-Cooled)
    Cooling
    Plenty of Fans, and a few 230mm Fans
    Keyboard
    Logitech G11
    Mouse
    Logitech MX-518
    Other Info
    ASUS 20x Optical, Bose Companion 3, ATH-AD500 Cans :), Patriot Xporter 16GB Flash Drive (Very Fast), & Sandisk Micro 8GB.

    Nikon D40 DSLR with 18-105mm VR & 55-200mm VR
I would not do RAID 0. I would install them as two separate drives: OS on one, and apps/games plus the paging file on the second. RAID 0 is faster at loading games, but it only saves a few seconds. I would use your external drive for a disk-to-disk backup in Vista. RAID 1 would be fault tolerant and would protect your OS and data from a hard drive crash, though I would still do the external backups.
 

My Computer

System One

  • CPU
    pair of Intel E5430 quad core 2.66 GHz Xeons
    Motherboard
    Supermicro X7DWA-N server board
    Memory
    16GB DDR667
    Graphics Card(s)
    eVGA 8800 GTS 640 MB video card
    Hard Drives
    SAS RAID
I would not do RAID 0. I would install them as two separate drives: OS on one, and apps/games plus the paging file on the second. RAID 0 is faster at loading games, but it only saves a few seconds. I would use your external drive for a disk-to-disk backup in Vista. RAID 1 would be fault tolerant and would protect your OS and data from a hard drive crash, though I would still do the external backups.

I find it ironic that many benchmarks seem to say the complete opposite of what you are suggesting. RAID 0 has always shown a significant across-the-board increase in read/write performance. I know for a fact, from using Raptor drives, that overall hard disk performance is higher with them in RAID 0 than as separate disks/partitions.

Sure, there is no fault tolerance, BUT the Raptor drives have been extremely reliable... I've had my 37GB drives since they were first released many years ago and they are still functioning as they did when they were first installed.
 

My Computer

System One

  • CPU
    Intel Q9550 @ 3.2Ghz (OC)
    Memory
    4 GIG DDR2-6400
    Graphics Card(s)
    ATI Radeon 3750
    Sound Card
    Creative X-Fi
    Monitor(s) Displays
    22" LCD
    Screen Resolution
    1680 x 1050
    Hard Drives
    (2) 500 Gig SATAII 32mb Cache in RAID 0 array
    PSU
    Antec TruPower 650W
    Internet Speed
    10Mbps Cable
So what should I do? lol. I don't know anything about RAID; I don't know RAID 1 from RAID 0 and all that jazz.
 

My Computer

System One

  • CPU
    Q9650 E0 4.0 GHz @1.304v
    Motherboard
    eVGA 750i FTW
    Memory
    2x2GB Corsair Dominator PC2-8500C5D
    Graphics Card(s)
    eVGA/MSI GTX 260 SLI
    Sound Card
    X-Fi XtremeGamer
    Monitor(s) Displays
    Samsung T240 & 226BW
    Screen Resolution
    1920x1200 & 1680x1050
    Hard Drives
    Seagate Cuda 500GB 32mb Cache SATA 7200.(11) + 500GB Seagate Cuda External eSATA, USB, FW400
    PSU
    PC P&C 750w Silencer PSU
    Case
    CoolerMaster HAF 932 (Water-Cooled)
    Cooling
    Plenty of Fans, and a few 230mm Fans
    Keyboard
    Logitech G11
    Mouse
    Logitech MX-518
    Other Info
    ASUS 20x Optical, Bose Companion 3, ATH-AD500 Cans :), Patriot Xporter 16GB Flash Drive (Very Fast), & Sandisk Micro 8GB.

    Nikon D40 DSLR with 18-105mm VR & 55-200mm VR
So what should I do? lol. I don't know anything about RAID; I don't know RAID 1 from RAID 0 and all that jazz.

RAID 0 = Striping. Data is streamed to both drives at once, but split between them. This increases read and write performance. Anyone telling you the gain is minimal has no real-world experience, or doesn't know how to operate a PC.

RAID 1 = Mirroring. Data is copied to both drives identically. Used for redundancy in case a drive fails. However, it cuts your storage in half, since one drive is a duplicate. It does not improve read or write performance.
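To make the difference concrete, here is a toy sketch (Python, purely illustrative; real controllers do this at the block level in firmware, and the 64KB stripe size is just an assumed example) of how the same data lands on two drives under each mode:

```python
# Toy illustration of how RAID 0 and RAID 1 lay data out on two drives.
# Real controllers work at the block level in firmware; the 64 KB stripe
# size here is just a common default chosen for the example.

STRIPE_SIZE = 64 * 1024  # bytes per stripe

def raid0_layout(data: bytes):
    """Striping: alternating stripes go to drive 0 and drive 1."""
    drives = [bytearray(), bytearray()]
    for i in range(0, len(data), STRIPE_SIZE):
        stripe = data[i:i + STRIPE_SIZE]
        drives[(i // STRIPE_SIZE) % 2] += stripe
    return drives  # capacity = both drives combined, no redundancy

def raid1_layout(data: bytes):
    """Mirroring: every byte is written to both drives."""
    return [bytearray(data), bytearray(data)]  # capacity = one drive

if __name__ == "__main__":
    payload = bytes(256 * 1024)  # 256 KB of example data
    d0, d1 = raid0_layout(payload)
    print("RAID 0:", len(d0), "+", len(d1), "bytes; either drive alone is useless")
    m0, m1 = raid1_layout(payload)
    print("RAID 1:", len(m0), "bytes on each drive; either copy is complete")
```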
 

My Computer

System One

  • CPU
    Intel Q9550 @ 3.2Ghz (OC)
    Memory
    4 GIG DDR2-6400
    Graphics Card(s)
    ATI Radeon 3750
    Sound Card
    Creative X-Fi
    Monitor(s) Displays
    22" LCD
    Screen Resolution
    1680 x 1050
    Hard Drives
    (2) 500 Gig SATAII 32mb Cache in RAID 0 array
    PSU
    Antec TruPower 650W
    Internet Speed
    10Mbps Cable
RAID 0 performance depends on many factors: the number of files an app opens, how fragmented the array has become, and so on. Since most applications open in 10-20 seconds, you will only gain a few seconds on opening. Last night, one of my friends lost his RAID 0 array: a SATA cable vibrated loose and crashed the array, and with RAID 0, everything is lost when the array fails. How fast do you need a game to open at home? If it takes 10 seconds longer, are you missing much? Even 30 seconds doesn't matter. How long would it take to reinstall everything on the RAID 0 array? If you do go RAID 0, I would get an external backup drive and use it a few times a week.

Your risk is 50% greater with two-drive RAID 0 than with a single drive. It only takes one drive failing or a cable coming loose to crash your RAID 0 array. I only use RAID 0 for paging and as a target for certain applications; once a project is finished, it gets moved back to a fault-tolerant array.

Microsoft and Novell both recommend RAID 1 for OS and RAID 0+1, 1+0, 10 or 5 for apps/data.

I would suggest using the 7200.11 32MB-cache SATA drives as single drives rather than RAID 0; they have very good performance. I put two of them in my A/V server as single drives as a test, and they were able to sustain good performance for video processing.
 

My Computer

System One

  • CPU
    pair of Intel E5430 quad core 2.66 GHz Xeons
    Motherboard
    Supermicro X7DWA-N server board
    Memory
    16GB DDR667
    Graphics Card(s)
    eVGA 8800 GTS 640 MB video card
    Hard Drives
    SAS RAID
So what should I do? lol. I don't know anything about RAID; I don't know RAID 1 from RAID 0 and all that jazz.


I totally agree with SCSIraidGURU's concern about fault tolerance, but only if this were for work or something more important. For fun, like gaming, I would risk it just for a few seconds, or just to see the tiniest improvement in the Windows Experience Index. :D

I would do RAID 0 as you had planned and back up frequently. Put all the game profiles, important data, and personal keepsakes on the 500GB drive, and keep a system backup on the external.

SATA cables coming loose from my RAID drives had been an ongoing problem for me until I recently replaced them with aftermarket ones. It freezes up my system every time it happens, but it has never crashed the array or lost anything. The way I see it, as long as you have a backup, if I really had to reinstall the OS and games I would just imagine I'm installing one very large game. Lol! Also, you can leave out some of the junk or the games you hardly play any more.

Good luck and post back.
 

My Computer

System One

  • CPU
    E6850
    Motherboard
    EVGA 122-CK-NF67-A1 680i
    Memory
    4 x OCZ Platinum 1GB
    Graphics Card(s)
    ATI Radeon HD 5850 1GB
    Sound Card
    SB X-Fi X Audio
    Monitor(s) Displays
    Samsung 23" 5MS
    Screen Resolution
    2048 x 1152
    Hard Drives
    2 x Barracuda 7200.10 320GB RAID 0 / 1 x 500GB Maxtor
    PSU
    Seasonic 600W M12
    Case
    CM Centurion 5
    Cooling
    air
    Internet Speed
    100Mbps
I paid around $70 for the Seagate 7200.11 500GB 32MB SATA 2 hard drive.
 

My Computer

System One

  • CPU
    pair of Intel E5430 quad core 2.66 GHz Xeons
    Motherboard
    Supermicro X7DWA-N server board
    Memory
    16GB DDR667
    Graphics Card(s)
    eVGA 8800 GTS 640 MB video card
    Hard Drives
    SAS RAID
Your risk is 50% greater with two-drive RAID 0 than with a single drive.
Actually, you multiply the failure rate of a single drive by the number of drives in the RAID 0 array to figure out the increase in failure risk compared to a single drive. A 2-drive RAID 0 array is twice as likely to fail as a single drive, and a 3-drive RAID 0 array is three times as likely. That is a 100% and a 200% increase in the risk of failure, respectively.
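To put rough numbers on that, the usual back-of-the-envelope model assumes drive failures are independent, so a RAID 0 array survives only if every member survives. A minimal sketch in Python, assuming a made-up 5%-per-year single-drive failure chance (not a spec from any datasheet):

```python
# Back-of-the-envelope RAID 0 risk: the array dies if ANY member drive dies.
# Assumes independent failures; p is the chance one drive fails within the
# period you care about (the 5%/year figure below is purely illustrative).

def raid0_failure_probability(p_single: float, drives: int) -> float:
    """Probability that a RAID 0 array of `drives` members loses data."""
    return 1 - (1 - p_single) ** drives

p = 0.05  # assumed chance a single drive fails within one year
for n in (1, 2, 3):
    print(f"{n} drive(s): {raid0_failure_probability(p, n):.1%} chance of array loss")
# Prints roughly 5.0%, 9.8%, 14.3%. For small p this is about n * p,
# i.e. two drives carry roughly twice the risk of one.
```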

S-
 

My Computer

System One

  • CPU
    Intel E6600 @ 3.0 GHz
    Motherboard
    EVGA nForce 680i SLI (NF68-A1)
    Memory
    4GB - CORSAIR XMS2 PC2 6400
    Graphics Card(s)
    EVGA GeForce 8800 GTS (640MB)
    Hard Drives
    2 - Seagate Barracuda 7200.10 (320GB)
    1 - Seagate Barracuda 7200.10 (500GB)
So some info on setting up RAID, performance, and so on would be very much appreciated.

Thanks!

Hi Cory,

I looked at setting up a RAID array some time ago and bought 4 big hard drives, etc., but decided against it in the end, primarily because getting data security with RAID means a loss of performance. As I was after security, not speed, I decided against it. The links below are worth reading. They are fairly down to earth, and could give you a better understanding of what is really an expert's domain and a process most people don't understand.

Raid Explained
RAID - Wikipedia, the free encyclopedia

How to set up a RAID Array.
How to Setup a RAID System | Hardware Secrets
 

My Computer

System One

  • Manufacturer/Model
    Scratch Built
    CPU
    Intel Quad Core 6600
    Motherboard
    Asus P5B
    Memory
    4096 MB Xtreme-Dark 800mhz
    Graphics Card(s)
    Zotac Amp Edition 8800GT - 512MB DDR3, O/C 700mhz
    Monitor(s) Displays
    Samsung 206BW
    Screen Resolution
    1680 X 1024
    Hard Drives
    4 X Samsung 500GB 7200rpm Serial ATA-II HDD w. 16MB Cache .
    PSU
    550 w
    Case
    Thermaltake
    Cooling
    3 x octua NF-S12-1200 - 120mm 1200RPM Sound Optimised Fans
    Keyboard
    Microsoft
    Mouse
    Targus
    Internet Speed
    1500kbs
    Other Info
    Self built.
You are using each drive 1/2 the time. So your failure rate increases 50%, (1-.5). On three drives you are using them 1/3 the time so (1-1/3) = 67% increase in probability of failure.

Your failure rate does not go up 100% or 200%.
 

My Computer

System One

  • CPU
    pair of Intel E5430 quad core 2.66 GHz Xeons
    Motherboard
    Supermicro X7DWA-N server board
    Memory
    16GB DDR667
    Graphics Card(s)
    eVGA 8800 GTS 640 MB video card
    Hard Drives
    SAS RAID
You are using each drive 1/2 the time. So your failure rate increases 50%, (1-.5). On three drives you are using them 1/3 the time so (1-1/3) = 67% increase in probability of failure.

Your failure rate does not go up 100% or 200%.
SCSIraidGURU,

How are drives rated? By mean time between failures (MTBF) in hours based on the number of power-on hours (POH).

Drives will fail just sitting there doing nothing. So you can't base it on head movement.

Based on your calculation, a 20-drive RAID 0 array would have only a 95% increase in the probability of failure. That's not even twice as likely to fail as a single drive, which is clearly wrong. Anyone who has run a data center (I have) knows that.

S-
 

My Computer

System One

  • CPU
    Intel E6600 @ 3.0 GHz
    Motherboard
    EVGA nForce 680i SLI (NF68-A1)
    Memory
    4GB - CORSAIR XMS2 PC2 6400
    Graphics Card(s)
    EVGA GeForce 8800 GTS (640MB)
    Hard Drives
    2 - Seagate Barracuda 7200.10 (320GB)
    1 - Seagate Barracuda 7200.10 (500GB)
You are right, Sidewinder. The advantages of RAID are to be found in the REDUNDANCY of data. Although the drive failure rate and access times increase, the ability of the array to rebuild itself makes the probability of catastrophic data loss very low. Any gains in speed from a non-redundant setup such as RAID 0 are illusory, as your risk of DATA loss increases by 100% for a minimal speed gain.
 

My Computer

System One

  • Manufacturer/Model
    Scratch Built
    CPU
    Intel Quad Core 6600
    Motherboard
    Asus P5B
    Memory
    4096 MB Xtreme-Dark 800mhz
    Graphics Card(s)
    Zotac Amp Edition 8800GT - 512MB DDR3, O/C 700mhz
    Monitor(s) Displays
    Samsung 206BW
    Screen Resolution
    1680 X 1024
    Hard Drives
    4 X Samsung 500GB 7200rpm Serial ATA-II HDD w. 16MB Cache .
    PSU
    550 w
    Case
    Thermaltake
    Cooling
    3 x octua NF-S12-1200 - 120mm 1200RPM Sound Optimised Fans
    Keyboard
    Microsoft
    Mouse
    Targus
    Internet Speed
    1500kbs
    Other Info
    Self built.

From an engineering standpoint, a 20-drive RAID 0 would increase your failure rate by 95%. Since most engineers don't like more than a 3% probability of failure, they would not do it. This is the increase in the probability of failure over a single drive. All hard drives fail; you can't determine when, you just weigh that event in your decision to implement. My RAID 1 + hot spare will fail someday, but since I have a redundant drive and a hot spare, the probability of complete failure is very small: it would take both RAID 1 drives failing together to fail the array. In a server room, you protect data over performance. On my SAN, I have two RAID 10 arrays with two hot spares. I don't need the space; I wanted write performance and fault tolerance, and RAID 10 does not do parity calculations on writes.

It depends on the drives. My Seagate Savvio 2.5" server-grade SAS drives have a 5-year warranty and a 100% duty cycle. I look at duty cycle more than MTBF: duty cycle is how many hours of continual use per day a drive is rated for. My older Maxtor IDE drives were rated at 33%, or 8 hours a day. And these are optimal-condition ratings:

1.) Protected power
2.) Optimal cooling

My workstation at home is set up like my server room servers.

Server room: Symmetra LX 16kW UPS on a 100A @ 208V single-phase circuit connected directly from the server room main to the basement mains of the building.

Home: 60A service panel off the 200A house main. A 20A double-pole breaker is split into two 20A @ 120V circuits through a commercial-grade double-pole single-throw switch, which feeds two banks of receptacles. My workstation has a Tripp Lite 20A 2400W line conditioner on one bank powering the PC Power and Cooling 1kW-SR power supply through a 20A 14ga cord; the other bank goes to a second Tripp Lite 20A 2400W line conditioner powering the monitor and other components. They are sine-wave line conditioners that maintain clean power, and the 1kW-SR has MOSFETs to maintain power to the supply. The input power is 117V to 119V @ 60Hz; I tested it with a scope. This maintains clean power to all components. Spikes and sags can damage your board, hard drives, RAM, etc.

My Savvio drives are in Supermicro 2.5" SAS enclosures; each has a 40cfm fan with temperature and fan-failure alarms. I have had drives fail after 8 hours of use, and I have had them last 7 years. I use RAID 1 and 10 with a hot spare for failover: when a drive fails, the array rebuilds onto the spare. I also back up to three server-grade tape drives on my tape workgroup server in the basement. I prefer tape because I can take the tapes out of the house and store them elsewhere when I am on vacation. My enclosures and controller support 16 SAS drives.
 

My Computer

System One

  • CPU
    pair of Intel E5430 quad core 2.66 GHz Xeons
    Motherboard
    Supermicro X7DWA-N server board
    Memory
    16GB DDR667
    Graphics Card(s)
    eVGA 8800 GTS 640 MB video card
    Hard Drives
    SAS RAID
From an engineering standpoint, a 20-drive RAID 0 would increase your failure rate by 95%.
SCSIraidGURU,

You can say that all you want but it makes no sense.

Your formula never reaches 100% no matter how many drives you put in a RAID 0 array, which just shows how bogus it is. RAID 0 and JBOD arrays are bad because each drive you add to the array adds another drive's worth of failure risk: two drives are 2x more likely to fail than one drive, and three drives are 3x more likely to fail than one.

Again, hard drives are rated for mean time between failures (MTBF) in hours based on the number of power-on hours (POH).

Let's say I have a single drive rated at 50,000 hours MTBF. That means that drives of that model average one failure every 50,000 POH. Put two of those drives in a RAID 0 array and now you average 2 failures every 50,000 POH. Put 10 drives in a RAID 0 array and you average 10 failures every 50,000 POH.

Do you see where this is going?
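Working that arithmetic through, here is a quick sketch of the additive failure-rate model described above (the 50,000-hour MTBF is the example figure from this post, not a real datasheet value):

```python
# Expected failures over a window of power-on hours, using the additive
# failure-rate model from the post above (50,000 h MTBF is the example
# figure used there, not a real drive spec).

def expected_failures(mtbf_hours: float, drives: int, poh: float) -> float:
    """Average number of drive failures across `drives` members over `poh` hours."""
    return drives * poh / mtbf_hours

MTBF = 50_000.0
for n in (1, 2, 10):
    fails = expected_failures(MTBF, n, poh=50_000)
    print(f"{n:2d}-drive RAID 0: ~{fails:.0f} failure(s) per 50,000 POH "
          f"(effective array MTBF ~ {MTBF / n:,.0f} h)")
```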

S-
 

My Computer

System One

  • CPU
    Intel E6600 @ 3.0 GHz
    Motherboard
    EVGA nForce 680i SLI (NF68-A1)
    Memory
    4GB - CORSAIR XMS2 PC2 6400
    Graphics Card(s)
    EVGA GeForce 8800 GTS (640MB)
    Hard Drives
    2 - Seagate Barracuda 7200.10 (320GB)
    1 - Seagate Barracuda 7200.10 (500GB)
So what should I do? lol. I don't know anything about RAID; I don't know RAID 1 from RAID 0 and all that jazz.


The whole discussion is enlightening, but it isn't answering the original question.

2 Drive Array

RAID 0 is fast, but the failure rate is double that of a single disk, which means twice the risk of catastrophic data loss. More disks in the array means higher bandwidth.

RAID 1 is slower than a single disk, but the data is inherently safe because of redundancy. The array continues to operate as long as at least one drive is functioning.

4 Drive Array

RAID 3 Striping with a dedicated parity disk, giving redundancy like RAID 1 but with speeds on a par with normal single-disk transfer. One minor benefit of the dedicated parity disk is that the parity drive can fail and operation will continue without parity or a performance penalty.

RAID 4 Same as RAID 3, but striped at the block level. Error detection is achieved through dedicated parity stored on a separate, single disk unit. A single disk failure doesn't destroy the array, and even a failure of the parity unit leaves the data itself intact, just unprotected. A rarely used form of RAID.

RAID 5 Distributed parity requires all drives but one to be present to operate; a drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity, so the failure is masked from the end user. The array will lose data in the event of a second drive failure, and it is vulnerable until the data that was on the failed drive has been rebuilt onto a replacement drive. (A small sketch of the parity math follows at the end of this post.)

RAID 6 Provides fault tolerance from two drive failures; array continues to operate with up to two failed drives. This makes larger RAID groups more practical.

Both hardware and software RAIDs with redundancy may support the use of hot spare drives, a drive physically installed in the array which is inactive until an active drive fails, when the system automatically replaces the failed drive with the spare, rebuilding the array with the spare drive included. This reduces the mean time to recovery (MTTR), though it doesn't eliminate it completely. A second drive failure in the same RAID redundancy group before the array is fully rebuilt will result in loss of the data; rebuilding can take several hours, especially on busy systems.
Rapid replacement of failed drives is important as the drives of an array will all have had the same amount of use, and may tend to fail at about the same time rather than randomly. RAID 6 without a spare uses the same number of drives as RAID 5 with a hot spare and protects data against simultaneous failure of up to two drives, but requires a more advanced RAID controller.
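For what it's worth, the parity that RAID 3/4/5 rely on is plain XOR across the member drives, which is why any single missing drive can be rebuilt from the survivors. A minimal byte-level sketch (ignoring striping, stripe rotation, and real block sizes):

```python
# Minimal illustration of XOR parity as used by RAID 3/4/5 (byte level only;
# real arrays do this per stripe, with the parity rotating across drives in RAID 5).
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_drives = [b"AAAA", b"BBBB", b"CCCC"]      # three data drives
parity = xor_blocks(data_drives)               # contents of the parity drive

# Lose any one data drive and rebuild it from the survivors plus parity:
lost = data_drives[1]
rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
assert rebuilt == lost
print("rebuilt drive 1:", rebuilt)             # b'BBBB'
```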
 

My Computer

System One

  • Manufacturer/Model
    Scratch Built
    CPU
    Intel Quad Core 6600
    Motherboard
    Asus P5B
    Memory
    4096 MB Xtreme-Dark 800mhz
    Graphics Card(s)
    Zotac Amp Edition 8800GT - 512MB DDR3, O/C 700mhz
    Monitor(s) Displays
    Samsung 206BW
    Screen Resolution
    1680 X 1024
    Hard Drives
    4 X Samsung 500GB 7200rpm Serial ATA-II HDD w. 16MB Cache .
    PSU
    550 w
    Case
    Thermaltake
    Cooling
    3 x octua NF-S12-1200 - 120mm 1200RPM Sound Optimised Fans
    Keyboard
    Microsoft
    Mouse
    Targus
    Internet Speed
    1500kbs
    Other Info
    Self built.
My formula is the increased probability of failure. 100% in a probability would mean certain failure from time zero; that is why it never gets to 100%. You also get diminishing returns on RAID: more drives add more R/W I/Os, but after 8 drives on a good controller you don't gain performance. The performance curve flattens out.

I don't care if a drive has 3,000,000 hours of MTBF like my old SCSI drives; that is 342 years. I replace hard drives every 3 years, when I replace the server, and the best server drive warranty is 5 years. You don't want to wait for drive failure to replace them; you replace them on your schedule, not on hardware failure. My job is to keep the servers online and my users working. I took courses in statistics for engineering as part of my Bachelor's degree in Electrical Engineering and Mathematics, and I still live on a schedule that I set. In the last 10 years, I have had 3 days of downtime from server failure. My network has enough fault tolerance and redundancy to maintain uptime through component failures. If I lose a drive on a ProLiant, HP brings out a new drive and replaces it, and the hot spare takes over in 30 minutes.

You can use all the statistics you want. All components will fail; you can't determine when or how. All I can say is that you put in enough fault tolerance that single failures don't bring down the server, and you design the network so it takes 3+ points of failure before a server goes down. I usually install at least one spare per RAID array and do full tape backups daily. My Savvio drives have an annualized failure rate of 0.55%. I had one fail after 16 hours of use and replaced it; my downtime was zero, because the RAID 1 array failed over to the spare and I kept working.

50,000 hours is 5.7 years. The drive might last 5.7 years or it might last days; you don't know. You could have a cable vibrate loose and crash your drive, and in RAID 0 it's bye-bye to your data. I know at least 15 users this year who opened their case, bumped a drive cable on a RAID 0 setup, and lost their data. They did not understand that drive failure is the least of the problems with RAID 0; it is usually power or cabling issues. Even a bad driver can drop the array long enough to crash it.

We lost a Space Shuttle because an O-ring was designed to work at around 28F. It had a 1% chance of failure; now we have 7 dead astronauts and a multi-million-dollar loss. As an engineer, you don't do stupid things without understanding the risk. If someone said I had a 50% greater risk of losing my data, why would I risk it?

I love fast hard drives. My RAID 1 SAS array on a cached 8708EM2 controller is very fast and redundant.

I have seen a 4-drive RAID 0 equal a single 15K SCSI drive.

First, in 15 years as a senior network engineer, I have yet to see a production server run RAID 0. I have worked for some of the largest corporations in the world, and I have fired people for overclocking and doing stupid things in a server room; RAID 0 would be at the top of my list. A senior network engineer designs for fault tolerance and redundancy. You build hardware for speed, but you follow configurations that are recommended and best practice.

This is my RAID 1 array at home.
Vista x64 Ultimate installs to it in 18 minutes from an external USB DVD drive.

Detailed Benchmark Results
Buffered Read : 1.01GB/s
Sequential Read : 119.73MB/s
Random Read : 128.25MB/s
Buffered Write : 679.29MB/s
Sequential Write : 31.91MB/s
Random Write : 31.21MB/s
Random Access Time : 1ms

Two drives with cached reads over 1 GB/s. The 128MB DDR667 cache on the controller and the 32MB cache on the drives, along with optimized drivers and firmware, give me a great A/V and CAD workstation.
 

My Computer

System One

  • CPU
    pair of Intel E5430 quad core 2.66 GHz Xeons
    Motherboard
    Supermicro X7DWA-N server board
    Memory
    16GB DDR667
    Graphics Card(s)
    eVGA 8800 GTS 640 MB video card
    Hard Drives
    SAS RAID
My formula is the increased probability of failure. 100% in a probability would mean certain failure from time zero; that is why it never gets to 100%.
You don't even know what your formula is supposed to tell you. Here is your quote again:

"You are using each drive 1/2 the time. So your failure rate increases 50%, (1-.5). On three drives you are using them 1/3 the time so (1-1/3) = 67% increase in probability of failure.

Your failure rate does not go up 100% or 200%.
"

Does your formula show the percentage increase in the probability of failure, or the probability of failure itself? If the increase in the probability of failure goes up 100%, that means failure is twice as likely, not 100% likely. Your formula is useless because it is flawed: it never allows the failure rate to double, no matter how many drives are in the array.

Please read this again:

Hard drives are rated for mean time between failures (MTBF) in hours based on the number of power-on hours (POH).

Let's say I have a single drive rated at 50,000 hours MTBF. That means that drives of that model average 1 failure every 50,000 POH. Put two of those drives in a RAID 0 array and now you average 2 failures every 50,000 POH. Put 10 drives in a RAID 0 array and you average 10 failures every 50,000 POH.

The two-drive array shows a 100% increase in failure rate compared to a single drive. The ten-drive array shows a 900% increase in failure rate compared to a single drive.

Throw your formula away. It is not statistically sound.

S-
 

My Computer

System One

  • CPU
    Intel E6600 @ 3.0 GHz
    Motherboard
    EVGA nForce 680i SLI (NF68-A1)
    Memory
    4GB - CORSAIR XMS2 PC2 6400
    Graphics Card(s)
    EVGA GeForce 8800 GTS (640MB)
    Hard Drives
    2 - Seagate Barracuda 7200.10 (320GB)
    1 - Seagate Barracuda 7200.10 (500GB)
The drive might last 5.7 years or it might last days; you don't know.

That alone is exactly why the RAID 0 failure formula means absolutely nothing in a real-world environment. The formula is completely theoretical and in no way reflects real-world probability. For example, I've had my old 36GB Raptors running in the exact same RAID 0 configuration since 2003 without so much as a hiccup. Joe User, on the other hand, might buy just one and have it fail a week later. There is no way to know.

Seeing as the original poster likely wasn't referring to a server machine where redundancy is king, that theoretical formulas mean nothing, and that HDD failure and data loss can and will occur sooner or later, you have to base your decision on how much you value your data and whether you are willing to sacrifice performance for redundancy. In any case, it's almost a moot point with the cost of today's HDDs. Personally, I'm perfectly content with my RAID 5 setup and having the best of both worlds :p
 

My Computer

System One

  • CPU
    Intel Core 2 Quad Extreme QX9770 Yorkfield 3.2GHz @ 4.3GHz
    Motherboard
    ASUS Striker II Extreme 790i Ultra SLI
    Memory
    8GB (4x 2GB) Corsair Dominator DDR3 2000
    Graphics Card(s)
    3x EVGA GeForce GTX 280 SSC
    Sound Card
    ADI 1988B 8-Channel HD Audio Integrated
    Monitor(s) Displays
    2x Acer 24" LCD
    Screen Resolution
    1920x1200
    Hard Drives
    3x 300GB Western Digital VelociRaptors RAID5 --
    3x 1TB Samsung Spinpoint F1 RAID5
    PSU
    Cooler Master Real Power Pro 1250W
    Case
    Cooler Master Stacker 830 Evolution Full Tower
    Cooling
    HWLabs Black Ice Extreme II water cooling
    Keyboard
    Logitech G15 Gaming Keyboard v2
    Mouse
    Logitech MX518 8-button Optical Gaming Mouse
    Internet Speed
    Cable 16Mb/s (12Mb/s advertised)
It does not go up 100%, because you're using each drive less. You are just figuring event risk: 2 drives, 50%, because you are on each drive half the time. I think you are confusing how increased probability of failure works. Truthfully, it's not used in the design process, because we already know what best practice is for implementation. MTBF is not a figure an engineer uses in design and implementation. We design to protect the data and maintain uptime in case of a single component failure. I don't really look at those figures; I use SAS and SCSI drives that have a lower probability of failure than SATA. All those statistics are really meaningless. You don't do certain things in a server room, and I don't recommend doing RAID 0 at home. I know a few users who built an 8 x 500GB RAID 0 array; they could not back up their data due to the cost of a solution, and they lost the arrays.

My risk of failure is 0.55% per drive on my SAS drives. If I were to do RAID 0:

Drives | Failure rate | Data
1      | 0.55%        | 146GB
10     | 3.03%        | 1,460GB

Look at the increased probability of failure. What is the cost/benefit? Performance versus recovery from the event? How long does it take to restore the data? How many hours of downtime? At ten drives, probably a few days of downtime: a day just to get the new hardware installed and formatted. My SAN has a 1.4TB array, and on top of the format time you have the recovery time of the operating system install.

5-6 hours to format 1.4TB on SAS drives on a SAN; my single 146GB SAS drives took 90 minutes to format.
3 hours to reinstall Windows and service packs from TechNet.
20 hours to recover the data. It took me that long to move data across from the old servers to the new servers on 1Gbps teamed connections and verify it.

At my house, restoring 300GB from my tape drives takes 4-5 hours. I have enough tape drives to back up about 800GB with the compression I get from the three drives, and they can restore about as fast as USB hard drives. I am considering disk-to-disk backups on my three FireWire drives, but I like the tapes: they are removed and stored, and I keep 5 sets of tapes in the rotation.

How much time do you save loading a game on RAID 0 vs recovering your data and reinstalling everything?

Increased probability only means you have added risk by doing it; it can't predict when the failure event will happen. You plan for failure and design so that a single component failure doesn't cause downtime or loss of data. Why do RAID 0 when RAID 10 can protect your data and give you RAID 0 performance? The drives are cheap, and the CPUs and storage bus can handle the load. Most programs load in 10-20 seconds, so saving 3-5 seconds loading a game is not critical at home. Saving 3 seconds on a procedure that runs 100,000x a day on a production server is.

Overclocking can cause data misses on your storage bus. The 1:1 interleave of data reads is designed for the normal default bus speed; if you run the bus at twice that speed, you can force the hard drive to make 2-3 passes to retrieve the data.

Can you accurately predict a drive failure? SMART technology on the drives can't do it. This is why you back up your data and weigh the risk of failure. Most people who do RAID 0 do it once: after the first significant loss of data, they never do it again.
 

My Computer

System One

  • CPU
    pair of Intel E5430 quad core 2.66 GHz Xeons
    Motherboard
    Supermicro X7DWA-N server board
    Memory
    16GB DDR667
    Graphics Card(s)
    eVGA 8800 GTS 640 MB video card
    Hard Drives
    SAS RAID