• ATA Raid under Linux?

    From Poindexter Fortran@VERT/REALITY to All on Wed May 4 21:29:00 2005


    I'm finally going to bite the bullet and rebuild my trusty linux box
    with Debian Sarge (once it's out) and ATA RAID. I'm considering giving
    up my web host/mail host, but want some redundancy before I make the
    switch...

    I've found some cheap ATA-133 RAID controllers with the Silicon Image
    680 chipset; initial googling looks good re: linux support. I'm
    thinking one of those, two 120 gb drives, and consolidate all of my
    network storage onto it.

    Come to think of it, if this is going to be a repository for all my
    data, I should think bigger. 250 GB drives seem to be a nice sweet
    spot.

    I just wish I'd bitten the bullet last year, when I had some
    consulting income to deduct the cost from...

    --pF



    --- MultiMail/Win32 v0.46
    ■ Synchronet ■ realitycheckBBS -- since 1991, more or less...
  • From Angus Mcleod@VERT/ANJO to Poindexter Fortran on Fri May 6 01:08:00 2005
    Re: ATA Raid under Linux?
    By: Poindexter Fortran to All on Wed May 04 2005 17:29:00

    I'm finally going to bite the bullet and rebuild my trusty linux box
    with Debian Sarge (once it's out) and ATA RAID. I'm considering giving
    up my web host/mail host, but want some redundancy before I make the switch...

    I've found some cheap ATA-133 RAID controllers with the Silicon Image
    680 chipset; initial googling looks good re: linux support. I'm
    thinking one of those, two 120 gb drives, and consolidate all of my
    network storage onto it.

    I'd strongly suggest you configure the controllers to treat the drives as "JBOD" and use Linux software RAID.
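    For the curious, here is roughly what that setup looks like in practice: a minimal sketch using mdadm (the standard Linux software-RAID tool), driven from Python. The device names are hypothetical; substitute whatever partitions the Silicon Image 680 card exposes in JBOD mode.

        import subprocess

        # Partitions exposed by the controller in JBOD mode (hypothetical names);
        # mark them as type 0xFD (Linux raid autodetect) before running this.
        DEVICES = ["/dev/hde1", "/dev/hdg1"]

        # Create a two-disk RAID-1 mirror on /dev/md0.
        # (mdadm may ask for confirmation if it finds old metadata on the disks.)
        subprocess.run(
            ["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2"]
            + DEVICES,
            check=True,
        )

        # Put an ext3 filesystem on the array and watch the initial resync.
        subprocess.run(["mke2fs", "-j", "/dev/md0"], check=True)
        print(open("/proc/mdstat").read())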


    ---
    ■ Synchronet ■ Audio! We're all for it at The ANJO BBS
  • From Glue@VERT/DNSDREAM to Angus Mcleod on Fri Jun 24 10:14:00 2005
    Re: ATA Raid under Linux?
    By: Angus Mcleod to Poindexter Fortran on Thu May 05 2005 10:08 pm

    Re: ATA Raid under Linux?
    By: Poindexter Fortran to All on Wed May 04 2005 17:29:00

    I'm finally going to bite the bullet and rebuild my trusty linux box
    with Debian Sarge (once it's out) and ATA RAID. I'm considering giving
    up my web host/mail host, but want some redundancy before I make the switch...

    I've found some cheap ATA-133 RAID controllers with the Silicon Image
    680 chipset; initial googling looks good re: linux support. I'm
    thinking one of those, two 120 gb drives, and consolidate all of my network storage onto it.

    I'd strongly suggest you configure the controllers to treat the drives as "JBOD" and use Linux software RAID.



    Strongly agreed. Unless you are prepared to spend quite a bit of time finding drivers for that controller and compiling them into your kernel, Linux will not take advantage of the controller any more than its default driver allows. This can be a pain when trying to recover.

    I also advise against hosting your own web/email unless you have a burstable connection that can handle the gobs and gobs of brute-force spam attacks and apache-based ddos attacks (any site running a php/mysql-driven application is susceptible). Your home server could easily handle your web/email needs. But can your connection (and isp) tolerate the amount of abuse it brings? :)

    For just a hobby domain / bbs / family page you'd be fine. I see every day the workout my production webservers get.. I would not wish that on any cable modem. ;)


    ---
    ■ Synchronet ■ DNS Dreams BBS - telnet://dnsdreams.com
  • From Funar@VERT/ANETBBS to Poindexter Fortran on Tue Jul 5 18:46:00 2005
    Poindexter Fortran wrote:
    I'm finally going to bite the bullet and rebuild my trusty linux box
    with Debian Sarge (once it's out) and ATA RAID. I'm considering giving
    up my web host/mail host, but want some redundancy before I make the switch...

    I've found some cheap ATA-133 RAID controllers with the Silicon Image
    680 chipset; initial googling looks good re: linux support. I'm
    thinking one of those, two 120 gb drives, and consolidate all of my
    network storage onto it.

    Come to think of it, if this is going to be a repository for all my
    data, I should think bigger. 250 GB drives seem to be a nice sweet
    spot.

    I don't have any experience with the SI RAID cards. However, the 3ware
    cards have *excellent* Linux support, including kernel and user-level monitoring of the raidset. 3ware cards are available for both IDE and SATA
    in 2-16 channel boards. The 2-channel cards are fairly inexpensive considering
    the RAID is true hardware-based RAID, and not some BIOS trickery.

    All of my Linux servers use 3ware cards with two drives mirrored.

    My primary video capture PC (WinXP) uses four 250gb 7200rpm Maxtors in a RAID-5 configuration with a 3ware 8000-series card. Just can't beat the performance.

    ---
    ■ Synchronet ■ AnotherNet BBS - bbs.another.org (1:229/747)
  • From Angus Mcleod@VERT/ANJO to Funar on Tue Jul 5 21:41:00 2005
    Re: Re: ATA Raid under Linux?
    By: Funar to Poindexter Fortran on Tue Jul 05 2005 14:46:00

    I don't have any experience with the SI RAID cards. However, the 3ware cards have *excellent* Linux support, including kernel and user-level monitoring of the raidset. 3ware cards are available for both IDE and SATA in 2-16 channel boards. The 2-channel cards are fairly inexpensive considering the RAID is true hardware-based RAID, and not some BIOS trickery.

    Funny -- I always considered that "some BIOS trickery" had definite advantages over any RAID card that considered a 2-disk mirror to be
    the ultimate in advanced technology.



    ---
    ■ Synchronet ■ Audio! We're all for it at The ANJO BBS
  • From Funar@VERT/ANETBBS to Angus Mcleod on Thu Jul 7 17:09:00 2005
    Angus Mcleod wrote:
    Re: Re: ATA Raid under Linux?
    By: Funar to Poindexter Fortran on Tue Jul 05 2005 14:46:00

    I don't have any experience with the SI RAID cards. However, the 3ware cards have *excellent* Linux support, including kernel and user-level monitoring of the raidset. 3ware cards are available for both IDE and SATA
    in 2-16 channel boards. The 2-channel cards are fairly inexpensive considering the RAID is true hardware-based RAID, and not some BIOS trickery.

    Funny -- I always considered that "some BIOS trickery" had definite advantages over any RAID card that considered a 2-disk mirror to be
    the ultimate in advanced technology.

    Not at all. If you're relying on the BIOS and special drivers to handle the configuration and operation of the RAID, you're wasting valuable processor cycles on I/O that could be used for CPU intensive applications such as
    video capture, editing, etc. If you're going that route, you may as well
    use straight software-based RAID. True hardware-based cards take all the mapping, data striping, and configuration away from the main system CPU.

    I used to use Promise FastTrak cards. Their 2 and 4 channel cards are BIOS mapped cards. I was seeing frame drops when encoding HDTV w/5.1 sound -
    this on a dual Opteron. When I went to a 3ware card of a similar class, the problems went away and my I/O throughput nearly doubled. That was enough
    for me.

    I've since installed 3ware cards in every server I've installed.

    ---
    ■ Synchronet ■ AnotherNet BBS - bbs.another.org (1:229/747)
  • From Angus Mcleod@VERT/ANJO to Funar on Fri Jul 8 01:57:00 2005
    Re: Re: ATA Raid under Linux?
    By: Funar to Angus Mcleod on Thu Jul 07 2005 13:09:00

    Not at all. If you're relying on the BIOS and special drivers to handle the configuration and operation of the RAID, you're wasting valuable processor cycles on I/O that could be used for CPU intensive applications such as video capture, editing, etc. If you're going that route, you may as well use straight software-based RAID. True hardware-based cards take all the mapping, data striping, and configuration away from the main system CPU.

    I will admit that hardware RAID offers a potential performance advantage.
    But that is the only advantage that anyone has ever been able to convince
    me actually exists. And many people install RAID systems for reasons
    *other* than increased performance.

    You were responding to someone who said he was setting up a linux box and "wanted some redundancy", so that doesn't speak to me about someone overly concerned with performance as a primary consideration. It speaks to me of
    low cost, reliability and flexibility.

    Even if the 3ware controllers you recommend cost as little as a dollar, that would be a dollar more than is required to install and run
    software raid on a Linux machine. And in fact, all but the cheapest 3ware cards cost a couple of hundred dollars -- money that could be spent on a faster CPU chip if performance was actually compromised. (You can buy a
    Xeon 3.6GHz chip for less than many of the 3ware controllers.)

    Given the many advantages of software RAID, I have to say that unless you
    have money to spare and need the performance, I'd recommend staying away
    from hardware RAID solutions.

    I used to use Promise FastTrak cards. Their 2 and 4 channel cards are BIOS mapped cards. I was seeing frame drops when encoding HDTV w/5.1 sound - this on a dual Opteron. When I went to a 3ware card of a similar class, the problems went away and my I/O throughput nearly doubled. That was enough for me.

    I have used Promise FastTrak cards as well, but always in JBOD mode. I
    used software RAID on top of these to give me the redundancy that I
    wanted. I've also used "big-iron" RAID solutions with multiple ranks of multiple disks. And I've experienced RAID failure, which was a load of
    fun. Wanna guess which failed -- hardware or software?

    Eventually, it took the combined resources of people in four countries
    (two continents) to get the problem solved. I spent so much time on the phone with the guy who designed the RAID controller hardware, he sent me
    a Christmas card. And ditto the guy who wrote the RAID firmware for the
    same device.

    I've since installed 3ware cards in every server I've installed.

    I'll stick with software RAID unless what I want is a performance boost.


    ---
    ■ Synchronet ■ Audio! We're all for it at The ANJO BBS
  • From Tracker1@VERT/TRN to Angus Mcleod on Fri Jul 22 12:00:00 2005
    Angus Mcleod wrote:
    Given the many advantages of software RAID, I have to say that unless you have money to spare and need the performance, I'd recommend staying away from hardware RAID solutions.

    Dunno, I think RAID should be handled outside of the OS myself, just a personal view, along with performance, and it eliminates the need for the OS to have that additional layer of overhead running.

    I used to use Promise FastTrak cards. Their 2 and 4 channel cards are BIOS mapped cards. I was seeing frame drops when encoding HDTV w/5.1 sound - this on a dual Opteron. When I went to a 3ware card of a similar class, the problems went away and my I/O throughput nearly doubled. That was enough for me.

    I have used Promise FastTrak cards as well, but always in JBOD mode. I
    used software RAID on top of these to give me the redundancy that I
    wanted. I've also used "big-iron" RAID solutions with multiple ranks of multiple disks. And I've experienced RAID failure, which was a load of
    fun. Wanna guess which failed -- hardware or software?

    I generally only want a single raid-1 setup for things, and usually the cheaper cards work fine for this.. I haven't had a bad raid card, but I actually did have an OS issue with raid before; don't remember the OS in particular (was redhat or suse, I didn't admin the box), I just did the db dumps from another machine... I've also seen windows software raid eat itself when one of the drives crashed as well... in my own experience, hardware solutions deal with a drive failure better...

    The SATA raid card in one of my servers supports hotswap & rebuild.. I don't have hotswap bays setup, but if I did, I could swap a drive out while running etc.. and this is on a <$100 card...

    --
    Michael J. Ryan - tracker1(at)theroughnecks(dot)net - www.theroughnecks.net icq: 4935386 - AIM/AOL: azTracker1 - Y!: azTracker1 - MSN/Win: (email)

    ---
    ■ Synchronet ■ theroughnecks.net - you know you want it
  • From Angus McLeod@VERT/ANJO to Tracker1 on Fri Jul 22 16:37:00 2005
    Re: Re: ATA Raid under Linux?
    By: Tracker1 to Angus Mcleod on Fri Jul 22 2005 08:00:00

    Angus Mcleod wrote:
    Given the many advantages of software RAID, I have to say that unless you have money to spare and need the performance, I'd recommend staying away from hardware RAID solutions.

    Dunno, I think RAID should be handled outside of the OS myself, just a personal view,

    Why?

    along with performance, and it eliminates the need for the OS to have
    that additional layer of overhead running.

    Well, "performance" and "the need to have an additional layer running" are
    the same thing, aren't they? Yes, you can get better performance by using hardware RAID, but you can buy a faster CPU to counter that issue, and in
    any event most SOHO-type users are not looking for performance anyway, but reliability. Certainly, the original poster who asked about RAID stated directly that they were looking for redundancy, rather than performance...

    I generally only want a single raid-1 setup for things, and usually the cheaper cards work fine for this..

    Software raid gives you additional options (such as RAID 5) at no
    additional cost.

    I haven't had a bad raid card, but I actually did have an OS issue with
    raid before; don't remember the OS in particular (was redhat or suse, I didn't admin the box), I just did the db dumps from another machine...

    The thing is, if you have a RAID card fail on you, and you can't locate a replacement, what do you do? The data on the disks is going to be in a proprietary format, so you will HAVE to find a replacement card or lose
    the data. With Software RAID this is not an issue. You can hook up the
    disks to ANY machine that supports the RAID software, and reacquire the
    data.
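    A sketch of that recovery path, assuming standard md superblocks on the disks; the device names and config path are examples (Debian keeps the file at /etc/mdadm/mdadm.conf):

        import subprocess

        # Discover arrays by reading the md superblock on each partition; no
        # particular controller is needed, any IDE/SATA interface will do.
        scan = subprocess.run(
            ["mdadm", "--examine", "--scan"],
            capture_output=True, text=True, check=True,
        ).stdout
        print(scan)  # e.g. "ARRAY /dev/md0 level=raid1 num-devices=2 UUID=..."

        # Record the discovered arrays so --assemble --scan can find them...
        with open("/etc/mdadm.conf", "a") as conf:
            conf.write(scan)

        # ...then bring the array back up on the new machine and inspect it.
        subprocess.run(["mdadm", "--assemble", "--scan"], check=True)
        subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)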

    I've also seen windows software raid eat itself when one of the drives
    crashed as well...

    Windows software? What a surprise!

    in my own experience, hardware solutions deal with a drive failure better...

    My experiences are quite the opposite. A very expensive RAID unit failed
    and cost nearly $100,000 to rejuvenate, whereas I've never experienced a problem with software RAID.

    The SATA raid card in one of my servers supports hotswap & rebuild.. I don't have hotswap bays setup, but if I did, I could swap a drive out while running etc.. and this is on a <$100 card...

    SATA drives make life much easier, of course. But I'm sure you know that Linux software RAID supports hot-spares with automatic rebuild on standard
    IDE drives at absolutely no additional cost. And the *software* will also manage hot-swap as well, although most people do not have hot-swappable
    IDE drives and bays...
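    The hot-spare behaviour is nearly a one-liner to set up. A sketch, with /dev/hdi1 standing in for a hypothetical third disk added to the mirror:

        import subprocess

        # Attach a third partition to /dev/md0 as a hot spare.
        subprocess.run(["mdadm", "/dev/md0", "--add", "/dev/hdi1"], check=True)

        # Simulate a member failure: md promotes the spare and starts
        # rebuilding onto it automatically.
        subprocess.run(["mdadm", "/dev/md0", "--fail", "/dev/hde1"], check=True)
        print(open("/proc/mdstat").read())  # shows the recovery progress

        # Once the rebuild finishes, the failed disk can be removed for replacement.
        subprocess.run(["mdadm", "/dev/md0", "--remove", "/dev/hde1"], check=True)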

    One thing software RAID will allow is the use of dissimilar drives, and
    the slicing of these drives to suit.

    For instance, you could have a 30 gig drive and a 40 gig drive, with the
    40 gig drive sliced into a 10 gig partition and a 30 gig partition. You
    can use the 10 gig partition to boot the machine and get it running, and
    then RAID the other (30 gig) partition with the separate 30 gig drive.
    Ok, this may be questionable, but it is *doable*, and low-budget
    installations may require questionable practices like this. I don't know
    of a hardware RAID solution that allows you to mix dissimilar drives, at
    least not without reducing the capacity of all drives to that of the smallest...

    And here is another good thing about software RAID: You can set up installations for testing that simply can't be set up without buying
    extra hardware. Example:

    Suppose you were thinking of setting up a rank of four drives in RAID-5 configuration. You want to try it out to get a feel for it. You can cut
    four 1-gig slices off the SAME drive and RAID them with software. OK, you
    get no performance *OR* redundancy benefits, but for the week that you
    will be using this setup for testing, and since no production data will be committed to the test rank, you can proceed without spending a cent.
    Hell, if you have some unsliced space on the disk, you can proceed without even *rebooting* the machine.
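    A variation on that test rig which avoids even the repartitioning: back the throwaway RAID-5 rank with sparse files on loop devices. This is not exactly the slicing trick described above, just the same idea with zero risk to real disks; paths and sizes are arbitrary.

        import subprocess

        # Four 1 GiB sparse files stand in for the four disk slices.
        loops = []
        for i in range(4):
            path = f"/tmp/raidtest{i}.img"
            subprocess.run(["truncate", "-s", "1G", path], check=True)
            loop = subprocess.run(
                ["losetup", "--find", "--show", path],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            loops.append(loop)

        # Build the test RAID-5 rank across the loop devices.
        subprocess.run(
            ["mdadm", "--create", "/dev/md9", "--level=5", "--raid-devices=4"]
            + loops,
            check=True,
        )
        print(open("/proc/mdstat").read())

        # Tear down when testing is over:
        #   mdadm --stop /dev/md9, then losetup -d each loop device.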

    No, there *are* reasons for using hardware RAID solutions, but those are largely reasons of performance. And the cost of the RAID hardware could
    be applied to a faster CPU to maintain reasonable performance levels with software RAID. And given the flexibility of software RAID, and the
    benefit of not being held to ransom by a particular brand of RAID card, I think that *most* of the time, software RAID is a good choice for the guy building a RAID box at home or for a small business.

    One thing which I don't know, and maybe you can tell me is this: When you
    use a hardware RAID solution, can you access the drives with SMART
    monitoring software to be on the lookout for potential drive degradation
    which might lead to a failure? In the software RAID solutions I've implemented for small (and not-so-small) businesses, I've had the RAID
    drives continually monitored, with any departure from established norms logged, e-mailed to the SysAdmin, announced verbally in a loop via a home-brewed "digi-talker" on the machine, and sent as an SMS to my celly.
    I don't know of anyone else who was able to do that with their hardware
    RAID installations, but perhaps the capability exists and they just didn't bother?
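    For flavour, a stripped-down sketch of that sort of watchdog, using smartctl from the smartmontools package; the drive list and addresses are made up, and in practice the smartd daemon (or mdadm --monitor, for array events) does this job with far less ceremony.

        import smtplib
        import subprocess
        from email.message import EmailMessage

        DRIVES = ["/dev/hde", "/dev/hdg"]   # the RAID members (hypothetical)
        ADMIN = "sysadmin@example.com"      # alert address (hypothetical)

        for drive in DRIVES:
            # 'smartctl -H' prints the drive's overall SMART health verdict.
            out = subprocess.run(
                ["smartctl", "-H", drive], capture_output=True, text=True
            ).stdout
            if "PASSED" not in out:
                msg = EmailMessage()
                msg["Subject"] = f"SMART warning on {drive}"
                msg["From"] = "raidwatch@localhost"
                msg["To"] = ADMIN
                msg.set_content(out)
                with smtplib.SMTP("localhost") as smtp:
                    smtp.send_message(msg)

    Run from cron every few minutes, the same hook can fan out to SMS, a speech synth, or whatever else is on hand.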



    ---
    ■ Synchronet ■ Audio! We're all for it at The ANJO BBS
  • From Tracker1@VERT/TRN to Angus McLeod on Sun Jul 24 12:55:00 2005
    Angus McLeod wrote:
    Given the many advantages of software RAID, I have to say that unless you have money to spare and need the performance, I'd recommend staying away from hardware RAID solutions.

    Dunno, I think RAID should be handled outside of the OS myself, just a
    personal view,

    Why?

    Dunno, I've always found hardware raid to be simpler, imho, than software raid; again, this is more opinion than anything..

    along with performance, and it eliminates the need for the OS to have
    that additional layer of overhead running.

    Well, "performance" and "the need to have an additional layer running" are the same thing, aren't they? Yes, you can get better performance by using hardware RAID, but you can buy a faster CPU to counter that issue, and in any event most SOHO-type users are not looking for performance anyway, but reliability. Certainly, the original poster who asked about RAID stated directly that they were looking for redundancy, rather than performance...

    Generally hardware raid is generic, and the os layer is separate from how the virtual drives function to the OS... whereas with software raid, it's another potential place for bugs that are non-generic in nature... I know hardware is subject to bugs/failures as well, but I find more comfort in a hardware solution.

    I generally only want a single raid-1 setup for things, and usually the
    cheaper cards work fine for this..

    Software raid gives you additional options (such as RAID 5) at no
    additional cost.

    You still have to buy the drives, and hardware raid controllers that support raid-5 are not too bad on the SATA side, though increasingly expensive for scsi...

    I haven't had a bad raid card, but I actually did have an OS issue with
    raid before; don't remember the OS in particular (was redhat or suse, I
    didn't admin the box), I just did the db dumps from another machine...

    The thing is, if you have a RAID card fail on you, and you can't locate a replacement, what do you do? The data on the disks is going to be in a proprietary format, so you will HAVE to find a replacement card or lose
    the data. With Software RAID this is not an issue. You can hook up the disks to ANY machine that supports the RAID software, and reacquire the data.

    True, but depending on how a drive fails, you could wind up with corrupt data in any case; this is where a good backup plan is necessary.. I have raid on my servers and do cross-system backups myself; there are lots of other solutions...

    I've also seen windows software raid eat itself when one of the drives
    crashed as well...

    Windows software? What a surprise!

    LOL, note above I've seen it in linux too.. ;)

    in my own experience, hardware solutions deal with a drive failure
    better...

    My experiences are quite the opposite. A very expensive RAID unit failed and cost nearly $100,000 to rejuvenate, whereas I've never experienced a problem with software RAID.

    Don't know about this; as said, I generally do cross-system backups in addition to raid, so rarely lose anything (sometimes on my desktop I'm less fortunate though, had about 4 HD's fail on my desktop in the last 7 years)

    The SATA raid card in one of my servers supports hotswap & rebuild.. I don't have hotswap bays setup, but if I did, I could swap a drive out while running etc.. and this is on a <$100 card...

    SATA drives make life much easier, of course. But I'm sure you know that Linux software RAID supports hot-spares with automatic rebuild on standard IDE drives at absolutely no additional cost. And the *software* will also manage hot-swap as well, although most people do not have hot-swappable
    IDE drives and bays...

    Dunno, afaik most IDE/PATA controllers don't support hotswap anyway... with SATA it is becoming more standard.

    One thing software RAID will allow is the use of dissimilar drives, and
    the slicing of these drives to suit.

    For instance, you could have a 30 gig drive and a 40 gig drive, with the
    40 gig drive sliced into a 10 gig partition and a 30 gig partition. You
    can use the 10 gig partition to boot the machine and get it running, and then RAID the other (30 gig) partition with the separate 30 gig drive.
    Ok, this may be questionable, but it is *doable*, and low-budget installations may require questionable practices like this. I don't know
    of a hardware RAID solution that allows you to mix dissimilar drives, at
    least not without reducing the capacity of all drives to that of the smallest...

    Many hardware raid solutions will allow dissimilar drives, but you lose the extra on the bigger drive, I will give you that.. on the flip side, the varying drive speeds tend to bring down performance (it keeps coming back to that, doesn't it.. ;) )

    And here is another good thing about software RAID: You can set up installations for testing that simply can't be set up without buying
    extra hardware. Example:

    Suppose you were thinking of setting up a rank of four drives in RAID-5 configuration. You want to try it out to get a feel for it. You can cut four 1-gig slices off the SAME drive and RAID them with software. OK, you get no performance *OR* redundancy benefits, but for the week that you
    will be using this setup for testing, and since no production data will be committed to the test rank, you can proceed without spending a cent.
    Hell, if you have some unsliced space on the disk, you can proceed without even *rebooting* the machine.

    True enough...

    No, there *are* reasons for using hardware RAID solutions, but those are largely reasons of performance. And the cost of the RAID hardware could
    be applied to a faster CPU to maintain reasonable performance levels with software RAID. And given the flexibility of software RAID, and the
    benefit of not being held to ransom by a particular brand of RAID card, I think that *most* of the time, software RAID is a good choice for the guy building a RAID box at home or for a small business.

    Possibly, but as I said before, a good backup plan is always a good idea. :) Sometimes harder to implement than others.

    One thing which I don't know, and maybe you can tell me is this: When you use a hardware RAID solution, can you access the drives with SMART monitoring software to be on the lookout for potential drive degradation which might lead to a failure? In the software RAID solutions I've implemented for small (and not-so-small) businesses, I've had the RAID
    drives continually monitored, with any departure from established norms logged, e-mailed to the SysAdmin, announced verbally in a loop via a home-brewed "digi-talker" on the machine, and sent as an SMS to my celly.
    I don't know of anyone else who was able to do that with their hardware
    RAID installations, but perhaps the capability exists and they just didn't bother?

    Actually, yes, it varies by the controller.. there are generally controller-specific drivers/software to handle this. I've seen IBM servers set up that will actually invoice a replacement with overnight shipping when this happens, and email the admin in a number of ways... was pretty cool (this was about 6 years ago on a quad-xeon server at the time), so I'm sure there is more advanced stuff at the high end, and more of this functionality at the low end..

    One of my server's raid cards has a lot of features I wouldn't expect from a low-end card (live hotswap, active monitoring etc); iirc the card was $68 on newegg (SATA)... I'm using two Seagate SATA drives with NCQ with it.. it runs pretty damned well actually; in raid-1 they perform better than the single drive on my desktop (mainly because the seek/read speed is better, since it can read from either drive, and NCQ probably helps); my desktop's controller doesn't support the NCQ extension.

    --
    Michael J. Ryan - tracker1(at)theroughnecks(dot)net - www.theroughnecks.net icq: 4935386 - AIM/AOL: azTracker1 - Y!: azTracker1 - MSN/Win: (email)

    ---
    ■ Synchronet ■ theroughnecks.net - you know you want it
  • From Angus McLeod@VERT/ANJO to Tracker1 on Mon Jul 25 01:24:00 2005
    Re: Re: ATA Raid under Linux?
    By: Tracker1 to Angus McLeod on Sun Jul 24 2005 08:55:00

    Generally hardware raid is generic, and the os layer is separate from how the virtual drives function to the OS... whereas with software raid, it's another potential place for bugs that are non-generic in nature... I know hardware is subject to bugs/failures as well, but I find more comfort in a hardware solution.

    In fact, "hardware" RAID is actually "firmware" RAID, so the potential for software bugs exists on each type of configuration. I suspect patches are easier to apply in software RAID systems. And the potential for a
    physical fault boils down to the same as for any IDE drive, since that is
    all thay you are actually using. Naturally, you avoid single-point, multi- disk failure configurations, like having an IDE master and slave drive on
    the same cable a part of the same rank. (One cable fault takes down both drives, and your RAID rank dies...)

    Software raid gives you additional options (such as RAID 5) at no additional cost.

    You still have to buy the drives, and hardware raid controllers that support raid-5 are not too bad on the SATA side, though increasingly expensive for scsi...

    You have to buy the drives anyway. But anyone with a Linux box with a
    boot drive and a CD/DVD drive can also run a RAID-Linear, RAID-0 or RAID-1 configuration at no cost *other* than the drives (no extra hardware
    needed).

    The thing is, if you have a RAID card fail on you, and you can't locate a replacement, what do you do? The data on the disks is going to be in a proprietary format, so you will HAVE to find a replacement card or lose the data.

    True, but depending on how a drive fails, you could wind up with corrupt data in any case; this is where a good backup plan is necessary..

    Ah! Yes, some people look on RAID as the solution to ALL disk-related failures. But RAID is not a replacement for backups; RAID can't help you
    if your application freaks out and decides to eat your data!

    I've also seen windows software raid eat itself when one of the drives
    crashed as well...

    Windows software? What a surprise!

    LOL, note above I've seen it in linux too.. ;)

    :-) Yeah, OK, but I've only ever run RAID on *nix.

    A very expensive RAID unit failed and cost nearly $100,000 to
    rejuvenate, whereas I've never experienced a problem with software
    RAID.

    Don't know about this; as said, I generally do cross-system backups in addition to raid, so rarely lose anything (sometimes on my desktop I'm less fortunate though, had about 4 HD's fail on my desktop in the last 7 years)

    We had two Seagate Barracudas go in a big, external RAID unit that cost something like $30K. Our DDS2 backups were faulty, so we *had* to
    rejuvenate the rank, no matter what it cost. Fortunately for me, I'd
    memo'd The Pointy-haired Idiot only eight days before, reminding him that
    I'd been telling him we had a problem with the tape drive for the last two years. So I was able to dodge that particular bullet...

    SATA drives make life much easier, of course. But I'm sure you know that Linux software RAID supports hot-spares with automatic rebuild on standard IDE drives at absolutely no additional cost. And the *software* will also manage hot-swap as well, although most people do not have hot-swappable IDE drives and bays...

    Dunno, afaik most IDE/PATA controllers don't support hotswap anyway... SATA is becoming more standard.

    I believe they *are* available, but pricy. Anyone with that sort of cash probably is not building a rank for domestic use.

    Many hardware raid solutions will allow dissimilar drives, but you lose the extra on the bigger drive, I will give you that.. on the flip side, the varying drive speeds tend to bring down performance (it keeps coming back to that, doesn't it.. ;) )

    Yes, and I said right off, I'll give you that hardware RAID solutions will
    run faster than software RAID solutions. But the original poster IIRC had just built a Debian box and wanted some redundancy for storage of digital media. If performance was important enough, a faster CPU chip could
    probably neutralize the difference.

    I think that *most* of the time, software RAID is a good choice for
    the guy building a RAID box at home or for a small business.

    Possibly, but as I said before, a good backup plan is always a good idea. :) Sometimes harder to implement than others.

    The biggest mistake you can make when setting up *any* RAID solution is thinking it relieves you of the need to back up.

    One thing which I don't know, and maybe you can tell me is this: When you use a hardware RAID solution, can you access the drives with SMART monitoring software to be on the lookout for potential drive degradation which might lead to a failure?

    Actually, yes, it varies by the controller.

    Okay. I know the high-end RAID solutions can do that stuff, but I wasn't
    sure about the <$100 RAID card bought over the counter.

    ---
    ■ Synchronet ■ Audio! We're all for it at The ANJO BBS
  • From Tracker1@VERT/TRN to Angus McLeod on Mon Jul 25 04:14:00 2005
    Angus McLeod wrote:
    We had two Seagate Barracudas go in a big, external RAID unit that cost something like $30K. Our DDS2 backups were faulty, so we *had* to rejuvenate the rank, no matter what it cost. Fortunately for me, I'd
    memo'd The Pointy-haired Idiot only eight days before, reminding him that I'd been telling him we had a problem with the tape drive for the last two years. So I was able to dodge that particular bullet...

    Yeah, it's funny when people don't consider that.. I back up most stuff to another system, so that I can recover quicker in the short term.. main webserver down, set up the backup to serve until the main is back up, same for db etc... not a live redundancy, but enough to cut downtime a bit...

    Dunno, afaik most IDE/PATA controllers don't support hotswap anyway... SATA is becoming more standard.

    I believe they *are* available, but pricy. Anyone with that sort of cash probably is not building a rank for domestic use.

    Yeah, I can't believe anyone would spend *THAT* much for PATA technology compared to scsi, and more recently sata.. it simply crosses the line, though PATA drives are typically available in much bigger sizes, so that may have something to do with it.

    Many hardware raid solutions will allow dissimilar drives, but you lose the extra on the bigger drive, I will give you that.. on the flip side, the varying drive speeds tend to bring down performance (it keeps coming back to that, doesn't it.. ;) )

    Yes, and I said right off, I'll give you that hardware RAID solutions will run faster than software RAID solutions. But the original poster IIRC had just built a Debian box and wanted some redundancy for storage of digital media. If performance was important enough, a faster CPU chip could probably neutralize the difference.

    True enough... I also find the setup easier myself.. but that is more of a personal thing...

    I think that *most* of the time, software RAID is a good choice for
    the guy building a RAID box at home or for a small business.

    Possibly, but as I said before, a good backup plan is always a good idea. :) Sometimes harder to implement than others.

    The biggest mistake you can make when setting up *any* RAID solution, is thinking it relieves you of the need to backup.

    Yeah, but it can ease the slack a little bit... I don't worry as much about daily backups a lot of the time for some things, simply because the raid is there, but do back up a bit, especially around serious program changes, etc.. and usually have nightly db dumps done..

    One thing which I don't know, and maybe you can tell me is this: When you use a hardware RAID solution, can you access the drives with SMART monitoring software to be on the lookout for potential drive degradation which might lead to a failure?

    Actually, yes, it varies by the controller.

    Okay. I know the high-end RAID solutions can do that stuff, but I wasn't sure about the <$100 RAID card bought over the counter.

    Surprised the hell out of me on this one.. but as with all computer tech, more features at a less pricey point of entry over time...

    --
    Michael J. Ryan - tracker1(at)theroughnecks(dot)net - www.theroughnecks.net icq: 4935386 - AIM/AOL: azTracker1 - Y!: azTracker1 - MSN/Win: (email)

    ---
    ■ Synchronet ■ theroughnecks.net - you know you want it
  • From Angus McLeod@VERT/ANJO to Tracker1 on Mon Jul 25 13:51:00 2005
    Re: Re: ATA Raid under Linux?
    By: Tracker1 to Angus McLeod on Mon Jul 25 2005 00:14:00

    Angus McLeod wrote:
    We had two Seagate Barracudas go in a big, external RAID unit that cost something like $30K. Our DDS2 backups were faulty, so we *had* to rejuvenate the rank, no matter what it cost.

    Yeah, it's funny when people don't consider that.. I back up most stuff to another system, so that I can recover quicker in the short term.. main webserver down, set up the backup to serve until the main is back up, same for db etc... not a live redundancy, but enough to cut downtime a bit...

    Well, after that particular incident, I was given the go-ahead to build a machine specifically for doing backups. We had our production databases
    each in a slice of the RAID cabinet. I duplicated these slices on the new backup box, and periodically did a "cold backup" of the entire slice onto
    the other machine. Then the backed-up slices were tar'd and gzip'd, so I
    had twelve days' worth of backups available, with the latest one left
    unarchived. In the event of a database loss the idea was to mount the
    backed-up slice via NFS and be running again ASAP.
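    Roughly what that rotation looks like in script form; the twelve-day retention comes from the description above, and everything else (paths, names) is hypothetical:

        import datetime
        import pathlib
        import subprocess

        SLICE = "/backup/dbslice"                  # cold copy of a database slice
        ARCHIVE_DIR = pathlib.Path("/backup/archive")
        KEEP = 12                                  # days of history to retain

        # Archive today's cold copy as a compressed tarball.
        stamp = datetime.date.today().isoformat()
        target = ARCHIVE_DIR / f"dbslice-{stamp}.tar.gz"
        subprocess.run(["tar", "-czf", str(target), SLICE], check=True)

        # Prune to the last KEEP archives. The live copy in SLICE stays
        # unarchived, ready to be exported over NFS if production is lost.
        for old in sorted(ARCHIVE_DIR.glob("dbslice-*.tar.gz"))[:-KEEP]:
            old.unlink()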

    Yeah, I can't believe anyone would spend *THAT* much for PATA technology compared to scsi, and more recently sata.. it simply crosses the line, though PATA drives are typically available in much bigger sizes, so that may have something to do with it.

    Again, depends WHO and WHAT is doing it. I'd not buy into a big SCSI
    array for home use. I'd buy two IDE disks and go with a simple mirror
    (using software RAID). My cost would be only the drives themselves, which would be low buck-per-bit in comparison. But the Linux Software RAID implementation *will* support hot-swap if you feel like spending the cash
    for the appropriate IDE or SCSI units. And the Hot-Spare option is
    perfectly viable with low-cost IDE.

    SATA does make the whole problem moot, though, don't ya think? :-)


    ---
    ■ Synchronet ■ Audio! We're all for it at The ANJO BBS
  • From Sniper@VERT to Angus McLeod on Wed Jul 27 01:52:00 2005
    To: Angus McLeod
    Angus McLeod wrote to Tracker1 <=-

    Re: Re: ATA Raid under Linux?
    By: Tracker1 to Angus McLeod on Mon Jul 25 2005 00:14:00

    Angus McLeod wrote:
    We had two Seagate Barracudas go in a big, external RAID unit that cost something like $30K. Our DDS2 backups were faulty, so we *had* to rejuvenate the rank, no matter what it cost.

    Yeah, it's funny when people don't consider that.. I back up most stuff to another system, so that I can recover quicker in the short term.. main webserver down, set up the backup to serve until the main is back up, same for db etc... not a live redundancy, but enough to cut downtime a bit...

    Well, after that particular incident, I was given the go-ahead to build
    a machine specifically for doing backups. We had our production
    databases each in a slice of the RAID cabinet. I duplicated these
    slices on the new backup box, and periodically did a "cold backup" of
    the entire slice onto the other machine. Then the backed-up slices
    were tar'd and gzip'd, so I had twelve days' worth of backups available, with the latest one left unarchived. In the event of a database loss the
    idea was to mount the backed-up slice via NFS and be running again
    ASAP.

    Yeah, I can't believe anyone would spend *THAT* much for PATA technology compared to scsi, and more recently sata.. it simply crosses the line, though PATA drives are typically available in much bigger sizes, so that may have something to do with it.

    Again, depends WHO and WHAT is doing it. I'd not buy into a big SCSI array for home use. I'd buy two IDE disks and go with a simple mirror (using software RAID). My cost would be only the drives themselves,
    which would be low buck-per-bit in comparison. But the Linux Software RAID implementation *will* support hot-swap if you feel like spending
    the cash for the appropriate IDE or SCSI units. And the Hot-Spare
    option is perfectly viable with low-cost IDE.

    SATA does make the whole problem moot, though, don't ya think? :-)

    Actually, there are relatively cheap IDE-Raid 0, 1, 5, 10, JBOD cards
    out there. Right now I'm looking at an $89 (US) card (1) that
    does 4 IDE drives in any configuration. There are cards ranging from
    about $80 to over $300 that do both IDE and SATA. Basically, 8
    drives, 4 IDE and 4 SATA, in any configuration you want. And from
    being in the business for a long time, the price is a lot cheaper than
    the old SCSI Raid days... The SCSI Raid card and the expensive ass SCSI drives... Let alone if you get the hotswap backplane and drives...
    Mucho Dinero!

    Oh, and another point... most of these cards can do hot swap.

    Another point... I have found some really neat little add-ons... a 3 or
    4 drive bay backplane (2) that fits into 2 or 3 (5 1/4) slots, with hot-swappable IDE and/or SATA trays. Not too shabby on the prices either...
    $80s for the 3-drive bays and $100 for the 4-drive ones.

    http://www.newegg.com

    (1) "Computer Hardware/Accessories/Hard Drive - Raid Cards".
    (2) "Computer Hardware/Accessories/Hard Drive Accessories".


    Sniper
    Killed In Action BBS, telnet://kiabbs.org
    Home of the Unofficial SynchroNet Support Network.
    download the info pack at any of the below sites: http://www.chcomputer.net/USSNET.ZIP or http://www.ussnet.org

    ... For sale - Large hourglass for timing Windows
    --- MultiMail/Linux v0.45
    --- Synchronet 3.12a-Win32 NewsLink 1.83
    * Killed In Action - Valdosta, Ga - telnet://kiabbs.org
    ■ Synchronet ■ Vertrauen ■ Home of Synchronet ■ telnet://vert.synchro.net