On Oct 3, 6:20 am, Heynony wrote:
> dg wrote:
> > If your power supply
> > has a high "peak load" capacity but a significantly lower "sustained
> > load" capacity, it will overheat and burn out
> As I mentioned, the power supply is the one piece I'd plan to be
> superior, and obviously properly rated, with margin. Air conditioned
> room; dedicated fan over the whole works, in addition to what's built
> into the box and its components. Right now I have most of my
> jerry-rigged maze of dozens of drives exposed, tops removed from the
> external cases, with this fan playing over the whole mess, and mostly
> everything is cool to the touch, warm at the worst.
Been there (especially with old SCSI drives I bought cheap).
> The ICH9R looks like it will keep things running fast enough. Lots of
> buffering in my playback client software. Really not a whole lot of
> demand on the server. A constant library, just sits there 85% of the
> time doing nothing but generate heat; titles added a few times a week.
> We don't really watch a lot of TV, just want the diversity there, when
> we want it.
> Kids, when they watch at all, watch late afternoon/early evening. Wife
> & I usually watching same thing, late evening.
O.K., sounds easy. Maybe look through your options and see whether the
server can be set for wake-on-LAN, so it can spend its off-time
sleeping. There are lots of arguments over it, but most wear on the
drives occurs during spin-up and spin-down (and during OS & program
crashes). On the other hand, the electricity RAID arrays consume has
made most of the home-media-server crowd I've helped with assembly
eventually swear off them; but most of those were pre-SATA.
> For my basic notion of a $100 motherboard running RAID 5 with 6 $250 1
> TB drives (with a spare drive always ready to insert on failure), I
> find zero support among my technical friends; your reservations are
> certainly main stream.
I've done this sort of thing (home-brew file servers, home-brew RAID)
before, and have some experience with commercial (IBM, Compaq, and
less common names) for-real servers, with large-volume NAS & SAS RAID
arrays. One thing I've learned, differentiating the real servers from
most home-built file servers, is that true server motherboards will
constantly monitor themselves, their on-board and add-in devices,
processors, and RAM for failure, and allow hot-swap of any device
short of the motherboard itself. That monitoring and redundancy are
what the money pays for. For a home-built file server most of these
options are priced out of reach, but I'd suggest looking for a
motherboard which, besides doing what you want otherwise, also supports
registered error-correcting (ECC) memory. This is especially important
in long-term use, since incremental errors can lead to wide-spread
problems.
Enabling full SCSI / SATA-II SMART on your new drives is also
important, as is monitoring software that warns you of impending drive
failures well in advance, not just while the system is booting! You may
also want to run your drive manufacturer's firmware/configuration
utility before installing your operating system, to set whatever drive
options you want there. This can be an important step in preparing
drives for RAID arrays!
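To give a flavor of what that monitoring looks like under Linux: the
smartmontools package's `smartctl -A /dev/sda` prints an attribute
table, and a few lines of Python can flag the counters that most often
precede failure. (This is a sketch: the sample output and its values
below are illustrative, not from a real drive.)

```python
# Parse the attribute table printed by `smartctl -A` (smartmontools)
# and flag the counters that most often precede drive failure.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "UDMA_CRC_Error_Count"}

def failing_attributes(smartctl_output: str) -> dict:
    """Return {attribute_name: raw_value} for watched attributes
    whose raw value is non-zero."""
    bad = {}
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows look like:
        # ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
        if len(fields) >= 10 and fields[0].isdigit():
            name, raw = fields[1], fields[9]
            if name in WATCH and raw.isdigit() and int(raw) > 0:
                bad[name] = int(raw)
    return bad

# Illustrative sample of `smartctl -A` output (values made up):
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033 100 100 036 Pre-fail Always - 12
  9 Power_On_Hours          0x0032 095 095 000 Old_age  Always - 4123
197 Current_Pending_Sector  0x0012 100 100 000 Old_age  Always - 0
"""
```

In practice you'd let smartd (the daemon half of smartmontools) do the
watching and mail you, but a growing Reallocated_Sector_Ct or any
Current_Pending_Sector count is the classic "replace me soon" signal.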
> But there was zero support eight years ago when I started building the
> video library with small drives strung together in an endless firewire
> chain off a low-end eMac, and it's worked perfectly until recently when
> some of the older drives have started to get balky and some flaws in my
> backup procedures (and maintaining the source files) were exposed.
Been there, too.
> Even with a top grade power supply, seems like this would be a cheap
> experiment. Risk is primarily that an error might occur just at the
> moment when a faulty drive has just been replaced and is being
> restored. That would be lights out, but odds seems remote, and most of
> the library is always restorable from original DVDs and other
> distributed off-line sources. A pain, but recoverable.
Yup. BTW, the difference between RAID 5 and RAID 6 is that with RAID
5, your spanned volume can recover from the loss of one drive--and
even remain in use meanwhile. RAID 6 allows for the loss of two
drives, thereby overcoming the possibility you mention. Paying
attention to your choice of file system is very much worth your while,
depend on it. I hear good things about IBM's JFS and, especially, about
SGI's XFS, which is designed to make efficient use of storage and to
handle comparatively very large files better than most others. There
are actually several non-default but available file systems when you
use Linux, and I've had good results with both of those. You may want
to search through some white papers, bug reports, and GNU/Linux-
oriented magazines' online back-issues to spot the differences.
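To put numbers on the RAID 5 vs. RAID 6 trade-off for the array you
describe, the parity arithmetic is straightforward (this is just the
standard capacity rule, sketched out):

```python
def raid_usable_tb(drives: int, drive_tb: float, level: int) -> float:
    """Usable capacity of a RAID array: RAID 5 spends one drive's
    worth of space on parity, RAID 6 spends two."""
    parity = {5: 1, 6: 2}[level]
    if drives <= parity:
        raise ValueError("not enough drives for this RAID level")
    return (drives - parity) * drive_tb

# Six 1 TB drives:
# RAID 5 -> 5.0 TB usable, survives one drive failure
# RAID 6 -> 4.0 TB usable, survives two drive failures
```

So with your six 1 TB drives, stepping up to RAID 6 costs you 1 TB of
usable space but covers exactly the "failure during rebuild" scenario
you were worried about.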
> If I'm wrong about this thing working well, seems to me I'll just find
> myself replacing a _lot_ of cheap SATA drives, one at a time as they
> fail, and I'll soon come to understand that everybody else was right
> and my notion was faulty. But still no substantial risk of data loss
> even so.
Seems like a proper application of technological devices for intended
purposes, to me. I'd be quite cautious about the choice of hard drive
model; take care in selecting a file system suited to my file sizes,
as well as making sure suitable management application software (such
as drive status monitoring) is present and enabled when I'm done; and
I'd preferentially use error-correcting (ECC) RAM.
> All the approved solutions I check out look like many thousands for a 5
> TB server.
Nah, you can buy a Dell small business server for US $500.00 or less
(they start at about $350, last I checked, but the extra RAM and drive
slots were worth the extra money), then throw in your drives and
operating system; I've had to form a generally grudging liking for
their server motherboards for their dependability. They'll even pre-
install and support SuSE and Red Hat Linux, though these are the
"Enterprise" versions, not free. Also not free are Dell-compatible RAM
upgrades; check Crucial, etc., before paying Dell's prices! Still, your
DIY file server with a free Linux distro will cost less in parts and
software; you're just trading "sweat equity" for $. If possible, read
reviews and download the manual for every
last component you plan to use, and read them so that you can plan
around hiccups. Make scale (or at least line-) drawings of where each
component will be. Much cheaper than finding out that RAM you bought
doesn't work with the board, or those hard drives you bought have
earned a reputation for failing if used in a RAID configuration, or
that your cables/drives/fans bump/displace each other when you go to
put the screws in.
All of this has been done before by others, and the fact you're
bothering to ask questions first shows you have a good chance of
getting it right.