Microsoft iSCSI Software Initiator

Neat to see you all having a good conversation.

My needs are simple, and this system is built way beyond my needs.

My order of thinking for RAID configs is:
1. RAID6, if I have enough drives.
2. RAID5, if I have need for that extra one drive of performance.
3. RAID1, if I have two drives with important data.
4. RAID0, if I have two or more drives and an external backup with greater or equal capacity.

Now again, this is just me, but RAID10 seemed like overkill next to a RAID0 setup with a backup. I feel the same could be achieved with a RAID5 as with a RAID10, while keeping things simple.
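Roughly, the trade-off I'm weighing looks like this (a quick Python sketch; the drive count and size are just example numbers, not my actual setup):

# A quick sketch of the capacity vs. fault-tolerance math behind that ordering.
# Drive count and size below are example numbers only.

def raid_summary(drives, size_tb):
    return {
        # (usable capacity, failures that are *guaranteed* survivable)
        "RAID0":  (drives * size_tb,       0),
        "RAID1":  (size_tb,                drives - 1),   # all drives mirrored together
        "RAID5":  ((drives - 1) * size_tb, 1),            # needs 3+ drives
        "RAID6":  ((drives - 2) * size_tb, 2),            # needs 4+ drives
        "RAID10": (drives // 2 * size_tb,  1),            # even count; survives more if failures hit different pairs
    }

for level, (usable, tolerated) in raid_summary(drives=4, size_tb=8).items():
    print(f"{level:7s} usable = {usable} TB, guaranteed failures tolerated = {tolerated}")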

Before SSDs were mainstream, I took 4x Seagate Hybrid 2.5" drives, 500GB each (with 4 or 8GB of SSD cache), and combined them into a RAID0. I think I did this back in 2011. I never encountered an issue with performance or lost information, and had no backups. I exercised more confidence in the technology back then, lol. I had the performance marks of an SSD but with 2TB of capacity, and back then 2TB SSDs were super expensive.

A side update (the conversation kind of went sideways): the iSCSI link (according to CrystalDiskMark) gets about the same performance as I get out of a Samsung USB3 flash drive I keep plugged into a USB 3.0 slot for images and backups.

I have only ever had disk problems with Seagate's 3TB drives from way back, external hard drives, external hard drives removed from their enclosures, flash drives wearing out, and SSDs losing some integrity as standalone disks.

I reclaimed two Samsung 1TB desktop HDDs from my NVR, which I've had for nearly a decade now, and I'm using them in a RAID0 on my desktop for video processing/conversion and short-term storage. I think I paid about $100 for each of them at the time, and they're still going strong!

I have periodic check disks scheduled throughout the week to help mitigate the issue with SSD integrity, which has been helpful. Integrity issues crept in when using an SSD partition in conjunction with a software-created RAM disk, where the image the RAM disk saves to would get corrupted over a six-month period. Check disks on the SSD partitions and the RAM disk have eliminated the problem.
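For anyone curious, the scheduled check is nothing fancy--something along these lines (a rough sketch assuming Windows Task Scheduler and NTFS volumes; the drive letter, task name, and time are placeholders rather than my exact settings):

# Rough sketch of scheduling a weekly online NTFS scan with Task Scheduler.
# Run from an elevated prompt; "chkdsk /scan" does an online scan on NTFS.
import subprocess

subprocess.run([
    "schtasks", "/Create",
    "/TN", "WeeklyChkdskD",        # task name (placeholder)
    "/TR", "chkdsk D: /scan",      # command the task runs (placeholder drive letter)
    "/SC", "WEEKLY", "/D", "SUN",  # every Sunday
    "/ST", "03:00",                # at 3 AM
    "/RL", "HIGHEST",              # run elevated
], check=True)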

With the Synology NAS I am using, I'm not actually sure if it uses software RAID or hardware RAID, but I'm inclined to believe it's software, since adding disks to the array and changing RAID5 to RAID6 was painless, didn't require a complete and total backup, and I never lost any of the function of the NAS while it was going on.

RAID10 to me is like a RAID5 that's more expensive, with very little benefit. And again, this is a home setup with some robustness. RAID6 and RAID10 both require a four-disk minimum, and RAID10 might make more sense in a four-disk config, but we're talking about double the minimum here.

My goal isn't to fill up the entire storage space, but provision the hardware in such a way that I can use it for years and years to come without major infrastructure changes or redesigns.

I wouldn't call it a good conversation--more like trolling on what I was advising on, lol.

So the idea behind 10 (striping over mirrored pairs) is that you can have more than two drives fail and still have no data loss, which you can't do with 5 or 6. The only reason to add 0 on top of 1 is performance: you get the speed of 0 but the safety of 1, since each striped member is also mirrored. You could lose both drives in a single mirror and it would kill the stripe and the raid, and only 6 is guaranteed to survive any two-drive failure. However, losing both drives in a mirror is rare; more than likely you would lose one drive of a mirrored pair.

With the number of drives you have, you can set up a stripe of 4 pairs and can actually lose up to 4 drives, as long as no two of them are in the same mirrored pair. That's far greater than any other raid level, and also the highest performance besides pure 0. Plus, a rebuild WON'T kill the raid, as only one other drive is being stressed--and not even really, since drive-to-drive raid1 copies are easy. The only reason I recommended it is because you don't need the space and want longevity--a raid10 should survive more drive failures than raid6 due to fewer rebuilds.
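If you want to sanity-check that, here's a quick brute-force enumeration (a sketch assuming 8 drives arranged as 4 mirrored pairs for the 10 case; the pair layout is just an example):

# Brute-force check of the failure-tolerance claim above: 8 drives as
# 4 mirrored pairs (RAID10) vs the same 8 drives in RAID6.
from itertools import combinations

DRIVES = 8
PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]  # hypothetical mirrored pairs

def raid10_survives(failed):
    # The array survives as long as no mirrored pair loses both of its drives.
    return all(not (a in failed and b in failed) for a, b in PAIRS)

def raid6_survives(failed):
    # RAID6 tolerates any two failures, never three or more.
    return len(failed) <= 2

for n in range(1, 5):
    total = ok10 = ok6 = 0
    for failed in combinations(range(DRIVES), n):
        failed = set(failed)
        total += 1
        ok10 += raid10_survives(failed)
        ok6 += raid6_survives(failed)
    print(f"{n} simultaneous failures: RAID10 survives {ok10}/{total}, RAID6 survives {ok6}/{total}")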

Pretty cool raid0 of the Seagate hybrids. I always wondered how well that would work. :)

So performance is the same as a local usb 3 flash drive? That could be pretty good depending on the flash drive.

Lol! That's a lot of different drives that you've had failure experiences with!

Some of the older drives, especially when put in enterprise or 'critical use' scenarios, really did shine far past their original use. :)

Strange that there's corruption like that. Have you tried changing the sata cable? I know I've seen issues solved with cable changes.

Synology and most other NAS units are software raid. And this is a good thing, for the points that you noted as well as the fact that if the unit itself physically fails while the drives are good, you can usually migrate the drives to a computer, boot up a linux live cd, and access the data again without an issue.
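If it ever comes to that, the recovery is roughly this (a minimal sketch, assuming a Linux live environment with mdadm available and the NAS drives attached; the device name and mount point are placeholders, and some Synology volumes layer LVM on top of md, which needs an extra activation step before mounting):

# Minimal sketch of pulling data off NAS drives in a Linux live environment.
# /dev/md0 and the mount point are placeholders -- check /proc/mdstat for the
# real device name after assembly.
import subprocess

def mount_nas_array(md_device="/dev/md0", mount_point="/mnt/nas"):
    # Find and assemble any md (Linux software RAID) arrays on the attached drives.
    subprocess.run(["mdadm", "--assemble", "--scan"], check=True)
    # Mount the assembled array read-only so nothing gets written to it.
    subprocess.run(["mkdir", "-p", mount_point], check=True)
    subprocess.run(["mount", "-o", "ro", md_device, mount_point], check=True)

if __name__ == "__main__":
    mount_nas_array()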

10 isn't at all like 5, because of the increased resistance to total failure, faster rebuild times, and superior performance--all at the cost of more raw storage for the same usable space. 10 actually hurts too much when you have just a few drives; it's when you have a larger number of drives that it makes more sense--like your setup.

I think a 10 setup would last longer than a 6 setup, but time will tell on that one. 64TB volume rebuilds are going to be brutal on the drives, and the more times you have to do that, the more likely another drive is to fail. However, you've got good drives with the Exos, so maybe you'll be okay for 6-7 years--after that I would expect the raid to fail completely.
 
Luckily, in the real world I don't get people shouting at me that I am an idiot who doesn't know what she is talking about.

I use HighPoint and LSI. I used to use Areca, but got tired of their oddball BIOS menus and nomenclature.

Your data integrity issue is something that would give me a heart attack. I haven't used a Synology NAS, though. But one rule of thumb is: don't use software RAID. I run workstations, so it is pointless having a discrete NAS setup.

No one was shouting or calling you an idiot. However, I still think you don't really know as much about using RAID as you claim you do.

LSI is pretty much the de facto standard now. Even the OEM cards from Dell and HP are actually LSI-based. LSI's history goes back to the same era as Adaptec, so they've been around since nearly the beginning of SCSI.

There are very few tower NAS units that use hardware raid; most hardware-raid-based NAS units are rack-mounted. The way you usually know they're hardware is that they'll support both SAS and SATA drives. Hardware raid is too expensive for today's common NAS units, but linux software raid is quite robust and has become a bit of a standard on its own.
 