Yes, here – all flavours:
https://www.download82.com/get/download/windows/microsoft-iscsi-software-initiator/
You're Welcome
Sweet--now I can play with iSCSI on my NAS units.
Nice! I've got about 20 different tower units.
Same here. RS1221!
After a few tests, iSCSI is nominally faster, but not by much. On my 10GbE connection, I can touch 10% utilization of the link. With just SMB shares, I get 6-7% tops on the regular.
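For a rough sense of what those utilization figures mean in throughput terms, here's a back-of-envelope sketch (my own conversion against the raw 10 Gbit/s line rate; protocol overhead is ignored):

```python
# Rough conversion from 10GbE link utilization to payload throughput.
# Assumes raw line rate only; TCP/IP and iSCSI/SMB framing overhead are ignored.

LINK_GBPS = 10  # 10GbE line rate in gigabits per second

def throughput_mb_s(utilization: float) -> float:
    """Approximate throughput in MB/s for a given link utilization (0.0-1.0)."""
    return LINK_GBPS * utilization * 1000 / 8  # Gbit/s -> MB/s

print(f"iSCSI at 10%: ~{throughput_mb_s(0.10):.0f} MB/s")                              # ~125 MB/s
print(f"SMB at 6-7%:  ~{throughput_mb_s(0.06):.0f}-{throughput_mb_s(0.07):.0f} MB/s")  # ~75-88 MB/s
```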
I would not have thought to use RAID6 until it was mentioned to me by my dad, who at one point was a network engineer. He's smart about a lot of things, and not so smart about others.
Anyway, the RAID6 of 8x 8TB enterprise-grade drives (Exos 8TB) gives me extra room for failure. I wouldn't have considered it before, but I'm already way above my storage needs. ~43TB is still way more than I will ever need or use.
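As a sanity check on that ~43TB figure, here's a quick sketch (assuming RAID 6 reserves two drives' worth of parity, and that the reported number is really binary TiB while the drives are sold in decimal TB):

```python
# Why 8x 8TB in RAID 6 shows up as roughly 43 "TB".
# RAID 6 usable space = (N - 2) drives; drives are sold in decimal TB (10**12 bytes),
# while most operating systems report binary TiB (often still labelled "TB").

drives = 8
drive_tb = 8  # decimal terabytes per drive

usable_bytes = (drives - 2) * drive_tb * 10**12   # two drives' worth of parity removed
usable_tib = usable_bytes / 2**40                 # what the OS typically displays

print(f"Raw usable: {(drives - 2) * drive_tb} TB decimal")  # 48 TB
print(f"Reported:   ~{usable_tib:.1f} TiB")                 # ~43.7, i.e. the ~43TB seen
```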
Also, H.265 encoding makes videos take up much less space.
If drive sizes had not gotten so big, I would agree with you. But today, the likelihood of a second (and third) drive failure on large arrays makes RAID6 by itself not so desirable.
RAID 6 is fine. Banks use it, universities use it and, most importantly, industry uses it. Any card with RAID 6 is usually good and will have decent onboard ECC memory. Any card without RAID 6 is "welfare RAID". Although I know Mr Siamese Cat has used RAID 5 for 25 years without one bit of loss, even with the option for RAID 6. ECC memory, an onboard battery if you can get one, and RAID 6 is superb. RAID 0+1 is something they include for free on really cheap cards and is pointless.
If drive sizes had not gotten so big, I would agree with you. But today, the likelihood of a second (and third) drive failure on large arrays makes RAID6 by itself not so desirable.
ECC RAM helps, but the entire SAS bus is much more robust than SATA to begin with, so it's not a matter of data rotting in transit, but on the disk itself. And no RAID system will detect this and properly correct for it, since it doesn't know which bits are the correct ones unless it is scrubbing the disks, which again is wear and tear. RAID1 is brain-dead simple, yes, but it is also the most effective. RAID0 helps for performance, but that's about it. All of the other parity varieties of RAID--3, 4, 5--really shouldn't be used except for high availability, and even then, just to keep something online before a second storage server is brought online and cut over.
No one is building production RAID setups for reliability anymore, just as a stop-gap before another server is brought online. Parity won't save you, even with 2x parity drives. And this is after having worked with RAID since the 1990s, in the SCSI days. I've seen it work, I've seen it fail, and I've seen its strengths and weaknesses. Parity RAID is no match for the areal density of today's drives.
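To illustrate the scrubbing point above with a toy example (an illustration of XOR parity in general, not of any particular controller): parity can rebuild a block whose location is known to have failed, but a parity mismatch alone cannot tell you which block was silently corrupted.

```python
# Toy illustration of XOR parity (the idea behind RAID 5-style parity).
# It can rebuild a block whose location is known to have failed, but a parity
# mismatch alone cannot say WHICH block was silently corrupted.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]       # three data blocks in one stripe
parity = xor_blocks(data)                # parity block stored on the array

# Case 1: a known drive failure -- rebuild block 1 from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]

# Case 2: silent corruption -- flip one bit in block 2 without any error reported.
corrupted = data[:]
corrupted[2] = bytes([data[2][0] ^ 0x01]) + data[2][1:]

# The parity no longer matches, so we know the stripe is inconsistent...
assert xor_blocks(corrupted) != parity
# ...but parity alone can't tell whether block 0, 1, 2 or the parity block itself
# is the bad one -- that takes per-block checksums, i.e. scrubbing.
print("Rebuild OK; mismatch detected, but culprit unknown without checksums")
```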
1. I've already explained this before. Areal densities and bit rot--look it up and do some reading on how this works.
I will cover your response point by point:
1. If drive sizes had not gotten so big, I would agree with you. But today, the likelihood of a second (and third) drive failure on large arrays makes RAID6 by itself not so desirable.
Please explain why? Perhaps you could provide links to a reputable site written by a Computer Engineer. Try using Samsung EVO professional or Seagate IronWolf drives.
2. ECC RAM helps, but the entire SAS bus is much more robust than SATA to begin with
I am talking about the up to 2GB of ECC RAM on the RAID card itself. It isn't a matter of it helping; it is intrinsic, along with the battery. SAS (Serial Attached SCSI) was initially a term coined to cover SATA and SCSI combined. In reality it's just more expensive, combines both in one connection, and is always built better and more reliable.
3. “so it's not a matter of data rotting in transit, but on the disk itself. And no RAID system will detect this and properly correct for it, since it doesn't know which bits are the correct ones unless it is scrubbing the disks, which again is wear and tear.”
I am sorry, this is just wrong in every way. For instance, a really cheap card like an Areca ARC-1160/ML will check the disk data for integrity. Err, you see, that's the point of non-"welfare RAID".
4. RAID1 is brain-dead simple, yes, but it is also the most effective. RAID0 helps for performance, but that's about it. All of the other parity varieties of RAID--3, 4, 5--really shouldn't be used except for high availability, and even then, just to keep something online before a second storage server is brought online and cut over.
Wow, one learns something new every day. All that expensive kit that all those companies bought was useless. They should have all bought cheap cards that have RAID 1, RAID 0 and RAID 0+1 from AliExpress for $10. They wasted hundreds of thousands of dollars on expensive RAID kit.
5. No one is building production RAID setups for reliability anymore, just as a stop-gap before another server is brought online. Parity won't save you, even with 2x parity drives. And this is after having worked with RAID since the 1990s, in the SCSI days. I've seen it work, I've seen it fail, and I've seen its strengths and weaknesses. Parity RAID is no match for the areal density of today's drives.
Hmm – please provide some links to professional Computing and Engineering sites for this. I will pass on your useful information next time I am seconded to BAE Systems, GlaxoSmithKline, Jaguar or JCB. I am sure their IT departments will want to pick your brain on some of the important points you have raised.
I do remember, a few years ago, people twittering on in hobby magazines about huge drives causing data loss. But at the same time, people who worked professionally with RAID (which was almost always SCSI, though some companies did use IDE) knew that decent RAID cards checked the disks against one another. Lloyds never lost any data in the days when they were spanning hundreds of drives together, because the drives were checked against each other.
I should imagine that very large companies use multiple servers simply because of ease. You just slot them in and they work – well, you know, just like RAID. However, RAID 6 for the home or small company is excellent.
Nowadays I am using 20TB IronWolfs in RAID 6 and I haven't had one instance of data loss. I have never come across anyone in any company who has had data loss due to large disks. I think the story was overblown hysteria for something to say.
I joined this site due to data loss and corrupted data, and Mr Siamese Cat helped me out. My corrupted data was entirely due to the greed of software developers and the cheapness of network cards. It was nothing to do with RAID or large disks. I will leave the link to that discussion below, and I would seriously check and read that thread. You will see that using servers as a RAID system and relying on a network can have its own pitfalls.
https://www.xpforums.com/threads/fi...went-over-to-thunderbird.934056/#post-3264432
First and foremost, this is not a competition. And RAID6 by its very nature and design will never beat a RAID1 or RAID 10 system in terms of speed or failure resistance. Again, study the RAID levels and how they work for a refresher.
So let's summarise this discussion. Secpar has an 8-drive system: RAID 6 across 8x 8TB enterprise-grade drives (Exos 8TB).
Yet Samir recommends going to RAID 0+1 (what I would call RAID 1+0 or RAID 10) or even RAID 1. With 8 drives!!!
Madeleine sensibly counter-recommends sticking with RAID 6.
I would stick with RAID 6, Secpar. It is far more reliable than RAID 1 or RAID 10.
Even low-priced decent cards check hard disk data against parity on a RAID 5 or 6. You can do this manually at boot, set it to run in the background with an alarm, or, like most IT professionals, watch and handle it over the LAN or with software on the server. It is only the really cheapo cards that don't have an option for a LAN connection. I used to use mine with software on the operating system, but since I have never had a fall-over in a quarter of a century, I just let the hardware monitor do its stuff now.
However, I do note that I once worked for a company where the IT manager was a complete wanker, so occasionally we used to pull 2 hard drives out and quickly reinsert them. As you can imagine, with a 2-drive fall-over the alarm went off and it used to send him into a panic, so they used to clear the office; and we used to go to the pub for a drink and a smoke for an hour while the system rebuilt.
Regarding those servers you saw people install in companies, Samir: those black boxes you watch them heft onto the racks or cabinets all have RAID 6 inside, dealing directly with the hard disks. The server is then used as redundant storage within the server farm. All those Supermicros or Dells have RAID cards inside, Samir. I have never once seen any server, whether it was a 10-disk, 14-disk, or 20-disk setup, configured as RAID 10 or RAID 1. Even the small 1U chassis are sporting 8 drives and are set up as RAID 6.
Let me help you with some simple mathematics, Samir.
We will use Secpar's system as an example.
He has 8 drives at 8TB each.
Using RAID 6, that gives him 48TB raw, or about 43TB after formatting.
Using RAID 1, he gets 8TB, about 7.5TB after formatting.
That's a lot of money to spend on 7.5TB.
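A quick sketch of that arithmetic for the 8x 8TB case (RAID 10 is included for comparison since it came up earlier; "formatted" here just re-expresses decimal TB as binary TiB and ignores filesystem overhead):

```python
# Usable capacity of an 8x 8TB array under the RAID levels discussed above.
# "Reported" approximates what the OS shows: decimal TB re-expressed as binary TiB.

drives, drive_tb = 8, 8

layouts = {
    "RAID 6":  (drives - 2) * drive_tb,   # two drives' worth of parity
    "RAID 10": (drives // 2) * drive_tb,  # half the drives hold mirror copies
    "RAID 1":  drive_tb,                  # one mirror set: the capacity of a single drive
}

for name, raw_tb in layouts.items():
    reported_tib = raw_tb * 10**12 / 2**40
    print(f"{name:8s} raw {raw_tb:2d} TB  ->  ~{reported_tib:.1f} TiB reported")
```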
Now I admit, it may be that Secpar has installed his system in a high-voltage electricity switching station with pitted contact plates, in an area renowned for electrical storms. That is the only place I could think of for a 7-drive failure. However, in that case I would recommend a Faraday cage and perhaps a rethink on the pros and cons of cheap accommodation.
Neat to see you all having a good conversation.
My needs are simple, and this system is built way beyond my needs.
My order of thinking for RAID configs is (roughly sketched in the code after this list):
1. RAID6, if I have enough drives.
2. RAID5, if I have need for that extra one drive of performance.
3. RAID1, if I have two drives with important data.
4. RAID0, if I have two or more drives and an external backup with greater or equal capacity.
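A toy encoding of that decision order (the drive counts and flags are my own shorthand for the conditions above, nothing formal):

```python
# Toy sketch of the RAID-choice order listed above.
# Inputs are simplifications: drive count, whether the extra drive of
# capacity/performance is needed, whether the data is important, and
# whether a separate backup of equal or greater capacity exists.

def pick_raid(drives: int, need_extra_drive: bool = False,
              important_data: bool = True, has_backup: bool = False) -> str:
    if drives >= 4 and not need_extra_drive:
        return "RAID 6"      # 1. double parity when there are enough drives
    if drives >= 3:
        return "RAID 5"      # 2. give up one parity drive for space/performance
    if drives == 2 and important_data:
        return "RAID 1"      # 3. simple mirror for important data
    if drives >= 2 and has_backup:
        return "RAID 0"      # 4. stripe for speed, lean on the external backup
    return "single disk"

print(pick_raid(8))                                         # RAID 6
print(pick_raid(2, important_data=True))                    # RAID 1
print(pick_raid(2, important_data=False, has_backup=True))  # RAID 0
```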
Now again, this is just me, but RAID 10 seemed like overkill for a RAID0 setup with backup. I feel a RAID5 achieves much the same as a RAID10 config, while keeping things simple.
Before SSDs were mainstream, I took 4x Seagate hybrid 2.5" drives that were 500GB each (with 4 or 8GB of SSD cache) and combined them into a RAID0. I think this was back in 2011. Never encountered an issue with performance or lost information, and had no backups. I exercised more confidence in the technology back then, lol. I had the performance marks of an SSD, but with 2TB of capacity, and back then 2TB SSDs were super expensive.
A side update (the conversation kind of went sideways): the iSCSI link (according to CrystalDiskMark) gets about the same performance as I get out of a USB3 Samsung flash drive I keep plugged into a USB 3.0 slot for images and backups.
I have only ever had disk problems with Seagate's 3TB drives from way back, external hard drives, external hard drives removed from their enclosures, flash drives wearing out, and SSDs losing some integrity as standalone disks.
I reclaimed two Samsung 1TB desktop HDDs from my NVR, which I've had for nearly a decade now, and am using them in a RAID0 on my desktop for video processing/conversion and short-term storage. I think I paid about $100 for each of them at the time, and they're still going strong!
I have periodic check disks scheduled throughout the week to help mitigate the issue with SSD integrity, which has been helpful. Integrity issues crept in when using an SSD partition in conjunction with a software-created RAMDISK, where the image that the RAMDISK saves to would get corrupted over a 6-month period. Check disks on the SSD partitions and the RAMDISK have eliminated the problem.
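For anyone who wants a lightweight complement to scheduled check disks, here's a minimal checksum-manifest sketch (my own illustration, not what chkdsk actually does): hash the files once, re-hash later, and flag anything whose contents changed unexpectedly.

```python
# Minimal bit-rot check: record SHA-256 hashes of files, re-verify them later.
# Illustrative only -- scheduled chkdsk/scrubbing works at a lower level than this.
import hashlib, json, pathlib, sys

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str, manifest: str = "manifest.json") -> None:
    hashes = {str(p): sha256(p) for p in pathlib.Path(root).rglob("*") if p.is_file()}
    with open(manifest, "w") as f:
        json.dump(hashes, f, indent=2)

def verify(manifest: str = "manifest.json") -> None:
    with open(manifest) as f:
        hashes = json.load(f)
    for name, digest in hashes.items():
        p = pathlib.Path(name)
        if not p.exists():
            print(f"MISSING  {name}")
        elif sha256(p) != digest:
            print(f"CHANGED  {name}")  # edited since the manifest -- or silently corrupted

if __name__ == "__main__":
    # e.g.  python integrity_check.py build D:/ramdisk_images
    #       python integrity_check.py verify
    if sys.argv[1] == "build":
        build_manifest(sys.argv[2])
    else:
        verify()
```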
As for the Synology NAS I am using, I'm not actually sure whether it uses software RAID or hardware RAID, but I am inclined to believe it is software, since adding disks to the array and changing RAID5 to RAID6 was painless, didn't require a complete and total backup, and I never lost any function of the NAS while it was going on.
RAID 10 to me is like a RAID5 that's more expensive, with very little benefit. And again, this is a home setup with some robustness. RAID6 and RAID10 require a four-disk minimum, and RAID10 might make more sense in a four-disk config. We're talking about double the minimum here.
My goal isn't to fill up the entire storage space, but provision the hardware in such a way that I can use it for years and years to come without major infrastructure changes or redesigns.
Then you should have posted so that was clearly understood. Sarcasm doesn't translate well to text.
Yes, I know RAID 1 can only handle 2 hard drives. That was the point. I was being facetious.
Your fantasy system of 4 separate RAID 1s is a ludicrous waste. It also has less redundancy than RAID 6: a two-drive failure on a single mirror pair could lead to catastrophic failure, whereas with RAID 6 you just rebuild regardless of which 2 drives fail.
You then go on to suggest striping your 4 separate RAID 1s with RAID 0. So a massive system overhead for the controller. Your fantasy system would be really slow.
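To put a number on the redundancy comparison, here's a quick sketch (it assumes the 8 drives are arranged as 4 mirror pairs striped together, i.e. RAID 10, and counts which simultaneous two-drive failure combinations are fatal):

```python
# Count which simultaneous two-drive failures are fatal for 8 drives
# as RAID 10 (4 mirror pairs, striped) versus RAID 6 (any 2 drives may fail).
from itertools import combinations

mirror_pairs = {(0, 1), (2, 3), (4, 5), (6, 7)}        # assumed RAID 10 layout
two_drive_failures = list(combinations(range(8), 2))   # 28 possible combinations

# RAID 10 only dies when both halves of the same mirror pair fail together.
fatal_raid10 = sum(1 for failed in two_drive_failures if failed in mirror_pairs)

print(f"RAID 10: {fatal_raid10}/{len(two_drive_failures)} two-drive failures are fatal "
      f"(~{fatal_raid10 / len(two_drive_failures):.0%})")   # 4/28, about 14%
print(f"RAID 6 : 0/{len(two_drive_failures)} two-drive failures are fatal")
```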
Finally, I wasn't talking about parity checks for the disks; I was talking about data checking and disk data integrity. You say you have an old Dell R710 (I use Supermicro workstations myself). I wouldn't touch old Dell kit; even though you can pick up an old Dell R710 on eBay for £150, it isn't worth it, as Dell are dreadful to try to upgrade.
If you bothered reading the manual for your PERC 6 – that's the RAID controller you apparently have in your Dell R710 – you would see that even that budget RAID card has Patrol Read, which checks data on the disks. More advanced RAID cards have even better data checking.
You’re correct it isn’t a competition. You will mostly be ignored; But you will be corrected, when you give other people dreadful advice.