Microsoft iSCSI Software Initiator

Discussion in 'Windows XP Networking' started by secpar, Jan 31, 2022.

  1. secpar

    Joined:
    Jan 31, 2020
    Messages:
    166
    Likes Received:
    89
    Anyone have a copy of the "Microsoft iSCSI Software Initiator Version 2.08" or know where it can be downloaded from?

    M$ website doesn't offer it anymore.
     
    secpar, Jan 31, 2022
    #1
  2. Madeleine Takam

    Joined:
    Feb 2, 2020
    Messages:
    91
    Likes Received:
    76
    Location:
    Alderley Edge
    Madeleine Takam, Jan 31, 2022
    #2
    Mr Siamese Cat and secpar like this.
  3. secpar

    Joined:
    Jan 31, 2020
    Messages:
    166
    Likes Received:
    89
    secpar, Jan 31, 2022
    #3
  4. Samir

    Joined:
    Apr 1, 2021
    Messages:
    276
    Likes Received:
    108
    Location:
    HSV and SFO
    Sweet--now I can play with iscsi on my nas units. :D
     
    Samir, Feb 5, 2022
    #4
  5. secpar

    Joined:
    Jan 31, 2020
    Messages:
    166
    Likes Received:
    89
    Same here. RS1221!
     
    secpar, Feb 5, 2022
    #5
    Samir likes this.
  6. Samir

    Joined:
    Apr 1, 2021
    Messages:
    276
    Likes Received:
    108
    Location:
    HSV and SFO
    Nice! I've got about 20 different tower units. :D
     
    Samir, Feb 6, 2022
    #6
  7. secpar

    Joined:
    Jan 31, 2020
    Messages:
    166
    Likes Received:
    89
    I wanted to see how much faster I could make a storage link between my XP computer and the NAS. Still trying to determine that.

    I just recently upgraded storage in my NVR, maxed out with 8x 8TB drives in a raid6.

    The NVR had a couple of drives that were going to move into the NAS. I just recently added those two drives to the NAS, plus another drive I had bought late last year.

    Now the NAS and the NVR are both populated with 8x 8TB drives in a RAID6.

    More storage than I'll ever need, but that's kind of the point of a NAS.

    The next near-term upgrade would be a couple of high-capacity NVMe sticks for caching. And much, much further down the line would be upgrading to 8TB SSDs to replace the mechanical drives.

    I just recently figured out how to get the iSCSI stuff working. Got one volume setup.
     
    secpar, Feb 6, 2022
    #7
    Madeleine Takam and Samir like this.
  8. Samir

    Joined:
    Apr 1, 2021
    Messages:
    276
    Likes Received:
    108
    Location:
    HSV and SFO
    Yeah, iSCSI isn't necessarily faster, just a different use case: it lets you use a network device as a block device.

    I wouldn't use RAID6 if you have enough space anyway; I'd use RAID0+1 for striping and redundancy. I stopped using RAID5 back in the 1990s, as bit rot started to rear its ugly head even then. Today, the only thing that can help with bit-rot data corruption is ZFS.
     
    Samir, Feb 7, 2022
    #8
  9. secpar

    Joined:
    Jan 31, 2020
    Messages:
    166
    Likes Received:
    89
    After a few tests, iSCSI is nominally faster, but not by much. On my 10GbE connection, I can touch 10% utilization of the link. With just SMB shares, I get 6-7% tops on the regular.
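
    For what it's worth, here's the rough conversion of those utilization figures into throughput. This is just a back-of-the-envelope Python sketch assuming the nominal 10GbE line rate and ignoring protocol overhead, so treat the numbers as ballpark only:

    Code:
        # Rough conversion of 10GbE link utilization to MB/s (decimal units).
        LINK_GBPS = 10                    # nominal 10GbE line rate
        LINK_MBPS = LINK_GBPS * 1000 / 8  # = 1250 MB/s theoretical ceiling

        for label, util in [("iSCSI", 0.10), ("SMB low", 0.06), ("SMB high", 0.07)]:
            print(f"{label:8s} ~{util * LINK_MBPS:.0f} MB/s")

        # iSCSI    ~125 MB/s
        # SMB low  ~75 MB/s
        # SMB high ~88 MB/s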

    I would not have thought to use RAID6 until it was mentioned to me by my dad, who at one point was a network engineer. He's smart about a lot of things, and not so smart about others.

    Anyway, the RAID6 of 8x 8TB Enterprise grade drives (Exos 8TB) gives me extra room for failure. I wouldn't have considered it before, but I'm already way above my storage needs. ~43TB is still way more than I will ever need or use.

    Also, H.265 encoding makes the videos use much less space.
     
    secpar, Feb 7, 2022
    #9
    Madeleine Takam and Samir like this.
  10. Samir

    Joined:
    Apr 1, 2021
    Messages:
    276
    Likes Received:
    108
    Location:
    HSV and SFO
    Yep, that's what I would have expected. The only problem with iSCSI is that you can't just use it like a normal network share.

    RAID6 helps a bit, but it still only takes a 2-drive failure to take out an array. In a 0+1, you would need both drives in a RAID1 pair to fail at the same time to take down the array. Otherwise, you could lose up to half the drives, since you could lose one drive in each RAID1 pair that is striped to make the RAID0.
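
    To put some rough numbers on that, here's a quick Python sketch. It assumes 8 drives arranged as 4 fixed mirrored pairs (the specific pairing is made up for illustration) and simply counts which simultaneous 2-drive failures would take the striped set down:

    Code:
        from itertools import combinations

        # 8 drives as 4 RAID1 mirrored pairs, striped together (RAID1+0/0+1).
        pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]

        # A 2-drive failure is only fatal if both failed drives sit in the
        # same mirrored pair; otherwise every pair still has a live copy.
        all_failures = list(combinations(range(8), 2))
        fatal = [c for c in all_failures if c in pairs]

        print(len(all_failures))  # 28 possible simultaneous 2-drive failures
        print(len(fatal))         # 4 of them (both halves of one pair) are fatal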

    Nice drives. I have 2x of them. :D As well as ones by HGST and WD of all sorts of sizes. Enterprise drives are the best for sure. If you have that much free space, I would consider moving to a RAID0+1 or even just straight RAID1 setup. RAID1 is the ultimate for safety as far as RAIDs go.
     
    Samir, Feb 8, 2022
    #10
  11. Madeleine Takam

    Joined:
    Feb 2, 2020
    Messages:
    91
    Likes Received:
    76
    Location:
    Alderley Edge

    RAID 6 is fine. Banks use it, Universities use it and most importantly industry uses it. Any card with RAID 6 is usually good and will have decent onboard ECC memory. Any card without RAID 6 is "welfare RAID". Although I know Mr Siamese Cat has used RAID 5 for 25 years without one bit loss even with the option for RAID 6. ECC memory and onboard Battery if you can get one and RAID 6 is superb. RAID0+1 is something they include for free on really cheap cards and is pointless.
     
    Madeleine Takam, Feb 8, 2022
    #11
    Mr Siamese Cat likes this.
  12. Samir

    Joined:
    Apr 1, 2021
    Messages:
    276
    Likes Received:
    108
    Location:
    HSV and SFO
    If drive sizes had not gotten so big, I would agree with you. But today, the likelihood of a second (and third) drive failure on large arrays makes RAID6 by itself not so desirable.

    ECC RAM helps, but the entire SAS bus is much more robust than SATA to begin with, so it's not a matter of data rotting in transit, but on the disk itself. And no RAID system will detect this and properly correct for it, since it doesn't know which bits are the correct ones unless it is scrubbing the disks, which again is wear and tear. RAID1 is brain-dead simple, yes, but it is also the most effective. RAID0 helps for performance, but that's about it. The other parity varieties of RAID--3, 4, 5--really shouldn't be used except for high availability, and even then, just to keep something online before a second storage server is brought online and cut over.
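
    A toy illustration of what I mean, using plain single XOR parity (the RAID5 case) with made-up blocks. RAID6's second parity makes the math more involved, so take this purely as the single-parity picture:

    Code:
        from functools import reduce

        # One stripe: four data blocks plus an XOR parity block (RAID5-style).
        data = [0b1010, 0b1100, 0b0110, 0b0011]
        parity = reduce(lambda a, b: a ^ b, data)  # written when the stripe is written

        # Simulate silent bit rot: one data block flips a bit, no I/O error reported.
        data[2] ^= 0b0100

        # A scrub recomputes the XOR and sees a mismatch, so the rot is detectable...
        print(reduce(lambda a, b: a ^ b, data) != parity)  # True: stripe inconsistent

        # ...but the XOR alone can't say which of the five blocks is the bad one,
        # so there is nothing safe to "repair" without an extra per-block checksum.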

    No one is building production RAID setups for reliability anymore, just as this stop-gap before another server is brought online. Parity won't save you, even with 2x parity drives. And this is after having worked with RAID since the 1990s in the SCSI days. I've seen it work, I've seen it fail, and I've seen its strengths and weaknesses. Parity RAID is no match for the areal density of today's drives.
     
    Samir, Feb 8, 2022
    #12
  13. Madeleine Takam

    Joined:
    Feb 2, 2020
    Messages:
    91
    Likes Received:
    76
    Location:
    Alderley Edge

    I will cover your response point by point:

    1. If drive sizes had not have gotten so big, I would agree with you. But today, the likelihood of a second (and third) drive failure on large sized arrays makes RAID6 by itself not so desirable.

    Please explain why. Perhaps you could provide links to a reputable site written by a computer engineer. Try using Samsung EVO professional or Seagate Ironwolf drives.

    2. ECC ram helps, but the entire SAS bus is much more robust than SATA to begin with

    I am talking about the up to 2GB of ECC RAM on the RAID card itself. It isn't a matter of it helping; it is intrinsic, along with the battery. SAS was initially a term coined to refer to SATA and SCSI combined. In reality it's just more expensive and uses the double cable in one connection. It is always built better and more reliable.

    3. “so it's not a matter of data rotting in transit, but on the disk itself. And no raid system will detect this and property correct for it since it doesn't know which bits are the correct ones unless it is scrubbing the disks, which again is wear and tear.”

    I am sorry, this is just wrong in every way. For instance, a really cheap card like an Areca ARC-1160/ML will check the disk data for integrity. Err, you see, that's the point of non-"welfare RAID".

    4. RAID1 is brain dead simple, yes, but it is also the most effective. RAID0 helps for performance, but that's about it. All of the other parity varieties of RAID--3,4,5--all really shouldn't be used except for high availability, and even then, just to keep something online before a second storage server is brought online and cut over.

    Wow one learns something new every day. All that expensive kit that all those companies bought was useless. They should have all bought cheap cards that have RAID 1, RAID 0 and RAID 0-1 from AliExpress for $10. They wasted hundreds of thousands of dollars on expensive RAID kit.

    5. No one is building production RAID setups for reliability anymore, just this stop-gap before another server is brought online. Parity won't save you, even with 2x parity drives. And this is after I've worked with RAID from the 1990s in the SCSI days. I've seen it work, I've seen it fail, and I've seen it's strengths and weaknesses. Parity RAID is no match for today's drives areal density.

    Hmm – please provide some links to professional Computing and Engineering sites for this. I will pass on your useful information next time I am seconded to BAE Systems, GlaxoSmithKline, Jaguar or JCB. I am sure their IT departments will want to pick your brain on some of the important points you have raised.

    I do remember, a few years ago, people twittering on about huge drives causing data loss in hobby magazines. But at the same time, people who worked professionally with RAID (it was almost always SCSI, though some companies did use IDE) knew that decent RAID cards checked the disks against one another. Lloyds never lost any data in the days when they were spanning hundreds of drives together, because they were checked against each other.

    I should imagine that very large companies use multiple servers simply because of ease. You just slot them in and they work; well, you know, just like RAID. However, RAID 6 for the home or a small company is excellent.

    Nowadays I am using 20TB Ironwolfs in RAID 6 and I haven’t had one instance of data loss. I have never come across anyone in any company who has had data loss due to large disks. I think the story was overblown hysteria for something to say.

    I joined this site due to data loss and corrupted data, and Mr Siamese Cat helped me out. My corrupted data was entirely due to the greed of software developers and the cheapness of network cards. I will leave the link to that discussion below. It had nothing to do with RAID or large disks. I would seriously check and read the thread below. You will see that using servers as a RAID system and relying on a network can have its own pitfalls.

    https://www.xpforums.com/threads/fi...went-over-to-thunderbird.934056/#post-3264432
     
    Madeleine Takam, Feb 8, 2022
    #13
    Mr Siamese Cat likes this.
  14. Mr Siamese Cat

    Joined:
    Nov 22, 2019
    Messages:
    99
    Likes Received:
    45
    Location:
    Glossop UK
    So let's summarise this discussion. Secpar has an 8-drive system in RAID 6 made of 8x 8TB enterprise-grade drives (Exos 8TB).

    Yet Samir recommends going to RAID 0+1 (What I would call RAID 1+0 or RAID 10) or even RAID 1. With 8 drives!!!

    Madeleine sensibly counter recommends sticking with RAID 6.

    I would stick with RAID 6 Secpar. It is far more reliable than RAID 1 or RAID 10

    Even low-priced decent cards check hard disk data when it is mirrored on a RAID 5 or 6. You can do this manually at boot, set it to run in the background with an alarm, or, like most IT professionals, watch and handle it over the LAN or with software on the server. It is only the really cheapo cards that don't have an option for a LAN connection. I used to use mine with software on the operating system, but since I have never had a fall-over in a quarter of a century, I just let the hardware monitor do its stuff now.

    However, I do note that I once worked for a company where the IT manager was a complete wanker, so occasionally we used to pull 2 hard drives out and quickly reinsert them. As you can imagine, with a 2-drive fall-over the alarm went off and it used to send him into a panic, so they used to clear the office, and we used to go to the pub for a drink and a smoke for an hour while the system rebuilt.

    Regarding those servers you saw people install in companies, Samir: those black boxes you watch them heft onto the racks or cabinets all have RAID 6 inside, to deal directly with the hard disks. The server is then used as redundant storage within the server farm. All those Supermicros or Dells have RAID cards inside, Samir. I have never once seen any server, whether it was a 10-disk, 14-disk, or 20-disk setup, configured as RAID 10 or RAID 1. Even the small 1U chassis are sporting 8 drives and are set up as RAID 6.

    Let me help you with some simple mathematics Samir.

    We will use Secpar’s system as an example

    He has 8 drives at 8TB each.

    Using RAID 6 that gives him 48TB and after formatting 43TB.

    Using RAID 1 he gets 8TB after formatting about 7.5TB.

    That’s a lot of money to spend on 7.5TB

    Now I admit, it may be that Secpar has installed his system in a high-voltage electricity switching station with pitted contact plates, in an area renowned for electrical storms. That is the only place I could think of for a 7-drive failure. However, in this case I would recommend a Faraday cage and perhaps a rethink on the pros and cons of cheap accommodation.
     
    Mr Siamese Cat, Feb 8, 2022
    #14
    Madeleine Takam likes this.
  15. Samir

    Joined:
    Apr 1, 2021
    Messages:
    276
    Likes Received:
    108
    Location:
    HSV and SFO
    1. I've already explained this before. Areal densities and bit rot--look it up and do some reading on how this works.

    2. RAM is actually separate from the battery in most cards, and in write-through mode you don't even use the battery. Again, if you had some experience with these devices, I wouldn't have to spell this out. SAS stands for 'Serial Attached SCSI' and has nothing to do with SATA except that SATA drives can work on a SAS controller by design.

    3. No it is not. Again, do some reading on how parity based RAID systems work so you can understand what I wrote.

    4. That's not what I said, and type of RAID capabilities don't imply that a card is more or less superior to another, although variations that are RAIDx+y are typically only found on quality cards.

    5. Do some reading on 'homelabbing' where people take enterprise equipment and use it at home for enterprise level stuff. Serve the Home is a good place to start. Be careful with your snark because your ignorance is showing.

    You can try to remember and imagine whatever you want--I've actually worked with these systems and built them so I don't have to imagine anything.

    Good for you and your RAID6--the day you have a multiple-drive failure event, all your data is toast. Real redundancy is in RAID1, not a parity RAID. And there is also the 3-2-1 rule (look it up if you need to).

    It's great that you were able to recover your corrupted data with some help from members on here. I'm not going to read the thread as I don't want to be slapping my forehead anymore.
     
    Samir, Feb 8, 2022
    #15
  16. Samir

    Joined:
    Apr 1, 2021
    Messages:
    276
    Likes Received:
    108
    Location:
    HSV and SFO
    First and foremost, this is not a competition. And RAID6 by its very nature and design will never beat a RAID1 or RAID 10 system in terms of speed or failure resistance. Again, study the RAID levels and how they work for a refresher.

    The parity checks by RAID parity systems will never catch bit rot. Again, read how parity RAID works and how bit rot works to understand why what I'm saying is correct.

    I didn't watch people install these black boxes--I own them and have them at both my companies and at home. In my R710 I have a nice little 8-drive RAID0+1 setup for speed and reliability. Anyone who works with this type of equipment regularly won't be using RAID6 across the entire array, and certainly not for boot, where most people use RAID1.

    Holy cow, your math is off on RAID1. 8x drives is 4x pairs, as RAID1 cannot be used with more than 2 drives. So that's 4x 8TB of usable space, or 32TB--exactly 1/2 of the full raw capacity of 64TB. This is expected, as RAID1 halves the available storage. The only reason I recommended RAID1 is because the OP doesn't need the current 48TB of storage in RAID6. And the RAID1 pairs can be striped with an additional RAID0, hence RAID1+0/0+1. This would offer a combination of speed and redundancy that he will not get with the double-parity RAID6 setup, at the minor cost of storage space that is currently going unused.
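
    Just to spell the arithmetic out, here's a quick Python sketch using decimal TB and ignoring filesystem/formatting overhead (so the real usable figures come out a bit lower, like the ~43TB secpar sees on his RAID6):

    Code:
        # Usable capacity for 8x 8TB drives under different layouts (decimal TB,
        # before filesystem overhead).
        DRIVES, SIZE_TB = 8, 8

        raw        = DRIVES * SIZE_TB        # 64 TB of raw disk
        raid6      = (DRIVES - 2) * SIZE_TB  # 48 TB: two drives' worth of parity
        raid10     = raw // 2                # 32 TB: every block mirrored once
        raid1_pair = SIZE_TB                 # 8 TB per 2-drive mirrored pair

        print(raw, raid6, raid10, raid1_pair)  # 64 48 32 8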

    It's obvious that everything I'm posting is going over heads here. And that's okay, as it's not possible to really know this stuff until you personally work with the equipment. But don't spread false ideas--do your homework.
     
    Samir, Feb 8, 2022
    #16
  17. Mr Siamese Cat

    Joined:
    Nov 22, 2019
    Messages:
    99
    Likes Received:
    45
    Location:
    Glossop UK
    Yes I know RAID 1 can only handle 2 hard drives. That was the point. I was being facetious.

    Your fantasy system of 4 separate RAID 1s is a ludicrous waste. It also has less redundancy than RAID 6: a two-drive failure on a single stripe could lead to catastrophic failure, whereas with RAID 6 you just rebuild regardless of which 2 drives fail.

    You then go on to suggest striping your 4 separate RAID 1s with RAID 0, so a massive system overhead for the controller. Your fantasy system would be really slow.

    Finally, I wasn't talking about parity checks for the disks; I was talking about data checking and disk data integrity. You say you have an old Dell R710 (I use Supermicro workstations myself). I wouldn't touch old Dell kit: even though you can pick up an old Dell R710 on eBay for £150, it isn't worth it, as Dell are dreadful to try and upgrade.

    If you bothered reading the manual for your PERC 6 (that's the RAID controller you apparently have in your Dell R710), you would see that even that budget RAID card has Patrol Read, which checks data on the disks. More advanced RAID cards have even better data checking.

    You're correct, it isn't a competition. You will mostly be ignored, but you will be corrected when you give other people dreadful advice.
     
    Mr Siamese Cat, Feb 8, 2022
    #17
    Madeleine Takam likes this.
  18. secpar

    Joined:
    Jan 31, 2020
    Messages:
    166
    Likes Received:
    89
    Neat to see you all having a good conversation.

    My needs are simple, and this system is built way beyond my needs.

    My order of thinking for RAID configs are:
    1. RAID6, if I have enough.
    2. RAID5, if I have need for that extra one drive of performance.
    3. RAID1, if I have two drives with important data.
    4. RAID0, if I have two or more drives and an external backup with greater or equal capacity.

    Now again, this is just me, but RAID 10 seemed like overkill compared to a RAID0 setup with a backup. I feel the same could be achieved with a RAID5 as with a RAID10 config, while keeping things simple.

    Before SSDs were mainstream, I took 4x Seagate Hybrid 2.5" drives that were 500GB each (with 4 or 8GB of SSD cache) and combined them into a RAID0. I think this was back in 2011. Never encountered an issue with performance or lost information, and had no backups. I exercised more confidence in the technology back then, lol. I had the performance marks of an SSD, but with 2TB of capacity, and back then 2TB SSDs were super expensive.

    A side update (the conversation kind of went sideways): the iSCSI link (according to CrystalDiskMark) gets about the same performance as I get out of a USB3 Samsung flash drive I keep plugged into a USB 3.0 slot for images and backups.

    I have only ever had disk problems with Seagate's 3TB drives from way back, external hard drives, external hard drives removed from their enclosures, flash drives wearing out, and SSDs losing some integrity as standalone disks.

    I reclaimed two Samsung 1TB desktop HDDs from my NVR, which I've had for nearly a decade now, and am using them in a RAID0 on my desktop for video processing/conversion and short-term storage. I think I paid about $100 for each of them at the time, and they're still going strong!

    I have periodic check disks scheduled throughout the week to help mitigate the issue with SSD integrity, which has been helpful. Integrity issues crept in when using an SSD partition in conjunction with a software-created RAM disk, where the image that the RAM disk saves to would get corrupted over a 6-month period. Check disks on the SSD partitions and the RAM disk have eliminated the problem.

    As for the Synology NAS I am using, I'm not actually sure if it uses software RAID or hardware RAID, but I am inclined to believe it is software, since adding disks to the array and changing RAID5 to RAID6 was painless, didn't require a complete and total backup, and I never lost any function of the NAS while it was going on.

    RAID 10 to me is like a RAID5 that's more expensive, with very little benefit. And again, this is a home setup with some robustness. RAID6 and RAID10 both require a four-disk minimum, and RAID10 might make more sense in a four-disk config. We're talking about double the minimum here.

    My goal isn't to fill up the entire storage space, but provision the hardware in such a way that I can use it for years and years to come without major infrastructure changes or redesigns.
     
    Last edited: Feb 8, 2022
    secpar, Feb 8, 2022
    #18
    Samir and Madeleine Takam like this.
  19. Madeleine Takam

    Joined:
    Feb 2, 2020
    Messages:
    91
    Likes Received:
    76
    Location:
    Alderley Edge

    Hi Secpar

    Hopefully I will be able to give you some advice from my experience in the industrial chemical, manufacturing, architectural and design engineering industries, and from what their IT professionals say, without being rebutted by someone. Luckily, in the real world I don't get people shouting at me that I am an idiot who doesn't know what she is talking about.

    If you have it, stick with RAID 6; it has nearly the performance of RAID 5 and is rock solid. But most importantly, and above all else, if a card has RAID 6 it is usually good quality.

    Obviously branded cards are the best way to go. You and I have had long discussions in the past about the operating parameters of various brands of card, and some expensive ones can be a disappointment. Adaptec used to be the brand to go with until Steve Jobs and Host RAID destroyed them, possibly along with all the lawsuits over failing systems caused by Apple's greed.

    I use HighPoint and LSI. I used to use Areca, but got tired of their oddball BIOS menus and nomenclature. The Cat, I believe, uses Areca and Adaptec! (But won't have anything Apple in house or office.)

    As for the choice of disks and why: your Exos are fine in my opinion. However, here is a little-known company with a very good overview of disks and what you should choose for a particular purpose.

    https://www.intel.co.uk/content/www...ry-storage/solid-state-drives/ssd-vs-hdd.html

    Funnily enough, I have a vague recollection that iSCSI was actually first implemented by Cisco (a company I have worked for) to help out JP Morgan, way back in 1996 or 1997, to replace the vast RAID 1 arrays they used. It was recognised that you had better data integrity when you spread the mirroring of drives across numerous states. As you will probably realise (a side note, although not common in the home or in medium-sized businesses), RAID 1 can theoretically have thousands of disk members, all mirror copies. And in actuality this is exactly how it was implemented in earlier days. When you are a multi-billion-dollar bank dealing with billion-dollar transactions every second, you really do want more than 2 copies of live real-time data. I believe you can still pick up some older RAID cards that support multiple members in a RAID 1.

    They made me read this for work and we had to take a test on it for qualifications. It is OK, but laughably basic. To be honest, not my field though.

    https://www.snia.org/sites/default/files/ESF/Evolution_of_iSCSI_Final.pdf

    Your data integrity issue is something that would give me a heart attack. I haven't used a Synology NAS, though. But one rule of thumb is: don't use software RAID. I run workstations, so it is pointless for me to have a discrete NAS setup.

    I have had one problem with Data integrity and that was network and software company greed related. Mr Siamese Cat solved the problem for me. It was seriously obscure, but devastating until solved.

    It's in the thread below, but way, way down, where I first ask the Cat the direct question. The answer was just so obscure and weird, but it solved everything. Read through and be aware, as you are going to be running iSCSI. And just remember Checksum Offload; it saved my bacon.

    https://www.xpforums.com/threads/fi...went-over-to-thunderbird.934056/#post-3264432

    All the best

    Madu
     
    Madeleine Takam, Feb 10, 2022
    #19
  20. Samir

    Joined:
    Apr 1, 2021
    Messages:
    276
    Likes Received:
    108
    Location:
    HSV and SFO
    Then you should have posted so that was clearly understood. Sarcasm doesn't translate well to text.

    You're living the fantasy if you think a RAID6 of the OPs size will survive a rebuild without another drive failure. Again, do your homework.

    Hmmmm...slow...so are SSD speeds slow? Probably not. Again, do your homework.

    Riiiiiiight. While Supermicros are great (I have several of those as well), Dells and HPs are the workhorses that power the whole world. Again, do your homework.

    If you knew as much about controllers as you think, you'd know you can put a different controller in. Regardless of the card, though, no standard RAID will ever discover bit rot; only ZFS can. Again, do your homework.

    You treat it as one. And the truth is out there if you seek it out. Your ignorance on this topic is blatantly obvious to anyone that knows the truth.
     
    Samir, Feb 13, 2022
    #20