S2D Failover Cluster Permissions

I have just set up a new 3-node cluster with S2D, using a storage pool of 6 drives per node. Whenever I reboot any of the nodes, one or more of that node's drives become detached from the pool and are listed as "Communication Failure", and the only way to reconnect them is to unplug and reconnect the SATA cable on each drive.
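
For reference, this is roughly what I run to check the state after a reboot (standard Storage module cmdlets; I have left the output out):

# Physical disks - the detached drives are the ones reporting the communication failure here
Get-PhysicalDisk | Sort-Object FriendlyName | Format-Table FriendlyName, OperationalStatus, HealthStatus, Usage

# Pool and virtual disk health
Get-StoragePool -IsPrimordial $false | Format-Table FriendlyName, OperationalStatus, HealthStatus
Get-VirtualDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus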

I have found a suggested fix: change a property called IsManualAttach from TRUE to FALSE, as it is currently set to TRUE on both volumes. However, when I run the following PowerShell command as admin, I get Access Denied errors:

Set-VirtualDisk -FriendlyName "multiresilient" -IsManualAttach $false
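
For what it's worth, this is how I confirmed that IsManualAttach is currently set to True on both virtual disks:

Get-VirtualDisk | Select-Object FriendlyName, IsManualAttach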

I am running PowerShell as mydomain\administrator, and also as the local administrator, with the same outcome. Under the Security tab in Active Directory Users and Computers on the domain controller I have given the three cluster node computer objects and the cluster name object full (Allow all) permissions for Everyone, and each node is a member of the domain run by that DC.
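
In case it is relevant, this is how I checked which node currently owns the clustered pool (FailoverClusters module; the resource name is just whatever the wizard generated on my cluster):

# Find the cluster resource for the storage pool and which node currently owns it
Get-ClusterResource | Where-Object ResourceType -eq "Storage Pool" | Format-Table Name, State, OwnerNode, OwnerGroup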

Here is the exact output I get when I run this PowerShell command:

set-virtualdisk : Access denied

Extended information: Access is denied.

Recommended Actions:
- Check if you have the necessary privileges to perform the operation.
- Perform the operation from Failover Cluster Manager if the resource is clustered.

Activity ID: {a7a8b0da-9b4a-4d53-b3e4-41b195339e44}
At line:1 char:1
+ set-virtualdisk -FriendlyName "multiresilient" -ismanualattach $false
    + CategoryInfo          : PermissionDenied: (StorageWMI:ROOT/Microsoft/..._StorageCmdlets) [Set-VirtualDisk], CimException
    + FullyQualifiedErrorId : StorageWMI 40001,Set-VirtualDisk

I am stumped as to how to fix this. If there is a way to edit the value of IsManualAttach through a registry edit or a GUI, I can't seem to find it. The best solution would be to solve the access denied problem itself.
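
One thing I have not tried yet, going off the error's suggestion to perform the operation from the cluster, is running the command on the node that currently owns the pool resource, something like this (untested sketch; "NODE1" is just a placeholder for whichever node the ownership check above reports):

# Run the change on the owner node of the clustered pool (untested)
Invoke-Command -ComputerName "NODE1" -ScriptBlock {
    Set-VirtualDisk -FriendlyName "multiresilient" -IsManualAttach $false
}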

Until I solve this I can't afford to shut down any of my cluster nodes, as doing so breaks my storage pool :( The whole purpose of a failover cluster is to keep running when a node goes down, and it does that, but it doesn't resume where it left off when the node returns to the cluster, because one or more of that node's drives get detached and stay that way until I manually reconnect them.
