[Guide] How to get DataCenter 9.x to recognize a "Cluster node" to take advantage of Exchange or Hyper-V Cluster backup jobs (Version 9.x)
In order for DataCenter 9.x to recognize your Exchange or Hyper-V Cluster, there are some items you will need to do outside of the DataCenter Web UI beforehand: the Cluster must already be defined in the Windows Failover Cluster Manager app, and the required "Clustered shared volume path" must be created and in place on a UNC share in your environment. To be clear, that path is where DC writes some miscellaneous cluster-node-related information, and it is independent / separate from the Windows Clustered Shared Volume (CSV). The Domain Admin user that you will assign as the Log On As account of the DC RCmd service, on every cluster member node that already has the DC client installed, must have Full Control read/write access to that share. All of this needs to be in place before a support session with us on the matter, so that DataCenter can recognize a node as a "Cluster node". The cluster can be an Exchange based cluster (DAG) or a Hyper-V based cluster; when the word "cluster" is mentioned it is synonymous with "DAG" in the Exchange case, because as far as DC is concerned the cluster setup is the same for Exchange and Hyper-V. This article shows how to do this in DataCenter 9.x and contains screenshots only for 9.x, but the pre-setup items are very similar in prior DC versions. The difference is when you go to add the "Cluster node" (which is a virtual node, as there is no "machine" behind it and no "Client" installed on it) as a new node in Nodes Management: DC 7 and 8 used to prompt right away if they detected that a Cluster node was being added, but 9.x no longer prompts like that. Instead you "Manage" it after the node is added, via a button that performs the Manage Cluster action, and the configuration can be edited and reused later without having to delete the Cluster node and re-add it, which is nice.
First of all it is important to know that all of your cluster member nodes and the "Cluster name" itself will need to be added as nodes inside the Nodes Management area of DataCenter. The "Cluster name" is the name defined for the Cluster when it was initially set up in the Microsoft Failover Cluster Manager (MSFCM) app, and it is normally an FQDN (Fully Qualified Domain Name) address; in this example and in the screenshots it is JF-HVCluster11.sv.novastor.local. That name is not an actual Windows computer, it is just the name of a computer object in AD that was created when the cluster name was created, so the node you add for it is not an actual Windows Server with any DataCenter software on it. Again, the "Cluster node" is just a virtual node, meaning no DC client software is installed or running on it, although I believe that virtual node does count as one hypervisor server license allocation. The Version displayed in Nodes Management when you expand that node will always be "N/A" (starting in v9.2.x), unlike the cluster member nodes, which report the actual version of the installed DC client. Likewise, in the Nodes Management list, the "Connected" column for any "Cluster node" will always show a green circle icon to indicate it is online (unless you delete the cluster in the Microsoft Failover Cluster Manager app), and the "Status" column will always show "ok" in green, because there is no client software on that virtual node and therefore no client version to check and report on. Note: I believe each "Cluster node" added in Nodes Management, even though it is a virtual node, counts as a single hypervisor node / seat in a Nodes based DC license allocation. This does not matter if your DC license is Storage Capacity model based; it only matters for the Nodes based license model, which limits you to X number of nodes in Nodes Management, so just be aware that each Cluster node you add will take up one license allocation slot.
Make sure to set the service Log On As setting for the "NovaStor DC Remote Command Executor" (RCmd) service to a Domain Admin who is also added to the local Administrators group on all of the Hyper-V cluster members or Exchange DAG cluster members (which have the DataCenter client installed, matching the Command Server version). Otherwise DataCenter will not be able to communicate via that service with Hyper-V or Exchange when they are clustered, because the default "Local System" account that all DC services use after installation does not have the rights; the Local System user will never be able to query the Exchange DAG/cluster to get all of the Exchange databases across all member nodes. The same Domain Admin user also needs Full Control read/write access on the "Clustered shared volume path", the UNC path that is required to be specified in the cluster setup of the "Cluster node" after you add that virtual cluster node in as a node (once it is detected as a "Cluster node" you will get a "Manage ..." (Cluster) button to click on in the expanded node details of that virtual "Cluster node"; the node will also show "This node is a cluster" with a green checkmark to the left of it if it is truly detected as a cluster, and the "Manage ..." button will not appear unless that is true). You can set the service account user via the Nodes Management dialog for all of the cluster member nodes. Here is a KB article that we have on how to set the services; you can do that in one of two ways, and the latter (manual) method will always work, versus doing it inside the DC Web UI (which sometimes cannot): https://support.novabackup.com/hc/en-us/articles/5477437382173--Guide-How-to-configure-the-4-x-DataCenter-9-x-services-to-specify-the-Log-On-As-user-that-matches-the-credentials-of-the-network-device-that-will-be-configured-as-a-network-location-for-backup

Check to make sure that the "NovaStor DC Remote Command Executor" service installed on the clustered nodes (DC client nodes) has its Log On As property (seen in the far right column of "services.msc" Windows Services) set to a Domain Admin user that has Full Control on the "Clustered shared volume path" (which is independent / separate from the Windows Clustered Shared Volume (CSV) and only utilized by DC) that you specify in the Add Cluster dialog behind the "Manage ..." button. In summary: the DC client must be installed on all of the cluster member nodes, the RCmd service on all of those member nodes must be set to Log On As the same Domain Admin user, that user must also have local admin access on the cluster member systems, and that same Domain Admin user must be added with Full Control read/write access to the "Clustered shared volume path" folder (which is always a UNC path, as that is the only way it can be specified in the setup). If those items are not all in place correctly, then the Cluster node and the cluster member nodes, as far as DC Nodes Management is concerned, are not working clustered nodes, and the status shown in the "Clustered" section at the bottom right of the expanded node details dialog will probably not reflect a working cluster.
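For reference only, here is a minimal, hedged sketch of that per-node pre-setup scripted with Python calling the standard Windows tools (sc.exe, net.exe, icacls.exe). The service key name, domain account, password, and share path are placeholders and assumptions, not values from this article; look up the actual RCmd service name in services.msc before adapting anything like this, and prefer the Nodes Management "Configure service account" function where it works.

```python
# Minimal sketch only: configure the DC RCmd service account and share permissions
# on one cluster member node. Every name below is a placeholder / assumption.
import subprocess
import time

RCMD_SERVICE = "NovaStorDCRCmd"               # placeholder: check the real service name in services.msc
DOMAIN_USER  = r"SV\dcbackupadmin"            # placeholder Domain Admin account
PASSWORD     = "********"                     # supply securely, do not hard-code in production
DC_SHARE     = r"\\fileserver\DCClusterInfo"  # placeholder "Clustered shared volume path" (UNC)

def run(cmd, check=True):
    """Run a Windows command, echoing it first."""
    print(">", " ".join(cmd))
    return subprocess.run(cmd, check=check)

# 1. Set the service Log On As to the Domain Admin user (instead of Local System).
run(["sc.exe", "config", RCMD_SERVICE, "obj=", DOMAIN_USER, "password=", PASSWORD])

# 2. Make sure the same user is in the local Administrators group on this member node
#    (this errors harmlessly if the user is already a member, hence check=False).
run(["net", "localgroup", "Administrators", DOMAIN_USER, "/add"], check=False)

# 3. Grant the same user Full Control (inherited) on the clustered shared volume path.
run(["icacls", DC_SHARE, "/grant", f"{DOMAIN_USER}:(OI)(CI)F"])

# 4. Restart the service so the new Log On As account takes effect.
run(["sc.exe", "stop", RCMD_SERVICE], check=False)  # may fail if it was not running
time.sleep(5)
run(["sc.exe", "start", RCMD_SERVICE])
```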
When you go to add a clustered node in as a new node to DataCenter 9 you will no longer see an additional dialog right away to configure the cluster, as prior versions did show as soon as the Cluster node was detected being added; see https://support.novabackup.com/hc/en-us/articles/360006790654--Manual-System-Nodes-Management. If everything is done properly, when you add the Cluster node to DataCenter as a new node it will detect that it is a cluster and provide a "Manage ..." (Cluster) button to click on at the bottom right after the node is added. This is where the UNC path for the one required "Clustered shared volume path" setting is specified, and where you see all of the cluster member servers (DC clients that are part of the Exchange or Hyper-V cluster) listed in the "Cluster nodes" section, as long as they have the DC client installed that matches the Command Server version and they all have the RCmd service Log On As setting defined with the same single Domain Admin user.
Notice: There are two ways to set the required DataCenter services to run as a particular user that has full read/write access to a network share path, instead of the default user the services are installed with, which is always "Local System". The recommended and preferred method is to edit each backup server enabled node directly via Nodes Management and use the "Configure service account" function there to specify the user, which automatically sets all of the required services to run as that user. Alternatively you can set the Log On As value manually by editing each of the services on each backup server node via Windows Services (services.msc). It is always recommended that you first try the "Configure service account" function in Nodes Management for each backup server enabled node that needs it, and only if that does not work continue with the manual method covered in the services how-to article linked in this guide.
In version 9.x you will use the DC Web UI to go to Nodes Management, and in the nodes list locate your "Cluster node" (not a "cluster member node", but the actual Cluster name that you defined when you set up the cluster using the Microsoft Failover Cluster Manager (MSFCM) app in Windows). If all of the required pre-setup steps are done (detailed further below), first check whether this is in fact detected as the "Cluster node": if so, "This node is a cluster" shows in green at the bottom right of the dialog, and a "Manage ..." (Cluster) button is displayed, which lets you see the cluster member nodes as well as define the "Clustered shared volume path" UNC setting among other things. That "Manage ..." (Cluster) button at the bottom right is where you configure the "Cluster node", provided everything is done properly in the cluster configuration (in Windows), all of the cluster members are in place, and the DC services are all set to the domain user as explained in this article, including the further info below. In the "Cluster info" section of the expanded "Cluster node" details it will show "This node is a cluster" at the bottom right of the expanded node (the screenshot is from DC 9.1.x, but it should look very similar in 9.2.x, our latest version series); if it does not, then it will not show the "Manage ..." (Cluster) button at the bottom right like this does show:
In the "Cluster info" section of the expanded "Cluster node", after clicking on the "Manage ..." (Cluster) button, it should show the "Add cluster" dialog (in v9.x) please verify the Cluster nodes show up there, and all of them show the "available" status (meaning that DC client node is detected as a Cluster member and the services are running currently and reporting back valid state) as well as "RCmd" = installed status (meaning that the DataCenter Remote Command Executor service, which all DC client or CmdSrv nodes install is listening and responding, this is the single DC service that for a Hyper-V Cluster at least (not sure for an Exchange DAG/cluster if that is true) 100% is required to be set for the Remote Command Executor service the Log On As setting assigned to a Domain Admin user, on that client node which is a member of the cluster) of the UNC path share and folder permissions are defined properly for the shared folder you had to specify when adding the "Manage ..." Cluster function inside the "Cluster node" node, you also have the ability to change many of the items on this dialog if you need to edit and save it again, or just view the information contained in that "Add Cluster" dialog:
In order for this to work you need to configure the Log On As setting of the DC Remote Command Executor (RCmd) service on all member nodes of the DAG/cluster to a Domain Admin user (the reason being that Local System and local admin accounts do not have enough permission to read Hyper-V cluster info; I am unsure about an Exchange DAG/cluster). That user must also be added to the local Administrators group on each member node itself (lusrmgr.msc), and the same domain admin user has to be added with Full Control permissions to the clustered shared volume folder; only a domain admin will work for this, it cannot be a local user. Then remove any of the member nodes that are already listed as nodes in Nodes Management, as it will not work otherwise. Once they are removed, add the single DAG/cluster host name in as a DataCenter node; it should be detected as a cluster node, and after the node is added the "Manage ..." (Cluster) button opens the dialog where you specify the clustered shared folder setting and see the list of all member nodes, which should look similar to what is shown in that KB article; if it does not, then let us know. You can retry adding the cluster host name, or retry with the fully qualified hostname with the domain appended to the end, and finally, if neither works, retry with the IPv4 address of the cluster. Again, when adding the clustered node make sure that all member nodes are removed from DC Nodes Management, as if they are nodes there, detection of the clustered node you are trying to add will not work.
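Before retrying with the short name, FQDN, or IPv4 address, it can help to confirm which of those forms the Command Server can actually resolve. A small sketch, using the example cluster name from this article (substitute your own):

```python
# Sketch: check which forms of the cluster name resolve from the Command Server.
# The names below are the example values from this article; substitute your own.
import socket

candidates = [
    "JF-HVCluster11",                     # short cluster name
    "JF-HVCluster11.sv.novastor.local",   # FQDN
]

for name in candidates:
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(name, None)})
        print(f"{name} resolves to {', '.join(addrs)}")
    except socket.gaierror as err:
        print(f"{name} does NOT resolve: {err}")

# If neither form resolves, fall back to adding the node by the cluster's IPv4 address.
```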
Here is the guide on which DC 9.x services to set and how to configure the Log On As service setting: https://support.novabackup.com/hc/en-us/articles/5477437382173--Guide-How-to-configure-the-4-x-DataCenter-9-x-services-to-specify-the-Log-On-As-user-that-matches-the-credentials-of-the-network-device-that-will-be-configured-as-a-network-location-for-backup
Opening the dialog
1. Go to System > Nodes Management.
⇨ The Nodes Management dialog box opens.
2. At the top right, click the [+] button to perform an Add Node.
⇨ The Add Node dialog box opens.
3. Enter the FQDN of the "Cluster" name that you already defined for your Cluster via the Microsoft Failover Cluster Manager (MSFCM) app in Windows; this is the name you defined for the Cluster itself. It has to be entered exactly as you defined it, as the server address here.
If it is a "Cluster node" then you will see indivcation of that later when expanding the node properties after it the "Cluster node" node is added in Nodes management, by expanding that saved node once again, and look at the very bottom right of the expanded details section of the node. If DataCenter did detect that the entered host name is a Cluster node then it will display "This node is a cluster" with a green checkmark to the left of that, to denote that, if it does not then something is not being detected properly here. Version 7 and 8 used to prompt to setup the Cluster if it was detected, but DataCenter 9.x leaves you to configure (Manage) that later after the node is added and saved. DataCenter 9.x no longer shows a prompt right away, like prior versions did show once the node you were adding was detected as a cluster node, at this point asking if the cluster configuration should be called. You can however configure the cluster after the node is added, via the "Manage ..." (Cluster) button at the very bottom of the expanded node details after the node is already added. Note: As long as it is detected in the Cluster section of the expanded node details showing "This node is a cluster" with a green checkmark to the left of it, that is. The "Manage ..." (Cluster) button will not be displayed in that section otherwise!
4. After the node is added and saved, and shows in the nodes list in Nodes Management, expand the node details for your "Cluster node", and at the bottom right of the expanded details look for "This node is a cluster" with a green checkmark to the left of it, which denotes that this node was in fact detected as a "Cluster node", meaning your Cluster itself (i.e. not a "cluster member node" but the "Cluster node", which is a virtual node with no "computer" or DC client behind it). If this node is detected and verified to be a "Cluster node", you will also see a "Manage ..." (Cluster) button at the very bottom right, underneath the "This node is a cluster" statement. Click on that "Manage ..." button.
⇨ The "Add Cluster" dialog box opens.
The "Add cluster" dialog displays various detected items to do with this "Cluster node" and it will show at the top the Cluster name and Domain name, below that in "Cluster nodes" section it will display all of the detected Cluster member nodes that are detected in the Cluster, as well as next to each Cluster member node it will show if the service is listening, the "available" status should be green if the system is up and running as a DC client node and it will show "RCmd = installed" status if the "NovaStor DC Remote Command Executor (RCmd)" service is listening per each node in the "Cluster nodes" section list, and if that Cluster member node is running and responding with a Domain Administrator user specified on the service Log On As setting, and below that in "Cluster resource information" it provides some address related information for the cluster such as IP address, DNS name, and at the bottom it allows you to define the required "Clustered shared volume path" setting (the Directory entered here must be accessible from all cluster nodes), which is always specified as a UNC path. In that case the UNC share is required to have the Domain Admin that you set the DC Remote Command Executor (RCMd) service Log On As setting to for the Domain Admin user to also have Full Control r/w access to that UNC path share (see notes in this article on how to do that).
5. The only required setting here is the "Clustered shared volume path" (which can be changed later if needed); it basically just holds some information regarding your DC cluster node configuration and should not contain very many files or much data. Please enter a UNC directory into the field. The directory must be entered in accordance with UNC conventions and must be accessible by ALL of the cluster member systems (a minimal sanity check sketch is shown after these steps).
6. Click on [Add] to save the entries and changes. Note: You can always click the "Manage ..." (Cluster) button again at any point to make future changes and click [Add] again to save them, or use that button just to review the configuration later and see which member nodes are being detected, for example after you add a new cluster member in Windows Failover Cluster Manager, install the DC client on it, and configure the Log On As setting for the RCmd service on that new client, etc.
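As referenced in step 5, here is a minimal sanity check sketch for the "Clustered shared volume path" before clicking [Add]. The path is a placeholder, the check probes access for whoever runs the script (ideally run it in a session as the same Domain Admin user the RCmd service logs on as), and it should be repeated from every cluster member node, since reachability from one machine does not prove reachability from all of them.

```python
# Sketch: basic sanity check of the "Clustered shared volume path" (UNC).
# The path is a placeholder; repeat this check on every cluster member node.
import os

DC_SHARE = r"\\fileserver\DCClusterInfo"   # placeholder UNC path

# It must be a UNC path, not a mapped drive letter or a local path.
if not DC_SHARE.startswith("\\\\"):
    raise SystemExit(f"{DC_SHARE} is not a UNC path")

# The path must exist and be reachable from this machine.
if not os.path.isdir(DC_SHARE):
    raise SystemExit(f"{DC_SHARE} is not reachable from this machine")

# Try a write/delete round trip to confirm the current account has read/write access.
probe = os.path.join(DC_SHARE, "dc_cluster_write_test.tmp")
with open(probe, "w") as f:
    f.write("test")
os.remove(probe)
print(f"{DC_SHARE} looks reachable and writable from this machine")
```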
Make sure that your target UNC path share, which could be a share on a NAS or a Windows share, has the share permissions set up for the SAME Domain Admin account you are using for the DC services on all of the cluster member nodes, and that the folder permissions inside the share grant that user Full Control (Read/Write). If the NAS is AD aware (Active Directory is where your Domain Admin user exists), add that same Domain Admin user to the permissions of the share on the NAS that you defined as your Backup Tier 1; if the NAS is not AD aware, you can create a local user on the NAS with the same username and password as the Domain Admin user used for the DC client services. Then retest the backup. You could also temporarily make the share wide open for permissions to allow everyone access to the NAS share, and retest it. You could also, for now, create a new Image Disk Pool and, instead of assigning the target directory as a UNC path, assign a local drive letter path to it on the Windows backup server node; make sure that local folder has the Domain Admin added with Full Control permissions, on the share as well as in the share folder permissions.
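As an illustration of how you might double check those folder permissions, the sketch below lists the ACL of the share folder with icacls and looks for the Domain Admin entry with Full Control. The user and path are placeholders, and on a non-Windows NAS the reported ACL may be limited or not meaningful.

```python
# Sketch: list the ACL of the share folder and check that the Domain Admin
# account has Full Control (F). The user and path below are placeholders.
import subprocess

DC_SHARE    = r"\\nas01\DCClusterInfo"   # placeholder share folder
DOMAIN_USER = r"SV\dcbackupadmin"        # placeholder Domain Admin account

# icacls prints one ACL entry per line, e.g. "SV\dcbackupadmin:(OI)(CI)(F)".
output = subprocess.run(["icacls", DC_SHARE], capture_output=True, text=True).stdout
print(output)

has_full_control = any(
    DOMAIN_USER.lower() in line.lower() and "(F)" in line
    for line in output.splitlines()
)
print("Full Control for", DOMAIN_USER, ":", "yes" if has_full_control else "NOT FOUND")
```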
In addition, confirm that all nodes are on the same DataCenter 9 version (the latest release is DC 9.2.6 as of 11/03/2022); once the CmdSrv node is upgraded you can go to Nodes Management, select all nodes, and run the Update nodes function there. You can verify the versions by viewing the "Updated" column in Nodes Management, or via Windows Control Panel > Programs & Features. You would also want to make sure that the "Cluster node" (the one node that does not report an up-to-date version when viewing its node details in Nodes Management; this display was fixed in the 9.2.x versions) has, inside the "Manage ..." (Cluster) function, the "Clustered shared volume path" defined as a UNC path, and that the same domain admin user is added to that share with full R/W permissions. Once that is in place, try creating the Exchange backup job again, choose the Cluster node as the node, select at least 2 Exchange databases that exist on that Exchange DAG on a single cluster member server, and run that backup job to see if it completes with success. If it fails, confirm that the Exchange database that failed is able to restore somewhere, such as to an alternate folder location, and if it can, then we will need the Diag info .zip files for the affected nodes, the Command Server, and the target backup server node for that backup job sent to us. The guide on how to collect and upload the Diag info .zip files in 9.x is here: support.novabackup.com/hc/en-us/articles/5303331067933--Guide-How-to-collect-individual-job-logs-and-the-Diagnostic-info-in-DataCenter-9-x.
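If you prefer to script the version check instead of opening Programs & Features on each node, the hedged sketch below lists installed programs from the Windows uninstall registry keys and filters on the product name; the name filter is an assumption, so adjust it to match whatever the DataCenter entry is actually called on your systems.

```python
# Sketch: list installed programs whose display name mentions NovaStor/DataCenter,
# as a scripted stand-in for checking Programs & Features on each node.
# The name filter is an assumption; adjust it to the actual product entry name.
import winreg

UNINSTALL_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

for path in UNINSTALL_KEYS:
    try:
        root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
    except OSError:
        continue
    for i in range(winreg.QueryInfoKey(root)[0]):
        try:
            sub = winreg.OpenKey(root, winreg.EnumKey(root, i))
            name, _ = winreg.QueryValueEx(sub, "DisplayName")
        except OSError:
            continue
        if "novastor" in name.lower() or "datacenter" in name.lower():
            try:
                version, _ = winreg.QueryValueEx(sub, "DisplayVersion")
            except OSError:
                version = "unknown"
            print(f"{name}: {version}")
```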