Hi @adam, I have started testing with clustered SQL on v3. Regarding .universal settings, this works really well across the cluster, but I need to understand how $Cache: variables work in this context.
Currently I have automation scripts that gather data and set $Cache: variables. These are available to the dashboard in the integrated environment, which is all good. But if the job runs on another node, is the cache variable updated only on that node, or across the cluster?
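For reference, my scripts follow roughly this pattern (the element ID and data-gathering step are illustrative, not my actual code):

```powershell
# Scheduled automation script - runs on whichever node picks up the job.
# $Cache: is the PowerShell Universal server-level cache scope.
$data = Get-Service | Select-Object -First 5 Name, Status   # illustrative data gathering
$Cache:ServiceData = $data

# Dashboard element in the integrated environment reads the same cache scope:
New-UDDynamic -Id 'element1' -Content {
    New-UDTable -Data $Cache:ServiceData
}
```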
In this scenario, if I then do a `Sync-PSUComponent -Id "element1"`, will that sync the element only in the local integrated environment (where the script ran), or is there a way to get it to sync across the cluster?
I am guessing the integrated environment is unique to each node. Would I then need to call endpoints on each of the other nodes so they gather their own data? If so, can we target a specific node to run the script? I guess this could be pushed to other nodes in various ways (the API being one); I'm just asking whether there is an easy way!
We use a load balancer in front of the nodes, with WS-Fed to authenticate against a single SSL name.
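The workaround I'm imagining would look something like this sketch, assuming each node is reachable directly by hostname (bypassing the load balancer) and that I define a per-node API endpoint that runs the data-gathering logic locally; the node URLs and endpoint path are hypothetical:

```powershell
# Hypothetical: call each node directly so every node refreshes its own local cache.
$nodes = @('http://node1:5000', 'http://node2:5000')   # assumed direct node URLs

foreach ($node in $nodes) {
    # /refresh-cache would be an endpoint I define myself in PSU; it runs the
    # data-gathering script and sets $Cache: locally on the node that serves it.
    Invoke-RestMethod -Uri "$node/refresh-cache" -UseDefaultCredentials
}
```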
$Cache: variables are process-specific and are not shared across the cluster. We are looking at making Set-PSUCache use a distributed cache, but that's not implemented yet.
Sync-PSUComponent won't automatically sync across the nodes, but you can target a specific node with `Sync-PSUComponent -ComputerName http://nodename`.
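So a sketch of syncing a component on every node might look like this, assuming `-Id` and `-ComputerName` can be combined in one call (node URLs are placeholders):

```powershell
# Sync the same dashboard component on each node individually.
$nodes = @('http://node1:5000', 'http://node2:5000')   # placeholder node URLs

foreach ($node in $nodes) {
    Sync-PSUComponent -Id 'element1' -ComputerName $node
}
```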
Thinking through this more, it seems like we need to further enhance the scheduling component to accommodate this. The problem is that you can call Invoke-PSUScript and it will start a script, but there isn't currently a way to tell it which node in the cluster to run on. All nodes belong to the same queue, and the round robin will just pick one. This means the scheduled script could run on one node over and over again, and the other nodes would never populate their local caches.
I think the best solution here would be to use a distributed cache and then it wouldn’t matter which node ran the schedule since they would all be using the same cache.
I'll open a couple of issues around this. You're making me think of lots of enhancements to our clustering/HA setup.
This all sounds good! I'm trying to create a workaround for now, but I am falling over at the first hurdle with APIs. Please see my post: V3.0.0-beta7 - API Troubleshooting