Jobs take time and resources to spin up a new session, so I am wondering what the most efficient way is to query a cluster of SQL servers every 15 seconds. Would scheduled endpoints in the dashboard itself, using the standard SqlServer cmdlets (not dbatools), be the best way? Is there anything else I can do to guarantee threads are closed out properly so I don't run into a leaked-connection issue… perhaps setting a connection timeout of 3 seconds?
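For context, here is a minimal sketch of the kind of pattern I mean, with an explicit connect timeout and guaranteed disposal so connections are returned to the ADO.NET pool (server and query names are just placeholders):

```powershell
# Assumed server name; Connect Timeout caps how long Open() will block.
$connectionString = 'Server=sql01;Database=master;Integrated Security=True;Connect Timeout=3'
$connection = [System.Data.SqlClient.SqlConnection]::new($connectionString)
try {
    $connection.Open()
    $command = $connection.CreateCommand()
    $command.CommandText = 'SELECT @@SERVERNAME AS ServerName'
    $command.ExecuteScalar()
}
finally {
    # Dispose/Close returns the underlying connection to the pool,
    # which is what prevents leaked connections over time.
    $connection.Dispose()
}
```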
Currently, I have a function that I pass a query to, which invokes Invoke-SqlCmd2 and returns the results.
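Something along these lines (the function name and parameters are illustrative, not my exact code):

```powershell
# Hypothetical wrapper: fan a query out to a list of instances,
# with a short connection timeout so a dead node fails fast.
function Invoke-ClusterQuery {
    param(
        [Parameter(Mandatory)][string[]]$ServerInstance,
        [Parameter(Mandatory)][string]$Query,
        [string]$Database = 'master'
    )
    foreach ($server in $ServerInstance) {
        Invoke-SqlCmd2 -ServerInstance $server -Database $Database -Query $Query -ConnectionTimeout 3
    }
}
```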
I have seen some users using dbatools; however, the devs say it is not designed for Azure, and I can see the module does a lot of platform-related work, which could mean a longer load time for the module and more load on PSU.
I have heard of some users writing their own session and query commands in PowerShell.
I am personally very keen to hear what everyone else does, and whether any methods have been benchmarked, as I plan to come back to SQL querying at some point to reduce load and improve response times where I can.
I am less concerned about dbatools vs Invoke-SqlCmd2 and more interested in how to schedule the scripts/collections to be used in a PSU dashboard. It seems like scheduled endpoints within the dashboard itself are the most performant option available. Scheduled jobs take too long to spin up since a new PowerShell session is created each time. Perhaps using the integrated environment is better, but I am looking for the best way and would like some other opinions to weigh in.
I believe either the integrated environment or a scheduled endpoint is going to be the most efficient method for scripts that run every 15 seconds.
So I have been running some very simple tests all day long using scheduled scripts in the integrated environment, and I see some odd discrepancies in run time. Sometimes the job reports 100 ms, but randomly it will report 1+ second. The test itself just runs an Invoke-SqlCmd2 call 15 times in a foreach loop that selects TOP 1 * from a table. I wrapped the SQL call in Measure-Command and selected only the millisecond value, so that PSU does not have a whole lot to serialize, and I can clearly see that each loop iteration runs in ~2-10 ms, but something in PSU is holding it up every few runs for 800 ms to ~3 seconds.
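Roughly how the test was structured (server and table names are placeholders):

```powershell
# Time each of 15 identical queries; emit only the millisecond value
# so PSU has almost nothing to serialize into the job output.
1..15 | ForEach-Object {
    (Measure-Command {
        Invoke-SqlCmd2 -ServerInstance 'sql01' -Query 'SELECT TOP 1 * FROM dbo.SomeTable'
    }).TotalMilliseconds
}
```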
The profiler might be able to identify why that is happening.
Also note that even in the integrated environment, the PowerShell serializer is still used to store that job data. In a UD scheduled endpoint, it's stored directly in memory as objects, so no serialization takes place.
Ha… I was actually just reading about the profiler, but I will need to update my instance of PSU first.
However, based on your response, it sounds like the discrepancy derives from the serialization. If the data is only being used in a single dashboard and additional telemetry is not required, would it be best to utilize a scheduled endpoint? Should the query/code live in the scheduled endpoint within the dashboard, or is it more proper to have the code in an endpoint of its own and have the scheduled endpoint within the dashboard call that?
I think that you’d want to put the query in the scheduled endpoint and use the $Cache scope for the fastest possible storage and retrieval. Your dashboard would never leave the process to query that data since it would just read it from the $Cache variable.
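Sketched out, the pattern described above would look something like this (instance, table, and cache key names are assumptions):

```powershell
# Scheduled endpoint runs every 15 seconds and writes into $Cache;
# dashboard components just read $Cache:SqlData, never touching SQL directly.
$Schedule = New-UDEndpointSchedule -Every 15 -Second
$RefreshEndpoint = New-UDEndpoint -Schedule $Schedule -Endpoint {
    $Cache:SqlData = Invoke-SqlCmd2 -ServerInstance 'sql01' -Query 'SELECT TOP 1 * FROM dbo.SomeTable'
}
```

A component on the page can then render `$Cache:SqlData` on its own refresh interval without ever leaving the process.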