What's normal usage?

Hi Everyone!
I just wanted to take a quick survey on what’s “normal” usage with PSU. I feel like I’m beating the crap out of PSU, the server hosting it, and the database. I’ve had PSU support fix a bug for me in almost every monthly release of PSU since September… Love the support, because it feels like the PSU dev team works for me. :sweat_smile:
I’ve got about 80 scheduled scripts, with about 10 of them running every 15 to 60 minutes.
Managing about 3000 users and 4000 devices.
I’ve had to limit the logging to 7 days to keep the database at a manageable size, and limit the number of concurrent scripts to keep the PSU server from stealing resources from other VMware servers. Just want to check with others to see if this is normal, or if I’m abusing the server. I don’t feel like I’m at the point where I should be running multiple instances for scaling, but maybe I’m there and don’t know it.
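For a sense of scale, one of those schedules looks roughly like this in schedules.ps1 (the script name and cron expression below are placeholders, not my real config):

```powershell
# Sketch of a PSU schedule entry in schedules.ps1; script name and
# cron expression are placeholders for illustration only.
New-PSUSchedule -Script 'Get-ExpiringContractors.ps1' -Cron '*/15 * * * *'
```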

Product: PowerShell Universal
Version: 5.4.4

That’s awesome! What kind of things are you doing with your users and devices? I’m at the beginning of our PSU rollout, but I hope to grow it.

It’s a fairly large org, so lots of red tape and reporting… like a daily report of new users created, a report of contractors whose accounts are expiring sent to their managers, a report on devices that haven’t logged in for 30 days, disabling devices not logged in after 60 days, and deleting devices not logged in after 90 days. We also automate applying our much stricter Windows 11 security policies when Windows 10 devices are upgraded to 11, and creating standardized user folders for new users.

Basically, I try to automate everything that’s repetitive or that someone might screw up if done manually.
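The stale-device lifecycle is roughly this with the standard AD cmdlets (the thresholds match what I described above; the exact pipeline and the -Confirm handling here are illustrative, not our production code):

```powershell
Import-Module ActiveDirectory

# Disable devices with no logon in the last 60 days.
Search-ADAccount -AccountInactive -ComputersOnly -TimeSpan '60.00:00:00' |
    Where-Object { $_.Enabled } |
    Disable-ADAccount

# Delete devices with no logon in the last 90 days.
# -Confirm:$false is for illustration; in practice you'd report first.
Search-ADAccount -AccountInactive -ComputersOnly -TimeSpan '90.00:00:00' |
    ForEach-Object { Remove-ADComputer -Identity $_.DistinguishedName -Confirm:$false }
```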

What are the resources allocated? And what is a “manageable size” to you? I run PSU 4.x on VMware with about 12 GB RAM and 8 vCores, I think; I’m not quite sure. I don’t have as many scripts running, but a handful of them run every minute, and a performance monitor runs as often as 2-3 times a minute. I moved to MSSQL for the database for reasons other than performance.

I don’t have any real issues; the memory footprint for the VM is about half the allocated size, and CPU consumption is generally much lower than allocated. On the other hand, I do have some very heavy scripts that run on demand, but they mostly offload the work to other servers, virtual and physical nodes, via remote PowerShell.

I keep relevant logs as long as I can, while I prune others after just a handful of executions; the frequent jobs in particular don’t produce any output I need to keep for long. I make sure the scripts aren’t producing a lot of output I don’t need, and I tune their performance. But the heavy lifting of my scripts happens on other machines on the network; the PSU server mostly controls those nodes and provides the end-user interface.
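As a minimal sketch of that pattern (the worker name 'worker01' and the event-log query are made up for illustration; WinRM is assumed to be configured already):

```powershell
# Offload the heavy query to a remote node; only the summary comes back
# to PSU, so the job output stays small.
$summary = Invoke-Command -ComputerName 'worker01' -ScriptBlock {
    param($Days)
    Get-WinEvent -FilterHashtable @{ LogName = 'System'; StartTime = (Get-Date).AddDays(-$Days) } |
        Group-Object -Property ProviderName |
        Select-Object -Property Name, Count
} -ArgumentList 7

$summary | Sort-Object -Property Count -Descending | Select-Object -First 5
```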

Our organization is a lot larger, but the number of users with access to PSU is only a couple of hundred, and PSU is, so far, used for a very specific purpose. Our client management team is looking into using PSU for more general self-service purposes, and they would have many more users as customers.

Thanks for your reply. I run a similar setup, but my PSU server does all the work instead of using remote PowerShell. I’ll try converting my scripts to offload some of the work to improve performance.

My scripts also produce a lot of output because I’m paranoid and basically put debugging/logging steps on every line. That’s probably part of the issue.
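One idea that might help: route the per-line logging through Write-Verbose so it only lands in the job output when you actually ask for it. A minimal sketch (the function name and messages are made up):

```powershell
function Sync-UserFolders {
    [CmdletBinding()]  # enables -Verbose without extra plumbing
    param([string[]]$UserName)

    foreach ($user in $UserName) {
        # Diagnostic detail is only emitted when run with -Verbose,
        # so routine scheduled runs stay quiet and the database stays small.
        Write-Verbose "Processing $user"
        # ... actual work here ...
        Write-Verbose "Finished $user"
    }
}

# Routine scheduled run: quiet.
Sync-UserFolders -UserName 'alice', 'bob'
# Troubleshooting run: full trail.
Sync-UserFolders -UserName 'alice', 'bob' -Verbose
```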

Thank you, you’ve given me some great ideas!

We have 126 schedules and roughly 210 in-use scripts. We have 3 Linux worker nodes and a single Windows worker node for the few AD cmdlets we need. We also have a separate web node in AWS that only runs dashboards, about 12 of them at any given time. We run a single PostgreSQL server that doesn’t seem to have any issues handling things, which is nice. We have configured 180 days of data retention and 7 days of log retention.

We’re also with you on having lots of bugs reported/fixed; their support has been fantastic :ok_hand:.

We use PSU in a large environment with many domains and isolated environments, which require individual instances. I don’t think there is an easy answer for workload: a single task can consume high bandwidth and resources, but the vast majority do not. I haven’t run into a logging issue, but we keep PSU logging to a minimum and create our own detailed logs where needed. We use PSU for all sorts of purposes: 1) Custom reports on demand. 2) Automated processing (e.g. backing up printers on print servers, decommissioning, updating DFS file shares, etc.). 3) Highlighting potential issues (e.g. stale servers, servers with errors, unresponsive servers, etc.). If a server is maxing out its resources, add another. In our use case we justified an Enterprise license with unlimited instances. We have Dev and Prod versions of PSU and many environments, so a lot of PSU servers. Worth every penny.
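The printer backup is roughly this idea (the server names, output share, and property list are illustrative assumptions, not our actual script):

```powershell
# Sketch: snapshot printer and port configuration from print servers.
Import-Module PrintManagement

$printServers = 'print01', 'print02'   # hypothetical server names
foreach ($server in $printServers) {
    Get-Printer -ComputerName $server |
        Select-Object Name, DriverName, PortName, Shared, ShareName |
        Export-Csv -Path "\\backup\printers\$server-printers.csv" -NoTypeInformation

    Get-PrinterPort -ComputerName $server |
        Select-Object Name, PrinterHostAddress |
        Export-Csv -Path "\\backup\printers\$server-ports.csv" -NoTypeInformation
}
```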