Windows Service: Recommended hardware

Product: PowerShell Universal
Version: 4

We run roughly 500-1000 jobs a day, and I've noticed that when a bunch of jobs are running at the same time it eats a lot of CPU.

What is everyone running for specs on their PSU servers?

Bonus question: for those running SQL, what are your SQL Server specs?

I have about 860 jobs running per day. Here’s my performance when 12 quick jobs are actively running.
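
If you want to grab similar numbers outside the dashboard, something like this is a quick sanity check (the 'Universal.Server' process name is an assumption based on a default Windows service install; adjust it for your setup):

```powershell
# Snapshot the PSU service process while jobs are running.
# May need an elevated session to see a service running under another account.
$proc = Get-Process -Name 'Universal.Server' -ErrorAction Stop

[pscustomobject]@{
    CpuSeconds   = [math]::Round($proc.CPU, 1)           # total CPU time consumed so far
    WorkingSetMB = [math]::Round($proc.WorkingSet64 / 1MB, 0)
    Threads      = $proc.Threads.Count
}

# Overall machine CPU over a short sample window
Get-Counter '\Processor(_Total)\% Processor Time' -SampleInterval 2 -MaxSamples 5
```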

First off… we needed to set “Module Discovery Frequency” to once per day on our system. It was a feature that I asked for.

Without knowing what the actual jobs and code do… it's a guess as to what's taking all the time. How long does a job take to run? Can it go faster? Which part of the code is slow?
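
If you want to narrow that down, a rough approach is to wrap the major stages of a job in Measure-Command and log the timings. The stage names and script block contents below are just placeholders for whatever your script actually does:

```powershell
# Time each major stage of a job to find the slow part.
$timings = [ordered]@{}

$timings['ImportModules'] = (Measure-Command {
    # Import-Module <whatever the job loads>
}).TotalSeconds

$timings['FetchData'] = (Measure-Command {
    # query / collect whatever the job pulls
}).TotalSeconds

$timings['ProcessData'] = (Measure-Command {
    # the actual work
}).TotalSeconds

$timings.GetEnumerator() | ForEach-Object { '{0,-14} {1,8:N2}s' -f $_.Key, $_.Value }
```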

From experience and looking at the screenshot… Import-Module can eat up lots of CPU and slow down the system a lot.

You also might not need to use Import-Module in a script at all. You can let PowerShell handle it via module auto-loading (command discovery). That can work with the VMware and Exchange modules, which are slow to load.
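
A rough illustration of the difference, using a VMware cmdlet as the example (the server name is made up):

```powershell
# Heavy: pays the full module load cost up front, even if the job only
# needs one or two cmdlets.
Import-Module VMware.PowerCLI
Connect-VIServer -Server vcenter.example.com

# Lighter: just call the cmdlet and let module auto-loading resolve it.
# PowerShell loads the owning module on first use, as long as it sits
# somewhere under $env:PSModulePath.
Connect-VIServer -Server vcenter.example.com
```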

Architecturally, you can also move parts of the scripts into a long-lived PSU environment and then call them internally via the API.
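
Roughly what that looks like: keep the expensive modules loaded in a persistent API environment and have jobs hit a local endpoint instead of importing the module on every run. The endpoint URL, environment name, -Environment parameter, and port are assumptions here, so check them against your own PSU setup:

```powershell
# endpoints.ps1: runs in a long-lived environment where the heavy modules
# stay loaded between requests (environment name 'PowerCLI' is an assumption).
New-PSUEndpoint -Url '/vm/:name' -Method GET -Environment 'PowerCLI' -Endpoint {
    param($Name)
    # PowerCLI is already loaded in this environment, so each call is cheap.
    Get-VM -Name $Name | Select-Object Name, PowerState, NumCpu
}

# Inside a job: call the endpoint instead of importing PowerCLI per run.
# Port 5000 and the auth approach depend on how your instance is configured.
$vm = Invoke-RestMethod -Uri 'http://localhost:5000/vm/web01'
```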

We have over 4,000 jobs daily, some really tiny, some that last over an hour, and our CPU usage is generally fairly low. However, when I had PSU running on a new dev server that I had provisioned with only 4 cores up front, it was… laggy… Bumping it to 8 cores in that environment completely fixed the issue. With 4 cores I saw frequent CPU spikes even when no jobs were running.

Now… is that because of PSU or something environmental? I didn't dig into it enough to say… but experience over the years has taught us that spinning up threads in PowerShell has a much bigger impact on CPU than threads in something like C#.
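
If you want to see that spin-up overhead for yourself, a crude test is to time a batch of thread jobs that do essentially nothing. The numbers vary a lot by host and PowerShell version, so treat this as a sketch rather than a benchmark:

```powershell
# Crude look at thread spin-up overhead in PowerShell: start N thread jobs
# that do no real work and see how long startup + teardown takes.
# (Start-ThreadJob ships with PowerShell 7 via the ThreadJob module.)
$n = 50

$elapsed = Measure-Command {
    $jobs = 1..$n | ForEach-Object {
        Start-ThreadJob -ScriptBlock { Start-Sleep -Milliseconds 10 }
    }
    $jobs | Wait-Job | Remove-Job
}

'{0} thread jobs took {1:N1}s to spin up and finish' -f $n, $elapsed.TotalSeconds
```

Each thread job carries its own runspace, which is where most of that overhead comes from compared to a bare thread in C#.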