RAM Usage with New-UDEndpoint

I previously had 3 scripts scheduled to run every 5 minutes. When I found out about New-UDEndpoint and the ability to use $Cache, it seemed like a great way to remove the clutter from the Jobs area and keep the data as fresh as we need. However, since making the change, I’m noticing RAM usage climb over several days until Universal.Server eventually exhausts all available RAM (7+ GB). The move from scheduled scripts to New-UDEndpoint is the only recent change I can think of. Could the RAM usage be due to the scripts running too frequently for garbage collection to keep up? The $Cache should be refreshing the data, not growing without bound. Below is the code snippet I’ve added to the main dashboard:

$EndpointSchedule5Mins = New-UDEndpointSchedule -Every 5 -Minute
New-UDEndpoint -Schedule $EndpointSchedule5Mins -Endpoint {
    # Caching all enabled user objects
    $Cache:AllUsers = Get-ADUser -Filter "Enabled -eq '$true'" -SearchBase 'OU=Staff,DC=contoso,DC=com' | Select-Object -ExpandProperty SamAccountName | Sort-Object
    # Caching all hostnames
    $Cache:AllDesktops = Get-ADComputer -Filter "Enabled -eq '$true'" -SearchBase 'OU=PCs,DC=contoso,DC=com' | Select-Object -ExpandProperty SamAccountName | Sort-Object
    # Cache File
    $Cache:ReferenceFile = Get-Item "D:\Files\PrinterInventory.csv"
    $Cache:AllPrinters = Get-ChildItem '\\contoso\share\VDISystems' | Foreach-Object {
        Import-Csv $_.FullName
    }
} | Out-Null

For reference, $Cache:AllUsers and $Cache:AllDesktops are ~4,500 objects each (~9,000 total), and ~2,500 CSV files are processed by the $Cache:AllPrinters block.
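One thing I’m considering, in case object size matters: trimming the imported CSVs down to only the columns the dashboard actually uses, so each cached row is smaller. A sketch — the column names here are placeholders, not our real schema:

```powershell
$Cache:AllPrinters = Get-ChildItem '\\contoso\share\VDISystems' -Filter '*.csv' | ForEach-Object {
    # Select-Object keeps only the needed columns, shrinking each cached object
    Import-Csv $_.FullName | Select-Object PrinterName, ComputerName
}
```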

EDIT: I pulled memory usage of the server hosting PowerShell Universal; it shows a steady, consistent climb until the service is recycled or the server is restarted.

Product: PowerShell Universal
Version: 2.8.1

Have you tried adding a manual garbage collection after those calls complete? I’ve had some success with this when I run into resource issues.

[System.GC]::Collect()
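If a single collect isn’t enough, a slightly more aggressive variant waits for finalizers between passes:

```powershell
# Run at the end of the endpoint, after the cache variables are refreshed
[System.GC]::Collect()
[System.GC]::WaitForPendingFinalizers()  # let finalizable objects (e.g. native handles) finish
[System.GC]::Collect()                   # sweep anything the finalizers released
```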

I added that line and restarted, but it doesn’t seem to be making any difference; the Universal.Server process is still growing steadily. That said, I just found this tweet that might explain it. I forgot that, as part of migrating to a single dedicated service account (away from the IIS AppPool running as SYSTEM), I updated the integrated environment to PowerShell 7.2.3. We do a lot with AD/Exchange, so it’s entirely possible this is the problem. I’m going to look at reverting the integrated environment to Windows PowerShell 5.1.


Unfortunately that didn’t resolve the issue. I’m still seeing a steady climb in RAM consumption by the Universal.Server.exe process. @adam, is there anything else worth looking at to see whether this could stem from a configuration choice in my PSU environment?

Hmm. That’s a good question. I’m not sure why this would cause it to grow continually. One thing you could try is Set-PSUCache/Get-PSUCache, because you’ll have more control over memory cache expiration. That said, it should just be overwriting the values in the cache, so there shouldn’t be anything hanging around.
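A rough sketch of what that could look like inside the scheduled endpoint, assuming the expiration parameters available in your PSU version:

```powershell
# Cache with an absolute expiration so stale entries are evicted rather than kept forever
Set-PSUCache -Key 'AllUsers' -Value $allUsers -AbsoluteExpiration (Get-Date).AddMinutes(10)

# Elsewhere in the dashboard, read it back
$users = Get-PSUCache -Key 'AllUsers'
```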

If you want to upload a memory dump to dropbox, I can take a look at what’s consuming the memory to give us a better idea where to look.

@adam, is there a preferred way to generate the memory dump (our PSU instance is running on Windows Server 2016)?

Edit: I just realized I can do a dump of just the Universal.Server.exe process via Task Manager. I’ll get that file created and uploaded in a bit.
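Another option I’ve seen mentioned, in case Task Manager doesn’t cooperate, is Sysinternals ProcDump for a full memory dump (the output path below is just an example):

```powershell
# -ma captures a full dump including all process memory
.\procdump.exe -ma Universal.Server.exe C:\Dumps\Universal.Server.dmp
```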


@adam, confirming the file has been uploaded.

Much of the memory usage is due to PowerCLI and dbatools. I’m not sure why changing from jobs to scheduled endpoints would cause this, since you were just using AD cmdlets in there.

Yeah, I don’t think I’ve done anything new with scripts/jobs related to PowerCLI or dbatools. I have two jobs that run every 30 minutes (separate jobs, but both using the “every 30 minutes” scheduling option) that gather some VM/host details, but I would expect those sessions/runtimes to be thrown away. I issue a Disconnect-VIServer at the end of each, and I’m pretty sure no other jobs use cmdlets from the PowerCLI module. Is there anything else I can do within PSU or the scripts to keep that memory under control?
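For reference, the jobs follow roughly this shape; the try/finally ensures the session is torn down even if the collection step throws (the server name here is a placeholder):

```powershell
try {
    Connect-VIServer -Server 'vcenter.contoso.com' -Credential $cred | Out-Null
    # Keep only the properties we report on, rather than the full VM objects
    $report = Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB
}
finally {
    # Always close the session; note that PowerCLI's loaded assemblies still
    # remain in the process until the runspace/process is recycled
    Disconnect-VIServer -Server * -Confirm:$false -ErrorAction SilentlyContinue
}
```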

@adam, any other ideas? For now I’ve added a scheduled AppPool recycle to at least partially control it, as the resource exhaustion is causing pages and scripts to fail.

I’m curious whether you’ll be able to update to 2.9 tomorrow. We’ve added a different (optional) runspace pool model that is much faster in terms of response times for APIs and dashboards and seems to handle memory better. That said, I’m not 100% sure it will fix this particular issue.

It actually uses the PowerShell SDK’s built-in runspace pooling rather than our home-grown one. Back in the early UD days the built-in runspace pools were slower than our custom one, but that’s no longer the case. Since it’s the built-in runspace pool, I also wonder if it will be better on memory. It does have a couple of limitations, but they may not matter here.
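For anyone curious, the built-in pooling I’m referring to is the PowerShell SDK’s RunspacePool. A minimal illustrative sketch (not PSU’s actual internals):

```powershell
# Create a pool of 1-5 reusable runspaces
$pool = [System.Management.Automation.Runspaces.RunspaceFactory]::CreateRunspacePool(1, 5)
$pool.Open()

$ps = [System.Management.Automation.PowerShell]::Create()
$ps.RunspacePool = $pool          # executions borrow a runspace from the pool
$null = $ps.AddScript('Get-Date')
$result = $ps.Invoke()

$ps.Dispose()
$pool.Close()
```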