I’ve got a fresh installation, hosted in IIS, and configured with a single Endpoint in use. That Endpoint is called about 100 times an hour. I’m seeing the PowerShell process ramp up to 100% CPU and RAM utilization, causing process crashes within 20 minutes or so.
The Endpoint’s script is very simple: it runs Invoke-Command against the given computer to check for the presence of a Windows service, then returns a short text result.
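In essence, it looks like this (a minimal sketch; the parameter and service name are placeholders, not our exact script):

param($ComputerName)

# Check the remote machine for a Windows service and return a short status string
$serviceName = 'Spooler'  # placeholder service name
$found = Invoke-Command -ComputerName $ComputerName -ScriptBlock {
    param($Name)
    [bool](Get-Service -Name $Name -ErrorAction SilentlyContinue)
} -ArgumentList $serviceName

if ($found) { "Service '$serviceName' is present" } else { "Service '$serviceName' not found" }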
I’ve tried both PowerShell 5.1 and 7.x as the environment. I’m looking for suggestions on how to troubleshoot a single PowerShell process consuming 16 GB of RAM from such minor activity. With these kinds of results, the product is not usable.
We are aware of some performance issues in 2.9 that have been resolved in 2.10. Feel free to give a 2.10 build a shot to see if it resolves them for you.
We observed this behavior in version 1.x, then in 2.9 after we upgraded, and now in 2.10 as well. Memory consumption while running is completely normal for hours, then slams up to 100% within several minutes. There’s no difference in API calling behavior between the hours of nominal functioning and the point when the system falls apart; the rate of calls to endpoints stays at about 5 per minute.
Interesting. Can you please open a support case for this? I want to get some more info that you might not want to share publicly (support@ironmansoftware.com).
What I’m looking for:
Contents of the endpoint
Memory dump (I can provide info on how to do this; see the sketch after this list)
Authentication configuration
IIS Hosting Method (InProc/OutOfProc)
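For the memory dump, one common approach is to capture a full dump of the IIS worker process with Sysinternals ProcDump (a sketch; assumes procdump.exe is on PATH, the dump folder exists, and a single w3wp.exe worker is running):

# Capture a full (-ma) memory dump of the IIS worker process
$w3wp = Get-Process -Name w3wp | Select-Object -First 1
procdump -accepteula -ma $w3wp.Id C:\dumps\w3wp.dmp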
As a baseline, we tested the performance of PSU 2.10 with the following configuration:
Single GET endpoint that returns about 1 KB of data per request with no authentication
Ran siege in benchmark mode with 150 clients from a Linux container (sample invocation after this list)
Tested PS7.2, Integrated and WinPS environments running on Windows 11 hosted as a service
Machine Specs: 32 GB RAM, 8 cores, 1 TB SSD
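The siege run looked roughly like this (a sketch; the URL is a placeholder for the test endpoint):

# Benchmark mode (-b, no delay between requests), 150 concurrent clients, 8-hour run
siege -b -c 150 -t 8H http://psu-host:5000/test-endpoint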
Our results were:
Baseline memory: ~600 MB
Maximum memory: ~3,000 MB
Memory post-test (within 5 minutes): ~1,000 MB
We saw an average of about 400 requests per second and ran the test over 8 hours.
Thanks, Adam. I will supply the info you asked for tomorrow. For what it’s worth, here’s an exception report from when IIS crashes.
Application: w3wp.exe
CoreCLR Version: 5.0.921.35908
.NET Version: 5.0.9
Description: The process was terminated due to an unhandled exception.
Exception Info: System.AggregateException: One or more hosted services failed to stop. (The operation was canceled.)
---> System.OperationCanceledException: The operation was canceled.
at System.Threading.CancellationToken.ThrowOperationCanceledException()
at Hangfire.Processing.TaskExtensions.WaitOneAsync(WaitHandle waitHandle, TimeSpan timeout, CancellationToken token)
at Hangfire.Processing.BackgroundDispatcher.WaitAsync(TimeSpan timeout, CancellationToken cancellationToken)
at Hangfire.Server.BackgroundProcessingServer.WaitForShutdownAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StopAsync(CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at Microsoft.Extensions.Hosting.Internal.Host.StopAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.WaitForShutdownAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at Universal.Server.Program.<>c__DisplayClass4_0.<Main>b__0(Options o) in D:\a\universal\universal\src\Universal.Server\Program.cs:line 68
at CommandLine.ParserResultExtensions.WithParsed[T](ParserResult`1 result, Action`1 action)
at Universal.Server.Program.Main(String[] args) in D:\a\universal\universal\src\Universal.Server\Program.cs:line 49
Thanks. It seems like IIS might be killing the app pool because of the out-of-control memory usage, and this crash is a symptom of that. I think there are settings in the advanced app pool options to prevent that behavior. That said, it shouldn’t be running away with memory like this. After I check out the details you provide, a screen share might be in order just to poke around a bit.
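If you want to rule out memory-based recycling while we investigate, something like this should do it (a sketch; assumes your app pool is named 'PSU', so substitute the real name from IIS Manager):

# Disable the private-memory recycling limit on the app pool (0 = no limit)
& "$env:windir\System32\inetsrv\appcmd.exe" set apppool "PSU" /recycling.periodicRestart.privateMemory:0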
The issue is with ConvertTo-Json. Occasionally, certain objects will cause it to just go crazy. By default, anything returned from a PSU REST API is converted to JSON with ConvertTo-Json -Depth 100.
The error I’m seeing in this dump is “{Cannot validate argument on parameter ‘ComputerName’. The argument is null or empty. Provide an argument that is not null or empty, and then try the command again.}”
This causes the catch block to be called, and as you return the error record, PSU attempts to serialize it to JSON and the cmdlet loses its mind. I was actually able to reproduce this in PowerShell directly.
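Something like this reproduces it (a sketch; the exact command that produces the null ComputerName is an assumption on my part):

# Force the same null/empty ComputerName validation error, then serialize the
# resulting ErrorRecord the way PSU does by default
try {
    Invoke-Command -ComputerName $null -ScriptBlock { Get-Service }
}
catch {
    # ConvertTo-Json walks the deeply nested ErrorRecord object graph here;
    # watch the process memory climb
    $_ | ConvertTo-Json -Depth 100
}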
This took about 2 minutes on my machine to eclipse 6 GB of memory.
In 2.10, we’ll be timing out the serialization process in order to prevent this issue. If you use ConvertTo-Json yourself, make sure to set a -Timeout on the endpoint.
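For example, in endpoints.ps1 (a sketch; the -Timeout usage here is an assumption based on the parameter mentioned above, so check your version’s New-PSUEndpoint docs):

New-PSUEndpoint -Url '/service-check' -Method 'GET' -Timeout 30 -Endpoint {
    # endpoint body here; timeout value assumed to be in seconds
}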
Thank you to Adam for a very fast and effective analysis of this issue. Returning a string from an endpoint call instead of a potentially deep object such as $Error[0] resolved our memory usage issue.
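For anyone else who hits this, the fix amounted to flattening the error before returning it, e.g. (same hypothetical shape as the sketch earlier in the thread):

try {
    Invoke-Command -ComputerName $ComputerName -ScriptBlock {
        [bool](Get-Service -Name 'Spooler' -ErrorAction SilentlyContinue)
    }
}
catch {
    # Return a flat string, not $_ or $Error[0]; a full ErrorRecord sends
    # ConvertTo-Json -Depth 100 into the weeds
    "Error: $($_.Exception.Message)"
}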