Performance - Web Interface

Product: PowerShell Universal
Version: 5.1.2

I have been working with PSU for a couple of months now, but cannot seem to figure out some random performance issues in the web interface. At times, just clicking on a menu item will freeze the screen for 5-30 seconds. Sometimes it does not recover and I have to restart the service. I use several other web-based products on the same VLAN and do not experience these issues, so it does not appear to be network related. I am presenting this to our software selection committee next month, and I'll be toast if it locks up while I am demoing.

Server 2022, 16 GB RAM, 4 CPUs
SQLite DB, 8.09 GB

This could be due to database size, but I'm not totally sure. Does it happen with any action you take, or with some actions more than others?

If you do experience this, I suggest collecting a memory dump of the Universal.Server.exe process and sending it my way so I can review it. We should be able to see exactly what the process is doing.

I just noticed it when using the “Logging” tab. It occurs at different times with different tabs/actions. Here is the memory dump.

Universal.Server-12-20-24-15-13.DMP

You have 25 million log messages in the LogEntry table. I’m not sure if this is just a result of a lot of logging or a failure by the groom job to properly trim the table.

Either way, I have to imagine that this would cause lots of issues if SQLite is attempting to insert into the table, query the table or prune the table. Pruning the table likely requires locking the table and that could cause delays when attempting to write to the log, which would explain why clicking around would be hanging. I kinda wonder if the groom job starts running, tries to prune the table but it takes a long time, and then the whole system is hung because of the locked table.

My recommendation would be to try to truncate the log table, unless you have a need for a subset of the log entries.

If you navigate to http://localhost:5000/admin/support and then go to the tools pane, there is a database tool. Use the Execute tab to run the following:

DELETE FROM LogEntry
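One caveat worth knowing: `DELETE FROM LogEntry` clears the rows, but SQLite will not shrink the 8 GB file on its own; freed pages go onto an internal freelist and are only reused by future writes. Reclaiming the disk space takes a `VACUUM`, which rebuilds the file (and itself locks the database, so expect it to take a while on a file that size). A small sketch against a throwaway database, showing the file size only drops after the `VACUUM`:

```python
import os
import sqlite3
import tempfile

# Throwaway database; table name mirrors the thread, not PSU's real schema.
path = os.path.join(tempfile.mkdtemp(), "psu-demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE LogEntry (Id INTEGER PRIMARY KEY, Message TEXT)")
con.executemany("INSERT INTO LogEntry (Message) VALUES (?)",
                [("x" * 200,)] * 5000)
con.commit()
before = os.path.getsize(path)

con.execute("DELETE FROM LogEntry")   # frees pages inside the file...
con.commit()
after_delete = os.path.getsize(path)  # ...but the file stays the same size

con.execute("VACUUM")                 # rebuilds the file, reclaiming space
after_vacuum = os.path.getsize(path)

print(before, after_delete, after_vacuum)
```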

I’m trying to reproduce this but it’s gonna take a while before I have my environment populated with 25 million records.

This would have been nice to know about many times in the past, especially the Git tool for forcing pushes, pulls, or resets. I'll be sure to remember it in the future.

I think we added these support tools in 5.1. I don’t even think we document them yet. But yeah, these would have been nice years ago.

I would be curious if there are other tools like this that would be helpful.

We’re still on 5.0.16 on the production server and I see them. I can’t think of any other tools that’d be nice to have at the moment.

Ah ok. Well, let me know if any others come to mind.

I was able to run this. There was no real indication anything was happening other than it seemingly locking up for a good few minutes. I went digging into what might be generating so many log entries and found Git sync was set to occur every minute. That seems excessive and I do not remember doing it on purpose, but perhaps I mistyped/clicked at some point. I'll continue using it and report back if issues continue.

We have Git set to sync every 1 minute and don’t have performance issues related to that, but we also have PSU set to only keep 7 days worth of log entries in the database. I looked in the database and don’t even see 1 log entry there, so I’m not sure what your 25 million entries were (if they were even related to Git at all).