Optimizing UD for Best Performance

Hi All -

We will be doing a final UAT before RTP. I would appreciate it if you could share your experience in improving UD performance and user experience. Just for background, I created a self-service support tool for our business users - they should now be able to perform most manual tasks by themselves (normally done by us) - e.g. add, edit / update, and delete users.

What I have right now is 9 UD pages, with a lot of endpoints within each page (UDTimeout, AuthorizationPolicy, UDInput, UDGrid, UDElement, UDButton - multiple PSCustomObjects, etc.), as well as 5 to 6 $Session variables per page. However, the total endpoint count in the admin diagnostics is 26 - I'm not sure if it only counts the main endpoint blocks and discards any sub-endpoints within them.

Our server spec is 4 cores with 16GB of RAM. We have done testing with a couple of users, and performance was great - not exceeding 1-2 seconds per large request. No PST has been done, as this initiative is not funded. FYI, we have about 200-300 users in total and expect to serve roughly 50 concurrent users at any given time.

However, I'm still concerned about whether I should rewrite the code to ensure UD can serve these users without any hiccups, hence my questions below.

  1. How many endpoints are considered excessive? Or does this not matter, as long as there are no long-running or large script blocks?

  2. We use a lot of session variables (5-6 per page) - for UDGrid display (which can grow to 1,000 rows) and for tracking the changes users make in each task. Will this affect the user experience, considering 50 users x 6 session variables?

  3. Would introducing a local DB improve performance, in the sense of replacing the session variables? I would assume constantly writing to and reading from a DB would affect performance as well.

  4. Do I need to call [GC]::Collect() at the end of each session / page process - e.g. after the user finally submits their request on each page?

Please share your thoughts on what I should have in my UD implementation and how to effectively improve our user experience. Your feedback is much appreciated.

There is some discussion around capacity here: UD Load Testing

But basically, if you’re expecting 50 concurrent users, I think it’s doable with the server spec you have.
Just consider the following:

  • Using session vars is fine for small bits of data, but for any large datasets I would consider using SQL. You can also leverage -ServerSideProcessing on UDGrids, which makes things super speedy (since it lets SQL do the pagination).
    You just have to consider how you’re reading and writing. I wouldn’t say you need to perform a GC collect at the end of each submit, but be wary of using the right indexing in your tables and ensure you don’t get any deadlocks - make sure you’ve got the relevant error handling, try/catch blocks and retry functions to handle them if you do.

  • It may also be worth putting in a scheduled app pool recycle for a ‘fresh start’ daily. I do this for mine to prevent memory building up too high; I’ve got a scheduled endpoint which checks the app pool recycle time and broadcasts a message to all users to let them know it’s happening (although I’m not expecting people on at 3am).
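That scheduled warning could look something like the sketch below - this assumes UD 2.x scheduled endpoints, a 06:00 recycle time, and that your UD version's Show-UDToast supports a -Broadcast switch (check your version's docs; none of these specifics come from the post above):

```powershell
# Sketch only: assumes UniversalDashboard 2.x scheduled endpoints and a
# Show-UDToast that can broadcast to all connected sessions.
# The 05:55 warning time (5 minutes before a 06:00 recycle) is an assumption.
$Schedule = New-UDEndpointSchedule -Every 1 -Minute

$RecycleWarning = New-UDEndpoint -Schedule $Schedule -Endpoint {
    $Now = Get-Date
    if ($Now.Hour -eq 5 -and $Now.Minute -eq 55) {
        # Warn every connected user that the app pool will recycle shortly
        Show-UDToast -Message "Maintenance: this site restarts at 06:00. Please save your work." `
            -Broadcast -Duration 30000
    }
}

# Pass the scheduled endpoint in when you start the dashboard:
# Start-UDDashboard -Dashboard $Dashboard -Endpoint $RecycleWarning -Port 8080
```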

I found that the bottleneck will usually be CPU before memory.

Thank you for your input.

Yes, I read the discussion you had - basically, from your test, the current spec should be able to handle 100 concurrent users. Still, our dataset is different, and that’s what worries me.

I was reading about server-side processing; in my test, the UDGrid with direct output from a session variable (PSCustomObject) was faster. I query directly from the application DB, which already handles the filtering and sorting. I’m not sure whether the server-side processing configuration is required, or whether I configured it wrong.
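For what it’s worth, server-side processing only pays off when the endpoint itself pages the data - if the endpoint still pulls all rows and returns them, the grid is doing the work twice. A rough sketch under those assumptions follows; the $filterText/$skip/$take/$sortColumn/$sortAscending variable names follow the UD 2.x grid docs (verify for your version), and Invoke-Sqlcmd2, the server name, and the dbo.Users table are all placeholders:

```powershell
# Sketch: let SQL do the filtering/sorting/paging instead of a session variable.
# $filterText, $skip, $take, $sortColumn and $sortAscending are supplied by the
# grid when -ServerSideProcessing is set (names per the UD 2.x docs - verify).
# "SQL01", "AppDb" and dbo.Users are hypothetical.
New-UDGrid -Title "Users" -Headers @("Name", "Email") -Properties @("Name", "Email") `
    -ServerSideProcessing -Endpoint {
        # $sortColumn comes from the client - whitelist it in real code,
        # since it is interpolated into the query text.
        $Sort = if ($sortColumn) { $sortColumn } else { "Name" }
        $Dir  = if ($sortAscending) { "ASC" } else { "DESC" }

        $Query = @"
SELECT Name, Email FROM dbo.Users
WHERE Name LIKE @Filter
ORDER BY $Sort $Dir
OFFSET @Skip ROWS FETCH NEXT @Take ROWS ONLY
"@
        $Rows = Invoke-Sqlcmd2 -ServerInstance "SQL01" -Database "AppDb" -Query $Query `
            -SqlParameters @{ Filter = "%$filterText%"; Skip = $skip; Take = $take }

        # Your UD version may also expect a total row count alongside the page.
        $Rows | Out-UDGridData
    }
```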

My other session variables hold quite small data; I would expect each user to populate them with fewer than 10 rows at a time. But the main session variable is what populates the UDGrid (1,200 rows max), with an extra button which queries another set of data (called from other functions), and users populate those small session variables with updates / deletions etc.

Anyway, I have my UD recycle every day at 6AM, so I think we are good there.

I will keep an eye on the CPU usage. Do you have a UD page that monitors memory and CPU usage?

Sounds like you should be good then; I guess just keep a close eye on it.
We use SolarWinds, but I’m pulling the CPU and memory indicators directly onto my admin page with:

New-UDImage -Url "https://solarwindsURLhere/Orion/NetPerfMon/Gauge.aspx?Scale=100&Style=Elegant%20Black&Property=CPU+Load&NetObject=N:12345&GaugeName=CPULoad&GaugeType=Radial&Units=%20%&Min=0&Max=100&t=123456789123456789"

Basically, right-click and copy the image URL, and it will load on page refresh directly from SolarWinds if you have permission to view it. :slight_smile:
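If you don’t have SolarWinds (or don’t want the dependency), a self-contained alternative is a small monitor page driven by Windows performance counters - a minimal sketch, where the page name, refresh interval, and history depth are arbitrary choices of mine:

```powershell
# Sketch: a simple CPU / memory monitor page using built-in performance
# counters. "Health", the 5-second refresh and 60-point history are arbitrary.
$MonitorPage = New-UDPage -Name "Health" -Content {
    New-UDMonitor -Title "CPU %" -Type Line -RefreshInterval 5 -DataPointHistory 60 -Endpoint {
        # Total CPU usage across all cores
        (Get-Counter '\Processor(_Total)\% Processor Time').CounterSamples[0].CookedValue |
            Out-UDMonitorData
    }
    New-UDMonitor -Title "Available memory (MB)" -Type Line -RefreshInterval 5 -DataPointHistory 60 -Endpoint {
        (Get-Counter '\Memory\Available MBytes').CounterSamples[0].CookedValue |
            Out-UDMonitorData
    }
}
```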

Hi @insomniacc -

I was just wondering - at any given time, how high was your CPU usage? How about with 50-100 users accessing concurrently? Did it ever hit 100% CPU? What was the page response like? Was it crawling?

I am seeing CPU spikes on every page load; DB / AD queries or large session var data spike it by 10-30% (for 1-3 seconds) on a 4-core VM. Memory usage is quite low - around 300-400MB out of 16GB. We are still in UAT testing, and I am a little worried about CPU usage and the user experience after we RTP.

Once I’d upped my memory to 16GB I saw no issues on that front.
CPU usage is fine and doesn’t cause any problems, but it spikes and maxes out when I start pushing concurrent connections to 450+.
I don’t really have large amounts of session data; everything is pretty direct with Invoke-Sqlcmd2, and my grids are server-side - they let SQL do the filtering and pagination.
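On the deadlock / retry point raised earlier in the thread, a generic retry wrapper around the SQL call is one way to handle transient failures - a sketch only, where Invoke-Sqlcmd2, the server / database names, and the backoff timings are all assumptions:

```powershell
# Sketch: retry a SQL call a few times on transient failure (e.g. a deadlock
# victim error). Invoke-Sqlcmd2, "SQL01" and "AppDb" are placeholders.
function Invoke-SqlWithRetry {
    param(
        [Parameter(Mandatory)] [string] $Query,
        [hashtable] $SqlParameters = @{},
        [int] $MaxAttempts = 3
    )
    for ($Attempt = 1; $Attempt -le $MaxAttempts; $Attempt++) {
        try {
            return Invoke-Sqlcmd2 -ServerInstance "SQL01" -Database "AppDb" `
                -Query $Query -SqlParameters $SqlParameters
        }
        catch {
            if ($Attempt -eq $MaxAttempts) { throw }    # out of retries - surface the error
            Start-Sleep -Milliseconds (200 * $Attempt)  # simple linear backoff before retrying
        }
    }
}
```

In real code you might inspect the exception and only retry on known transient errors (deadlocks, timeouts) rather than everything.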