Hi, I have a dashboard that does a number of lookups in scheduled endpoints against AD or Azure Tables and caches the data either directly in $Cache: variables or in a CSV that is then imported into a $Cache: variable. Some of these variables have a fairly quick refresh rate, but nothing huge. When running a single user session, using IIS on a D8s_v3 (8 cores, 32 GB RAM), the CPU is fine; then all of a sudden CPU goes to 100% with spikes down to 0%, and both the dashboard and the server become unresponsive. RAM is not climbing at all; it stays consistently low.
Now with ~80 users connected, I upgraded to a D32s_v3 (32 cores, 128 GB RAM) and CPU is peaky, never 100%, more like 20-80%. But even when it is running at a constant 20%, the dashboard response is super slow; after an iisreset it's quick again, until the users start hammering it and then it's slow again. I also run garbage collection and variable clean-up at the end of each endpoint.
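For context, a minimal sketch of the setup described above (UD 2.x): a scheduled endpoint that refreshes a $Cache: variable and does end-of-endpoint clean-up. `Get-AdUserReport`, the 5-minute interval, and the cache key are my own placeholders, not the poster's actual code.

```powershell
# Hypothetical sketch of the pattern described above (UD 2.x).
$Every5Min = New-UDEndpointSchedule -Every 5 -Minute

$Refresher = New-UDEndpoint -Schedule $Every5Min -Endpoint {
    $Users = Get-AdUserReport       # placeholder for the AD/Azure Tables lookup
    $Cache:AdUsers = $Users         # $Cache: is shared across all user sessions

    # Variable clean-up and GC at the end of the endpoint, as the post describes
    Remove-Variable -Name Users -ErrorAction SilentlyContinue
    [System.GC]::Collect()
}

# $Dashboard defined elsewhere; scheduled endpoints are passed at startup
Start-UDDashboard -Dashboard $Dashboard -Endpoint $Refresher -Port 10001
```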
I am using UD 2.9; I never saw this in 2.8.2 (or .3)… I can send logs if needed… Hoping someone has some thoughts here…
@adam - have you any thoughts on this? I have basically re-written all of the dashboard endpoints and the sync element, but I am still getting this random 100% CPU. Last night it happened at around 11pm UK time, when there would have been close to zero users actively using the dashboard.
I have UD warning logs; it seems to start with the following, but I can't be sure on timings.
[Warn] Microsoft.AspNetCore.Server.Kestrel Connection processing ended abnormally.
Hey @neo - I see you mentioned you are using cache vars… I turned away from these back in September of last year. I had the same problem (and still do on a production UD 2.3.2) when using cache vars - I have to reboot the web app every day for it to load pages at all, and by 3-4pm it takes minutes to load some pages.
What we did is move all our dynamic endpoints into Azure Functions - so we don't have to cache anything; we can pull it live, either in the endpoint itself or into a $Session var which expires at logoff.
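A rough sketch of that no-cache approach (UD 2.x): data is pulled live per request, or stashed per user in $Session so it dies with the session. The Azure Function URL and the Customers data shape are placeholders of mine, not the poster's.

```powershell
# Hypothetical sketch: per-session data instead of a shared $Cache var (UD 2.x).
New-UDPage -Name "Customers" -Endpoint {
    if ($null -eq $Session:Customers) {
        # One live call per user session, e.g. to an Azure Function (placeholder URL)
        $Session:Customers = Invoke-RestMethod -Uri "https://example.azurewebsites.net/api/customers"
    }
    New-UDCard -Title "Customers" -Text "$($Session:Customers.Count) customers loaded"
}
```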
I don’t know why, or what causes this - maybe it’s the size of $Cache, or maybe it’s having a lot of them… no idea, but I don’t use $Cache vars anymore.
For the same reason as @tonyb I have very frequent lookups; one part of the dashboard could be thought of as a live ticker that reads from Azure Tables, which are slow (much faster using REST rather than the PS commands).
Sadly I have just done the opposite and moved away from on-demand lookups, putting everything in $Cache and removing all unnecessary $Session vars. Something strange is happening here. Is there anything in the admin tools that shows which endpoint is using what CPU, so I can maybe track down the lock?
Yeah, I feel your pain on wanting to cache everything; that’s where I started back last year.
But now, with some great updates to UD like loaders in the grids and Set-UDElement/New-UDEndpoint, dynamically loading info is not as bad once you get the kinks sorted out.
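As a sketch of that dynamic-loading pattern (UD 2.x): a grid's -Endpoint runs on each load/refresh and the grid shows its built-in loader while it waits, so nothing needs to be pre-cached. The REST URL and the Time/Value columns are illustrative assumptions.

```powershell
# Hypothetical sketch: live-ticker grid that loads on demand (UD 2.x).
New-UDGrid -Title "Live ticker" `
    -Headers @("Time", "Value") -Properties @("Time", "Value") `
    -AutoRefresh -RefreshInterval 60 `
    -Endpoint {
        # Runs per refresh; the grid shows a loader until data arrives
        Invoke-RestMethod -Uri "https://example.azurewebsites.net/api/ticker" | Out-UDGridData
    }
```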
I also do things like session caching - say, a list of customers that will be used across the pages. Then I also have some $Session vars that carry page state and data across to other pages, so I can redirect from one page to another, create a $Session var, and have the second page just use that data, or use it with another REST call.
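The redirect-with-state pattern above could look something like this (UD 2.x); the page name, button, and the "CUST-001" value are placeholders of mine.

```powershell
# Hypothetical sketch: stash state in $Session, then redirect (UD 2.x).
New-UDButton -Text "Details" -OnClick {
    $Session:SelectedCustomer = "CUST-001"   # carried across to the next page
    Invoke-UDRedirect -Url "/CustomerDetails"
}

# In the target page's endpoint, read it back:
#   $Customer = $Session:SelectedCustomer
#   (then render it, or use it in another REST call)
```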