Hi, but doesn’t that mean you’re going to double the number of API calls?
Yes, but the problem is that all the API runspaces are currently monopolized by the long-running requests. These API calls would return very quickly, allowing the runspace pool to process new requests.
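For context, the split being described would look roughly like this. This is a minimal sketch, not the actual config: the `/work` URLs are made up, and using `Start-Job`/`Get-Job` for the queue is an assumption (these definitions only run inside a PowerShell Universal `endpoints.ps1`):

```powershell
# Hypothetical "request" endpoint: queues the work and returns immediately,
# so its runspace goes back to the pool right away.
New-PSUEndpoint -Url '/work' -Method POST -Endpoint {
    # Stand-in for the long-running work (assumed job-based queue).
    $job = Start-Job -ScriptBlock { Start-Sleep -Seconds 300; 'done' }
    @{ jobId = $job.Id }
}

# Hypothetical "response" endpoint: the second call. It polls for the
# result and also returns quickly whether or not the job has finished.
New-PSUEndpoint -Url '/work/:id' -Method GET -Endpoint {
    $job = Get-Job -Id $Id -ErrorAction SilentlyContinue
    if ($null -eq $job) {
        @{ status = 'unknown' }
    }
    elseif ($job.State -eq 'Completed') {
        @{ status = 'done'; result = ($job | Receive-Job) }
    }
    else {
        @{ status = $job.State }
    }
}
```

The client would POST once to get a job id, then GET `/work/<id>` until it sees `status = done`.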
I don’t think it’s going to be a good idea:
- It’s a breaking change to how the API works (anyone wanting to upgrade from an old version to a new one would have to rewrite all their endpoints).
- Debugging this will be HARD (what do you debug, the response endpoint or the request endpoint? And how do you debug the internal job queue?).
- Maintenance is going to be hard af (change or remove one endpoint and you have to change or remove the response endpoint too).
As I see it, at that point APIs are just jobs with a URL.
maybe I’m wrong, idk.
You’re right in supposing I’m making calls from an environment where splitting them up won’t work.
It seems playing with timeouts might produce interesting results. If I were to tighten up the pretty loose timeouts I currently have in place, I’m wondering whether it would be best to do so on the API-calling client side or in the New-PsuEndpoint definition. If the client cuts off the call vs. PU cutting off the call, which event will return the runspace to availability faster?
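The two knobs would sit in roughly these places (a sketch; the 30-second values and the URL are illustrative). My understanding, worth verifying, is that the client-side `-TimeoutSec` only stops the *client* from waiting, while the endpoint script keeps running on the server until PSU’s own timeout kills it, so the server-side setting should be the one that actually frees the runspace:

```powershell
# Client side: stop waiting after 30 seconds (value is illustrative).
# This releases the client, but the endpoint's runspace keeps running.
Invoke-RestMethod -Uri 'http://localhost:5000/work' -Method POST -TimeoutSec 30

# Server side: PSU aborts the endpoint script after 30 seconds,
# which is what actually returns the runspace to the pool.
New-PSUEndpoint -Url '/work' -Method POST -Timeout 30 -Endpoint {
    Start-Sleep -Seconds 300   # stand-in for the long-running work
}
```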
This is what we do and it works well