Just curious whether anyone can think of pros/cons of having code live in a script versus directly in the API. Previously, we had the API call Invoke-PSUScript
to run the script containing the code, but I think that just adds another “layer” of complexity by having to trigger the script that way, as opposed to running the same code directly in the API without needing to execute a script. Can any of you think of reasons to keep the process as originally designed?
The only real “con” I can think of is that this would potentially make the endpoints.ps1 file huge and lengthen PSU service startup times, since the service would have to load hundreds of lines of code rather than just a handful of lines calling scripts.
I’ll throw another wrench into this for you: we try to put as much as possible in modules, since we have functions that are reused between API endpoints and scripts.
One advantage of this is that both our scripts and our endpoints files are smaller now.
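To illustrate, here is a minimal sketch of that layout. The module name MyOrgTools and the function Get-ClientDevice are hypothetical; the point is just that the shared logic lives in one place:

# MyOrgTools.psm1 (module name hypothetical)
function Get-ClientDevice {
    param(
        [Parameter(Mandatory)]
        [string]$ClientId
    )
    # Shared lookup logic lives here once, instead of being copied
    # into every endpoint and script that needs it.
}
Export-ModuleMember -Function Get-ClientDevice

Both the endpoint body and the script then shrink to a one-liner like Get-ClientDevice -ClientId $ClientId.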
Pretty much all we use PSU for (so far) is as middleware, to “connect” systems in the ways we need. Most of the functions we use come from modules that vendors have made available for their products, such as Hudu, Microsoft Azure, etc. The scripts we use them in do things like pulling specific items from Hudu for use in accessing XML or REST APIs on client devices (such as firewalls) to download their configurations. Others do things like importing AD users into Hudu to associate them with the client entry, so they can be tied to their respective devices, etc. Some of the scripts, as I mentioned, are hundreds of lines long. I could technically put all of that code directly in the Endpoint (API) in PSU, or keep it in a separate .ps1 file. I’m just trying to weigh the pros and cons of each scenario.
I don’t think there’s necessarily a right or wrong way to do this. Just depends on your needs and your use case. A couple reasons you might have the endpoint trigger a script:
- Each run of a script produces a job that can be more easily monitored for failure states. Jobs also retain the console and pipeline output for easy review if a job fails. Endpoints have a log as well, but it’s not as easy to review/manage in my opinion.
- When the endpoint triggers a script and that script run fails for any reason, you can address the failure cause and then easily re-run the failed script job using the same parameter input.
- Having the endpoint trigger a script allows asynchronous execution of the processing while returning a timely response to the client that sent the API request (see the sketch below). If you have long-running processes, you may not want to process the request synchronously and risk the client timing out on the API request.
These are all reasons I’ve used the Endpoint + Script approach in the past. But it does add a layer of complexity that may not be warranted in simpler use cases.
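As a rough illustration of the asynchronous pattern, here is a minimal sketch. It assumes a script named Backup-DeviceConfig.ps1 whose DeviceId parameter surfaces as a dynamic parameter on Invoke-PSUScript; the script name and parameter are hypothetical, and parameter names may vary slightly by PSU version:

New-PSUEndpoint -Url '/api/device/backup' -Method 'POST' -Endpoint {
    param($DeviceId)
    # Queue the long-running work as a job and return immediately,
    # so the HTTP client is not left waiting on the processing.
    $job = Invoke-PSUScript -Name 'Backup-DeviceConfig.ps1' -DeviceId $DeviceId
    @{ jobId = $job.Id; status = 'queued' } | ConvertTo-Json
}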
I take a balanced approach to this; like others have mentioned, it’s very much a case-by-case decision.
First off, I do not keep anything in endpoints.ps1 except the declaration of the endpoints. Create separate files for the code of every endpoint, and put them in a folder inside the .universal folder. Declare your endpoints like so:
New-PSUEndpoint -Url '/api/systemName/endpointName' -Description 'My description' -Method @('GET') -Authentication -Role @('Administrator', 'MyThirdPartyAPIRole') -Path 'Endpoints\systemName\endpointName.ps1' -Tag 'SystemName' -Documentation 'SystemName'
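For reference, that declaration assumes a repository layout along these lines (folder and file names taken from the example above):

.universal\
    endpoints.ps1
    Endpoints\
        systemName\
            endpointName.ps1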
I wouldn’t consider this much of an abstraction, as you’re still maintaining the code “close by.”
Second, I like to abstract out the code that is common across all of your APIs for handling data structure and formatting. For instance, if the procedure fails, you can include an application/problem+json response. I also like to include a wrapper around API responses that carries the count, pagination details, etc.
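As a rough sketch of the error-response half of that, here is one way to emit an RFC 7807 problem response from an endpoint. The helper name and body fields are hypothetical; New-PSUApiResponse is the PSU cmdlet for returning a custom status code and content type:

function New-ProblemResponse {
    param(
        [int]$StatusCode = 500,
        [string]$Title,
        [string]$Detail
    )
    # Build an RFC 7807 problem-details body and return it with the
    # matching status code and content type.
    $body = @{
        type   = 'about:blank'
        title  = $Title
        status = $StatusCode
        detail = $Detail
    } | ConvertTo-Json
    New-PSUApiResponse -StatusCode $StatusCode -ContentType 'application/problem+json' -Body $body
}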
The body of your endpoint code can then become as simple as this:
# comment based help
# parameter declaration
$NewPUDApiResponseSqlParam = @{
    'ServerInstance'              = 'MyServerInstance'
    'Database'                    = 'MyDatabase'
    'Query'                       = 'select * from dbo.MyTable'
    'Credential'                  = $SQLInfo.Credential
    'UseResponseBodyWrapper'      = $true
    'ReturnEmptyArrayOnNoResults' = $true
    'ErrorAction'                 = 'Stop'
}
$Response = $null
$Response = New-PUDApiResponseSql @NewPUDApiResponseSqlParam
# Logging
$Response.Response
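For context, the wrapper object a helper like that returns might look roughly like this; the field names are purely illustrative, not the actual output of my function:

# Illustrative shape only; actual field names depend on your wrapper.
@{
    count = $results.Count
    page  = 1
    value = $results
}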
Third, there are three reasons why I would abstract the data gathering or processing to another function:
- The logic is complex enough that I want to be able to easily test it without running through the API.
- The code needs to be re-used somewhere else.
- It’s a SQL call; use a stored procedure where it makes sense, which it often does, to help prevent SQL injection attacks. While I’m able to write dynamic SQL that avoids SQL injection without a sproc (I have a custom-written Invoke-SqlCmd2), even then I prefer to have a sproc.
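As a minimal sketch of that last point, here is a parameterized stored-procedure call using System.Data.SqlClient directly. The connection string, sproc name, and parameter are all hypothetical:

$connectionString = 'Server=MyServerInstance;Database=MyDatabase;Integrated Security=True'
$conn = [System.Data.SqlClient.SqlConnection]::new($connectionString)
$cmd = $conn.CreateCommand()
$cmd.CommandType = [System.Data.CommandType]::StoredProcedure
$cmd.CommandText = 'dbo.GetMyTableRows'
# Parameters are passed separately from the SQL text, so user input
# is never concatenated into the statement.
$null = $cmd.Parameters.AddWithValue('@ClientId', $ClientId)
try {
    $conn.Open()
    $table = [System.Data.DataTable]::new()
    $table.Load($cmd.ExecuteReader())
    $table
}
finally {
    $conn.Dispose()
}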