Change port for a single endpoint or the admin console

Product: PowerShell Universal
Version: 3.9.15

Would it be possible to change the listening port for a single endpoint?

The reason for my question is that we are currently using port 443 with IP restrictions, but now I need to create an endpoint that I cannot IP whitelist because the source IP is not known.
I would like to avoid opening port 443 to any address, since that would expose the admin console as well.

Maybe it could also be solved by changing the port for the admin console, but I cannot see how that is possible.

Hope someone might help. :slightly_smiling_face:

Can you clarify what you mean by endpoint here? Do you mean app/dashboard?
If you do, I doubt you could configure separate apps independently in this way unless you had separate instances of PSU. But it entirely depends on your hosting/setup: if you're using something like IIS you could maybe use rewrite rules depending on the web address, and you could do something similar in the cloud with a Front Door or Application Gateway layer in front of the web app.
For Linux web apps you can also use environment variables to configure the ports that PSU uses.
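For reference, here's roughly what that looks like, assuming PSU follows standard ASP.NET Core configuration binding - the variable names are assumptions on my part, so check the PSU docs for your version. On an Azure Linux web app these would be set as app settings rather than in a script:

```powershell
# Hedged sketch: listening-URL overrides via environment variables, assuming
# standard ASP.NET Core configuration binding. Shown as process-level
# variables for illustration; on a web app they'd be app settings instead.
$env:ASPNETCORE_URLS = 'http://*:8080'

# Double underscores map to nested appsettings.json keys, so this targets
# Kestrel > Endpoints > HTTP > Url:
$env:Kestrel__Endpoints__HTTP__Url = 'http://*:8080'
```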

No, I don't mean apps/dashboards - I mean API endpoints.

It would be nice if it were possible to solve this without creating rewrite rules - I don't currently use IIS.

ahh, gotcha, then I misunderstood.
It depends on what level of whitelisting you want. If this needs to be server/web host configuration, you'll only be able to do it globally unless you have separate instances of PSU, because you're applying it at that level and all API endpoints are hosted under the same service. It would still be useful to understand more about your setup: if you're not using IIS, are you hosting in Azure web apps?

If you want, you could probably achieve this in code by checking the client IP within the endpoint, and rejecting the request / returning nothing or an error if it doesn't match your rules. Of course you'd need to define that code in each endpoint, but you could modularise it.
But that's not 'true' whitelisting in the sense of the connection being rejected before it hits the application; it all depends on your requirements, I guess.

For the latter option, the variable $RemoteIpAddress will hold the client IP address.
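Something along these lines in endpoints.ps1 should do it - a minimal sketch, assuming a New-PSUEndpoint / New-PSUApiResponse style definition; the allow-list addresses are placeholders and the parameters are worth checking against the docs for your PSU version:

```powershell
# Hedged sketch: reject requests whose source IP is not on an allow-list.
New-PSUEndpoint -Url '/restricted' -Method 'GET' -Endpoint {
    # Placeholder allow-list; replace with your own addresses.
    $allowedIPs = @('203.0.113.10', '198.51.100.25')

    # $RemoteIpAddress may be an IPAddress object, so compare it as a string.
    if ("$RemoteIpAddress" -notin $allowedIPs) {
        New-PSUApiResponse -StatusCode 403 -Body 'Forbidden'
        return
    }

    # Actual endpoint logic goes here.
    'Hello from the restricted endpoint'
}
```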

I am running PSU as a Windows Service installed by the MSI package.

My main objective is to protect the admin console so it is not exposed externally. Even if I add IP verification checks in the endpoint code itself, I still need to open my firewall to PSU for everything, and then the admin console will be exposed as well.

I can see it is possible to hide the console through the appsettings.json file, but that is not exactly what I want; it would be nice to still be able to access it.
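For reference, what I found looks roughly like this - though the HideAdminConsole property name is my reading of it, so verify against the PSU docs for your version (likely followed by a service restart):

```json
{
  "HideAdminConsole": true
}
```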

But hiding the console could be a workaround; it's still better than nothing.
@insomniacc, may I ask whether you run any of your PSU instances publicly without any IP restrictions? Or do you mainly use it for internal purposes?

The PSU instance I want to expose is my external-facing one, which has almost no secrets or sensitive code in it; those are stored in my internal PSU instance. But I still think it would be nice to avoid exposing the console.

I do - I run mine in an Azure web app. The org has remote users in various locations with no real VPN solution in place, so it's hard to lock it down via a whitelist.
It's not currently used in any context outside of internal business use, so it doesn't have much exposure, and I'm comfortable with the application security in place (OIDC auth + MFA, roles, etc.). There's also nothing really sensitive in the platform. We use Key Vault and Azure SQL, which are restricted by firewall rules.
As it's still in its infancy, I kept the login form enabled alongside OIDC for times when I needed a backup method to get in, but I'll be switching it off so that OIDC is the only way into the admin console and the login form is disabled.

I'm not actually staying with my current org, but longer term, if scaling the solution, my plan would have been to sit it behind something like Application Gateway / Front Door, lock everything into private VNets, and potentially use whitelisting, with specific routes/paths for each of the dashboards/apps depending on the external client accessing it and the type of data that's held. That way I'd also be able to redirect endpoints to a static maintenance page if I needed to take the whole thing offline for upgrades, maintenance, etc. (deployment slots would have been another option to explore). Currently I haven't been considering an HA setup, as there's no immediate requirement for that internally, so we're just running a single-node setup with a storage account.