Product: PowerShell Universal
Version: 5.6.8
I am trying to create a sandbox or dev environment that closely mirrors our production cluster. At first I figured I could just create a dev database on our SQL instance, clone a dev branch of the git repository, and have the server sync from the dev branch. However…
As I started to think about it more, I realized that certain configurations (such as the schedules.ps1 file) would carry over production settings. The schedules file would tell the dev server to run production scripts using production accounts, etc.
There’s a host of similar issues I run into with this idea, which led me to think I may not be approaching this as intended. Has anyone run into this before, and, if so, what was your solution?
Thank you!
I have prod and dev.
I use a single repo/code base.
Both run under different service accounts that have segregated permissions to prod and dev SQL databases respectively.
Prod is on the main branch while dev is on the next branch.
Some elements can be persisted to SQL. Schedules, for example, as you've mentioned, are environment-specific for me. In my appsettings.json I have the following section within the Data block:
"Persistence": {
"Schedule": "Database",
"ScheduleParameter": "Database"
}
There’s a section somewhere in the docs about this, just can’t find it at the moment.
Here we go: Repository | PowerShell Universal
The bottom section is about database resource persistence. Please don’t use .gitignore to do this, lol. Just from the viewpoint that if your server dies and you have backups on SQL and/or high availability, nothing is lost. I suppose it depends on your environment, but for me it makes more sense to use this mechanism through SQL.
Ah, this is the answer I was hoping for! It seemed architecturally flawed to have these kinds of configurations so dependent on .ps1 files; they should instead be persisted as a database resource, like you said.
In my case, I want to run the process as the system account, because I’m primarily using PSU as an API bridge where each endpoint executes as a different gMSA account. I have a few scripts that I’m hoping to eventually eliminate, and their execution is defined by the schedules that run them, so as long as the schedule parameters can be environment-specific, I should be fine.
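For illustration, a schedules.ps1 entry defining one of these scheduled runs looks roughly like this. The script name, cron expression, and parameter name/value are all hypothetical, and I'm going from memory on script parameters being passed as extra (dynamic) parameters to New-PSUSchedule, so treat the exact syntax as an assumption and verify against the cmdlet docs:

```powershell
# Hypothetical schedules.ps1 entry (requires the PSU server/module to run).
# "Sync-Users.ps1", the cron string, and -TargetDomain are placeholders.
# With "ScheduleParameter": "Database" set, the parameter value can differ
# per environment in SQL instead of being hard-coded here for every node.
New-PSUSchedule -Script "Sync-Users.ps1" -Cron "0 2 * * *" -TargetDomain "corp.example.com"
```

With a single repo synced to both prod and dev, this is exactly the kind of value you'd want living in the database rather than in the branch.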
EDIT: Looking at this again, I already have my production cluster’s appsettings.json file configured as such:
"Persistence": {
"Schedule": "Database",
"ScheduleParameter": "Database"
}
So I’m not sure why my schedules are still looking at the .ps1 file?
Thanks!
Just remove schedules.ps1 entirely from your production nodes and it should continue to read from SQL. I think (I may be wrong) that when I migrated, before I’d added anything to SQL, starting up my PSU instance still executed the schedules.ps1 script and therefore replicated everything into SQL. It will probably continue to do that unless you remove the file. (This was a while back and I haven’t tested since, so I’m assuming that’s still the behaviour.)
One thing I do use .gitignore for is authentication.ps1, to ensure that no secrets end up in git, though there may be better ways of doing that now.
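For anyone following along, that .gitignore entry would look something like this (the file name comes from the post above; the comment is mine):

```gitignore
# Keep authentication.ps1 (and any secrets it references) out of git;
# maintain it directly on each server instead.
authentication.ps1
```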
Do you have your git sync configured as a two-way sync? How do you make changes to your scheduled script parameters, since I don’t see any way to do that in the database?
I use one-way sync in prod. While this makes most things read-only through the UI, certain resources such as schedules can still be created manually in the UI and updated in the database.
For example, I wouldn’t be able to create scripts, apps, or APIs.
Oh really? I recall certain aspects of schedules being modifiable in the UI, but the schedule itself still had to be created in the schedules.ps1 file… I’ll have to give that a shot again sometime.
I think if you switch it to database persistence, it will open those elements up to being modifiable, since they’re no longer defined in code. In my prod instance I can create new schedules and modify all properties of existing ones.