Product: PowerShell Universal
Version: 3.2.8
Is there a way to force an instance of PowerShell Universal into ‘read-only’ mode, as if it were doing a one-way sync from git, without actually configuring a git sync?
What is your end goal here? To prevent changes from being made through the PowerShell Universal admin console?
Correct - the goal would be to prevent someone from editing anything in the admin console.
Details: We are using source control for our code; however, we’ve got a custom multi-stage process, including some tests and approvals, that means the basic ‘git sync’ doesn’t fit well, and we’d like to ensure all changes follow our deployment process.
I’m also curious why/how you cannot do a one-way git sync? It seems like exactly what you need. Really, your process can be whatever you want or need; you would just need to have a repo getting the changes committed at the end of your process…
Essentially the problem is that I don’t fully trust the people updating [parts] of the code not to make stupid mistakes and errors. The process we’re currently following goes something like this:
Code Changes → PR with Policy Enforcement → Merge → Deploy to QA → Automated Tests → Approve → Deploy to Prod.
Stated another way - we want the current code in ‘master’ to be live on QA and then tested before it is live on Production.
Unless I’m missing something, with one-way sync there is no way to have the code that is committed to master ‘live’ on the QA system before it is ‘live’ on the production system. Using one-way sync, both systems would get the commit to master at the same time, without any approval or review.
I can see how this might be accomplished if the sync could filter on ‘tags’ or ‘labels’ rather than just on branch, but as far as I know that isn’t supported today.
Having the changes committed at the end of the process doesn’t work, because it introduces the likely possibility that what gets committed isn’t what was tested/approved/reviewed - either because additions get lost, or due to merge errors/conflicts.
So, two things come to mind:
Get off of master and structure your branches like “QA” and “Prod”. Restrict who can commit to Prod (you and only you). After QA passes, you can commit the changes to Prod, which will one-way sync to your Production instance of PSU.
Managing a branch protection rule - GitHub Docs
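As a rough sketch of this option (using a throwaway local repo in place of your hosted one, so the file names and commit messages are just placeholders): a fast-forward-only merge makes git refuse to create any new merge commit, so the commit that lands on Prod is exactly the commit that passed QA.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Throwaway repo standing in for the hosted one; Prod is the default branch.
git init -q -b Prod repo && cd repo
git config user.email qa@example.com
git config user.name "Example User"
echo 'Get-Date' > app.ps1
git add . && git commit -qm "initial release"

# Day-to-day changes land on QA; Prod is protected.
git checkout -qb QA
echo 'Get-Process' >> app.ps1
git commit -qam "change that passed QA"

# Promotion step: --ff-only refuses to invent a merge commit,
# so Prod ends up pointing at the exact tested QA commit.
git checkout -q Prod
git merge -q --ff-only QA
```

If Prod has somehow diverged from QA, the `--ff-only` merge fails loudly instead of silently producing an untested merge result.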
Or…
Create another repo that only you have access to. One way sync. Once QA passes, pull everything from QA into the Production repo.
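A minimal sketch of the second option, with two local bare repos standing in for the hosted QA and Production remotes (the repo and file names are invented for illustration):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Bare repos standing in for the hosted QA and Production remotes.
git init -q --bare -b main qa.git
git init -q --bare -b main prod.git

git clone -q qa.git work && cd work
git config user.email qa@example.com
git config user.name "Example User"
echo 'Get-Date' > script.ps1
git add . && git commit -qm "change that passed QA"
git push -q origin HEAD:main

# Once QA signs off, push the exact tested commit to the production
# repo; the production PSU instance one-way syncs from there.
git push -q ../prod.git HEAD:main
```

Because this pushes a commit rather than re-merging anything, the production repo receives the same SHA that was validated in QA.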
I personally would choose the first option…
If these aren’t options, then technology cannot fix a broken process lol.
You may really need 3 steps, depending on how many people are touching things: Dev > QA > Prod. No direct commits to QA. QA exists for a more thorough investigation and review than Dev. Once QA is confirmed, it gets pushed to Prod.
I’m familiar with branching and protection rules; we already have feature branches and ‘QA’, but instead of ‘prod’ we have ‘master’. The name is unimportant, really; what is important is that active development isn’t happening directly in the branch that will be deployed to the server.
I’m well aware of this - I’ve been building and deploying code for my company for 15 years, I was automating builds, tests, and deployments well before it was considered ‘cool’, or standard practice.
Neither of these suggestions resolves the issue that I’m trying to address…
I want a chance to see the code in “master” running someplace other than production, before it is deployed in production.
Even if I’m the only one who can commit to Prod (or master - the name of the branch really isn’t important) and I perform the merge from QA to Prod, that still allows for the fact that I might make a mistake during the merge, and that mistake would go directly into production without any delay or ability to catch it.
Call me paranoid and lazy if you want, but I’m working with a super green team of Indian contractors, and to be honest I don’t want to have to be the one to babysit every single merge, I want them to be able to do the merge, have the process expose any mistakes, and allow them to see the mistakes before the code goes live in production.
Nothing is stopping you from having a QA instance of PSU running off of the QA branch, allowing you to see it and test it.
This grief really stems from the fact that you have “QA” and “QA Deluxe”.
It really seems like you need 3 steps. You need Dev, for active development. Your developers need to work off of this branch, merging into it… The wonderful thing about PSU is that they can run an instance locally while working off of ‘dev’… After an initial validation, they then need to merge changes to QA. I recommend this happen on some sort of cadence - daily, weekly, whatever you prefer. This should not be an environment that can change 5 minutes before you commit QA to Prod. Finally, good old reliable Prod, which has been fully tested in QA.
Your suggested process still doesn’t stop a bad merge from going live directly on the production server. It still requires a merge from QA → Prod, and then Prod goes live with no validation. There is no reason not to have a sanity check after the merge, before the updates go live.