Practical question: 2 environments, how do you manage your GIT repositories?

Product: PowerShell Universal
Version: 5.4

I have 2 PSU instances that each exist in a separate (Windows/domain) environment:

  • Prod
  • Dev

The idea is to have 1 repository in GitLab with 2 main branches:

  • The Master branch serves our PSU instance in the Production environment
  • The Dev branch serves our PSU instance in the Development environment

Each environment has its own characteristics and setup. For instance, I have 2 separate O365 environments, which means that the claims for roles are different in Dev than they are in Prod. More broadly, most of the .universal folder would differ between Dev and Prod.

.gitignore could be a solution to exclude these files or the entire .universal folder, but it comes with the following issue.

First, in Dev, I create a branch dev-dashboard2.
Next, in PSU Dev, I create a new application/dashboard named dashboard2.
By doing so, the file .universal\dashboards.ps1 is modified and a new line is added. I write my code, test it, and eventually merge dev-dashboard2 into the Dev branch.

Since I want this Dashboard2 now to be available in Production, I would merge branch Dev into Master.

But, since I have excluded the entire .universal folder, dashboards.ps1 does not get merged into Master.
That means only the application code that sits in a folder under dashboards gets merged to the prod (master) branch.

Since there’s no reference to this dashboard/folder in the .universal\dashboards.ps1 file, the application won’t be visible in the Prod PSU instance.

What I’m asking is, how do you manage 2 instances of PSU with GIT?
Do you have 2 separate repositories? That would mean that at some point I have to copy-paste the final code from Dev to Prod. It feels like that defeats the purpose of versioning.

I wouldn't exclude the entire .universal folder - that just means you won't sync your Prod's .universal folder either, and that's risky. You can exclude individual files that need to be unique to each instance, but not with .gitignore. For example, I exclude the authentication.ps1 file because my Dev environment is not set up to use SAML like Prod is, so I can't have it overwritten without breaking Prod. Instead, I use .git/info/exclude to exclude authentication.ps1 on the Dev server and keep the file from being tracked locally. It uses the exact same syntax as .gitignore but is a local-repo-only exclusion - it doesn't get synced to other repositories like anything in the .gitignore file would. If you need to keep track of all of the files on Dev too, though, this wouldn't be an option.
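Roughly, the local exclusion looks like this (a minimal sketch - the path assumes authentication.ps1 sits under .universal at the repository root, so adjust to your layout):

# contents of .git/info/exclude on the Dev server (local-only, never pushed)
# same pattern syntax as .gitignore
.universal/authentication.ps1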

You could try using Git's "cherry-pick" functionality to only merge specific commits, but that could depend on what all is included in each of the commits and what all would be modified.
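For reference, cherry-picking a single commit from Dev onto master looks roughly like this (the hash is a placeholder):

# on the master branch, pull over just the one commit from Dev
git checkout master
git cherry-pick <commit-sha-from-dev>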

Another option is called Merge Drivers. You can see an example of how to use them here, though there's a comment that says they only "trigger" when there's a conflict during a merge, and that if only 1 side made a change there would be no conflict and thus the custom merge driver would not be used.
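As a rough, untested illustration of the merge-driver idea: Git's built-in "ours" driver can be wired up so a given file always keeps the current branch's version during merges - with the caveat from those comments that it only kicks in when the file actually conflicts:

# .gitattributes in the repository
.universal/authentication.ps1 merge=ours

# one-time Git config on each machine: define the "ours" driver as a no-op that keeps the local version
git config merge.ours.driver true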

If you choose to use the .git/info/exclude option, one additional thing you'd need to do for any files that were previously tracked and that you now want excluded is to clear the Git cache for each file via the git rm --cached <filename> command.
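In other words, something along these lines on the Dev server (the path is just an example):

# stop tracking the file without deleting it from disk, then commit the removal
git rm --cached .universal/authentication.ps1
git commit -m "Stop tracking authentication.ps1"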

According to the comments on that page, more than 1 person mentions that you need a conflict in order for it to work. I will test it, but if that's the case, this isn't a good solution.

The other option, .git/info/exclude, seems more promising. I will set up a demo for this as well.

git cherry-pick, as I understand it, requires very strict commit hygiene, which frankly is not present here (and I speak for myself).

The consideration to fully exclude the .universal folder is only possible because we have daily Veeam snapshots running. So I can always retrieve the .universal folder from those daily backups or do a full VM restore.

But as I said in my original post, excluding the .universal folder comes with its own issues. I would be managing certain files manually. If that's the case, I might as well keep 2 entirely separate repositories, 1 per domain.
This is how I have things set up right now. The problem is that even after a few weeks, it's clear how hard it is to keep this maintained properly. The flip side is that the Dev environment is truly a playground.

I appreciate you taking the time to discuss this and offering solutions.
I can’t imagine that I’m the only one struggling with this.

Sure. Good luck. I’ll be interested to see what you end up doing.

I can share how we are doing almost exactly the same thing here in our situation too. I posted this question a while ago for the same type of situation where we have a Prod and dev environment on two branches and a different SAML connector in Okta for each environment: Prod/Dev Variable or logic

I dug around a bit more and have adjusted the logic we are using, so that it now looks like the below.

Notes for it:

  • In variables.ps1 I am using the #region PSUFooter region so that if anyone edits the vars in the UI it doesn't overwrite this part of variables.ps1.
    ** The Prod instance is a one-way sync from the main branch, so prod changes only come from Pull Request merges
  • In that region I wrote some logic to determine the environment and set it to a variable
  • If I use $PSUisProduction inside variables.ps1 then it will return the value from before this script ran - this tripped me up a bit while trying to set this up
  • We use that variable in scripts, apps, etc. to target dev or prod instances of apps we connect to - e.g. a different OU path in AD for test accounts, a different instance of ServiceNow for dev
  • variables.ps1 isn't the first script in the startup sequence, but it's before most, and earlier ones like initialize.ps1 don't have access to the variables yet
  • I am using the $Repository path variable currently as it's one of the few that exists at this point in the process, but you could also use the git branch name, environment variables, the computer name, or a value from a file that is outside the repository or in the .gitignore - e.g. an "appsettings.json" equivalent that stores the data for the decision (see the sketch after this list)
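As a rough sketch of one of those alternatives (the branch name "main" and the server name are placeholders, and it assumes git is available to the PSU service account), the same footer block could key off the current branch or the computer name instead of the repository path:

#region PSUFooter
# Alternative detection sketch - branch name or computer name instead of $Repository
# "main" and "PSU-PROD-01" are placeholders for your own branch/server names
$currentBranch = git -C $Repository rev-parse --abbrev-ref HEAD
$isProd = ($currentBranch -eq 'main') -or ($env:COMPUTERNAME -eq 'PSU-PROD-01')
Set-PSUVariable -Variable (Get-PSUVariable -Name 'PSUisProduction') -InputObject $isProd -Integrated
#endregion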

I'm still debating if $Repository is the "best" item to use for the logic as we work on some CI/CD processes around this, but it does mean we don't have to exclude any core things from the Git repo.

Code is in the expander below

Variables and Authentication.ps1 files for those interested

variables.ps1

#... Bunch of variables above here
New-PSUVariable -Name "PSUisProduction" -Value $false -Type "System.Boolean" -Description "Is this a production environment?" 
#region PSUFooter

Write-PSULog -Level Information -Message "Variables loaded - Setting Dynamic Values"

#Be careful in the use of $PSUisProduction directly after this point as it is evaluated before this script loads
Write-PSULog -Level Information -Message "PSUisProduction: Repo path: $($Repository)"
$isProdVariable = Get-PSUVariable -Name "PSUisProduction"
if($Repository -eq "D:\PSUniversal-Prod\Repository")
{
    Write-PSULog -Level Information -Message "PSUisProduction: Repo name matched Production"
    Set-PSUVariable -Variable $isProdVariable -InputObject $true -Integrated
}
else {
    Set-PSUVariable -Variable $isProdVariable -InputObject $false -Integrated
}

# Spit out in the logs what it is
$isProdVariable = Get-PSUVariable -Name "PSUisProduction" -ValueOnly
Write-PSULog -Level Information -Message "PSUisProduction: Result=$($isProdVariable)"
#endregion

authentication.ps1

#Decide prod or not based on variable set at load
Write-PSULog -Level "Information" -Message "isProduction:$($PSUisProduction), isDevelopment:$(-not $PSUisProduction), Path: $($RepositoryPath)"

#set the SAML Setup depending on the environment - prod settings by default in case we have an error lower down - we can always fix dev
$samlOptions = @{
    Type                          = "Saml2"
    #... Secrets be Here
}

if (-not $PSUisProduction)
{
    $samlOptions = @{
        Type                          = "Saml2"
        #... Secrets be here
    }
}

# Set the Saml2 authentication connector
Set-PSUAuthenticationMethod @samlOptions

#... Other auth below here

Thanks for chipping in.
Let's see if I understood this correctly. So basically you're using variables.ps1 to detect whether PSU is in Prod or Dev. It checks the $Repository variable to set a $PSUisProduction variable. This variable then dictates which configuration to choose in your authentication.ps1, which has different settings for each environment (SAML or not). This keeps Prod and Dev separate but allows you to keep .universal within git? Is my understanding correct?

Assuming I understood it correctly, a follow-up question:
Let's say you create 3 more dashboards in DEV, but at a certain moment you only want to promote 1 to production.

Your dashboards.ps1 would have 3 additional references in it, but you would only promote 1.
How do you tackle that? Do you create a branch for each dashboard? I can see how that would work, because you can then 'cherry pick' by only merging that branch. Each branch would have just one additional line relative to the master, right?

But for things like roles.ps1, tags.ps1, scripts, terminals, even variables - would they all need to have the PSUisProduction check?

no probs at all on the chipping in :slight_smile:

Your understanding is exactly how we use it, yes. We use the $PSUisProduction variable wherever we want to do things differently between our production/dev servers. This is predominantly things like different systems or secrets between systems we connect to, or hiding/showing parts of an app in Dev - e.g. we show a debug footer on some apps in dev that exposes a lot of data that's noise to an end user but useful as we build, like the page variables.

In the scenario you're describing there with dashboards, etc., we haven't yet struck that scenario specifically. We do use feature branches to build and test, and are able to switch branches on the development environment to test different use cases. Our dev branch is predominantly for testing the ready-to-go-to-production aggregation (almost like a release branch), and we only have a couple of people working on the system so it's easy to coordinate who's testing what.

That said, a few ideas do come to mind if the Dev branch has to have stuff merged in (which is totally something I've done too):

  • You could use the variable in roles.ps1, tags.ps1, etc. in the same way, but you may need/want to put your bespoke code in the untouched footer region so the UI never overwrites it. E.g. you could have the "production" stuff above that, then add the additional dev bits in the footer behind environment-detection logic (see the sketch after this list). I've tried to limit logic in the system files - from memory they won't show in the settings UI in this situation.
  • The setup we use that's probably most like the dashboard example is that we have some roles that only contain users in the development environment, and we then use the role features of PSU to hide things in production.
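For instance, a footer-region sketch along those lines (the role name is made up and the policy is deliberately trivial - just to show the shape):

#region PSUFooter
# Dev-only additions live below the footer marker so the settings UI doesn't overwrite them
# $PSUisProduction is the variable set up in variables.ps1
if (-not $PSUisProduction)
{
    # Hypothetical role that should only exist on the Dev instance
    New-PSURole -Name "DevOnlyTesters" -Description "Dev-only role for in-progress apps" -Policy {
        param($User)
        $true
    }
}
#endregion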

Using Roles - example using dashboards

  • In our roles.ps1 we have a role called AppTester - on all environments
  • In our SAML Connectors we have a number of groups assigned to the app in Okta that pass in roles, and in the roles file we use a simple claims match
  • Dave is in the App Tester group that's attached to the Dev connector in Okta, so when I log in to Dev I get that claim
  • That group is not attached to the production Connector in Okta, so when I log in to Production I don't get that claim
  • In the config of the dashboard in dashboards.ps1 we then use the Roles assignment to assign the dashboard that's not ready for production to AppTester (sketched after this list)
  • When I log in to prod I can't see that dashboard (no one can); when I log in to dev I can
  • When it's time to release the dashboard I can change the dashboard's assigned roles and PR it to Production
  • We can use the roles functionality built into PSU to show/hide dashboards, portal scripts, parts of pages, etc
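Put together, a minimal sketch of that pattern (the claim type/value are placeholders for whatever your Okta connector actually sends, and the dashboards.ps1 parameters may differ slightly by PSU version):

# roles.ps1 - AppTester only matches if the identity provider passed the claim
New-PSURole -Name "AppTester" -Description "Can see apps that are still in testing" -Policy {
    param($User)
    # placeholder claim type/value - match whatever your connector sends
    $User.HasClaim("http://schemas.microsoft.com/ws/2008/06/identity/claims/role", "AppTester")
}

# dashboards.ps1 - the unreleased dashboard is restricted to the AppTester role
New-PSUDashboard -Name "Dashboard2" -FilePath "dashboards\Dashboard2\Dashboard2.ps1" -Role "AppTester"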

I'm certain that's not the only way to do this either, but hopefully I haven't written too much and bored you to death :stuck_out_tongue: (it is Saturday morning here and the coffee is just starting to kick in)

No worries, I have a newborn, I am on a constant coffee spree myself :sweat_smile:.

So your single repository serves both PSU servers. It either functions for the dev server, where you are experimenting (DEV branch), or it serves the prod server by being hooked up to the master branch.

You make sure that each gets the correct configuration based on a variable system; $PSUisProduction is used to help with that. If $PSUisProduction is $true, use these variables and these secrets.

You define roles based on claims, and these can be either: claims to groups in prod or in dev (or possibly even both for the same group).

When you are in PSU that is synced up to the DEV branch and PSU goes to get its claims, it only finds the dev claims, effectively granting you access to those scripts and dashboards.

If you are on the PSU instance that is hooked up to the master branch, you get those roles and the apps and scripts that go along with it.

When you have a dashboard in DEV that is completed and ready for master, do you then do a cherry-picked merge for that dashboard?

Another interesting read I've found was this: SUBMODULES: A git repo inside a git repo. - DEV Community

Not sure if it would help in my case. Need to read more to fully understand.

The rest of it is on the money; the only difference is in this bit.

Our main branch is what the production server runs from - it is Gitsynced using one-way sync, so whenever the main branch updates the server gets the code in a few minutes and is running the new version

Our develop branch is what the development instance is hooked up to. It is not set for Git syncing in the PSU setup at the moment; we update the settings/code and then make commits.

We move the whole develop branch to main via a GitHub Pull Request rather than cherry-picking commits, so we know that the main branch has all the things we tested before moving to production.
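In git terms, the loop described above is roughly (branch names as described; exact commands may vary with your hosting setup):

# on the Dev server: commit work to the develop branch
git checkout develop
git add .
git commit -m "Add Dashboard2 and its dashboards.ps1 entry"
git push origin develop

# then raise a Pull Request from develop into main in GitHub/GitLab;
# once merged, the Prod instance's one-way git sync picks up the change automatically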


Right, reading back your post and my own, you don't need to cherry-pick.
In my previous example of the 3 dashboards, say only 1 was ready for PROD.
Because of the way you work with the variables, unless a role is assigned to them in PROD, users would never see them.

So going over it in my head:

The benefits are that everything exists in 1 repository and the two instances are a 1:1 copy.
The experience is only different due to the system with variables.

The main drawback I see is that:

  • Configuration is hidden away in .universal files.
    It requires team members to really know PSU and how it works.
  • We have to hope that, with an update to say PSU v6, this all keeps working.

Yeah, I get what you mean there. It's probably more hidden in the identity provider in my example, in a way - e.g. the group/members are not passed to production, so the role is there in PSU prod, but empty.

I’m sure there’s other ways too - eg I really wish there was a way to set a variable in appsettings.json - outside of source control - and then I could use that in any of the universal files to set things up - but you arent wrong about needing to have some PSU knowledge - Im learning every day :slight_smile: