Git database vs external

Product: PowerShell Universal
Version: 4.2.17

Hi :slight_smile:

We have a bunch of different PSU servers and containers running, and most of them are configured to use external Git repositories.

But we recently switched our test systems to the internal Git (database) option instead, and it seems to work and eliminates the need for external Git repositories.

What are your thoughts / experience with this setting?

I’m not sure I understand what you mean. What do you mean by “internal Git”? Are you referring to the option to keep the repository inside of the database using the toggle titled “Bundle Git repository in Database” on the Git Settings page?

Hey,

Yes - either use external Git services (GitHub, etc.) or the “Bundle git repository in database” option. Sorry for the bad explanation.


I haven’t used it myself, as we only have a single PSU server. We sync to an external Azure DevOps repo.

My question for you, though, is: if you’re currently using an external repository, how would you keep that repository updated after moving to the “Bundle git repository in database” feature? Would you set up a manual sync on the “main” PSU server, using something like a cron job to run a script every x minutes/hours/days? Or would you remove the external Git repository altogether if you moved to using the bundle?

Just chiming in. The “Bundle git repository in database” option is pretty simplistic; you should just consider what you are trying to achieve. The way it works is that the remote becomes the local git bundle. The git sync runs a git bundle create command; you can think of the bundle as a ZIP file for the git repo. Once this happens, it updates the database with the bundle contents.

We’ve seen issues with large git bundles in a DB where the update can cause problems because of the sheer size of the contents (mostly history and module binaries).

The process is pretty much:

git bundle create

We grab the git bundle contents and store it in the database.

On the other nodes we save the git bundle as a file and then run:

git remote add origin <bundle path>
git pull origin <branch>
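
If it helps to see it end to end, a concrete version of that flow would look something like this - the path and branch name are just for illustration, and PSU runs all of this internally, so you never type these yourself:

# On the "main" node: bundle the full repo (refs + history) into a single file
git bundle create C:\temp\psu.bundle --all
# PSU then reads that file and stores its contents in the database.

# On another node, after PSU has written the bundle back out to disk:
git remote add origin C:\temp\psu.bundle
git pull origin main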

Thanks for the questions / updates :slight_smile:

We tend to use a PSU server / container per “solution” - so we have one for our self-service (basic ITSM) portal, one for our helpdesk tool, one for our SQL management tool, etc.

The reasons we want to use Git are:

  1. When we move the solution to a new server / move the container to another host, the solution just works again, because it pulls the Git repo (environments, etc.) and uses the SQL database variables

  2. To be able to scale out the servers, since they also just pull the Git repo and use the same SQL database.

  3. To have a third-party tool scan our code for vulnerabilities - we have an existing solution in place for all our Git repos, so PSU is just included in that by default.

How would you do those tasks with a completely internal Git repo? PSU’s repository is not publicly exposable (at least, not as a normal Git repository), so you’d have no way of scanning anything within it unless it was being done on the PSU server directly.

The third point is optional (forgot to mention that). It’s a nice-to-have feature, if it doesn’t have too many downsides.


My response may be a little off-topic, but I’m just interested to see what other people are doing with PSU and Git, because the way PSU uses Git has always seemed a little backwards to me. Making everything from config changes to script changes in PSU, and having those committed from PSU to Git, is the opposite of how my brain understands the workflow should go.

For what it’s worth, we started publishing to our internal GitHub repo and building GitHub Actions workflows to deploy our code to the PSU servers. We have two branches, dev and main. When a commit is pushed to the dev branch, we publish the code to our dev/test server. Once we confirm everything is working fine there, we publish to main and the code deploys to our production server.
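
The branch-to-server mapping in the workflow is trivial; conceptually, the deploy script just does something like this (the server names here are placeholders, not our real hosts):

# GITHUB_REF_NAME is set by the Actions runner to the branch that triggered the workflow
$targetServer = switch ($env:GITHUB_REF_NAME) {
    'dev'  { 'psu-dev01' }    # dev branch  -> dev/test server
    'main' { 'psu-prod01' }   # main branch -> production server
}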

We just have the 2 servers right now, but may have more in the near future.

I understand the idea of having the config stored somewhere, as that seems to follow the idea of Infrastructure as Code more closely. But I don’t like the idea of lumping the config together with the individual scripts that we’re running.

And I mainly wanted to have the separate test/development server because, the one time I was building some Apps in PSU while I was still learning, mistakes would often bring the PSU service to a halt and I’d have to stop the app and/or restart the service. Since we have many important things running in production now, I didn’t want mistakes like that to take our prod box down.

Just to make sure you’re aware, you can set the mode in PSU’s Git sync to be “Two-Way” (pull and push), “One-Way” (pull only), or “Push-Only” (self-explanatory). The “One-Way” option may be a better fit for how you’re saying you want to use it. See Git | v4 | PowerShell Universal for further details.
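
If you’d rather set the mode outside the admin UI: PSU is an ASP.NET Core app, so its settings can generally be overridden with environment variables using the standard double-underscore syntax. The exact key and value names below are from memory, so verify them against that docs page before relying on this:

# Set these machine-wide (e.g. via [Environment]::SetEnvironmentVariable) so the PSU service sees them
$env:Data__GitRemote       = 'https://github.com/<org>/<repo>.git'
$env:Data__GitSyncBehavior = 'OneWay'   # pull-only; there are values for the other two modes as well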

I’m currently just using PSU’s Git settings to sync %ProgramData%\UniversalAutomation\Repository with our remote. Config like the settings stored in %ProgramData%\PowerShellUniversal\appsettings.json isn’t currently in my repo. If I ever find the time, I would like to migrate to a solution like this:

  1. Have a repository for the application code.
    1. Any code in the %ProgramData%\UniversalAutomation\Repository folder.
  2. Have a repository for the configuration code.
    1. The appsettings.json file in %ProgramData%\PowerShellUniversal folder.
    2. Environment variables used by application code.
  3. Have a pipeline that builds and deploys the specified versions of the application code and configuration code to the target environments.

I’m thinking the “deploy” step will essentially be a copy/paste and a service restart.
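
Roughly, in PowerShell terms (the app/ and config/ folder layout is hypothetical, and the service name is my assumption - it may differ on your install):

# 1. Application code -> the PSU repository folder
Copy-Item -Path .\app\* -Destination "$env:ProgramData\UniversalAutomation\Repository" -Recurse -Force
# 2. Configuration code -> appsettings.json (environment variables would be set separately)
Copy-Item -Path .\config\appsettings.json -Destination "$env:ProgramData\PowerShellUniversal"
# 3. Restart so PSU reloads both the repository and the settings
Restart-Service -Name 'PowerShellUniversal'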

You could accomplish at least items 1 and 2 by using an external Git client on the PSU server, and some scheduled tasks to do a stage/commit/pull/push. That would be pretty easy to create.
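
Something like this, run from a scheduled task on the PSU server, would cover it (assumes git is installed and the remote and branch are already configured):

Set-Location "$env:ProgramData\UniversalAutomation\Repository"
git add --all
if (git status --porcelain) {      # only commit when something actually changed
    git commit -m "Scheduled PSU sync $(Get-Date -Format s)"
}
git pull --rebase origin main
git push origin main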