GitHub Copilot accuracy in VS Code

Has anyone had any luck with code suggestions in the VS Code PowerShell Universal (PU) extension with GitHub Copilot enabled? I have all the PU modules loaded and the extension is connected to my local PU instance, but code completion is even worse with Copilot enabled than it was with just IntelliSense. It often makes up commands or properties and gives me a false sense of completion; then when I run the dashboard it errors because the command or property doesn't exist. I 'asked' Copilot how to improve the accuracy and it spat out two settings I could adjust, but neither of them looks like it would help. At the very least, it seems silly to have to manually add all the loaded modules to the Copilot setting; I would expect it to just pick up all loaded modules. Here is what Copilot said:

To improve GitHub Copilot’s code assistance with command help, you can add custom instructions for code generation. Use the github.copilot.chat.codeGeneration.instructions setting to provide specific help or documentation about your commands. You can add instructions as text or reference a file containing command help.

  1. Add command help as text or reference a documentation file in the github.copilot.chat.codeGeneration.instructions setting.
  2. Optionally, enable the github.copilot.chat.codeGeneration.useInstructionFiles setting to automatically include instructions from .github/copilot-instructions.md.
{
  "github.copilot.chat.codeGeneration.instructions": [
    { "text": "Add detailed help for each PowerShell command used in AppUtilities.ps1." }
  ],
  "github.copilot.chat.codeGeneration.useInstructionFiles": true
}

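For what it's worth, the .github/copilot-instructions.md file that the second setting refers to is just free-form Markdown that gets included as context with your chat requests. A minimal sketch (the wording here is only an example, not verified guidance):

```markdown
# Copilot instructions for this repo

- This project targets PowerShell Universal; only suggest cmdlets and
  parameters that actually exist in the loaded modules.
- If unsure about a parameter, leave a comment suggesting
  `Get-Help <cmdlet> -Full` rather than guessing.
- Dashboard code lives in AppUtilities.ps1; follow the patterns already
  used there.
```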

Copilot is only really as good as its training data. Relatively speaking, PSU is still pretty niche, and it has undergone massive changes in function names and documentation over the years; it's changing all the time. So I wouldn't personally expect good results from asking an LLM for advice on coding PSU. I've found it to be confidently incorrect when it doesn't know, and it causes more trouble than it's worth.

I don’t know enough about how Copilot works, but I didn’t think it was actively looking at things like module documentation/help text in PowerShell (since every language is different, that would be hard to implement, I’m sure).

GitHub Copilot documentation - GitHub Docs. Looking at this, the Copilot instructions section just looks like a pre-prompt you can provide in the form of a Markdown file, rather than additional training data or documentation for it to leverage.

I find that the PSU documentation is comprehensive enough to give me what I need, and I can leverage Copilot to save time when I’m recreating similar things I’ve done before, as it takes context from existing scripts and the repo I’m working in. I don’t really use it beyond that, though.

I use Claude every now and again and just get it to read through the docs. It still makes quite a lot up and often references the older cmdlets.

I have the same experience with copilot. It absolutely does hallucinate and misunderstand what I’m asking for. It will invent parameters. Sometimes it seems like it’s trying to give me what I want, even if it doesn’t exist. It’s pretty good with simple or repetitive stuff.

Exactly this. I’ve seen this in multiple languages. I’ve spent a significant amount of time tracking down bugs introduced by copilot. I’ve learned not to trust it so much and just give it simple tasks.

I’ve seen quite a few developers run into this same issue: Copilot inside VS Code, especially when paired with specific extensions like PowerShell Universal (PU), tends to generate suggestions that sound right syntactically but miss the mark functionally. What’s happening is that Copilot relies heavily on its training data and local context rather than your loaded modules, so it doesn’t actually “understand” your specific PU commands or imported dependencies.

One effective workaround is to improve the model’s context. You can do that by referencing your AppUtilities.ps1 or relevant module documentation directly in the github.copilot.chat.codeGeneration.instructions setting, but you’re right, it’s not ideal to manually add every module. Another practical step is to make sure IntelliSense is prioritized in your settings or temporarily disable Copilot inline completions when working in PU-heavy scripts. This helps you rely more on language-specific hints than on generalized AI predictions.
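If you do go the instructions route, you don't necessarily have to hand-write documentation for every module. As a rough sketch (untested, and the output path is just an example), you could dump the syntax of every command in the currently loaded modules into one Markdown file and reference that from your instructions:

```powershell
# Sketch: export the name and parameter syntax of every command from the
# currently loaded modules into a single Markdown file.
$outFile = '.\command-reference.md'   # example path, adjust to your repo

Get-Module |
    ForEach-Object { Get-Command -Module $_.Name } |
    ForEach-Object {
        "### $($_.Name)"
        Get-Command $_.Name -Syntax    # emits the parameter sets as text
        ""
    } |
    Set-Content -Path $outFile
```

You could then reference that file from github.copilot.chat.codeGeneration.instructions or from .github/copilot-instructions.md, so the generated help rides along as context instead of being typed in by hand.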

It’s interesting to see how this type of integration challenge actually ties into the Microsoft Certified: GitHub Copilot (GH-300) certification. That certification exam focuses a lot on understanding Copilot behavior, customization, and limitations, especially in multi-language or framework-dependent environments like this one. When I was reviewing similar material through Pass4Future, the scenarios really helped me grasp how Copilot reads context and where it struggles without strong API definitions.