
GitHub Copilot in VS Code - Quick Wins for Getting Started
- Michael Stonis
- Development
- March 3, 2026
A Quick Note on Timing
Everything in this post is relevant as of March 2026. AI tooling is moving extremely fast right now, and some of what I describe here will likely be outdated within months. Take it as a starting point, not a permanent reference.
Stop Treating It Like a Search Engine
The most common mistake I see when people start using Copilot is treating it like a fancier way to look things up. You ask it a question, it gives you an answer, you move on. That is leaving most of the value on the table.
Copilot in VS Code, specifically with Agent mode enabled, is closer to a junior developer sitting next to you who can read the entire codebase, make file changes, run commands, and iterate on feedback. Once that clicks, the way you interact with it changes completely.
Pick the Right Model for the Job
Not all models are equal, and using the wrong one for the task costs you time and money.
Claude Opus 4.6 for planning and research. When you are starting something new and need to understand a large codebase, plan out an approach, or reason through something complex, Opus is the right choice. It is excellent at scanning large contexts, identifying patterns, and producing detailed plans. It is also slower and more expensive, which is why you do not want to use it for the actual implementation work.
Claude Sonnet 4.6 for the real work. Once you have a plan and you are in implementation mode, switch to Sonnet. It handles code generation, refactoring, bug fixes, and feature additions very well, and it moves at a pace that does not interrupt your flow. This is your day-to-day driver.
Think of Opus as the architect and Sonnet as the builder. Use each where it makes sense.
Use Agent Mode
If you are still on inline completions only, you are missing most of what makes this useful. Agent mode lets Copilot read files, make edits, run terminal commands, and work through multi-step tasks with minimal hand-holding.
To use it, open the Copilot Chat panel and switch to Agent mode. From there, give it a task and let it work. It will tell you what it is doing and ask for confirmation before anything destructive.
Agent mode paired with Sonnet for implementation is the combination I use almost exclusively. Planning mode exists as well, but in practice I find Agent mode handles most of what planning mode is designed for. If you are kicking off a very large task where you truly just want to think through the approach before touching any code, planning mode is worth trying. Otherwise, just start in Agent mode.
Start with a Plan and Write It Down
For any non-trivial task, do not just jump in. Spend five minutes with Opus creating a plan first. More importantly, make that plan a living document in your repository.
Here is the kind of prompt I use:
Review the codebase and create a comprehensive markdown checklist of all the work that needs to be done to implement [feature/change]. Organize the checklist into logical phases. Each phase should be something I can review and approve before work begins on the next one. Mark items as complete as they are finished. Include any dependencies, risks, or things that need decisions called out explicitly. Save this as [feature/change].md in the root of the project.
In this example, replace [feature/change] with a descriptive name for the work you are doing, like “add dark mode” or “refactor user service.”
That last part matters. A markdown file in your repo means Copilot can reference it throughout the work, you can review it between sessions, and you have a clear record of what was decided. Without something like this, it is easy to drift off course as the conversation grows longer. This can help prevent all kinds of hallucination bullshit and keep the AI from going down rabbit holes that are not relevant to the actual work. It also gives you a clear way to track progress and make sure nothing gets missed.
Update the checklist as work progresses. Keeping it current takes seconds and saves a lot of confusion later.
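As a rough sketch, the generated file might look something like this (the feature, phase breakdown, and names like ThemeService are all hypothetical — the model will produce its own based on your codebase):

```markdown
# Add Dark Mode: Implementation Checklist

## Phase 1: Audit (review before starting Phase 2)
- [x] Inventory hard-coded colors across all views
- [ ] Identify shared styles that need a theme-aware variant

## Phase 2: Theme infrastructure
- [ ] Add a ThemeService with light and dark resource dictionaries
- [ ] Decision needed: follow the OS theme by default, or add an in-app toggle?

## Risks
- Third-party controls may not respect dynamic resource changes
```

The checkboxes are the point: both you and Copilot can see at a glance what is done, what is next, and what still needs a decision.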
Be Specific. Super Fucking Specific.
Vague prompts produce vague results. If you say “improve the performance of this screen,” do not be surprised when the output is generic and not particularly useful.
Instead: “The UserListView is slow when the collection has more than 200 items. The items source is being re-bound on every navigation. Fix the binding so it only updates when the underlying collection changes, and add virtualization to the CollectionView.”
The more context and specificity you provide, the better the output. Tell it what the problem is, what you think is causing it, and what a good outcome looks like. If you have already investigated and have a hypothesis, share it.
Provide References in Your Prompts
When asking about a specific library or API, link the documentation. Copilot knows a lot, but its training data has a cutoff and it can hallucinate API details for less common libraries.
Saying “use the TychoDB library to save this object, see https://github.com/TheEightBot/TychoDB for the API” gives it accurate context instead of letting it guess from potentially stale training data. This is especially important for anything released or updated in the past year or so.
Give It the Error
When something is broken and you want Copilot to fix it, paste the full error message and stack trace rather than describing it. “Fix the null reference exception” is much weaker than dropping in the actual exception with the stack trace. It can then trace the exact call path and understand what state the app was in.
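As an illustration, a prompt like this gives it everything it needs (the type names and stack frames below are made up for the example — paste your real ones):

```text
The app crashes when opening the settings page. Here is the full exception:

System.NullReferenceException: Object reference not set to an instance of an object.
   at MyApp.Services.SettingsService.Load()
   at MyApp.ViewModels.SettingsViewModel..ctor(SettingsService settings)

Fix the crash, and add a guard so Load() handles a missing settings file.
```

Note that the last line still tells it what a good outcome looks like. The stack trace is the evidence; the final sentence is the goal.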
Use It for the Boring Stuff
The biggest productivity gain for me is not the clever architecture decisions. It is the stuff I do not want to spend time on. Writing unit tests for a class I just built. Adding XML documentation to a service. Converting a method to be async. Generating a data model from a JSON response.
These tasks are not hard, but they are tedious and easy to skip. Copilot handles them quickly so you can stay focused on the things that actually require your attention.
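A sketch of what these prompts look like in practice (the class names, test framework, and project names here are placeholders for whatever your codebase actually uses):

```text
Write unit tests for OrderCalculator covering the empty-cart, single-item,
and discounted-total cases. Follow the existing xUnit conventions in the
Tests project, and match the naming pattern used in OrderValidatorTests.
```

Pointing it at an existing test class is the important part. It keeps the generated tests consistent with what you already have instead of inventing a new style.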
Keep Your Context Focused
Agent mode works best when the task is scoped. A prompt like “update the app” is going to produce unpredictable results. Break your work into discrete, focused tasks and start fresh conversations for each one.
If you notice responses starting to drift or getting worse as a conversation gets longer, that is a sign the context window is getting cluttered. Start a new conversation, reference the markdown checklist, and continue from there.
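When you restart, a short prompt that re-anchors the agent on the checklist is usually enough (the file name here is whatever you saved during the planning step):

```text
Read add-dark-mode.md in the project root. Phases 1 and 2 are complete.
Continue with Phase 3, and update the checklist as you finish each item.
```

This is where the living document from the planning step pays for itself: the new conversation starts with a clean context window but loses none of the decisions already made.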
You Still Have to Review the Output
This sounds obvious, but it is easy to fall into auto-accept mode. Copilot is good, but it makes mistakes. It might misunderstand a naming convention, miss an edge case, or take an approach that technically works but does not fit how the rest of the codebase is structured.
Read the diffs. Run the code. Test the thing it built. Treat it like a pull request from someone you trust but still verify.