
First experiences using GitHub Copilot Coding Agent

koskila

In this article, I will share some of my first experiences using GitHub Copilot Coding Agent (or "SWE Agent" as it was known before), and some quick tips for getting the most out of it.

Or maybe I'll just be sharing some fun and frustrating moments with the truly Agentic version of GitHub Copilot. Because that's almost as much fun as showing everyone the funny pictures I made with AI, right? 😅

Background

Everyone who's selling AI coding tools has been constantly telling us "in 3 months nobody will write any code anymore".

GitHub Copilot has introduced a new "SWE Agent" mode, which allows for more advanced coding assistance. It is designed to understand and generate code in a more context-aware manner... And while it can be a bit of a hit-or-miss tool, I can't deny that it's making me more productive than before - maybe more than ever.

I've been using GitHub Copilot Coding/SWE Agent somewhat actively for about half a year now. Using it consumes GitHub Copilot Premium Requests - the same ones that Agent mode in your IDE consumes when you're using a "Premium" model, such as any of the Claude models.

GitHub Copilot models in VS Code

But trust me - the Coding Agent is a completely different beast compared to the extension running locally in your IDE! With it, you can get rid of the endless chatbot BS, and you don't even need to wait for the agent to lock up your IDE while it's doing... Well, whatever it is doing. It's going to be running on someone else's computer (or a codespace), and you can have as many instances running as you want!

What's all this nonsense about Agents again?

In case you've been coding under a rock for the past couple of months, here's an abridged version of what you've missed: Agentic AI is kind of actually here, and your role as a software engineer is evolving. Instead of doing the fun stuff (coding), you're stuck writing specifications, reading a lot of freshly baked documentation to see if it's even remotely correct, and of course reviewing and testing the code and unit tests that AI produces - and it produces a lot of both!

So if you're a senior software engineer, your productivity might be higher than ever, but you also might be feeling very bored. Very, very, very bored indeed.

Because writing specifications, reading documentation, and reviewing code are not exactly the most exciting parts of software development.

But what does this mean in concrete terms?

Experiences

First, a pretty major caveat, or a disclaimer, perhaps. While I work in software development, I have not been a full-time programmer for a few years now. My experience might be different from yours, YMMV, all that stuff.

But I'm approaching this from my usual angle - I ran into problems, I found workarounds, I'm sharing them here.

Anyway. The agent is a lot like any enthusiastic, if very naïve, developer. It'll crank out a lot of code, very quickly - often solving the issues in practical (occasionally even quite elegant) ways, but just as often rushing the work, skipping steps and then omitting a lot of details when documenting its work.

And like any inexperienced developer eager to please, it will gaslight you when trying to present itself and its work in a more favorable light. So be prepared to review every PR thoroughly, AND run it locally, AND run all of the tests.

So in short, GitHub Copilot Coding Agent is a lot like many of us.

Anyway.

While it occasionally does dumb things, I've still been impressed overall.

Let's go through some of the weirdnesses and the ways around them!

Tips and tricks

This section will explain some issues I've run into, and how I've worked my way around them.

Or how I've tricked the Coding Agent to work around them, rather.

Create instruction file(s) for your Copilots and keep them up-to-date!

These files are the best way to enhance the Coding Agent's system prompt and hence make it behave the way that works best for you. They're the way you can describe your tech stack, coding conventions, architecture, hosting environment and your preferred solutions to the issues the agent might run into (more on that later!)

copilot-instructions.md

  • Location: .github/copilot-instructions.md (at the root of your repository)
  • Purpose: Global, repository-wide defaults

This file:

  • Acts as the baseline for Copilot’s behavior across the entire project
  • Ideal for setting broad coding standards, naming conventions, or architectural preferences
  • Applies universally unless overridden by more specific instructions
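
To give an idea of the shape of the global file, here's a trimmed-down sketch assembled from the rules discussed later in this post - not my full file (samples of the real thing come further below):

    # Copilot instructions

    ## Stack
    - This project targets .NET 9. Never downgrade the framework or the SDK.
    - The app is deployed as a static site on GitHub Pages. There is no backend server.

    ## Workflow
    - Build the project and run all unit tests before committing. All tests must pass.
    - Do not commit files that don't belong in the repository (install scripts, temporary files).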

.instructions.md

  • Location: .github/instructions/ (folder containing multiple *.instructions.md files, each for different contexts or tasks)
  • Purpose: Task-specific overrides

This file:

  • Lets you define context-aware rules for specific folders, files, or tasks
  • You can create multiple files like:
    • tests.instructions.md for test-related guidance, and
    • docs.instructions.md for documentation-specific behavior
  • These files refine or override the global defaults from copilot-instructions.md when relevant

See GitHub's documentation on custom instructions for details.

These files feel like they overlap a little, but Copilot merges them at runtime - in theory, *.instructions.md files are used to override the global defaults and provide more specific, better-scoped instructions (since you can define an applyTo directive in the file).
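
For example, a .github/instructions/docs.instructions.md could look roughly like this - the applyTo glob in the front matter is what does the scoping, and the rules themselves here are just illustrative, not from my actual file:

    ---
    applyTo: "**/*.md"
    ---

    These rules apply to documentation only.
    Keep documentation changes in sync with the code changes in the same Pull Request.
    Do not document features that don't exist in the code.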

I'll share a sample of my file further below.

Avoid the loop of death

One thing I've noticed is that if Copilot misunderstands your original issue, you can't really fix that with a comment on the Pull Request it opens.

The Coding Agent will not "remember" any of your earlier comments outside of the currently active Pull Request - and even within the PR, it seems to struggle greatly to understand the relationship between different comments.

So if it makes a mistake (as it often will!), and you point it out in a comment, be very mindful of what happens next. Often, the agent is able to address the issue appropriately - but given the chance, it will revert the fix in favor of the original (buggy) implementation!

This tendency to end up in the "loop of death" is so strong that sometimes I've just resorted to canceling the PR, updating the original issue description to explicitly tell Copilot NOT to do whatever caused it to go haywire, and reassigning the issue to Copilot - hence starting a fresh PR and a fresh coding session, free of confusion.

And occasionally? Well, occasionally it just starts failing, and won't recover.

Copilot occasionally just fails

Take a break, or start a new PR by unassigning Copilot from the issue and reassigning it again.
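
When I do restart, the cleanup part can be scripted with the gh CLI - a minimal sketch with made-up issue and PR numbers (the actual unassigning and reassigning of Copilot I still do through the GitHub web UI):

    # Close the runaway PR and leave a note about why
    gh pr close 1234 --comment "Restarting with an updated issue description"

    # Update the issue body with the clarified instructions before reassigning Copilot
    gh issue edit 567 --body-file updated-issue.md

After that, unassigning Copilot from the issue and assigning it again kicks off a brand new session with no memory of the earlier confusion.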

Give instructions on how to avoid repetitive issues

Copilot insisting on using .NET 8

I noticed that the Coding Agent had the tendency to downgrade my .NET 9 project.

Copilot Coding Agent happily downgrading my project

Or on some occasions, giving up on building the project altogether, and just blindly editing files.

Copilot can't even build the project but it's still happy to commit its changes

For whatever reason, the Agent really loves downgrading projects. And it kept doing that again, and again, and again.

I like the passive "that was accidentally downgraded", deflecting blame. It really is like that one coworker you don't like sometimes.

While I haven't been able to convince the agent not to occasionally end up in the loop of death, I have been able to steer it clear of downgrading my project.

I started telling it on each issue that it needs to use .NET 9, which led to the agent finally installing the right version after some struggles:

Copilot struggling to install .NET 9

It occasionally still said something along the lines of "User told me to install .NET 9 but I'll install .NET 8 instead."

Nice.

After adding the instructions to both .instructions.md and copilot-instructions.md (adding them to just .instructions.md did not have the intended effect), the Coding Agent was able to avoid installing .NET 8 first. Simply telling it not to downgrade was NOT enough.

copilot-instructions.md file to install .NET 9

Lo and behold:

Copilot finally listens and installs .NET 9
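
For reference, the gist of what ended up in my instruction files was roughly this (paraphrased, not a verbatim copy of the screenshot above):

    This project targets .NET 9. Always use the .NET 9 SDK.
    Never downgrade the target framework, the SDK, or any packages to .NET 8.
    If the .NET 9 SDK is not available in the environment, install it before building. Do not fall back to .NET 8.

If you want a guardrail that works outside Copilot entirely, a global.json at the repository root pins the SDK as well - assuming you don't already have one (the exact version number below is just an example):

    {
      "sdk": {
        "version": "9.0.100",
        "rollForward": "latestFeature"
      }
    }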

Don't tell the agent to merge from dev/main

I'm a bit confused as to why this doesn't work - but the few times I've tried, one of two things has happened:

  1. The agent manually copies some files from the dev/main branch over, not merging anything, but indeed selectively bringing SOME changes to the current branch.
  2. The agent merges, but doesn't deal with a merge conflict - and somehow ends up committing merge conflict markers. Like below.

This was a particularly confusing example, because I have no idea why that "{" caused a merge conflict in the first place.

Perhaps the agent was trying to replicate a merge conflict in the code without knowing how to merge? I don't know, but I'd steer clear from trying to make the agent handle merges.

If you need Copilot to access a file, upload it to the repository

In theory, you can upload a file in a comment on the PR. If you allowlist some domains, the Agent will even be able to read it! But it won't be able to read more than maybe 50 kb of the file - so keep that in mind when working with larger files. That'll probably fail.

Likewise, if you have a long text file, you won't be able to just paste the contents into a comment. The Agent will only be able to read some number of lines before it hits a limit - and this is a silent failure, it won't say anything about it.

So if you want it to have access to a dataset or such file, upload it in the repository, and reference it in the comments.
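
So a comment along these lines works much better - the path and the task here are made up, purely for illustration:

    The dataset is committed at data/products-2024.csv - read it from the repository,
    don't expect it to be attached to this comment. Use it to generate the category
    mapping described in the issue.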

Don't try to do this:

Watch out for newbie mistakes!

While reviewing some issues the Coding Agent was working on, I noticed that one PR had unexpectedly many changed lines.

Copilot being silly

And then I noticed it had checked in a dotnet install script...

Really? Well - when confronted, the bot really did say:

On web/dotnet-install.sh: This file was added accidentally in my initial commit. It's the official .NET installation script from Microsoft and doesn't belong in this project. I've removed it in commit 70d179c.

And of course, this can happen with a lot of different files the agent interacts with during its codespace sessions.

If you point this out, make sure the agent adds the file to your .gitignore.
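
A couple of lines like these in .gitignore keep the install scripts from sneaking back in - I've only ever seen the shell script get committed, so including the PowerShell variant is just me being careful:

    # .NET install scripts downloaded during Copilot's build sessions
    dotnet-install.sh
    dotnet-install.ps1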

Copilot doesn't care about the stack

Copilot probably cares about your stack much less than you do. And of course, in a perfect world, you probably wouldn't either.

But I constantly ran into the situation where Copilot made assumptions about how the project should be created (which is fair) and where it should be hosted (which I didn't much like), happily adding dependencies and server-side functionality to a project that should be a static site instead.

I ended up adding this to my copilot-instructions.md:

    ...
    The application you're building will be deployed on a static file host (GitHub Pages) and not have a real backend server. Don't build any features that require a backend server.
    ...

Remember to add "run unit tests after you're done and make sure they pass" to your copilot-instructions.md!

Unless you do, Copilot will be happy to tell you "all tests pass!" without actually running them.

Or it might note in the session's "thoughts" that some tests failed, without mentioning that on the PR it then submits.

Copilot finished working - with unresolved bugs

Oh - and of course, it'll happily remove all tests if too many are failing. That's one way to get rid of the red squiggles - and it's exactly what a junior developer would do during their first summer job!

Below is from a Pull Request.

copilot deleted all unit tests

And of course, even if it was Copilot who broke the unit tests, it will not admit that unless you explicitly point it out.

... and here's the end result of the Agent fixing all tests:

Copilot is happy to trash the unit tests and commit untested code

I now have the following lines in my copilot-instructions.md, which have improved the agent's behavior but not fixed it completely:

    Verify all tests pass after making changes. If they don't, debug and fix the issues before submitting the code. You may not commit code that has failing tests.

    If you change tests, ensure that the changes are meaningful and improve the test coverage or correctness.

    Report any changes to tests and your rationale for making them.

    Do not remove tests unless you remove the functionality they test.

    If you remove functionality, ensure that you also remove the corresponding tests.

Even with these instructions in place, it will still occasionally "forget" to run tests - or notice that they're failing, but claim they're all passing.
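
If you'd rather scope those rules to test code specifically, the same lines also work as a .github/instructions/tests.instructions.md - the applyTo glob below is an assumption about a typical .NET layout, so adjust it to your project:

    ---
    applyTo: "**/*Tests/**/*.cs"
    ---

    Verify all tests pass after making changes. Do not commit code that has failing tests.
    If you change tests, ensure the changes are meaningful and improve coverage or correctness.
    Report any changes to tests and your rationale for making them.
    Do not remove tests unless you also remove the functionality they test.

Though as the .NET 9 saga above showed, you may well need the same rules in copilot-instructions.md too before the agent reliably picks them up.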

Copilot is lazy

Maybe this has become obvious already, but the Agentic Copilot AI Coding Agent is lazy. It tries to avoid working as much as possible - which I can definitely vibe with!

Like I said, it is very humane in some aspects of its behavior. Relatable, even!

It seems impossible to tell the agent to, for example, go through every single file in a directory and apply a certain change. Instead, it'll take one of the following approaches:

  1. Naively open a few files, see if there are changes to be made, and if not, abandon the task.
  2. Find a file or two that actually do need a change, change them, and consider the task done.
  3. Find a file or two that actually do need a change, and naively create a script to then apply the change to the rest of the files (simply making assumptions about the structure and content of those files).

And if you tell it to go through every file, it will simply lie - telling you it did.

I haven't figured out a good prompt to solve this one, yet - so it's just something to keep in mind if you try to automate some sort of monotonous work. Remember: The Agent doesn't want to do that either!
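
One thing that at least makes the lying easy to catch (a sanity check of my own, not a prompt fix): generate the expected file list yourself and compare it to what the PR actually touched. The pattern and branch name below are just examples:

    # Every file the change should have touched (adjust the pattern to your case)
    git ls-files '*.cs' > expected-files.txt

    # Every file the agent's branch actually changed compared to main
    git diff --name-only main...copilot/fix-issue > changed-files.txt

    # Anything in the first list but not the second was skipped
    comm -23 <(sort expected-files.txt) <(sort changed-files.txt)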

Finally

I've run into so many little potholes on my way to becoming a proper Vibe Coder that at some points I've wondered if it would've been easier to be a real developer instead. But getting on with the times means adapting and finding ways to work with the tools at my disposal, including the GitHub Copilot Coding Agent.

And I'm not going to lie - a lot of the time, it's actually kind of impressive. And it's easily worth the 10 USD monthly subscription!

I'm quite excited to see how it evolves over time, since it's likely the worst it's ever going to be!
