
DevAssist Weekly

GitHub Copilot  ·  VS Code  ·  Microsoft AI  ·  Community

March 28, 2026

Week of March 22–28, 2026

VS Code 1.113 Ships — Nested Subagents, MCP in CLI, and Configurable Thinking Effort

Released March 25, VS Code 1.113 is the most agent-focused release yet: subagents can now invoke other subagents for multi-step workflows, MCP servers you configure in VS Code are bridged directly into Copilot CLI and Claude agent sessions, and reasoning effort (low/medium/high) is now controllable from the model picker UI. Together with six Copilot CLI point releases this week alone, it signals that the terminal and the editor are converging into a single agentic platform.

VS Code + GitHub Copilot

6 updates

VS Code 1.113 introduces chat.subagents.allowInvocationsFromSubagents, enabling subagents to call other subagents. Previously blocked to prevent infinite recursion, this unlocks genuinely complex multi-step agentic workflows where a coordinator agent delegates specialized tasks to role-specific subagents — all within a single chat session.

Key Takeaway: Build an orchestration layer where a "Project Manager" agent delegates code tasks to a "Java API Agent" and UI tasks to an "Angular Agent" — no manual switching required.
  1. Update VS Code to 1.113 (March 25, 2026 release).
  2. Enable the setting: "chat.subagents.allowInvocationsFromSubagents": true.
  3. Create a coordinator agent file that references your role-specific subagents.
  4. Start a chat session with the coordinator — it will delegate subtasks to specialized agents automatically.
Real Example: A "Full-Stack Feature Agent" receives a feature description, invokes the "Java API Agent" to create the backend endpoint, then invokes the "Angular UI Agent" to scaffold the component — one prompt, two specialized agents, full feature delivered.
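The steps above hinge on the coordinator agent file. A minimal sketch is below; the file location, name, and frontmatter fields are illustrative, so check the custom agents documentation for the exact schema your VS Code version expects:

```markdown
---
description: Coordinates full-stack feature work by delegating to subagents
---
You are the Project Manager agent. For each feature request:
1. Delegate backend work to the "Java API Agent" subagent.
2. Delegate UI work to the "Angular Agent" subagent.
3. Collect both results and summarize what was delivered.
```

With allowInvocationsFromSubagents enabled, a single prompt to this coordinator fans out to both specialized agents and returns a combined result.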

MCP servers you configure in VS Code are now bridged into Copilot CLI and Claude agent sessions. Both user-defined servers and workspace mcp.json servers are included. Previously MCP was only available to local agents running inside the editor — now your terminal agent sessions have the same rich tool access as your chat panel sessions.

Key Takeaway: If you have an internal code-analysis MCP server set up in VS Code, your Copilot CLI sessions can now call it directly — no separate CLI configuration needed.
  1. Ensure VS Code 1.113 and the latest Copilot CLI (1.0.13) are installed.
  2. Configure an MCP server in VS Code via your mcp.json or user settings.
  3. Open a Copilot CLI session in the terminal.
  4. Run /mcp show in the CLI — you should see your VS Code-configured servers listed.
Real Example: You have an internal Java dependency-checker MCP server registered in VS Code. Open a Copilot CLI session and ask it to "audit my project's Spring Boot dependencies for vulnerabilities" — it now calls your MCP server to pull the data.
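As a reference point, a workspace-level server definition in .vscode/mcp.json looks roughly like this. The server name and launch command are placeholders for whatever internal tool you register:

```json
{
  "servers": {
    "dependency-checker": {
      "command": "java",
      "args": ["-jar", "dep-checker-mcp.jar"]
    }
  }
}
```

Once this file is in the workspace, the same server should appear in /mcp show inside a Copilot CLI session without any CLI-side configuration.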

Reasoning models like Claude Sonnet 4.6 and GPT-5.4 now show a Thinking Effort submenu in the model picker — choose Low, Medium, or High without going into settings. The picker label shows the active level (e.g. "GPT-5.4 · Medium"). Note: the old github.copilot.chat.anthropic.thinking.effort setting is now deprecated in favor of this UI control.

Key Takeaway: Use Low effort for quick completions and boilerplate, High effort for complex architecture decisions — switch per-task without leaving the chat panel.

VS Code 1.113 introduces the Chat Customizations editor (Preview) — a single UI for managing all chat customizations: custom instructions, prompt files, custom agents, and agent skills. Includes a built-in code editor with syntax highlighting, AI-generated initial content, and direct browsing of MCP server and plugin marketplaces. Open via the gear icon in the Chat view or Chat: Open Chat Customizations in the Command Palette.

Key Takeaway: No more hunting across separate settings files for instructions, agents, and plugins — everything is in one tabbed editor with validation built in.
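The customizations the editor manages can still be authored by hand. The simplest starting point is a repository-wide instructions file at .github/copilot-instructions.md; the rules below are illustrative, not prescribed:

```markdown
# Copilot instructions for this repo
- Target Java 21 for backend code; prefer records over builder boilerplate.
- Angular components are standalone and use signals, not NgModules.
- Every new endpoint needs an integration test before the PR is opened.
```

The Chat Customizations editor surfaces files like this one alongside prompt files, custom agents, and skills, with validation as you type.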

You can now fork a Copilot CLI or Claude agent session at any point in its conversation history, creating a copy to explore a different approach without losing the original. Enable it via github.copilot.chat.cli.forkSessions.enabled. Forking was previously available only for local editor agent sessions; it now extends to the full terminal workflow.

Key Takeaway: Stuck at a design decision mid-session? Fork and explore both paths simultaneously instead of losing your work or starting over.
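Enabling the feature is a one-line settings change (VS Code settings files accept comments):

```jsonc
// settings.json — opt in to forking for CLI and Claude agent sessions
{
  "github.copilot.chat.cli.forkSessions.enabled": true
}
```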

Images in chat — whether you attached screenshots or the agent generated them via tool calls — now open in a full image viewer with navigation, zoom, pan, and conversation-turn grouping. The viewer is also available from the Explorer right-click menu for image files. Both imageCarousel.chat.enabled and imageCarousel.explorerContextMenu.enabled are on by default in 1.113.

Key Takeaway: Review agent-generated UI screenshots at full resolution directly in VS Code — no more downloading and opening files externally.

GitHub.com Copilot

4 updates

GitHub announced on March 25 that from April 24 onward, interaction data from Copilot Free, Pro, and Pro+ users will be used to train AI models unless users opt out. Data includes accepted outputs, code snippets shown to the model, cursor context, comments, file names, and chat interactions. Copilot Business and Enterprise users are not affected.

Key Takeaway: If you are on Copilot Pro or Pro+ and do not want your interaction data used for training, go to your GitHub settings under Privacy and opt out before April 24. Previous opt-outs are honored automatically.

The March 27 CLI release adds a timeline picker to /rewind and double-Esc — you can now roll back to any point in conversation history, not just the previous snapshot. CLI startup is also faster due to V8 compile cache reducing parse and compile time on repeated invocations. MCP registry lookups are more reliable with automatic retries and request timeouts.

Key Takeaway: Accidentally went too far down a refactoring path? Use /rewind and select exactly the conversation checkpoint you want to return to — surgical rollback, not just one step back.

The March 26 Copilot CLI release opens the model picker in a full-screen view with inline reasoning-effort adjustment using the arrow keys. The model display header shows the active reasoning effort level (e.g. "high") next to the model name. /allow-all (/yolo) now supports on, off, and show subcommands. MCP servers defined in .mcp.json at the git root now start correctly.

Key Takeaway: Adjust model and reasoning effort from inside the CLI picker with arrow keys — no more interrupting your flow to change settings in VS Code or a config file.

Also in the March 27 release: MCP servers can now request LLM inference (model sampling) with user approval via a new review prompt. This allows MCP tools to use Copilot's AI capabilities directly inside their own workflows, unlocking a new class of tool-driven AI interactions where external tools can ask the model questions as part of their operation.

Key Takeaway: An MCP tool that runs your test suite can now ask Copilot to summarize the failure patterns and suggest fixes — AI-in-the-loop tooling, not just AI-adjacent.
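Under the hood this uses the MCP sampling capability: the server sends a sampling/createMessage request back to the client, and the client asks the user for approval before running inference. A simplified request, following the shape defined in the MCP specification (the prompt text is invented for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Summarize the failure patterns in these failing tests."
        }
      }
    ],
    "maxTokens": 400
  }
}
```

The new review prompt in the CLI is the approval gate for exactly this kind of request.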

Microsoft Copilot for Enterprise

3 updates

The March 2026 Power Platform update brings Microsoft 365 Copilot directly into model-driven Power Apps via a side pane. Developers and users can ask Copilot to summarize table data, visualize what is active, recap record history, and reference related content — all from within the app. Copilot can also invoke specialized agents (including custom org agents) from the same pane and take actions like drafting documents or scheduling meetings.

Key Takeaway: If your team uses Power Apps for project tracking or data management, Copilot can now answer "what is happening with this record" questions and hand off to custom agents without leaving the app.

Copilot Studio's ability to connect agents to external data via custom MCP servers — currently in public preview — reaches general availability in April 2026. Organizations can connect any Copilot Studio agent to external databases, APIs, and internal tools without writing custom connectors. The companion feature allowing MCP-compliant tools in agent workflows reaches preview in April and GA in October.

Key Takeaway: If you are planning an enterprise Copilot Studio agent that needs to call internal Java microservices or a proprietary data store, the custom MCP server path is the production-ready route from April onward.

For enterprise teams on IntelliJ and JetBrains IDEs: custom agents, sub-agents, and plan agent are now generally available. Agent hooks — which run custom commands at key agent lifecycle events like userPromptSubmitted, preToolUse, and postToolUse — are in public preview. Auto-approve for MCP tools is now configurable at both server and tool level. Requires enabling "Editor preview features" in Copilot Business or Enterprise admin settings.

Key Takeaway: Use agent hooks to enforce code quality policies automatically — for example, trigger a linting run via preToolUse before every file edit the agent makes.
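A hooks configuration for that linting policy might be shaped roughly like this. This is an illustrative sketch only; the field names and matcher syntax may differ from the shipped schema, so consult the agent hooks documentation before relying on it:

```json
{
  "hooks": {
    "preToolUse": [
      {
        "matcher": "editFile",
        "command": "./gradlew lint"
      }
    ]
  }
}
```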

Community Picks

3 picks
Reddit · r/GithubCopilot

The community crossed 50,000 members this week, with GitHub and Microsoft engineers joining a live AMA. Key clarification from the team: the Copilot SDK is MIT-licensed and you can bundle the CLI for personal use, internal tools, or basic web projects without commercial licensing. Commercial products require contacting GitHub. The premium request model is staying for now, with no current plans for per-token billing.

Key Takeaway: The SDK is MIT-licensed — you can build internal tooling on top of Copilot SDK without commercial concerns. Only building a product you sell requires a formal agreement with GitHub.
GitHub Gist · Burke Holland

A detailed work log from a developer who built a personal AI assistant daemon ("Max") running 24/7 on their local machine using the GitHub Copilot SDK. The assistant handles coding tasks and file management and orchestrates worker sessions. Key engineering challenge: worker sessions were never destroyed (~400 MB RSS each), causing overnight out-of-memory crashes, fixed by adding explicit session lifecycle management.

Key Takeaway: The Copilot SDK is production-capable for personal automation tooling, but long-running worker sessions must be explicitly destroyed — the SDK does not clean them up automatically.
  1. Copilot SDK powers the conversational layer and can run coding tasks, manage files, and orchestrate worker sessions.
  2. Each worker session consumes ~400 MB RSS — must be explicitly destroyed when done.
  3. Session resumption is limited by the SDK; the workaround is context injection into new sessions.
  4. See the Gist for the full architecture and lessons learned from running this 24/7.
Real Example for automation builders: if you spawn Copilot SDK worker sessions in a Make.com or similar automation pipeline, ensure each session is explicitly terminated after the task completes to prevent memory accumulation.
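The lifecycle fix generalizes to any SDK that hands out long-lived sessions: wrap creation and teardown so a crashed task can never leak a worker. A minimal Python sketch of the pattern; the WorkerSession class here is a stand-in for illustration, not the actual Copilot SDK API:

```python
from contextlib import contextmanager

class WorkerSession:
    """Stand-in for an SDK worker session (~400 MB RSS each while alive)."""
    live_sessions = 0  # track leaks; should return to 0 after every task

    def __init__(self, task: str):
        self.task = task
        WorkerSession.live_sessions += 1

    def run(self) -> str:
        return f"done: {self.task}"

    def destroy(self) -> None:
        # The SDK reportedly does NOT do this for you on garbage collection.
        WorkerSession.live_sessions -= 1

@contextmanager
def worker(task: str):
    """Guarantee destroy() runs even if the task raises."""
    session = WorkerSession(task)
    try:
        yield session
    finally:
        session.destroy()

def run_task(task: str) -> str:
    with worker(task) as session:
        return session.run()
```

Routing every session through a wrapper like worker() is what turns "remember to clean up" into a structural guarantee, which is the lesson the Gist draws from the overnight crashes.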
YouTube · GitHub

A hands-on session showing how to use Copilot agent mode inside VS Code to generate and refactor GitHub Actions workflows for a .NET app, deploy to Azure with OpenID Connect, and use MCP to bring GitHub issue and PR context into the dev flow. Demonstrates creating custom DevOps agents specialized for workflow optimization and best practices.

Key Takeaway: You can use Copilot agent mode to write, explain, troubleshoot, and refactor YAML pipelines — including multi-step refactors that introduce reusable workflow patterns across your entire CI/CD setup.
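The reusable-workflow refactor the session demonstrates comes down to GitHub Actions' workflow_call trigger: shared steps move into one workflow file, and each pipeline calls it. A minimal caller, with file names invented for illustration:

```yaml
# .github/workflows/ci.yml — calls a shared build workflow
on: [push]
jobs:
  build:
    uses: ./.github/workflows/dotnet-build.yml
```

The called file declares `on: workflow_call` and contains the build steps once, instead of copy-pasted across every pipeline.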
Phani's Pick of the Week

Act Now: Opt Out of Copilot Data Training Before April 24

The single most time-sensitive action this week is checking your data policy settings. If you are on Copilot Pro or Pro+, GitHub will begin using your interaction data — including code snippets, accepted completions, and chat inputs — to train models starting April 24, unless you opt out. Go to your GitHub account settings, navigate to Privacy, and disable "Allow GitHub to use my data to improve GitHub Copilot." It takes 30 seconds and keeps your code out of the training corpus. Copilot Business and Enterprise users are not affected.