GitHub Copilot · VS Code · Microsoft AI · Community
March 28, 2026
Released March 25, VS Code 1.113 is the most agent-focused release yet: subagents can now invoke other subagents for multi-step workflows, MCP servers you configure in VS Code are bridged directly into Copilot CLI and Claude agent sessions, and reasoning effort (low/medium/high) is now controllable from the model picker UI. Combined with six Copilot CLI point releases this week alone, this release signals that the terminal and editor are converging into a single agentic platform.
VS Code 1.113 introduces the chat.subagents.allowInvocationsFromSubagents setting, which lets subagents call other subagents. Such invocations were previously blocked to prevent infinite recursion; allowing them unlocks genuinely complex multi-step agentic workflows in which a coordinator agent delegates specialized tasks to role-specific subagents, all within a single chat session. Opt in by setting "chat.subagents.allowInvocationsFromSubagents": true.

MCP servers you configure in VS Code are now bridged into Copilot CLI and Claude agent sessions. Both user-defined servers and workspace mcp.json servers are included. Previously MCP was only available to local agents running inside the editor — now your terminal agent sessions have the same rich tool access as your chat panel sessions.
To verify the bridge, run /mcp show in the CLI — you should see your VS Code-configured servers listed.

Reasoning models like Claude Sonnet 4.6 and GPT-5.4 now show a Thinking Effort submenu in the model picker: choose Low, Medium, or High without opening settings. The picker label shows the active level (e.g. "GPT-5.4 · Medium"). Note that the older github.copilot.chat.anthropic.thinking.effort setting is now deprecated in favor of this UI control.
VS Code 1.113 introduces the Chat Customizations editor (Preview) — a single UI for managing all chat customizations: custom instructions, prompt files, custom agents, and agent skills. Includes a built-in code editor with syntax highlighting, AI-generated initial content, and direct browsing of MCP server and plugin marketplaces. Open via the gear icon in the Chat view or Chat: Open Chat Customizations in the Command Palette.
You can now fork a Copilot CLI or Claude agent session at any point in conversation history — creating a copy to explore a different approach without losing the original. Enable via github.copilot.chat.cli.forkSessions.enabled. Previously only available for local editor agent sessions, now extended to the full terminal workflow.
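Turning the fork capability on is a one-line settings change; a minimal settings.json fragment:

```json
{
  "github.copilot.chat.cli.forkSessions.enabled": true
}
```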
Images in chat — whether you attached screenshots or the agent generated them via tool calls — now open in a full image viewer with navigation, zoom, pan, and conversation-turn grouping. The viewer is also available from the Explorer right-click menu for image files. Both imageCarousel.chat.enabled and imageCarousel.explorerContextMenu.enabled are on by default in 1.113.
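Since both flags ship enabled, no action is needed to get the viewer. If you prefer the previous behavior, they can be switched off; a settings.json fragment using the flag names as published:

```json
{
  "imageCarousel.chat.enabled": false,
  "imageCarousel.explorerContextMenu.enabled": false
}
```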
GitHub announced on March 25 that from April 24 onward, interaction data from Copilot Free, Pro, and Pro+ users will be used to train AI models unless users opt out. Data includes accepted outputs, code snippets shown to the model, cursor context, comments, file names, and chat interactions. Copilot Business and Enterprise users are not affected.
The March 27 CLI release adds a timeline picker to /rewind and double-Esc — you can now roll back to any point in conversation history, not just the previous snapshot. CLI startup is also faster due to V8 compile cache reducing parse and compile time on repeated invocations. MCP registry lookups are more reliable with automatic retries and request timeouts.
To use it, run /rewind and select exactly the conversation checkpoint you want to return to — surgical rollback, not just one step back.

Released March 26, this update opens the model picker in a full-screen view with inline reasoning-effort adjustment using the arrow keys. The model display header shows the active reasoning effort level (e.g. "high") next to the model name. /allow-all (alias /yolo) now supports on, off, and show subcommands. And MCP servers defined in .mcp.json at the git root now start correctly.
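The /allow-all subcommands make the permissive mode explicit at the CLI prompt. A sketch of a session (annotations are mine; exact output omitted):

```
/allow-all show    # report whether allow-all (yolo) mode is currently active
/allow-all on      # approve all tool invocations for this session
/allow-all off     # return to per-tool approval prompts
```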
Also in the March 27 release: MCP servers can now request LLM inference (model sampling) with user approval via a new review prompt. This allows MCP tools to use Copilot's AI capabilities directly inside their own workflows, unlocking a new class of tool-driven AI interactions where external tools can ask the model questions as part of their operation.
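Under the hood this is MCP's sampling capability: the server sends a sampling/createMessage request, which the client now surfaces as an approval prompt before forwarding to the model. A sketch of such a request following the MCP specification's message shape (the prompt text and id are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": { "type": "text", "text": "Summarize this diff for a changelog entry." }
      }
    ],
    "maxTokens": 300
  }
}
```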
The March 2026 Power Platform update brings Microsoft 365 Copilot directly into model-driven Power Apps via a side pane. Developers and users can ask Copilot to summarize table data, visualize what is active, recap record history, and reference related content — all from within the app. Copilot can also invoke specialized agents (including custom org agents) from the same pane and take actions like drafting documents or scheduling meetings.
Copilot Studio's ability to connect agents to external data via custom MCP servers — currently in public preview — reaches general availability in April 2026. Organizations can connect any Copilot Studio agent to external databases, APIs, and internal tools without writing custom connectors. The companion feature allowing MCP-compliant tools in agent workflows reaches preview in April and GA in October.
For enterprise teams on IntelliJ and JetBrains IDEs: custom agents, sub-agents, and plan agent are now generally available. Agent hooks — which run custom commands at key agent lifecycle events like userPromptSubmitted, preToolUse, and postToolUse — are in public preview. Auto-approve for MCP tools is now configurable at both server and tool level. Requires enabling "Editor preview features" in Copilot Business or Enterprise admin settings.
For example, a hook can fire on preToolUse before every file edit the agent makes.

The community crossed 50,000 members this week, with GitHub and Microsoft engineers joining a live AMA. Key clarifications from the team: the Copilot SDK is MIT-licensed, and you can bundle the CLI for personal use, internal tools, or basic web projects without commercial licensing; commercial products require contacting GitHub. The premium request model is staying for now, with no current plans for per-token billing.
A detailed work log from a developer who built a personal AI assistant daemon ("Max") running 24/7 on their local machine using the GitHub Copilot SDK. The assistant handles coding tasks and file management, and orchestrates worker sessions. The key engineering challenge: worker sessions were never being destroyed (~400 MB RSS each), causing overnight out-of-memory crashes — fixed by adding explicit session lifecycle management.
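The fix pattern generalizes to any long-running orchestrator: tie each worker session to a scope that guarantees teardown, even when the task throws. A minimal TypeScript sketch — the WorkerSession class here is a hypothetical stand-in, not the actual Copilot SDK surface:

```typescript
// Hypothetical stand-in for an SDK worker session; the real API differs.
class WorkerSession {
  private static live = 0;
  constructor(readonly task: string) {
    WorkerSession.live++; // each live session holds ~hundreds of MB of state
  }
  async run(): Promise<string> {
    return `done: ${this.task}`;
  }
  destroy(): void {
    WorkerSession.live--; // release model context, buffers, child processes
  }
  static liveCount(): number {
    return WorkerSession.live;
  }
}

// Run a task with guaranteed cleanup — the lifecycle fix from the work log.
async function withWorker(task: string): Promise<string> {
  const session = new WorkerSession(task);
  try {
    return await session.run();
  } finally {
    session.destroy(); // runs even if run() rejects, so sessions never leak
  }
}
```

Wrapping every spawn in a helper like withWorker makes a leaked session impossible by construction, rather than relying on each call site to remember cleanup.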
A hands-on session showing how to use Copilot agent mode inside VS Code to generate and refactor GitHub Actions workflows for a .NET app, deploy to Azure with OpenID Connect, and use MCP to bring GitHub issue and PR context into the dev flow. Demonstrates creating custom DevOps agents specialized for workflow optimization and best practices.
The single most time-sensitive action this week is checking your data policy settings. If you are on Copilot Pro or Pro+, GitHub will begin using your interaction data — including code snippets, accepted completions, and chat inputs — to train models starting April 24, unless you opt out. Go to your GitHub account settings, navigate to Privacy, and disable "Allow GitHub to use my data to improve GitHub Copilot." Takes 30 seconds and protects your codebase from being part of the training corpus. Copilot Business and Enterprise users are not affected.