- Copilot Studio allows you to create tools and actions that connect agents with APIs, data, and applications, with generative orchestration to choose the best option in each conversation.
- Tools are configured in sections (Details, Inputs, Completion) that control the name, description, automatic input collection, authentication, and the type of response shown to the user.
- The GitHub Copilot coding agent automates end-to-end development tasks using GitHub Actions, working as an asynchronous partner that opens and updates pull requests.
- The combined use of Model Context Protocol, declarative tools, and coding agent expands capabilities while maintaining security, auditing, and human control over changes.
If you work with Copilot, GitHub, and Copilot Studio, you've surely heard about the famous actions and tools that let these agents do "real-world" things: send emails, query APIs, move code between branches, or open pull requests. Here we'll see, step by step, how all of that fits under the umbrella of a full Copilot Actions tutorial, unifying both the tools section in Copilot Studio and the new GitHub coding agent.
The idea is that you end up understanding what tools, actions, and agents are; how they are configured; what types exist (connectors, REST, MCP, computer use, etc.); what role generative orchestration plays; and how the GitHub Copilot coding agent uses GitHub Actions and MCP to automate end-to-end development tasks.
What is an “action” or tool for Copilot?
In the Copilot Studio ecosystem, the so-called tools are the basic building blocks that let the agent interact with external systems: cloud services, APIs, databases, or even desktop applications with a graphical interface. Each tool encapsulates a specific capability that the agent can execute when the conversation or workflow requires it.
For example, you can equip your agent with tools to send emails with Outlook 365, check the weather forecast, read and write to Dataverse, or post messages in Teams. All these capabilities are presented as tools that Copilot can select automatically through generative orchestration, or that you can explicitly invoke from a specific topic.
With generative orchestration enabled, the agent can choose the most appropriate tool or topic on its own, or even draw on knowledge base searches, to respond to user requests. In classic mode, with orchestration disabled, the agent only uses the topics you've designed, although you can still call tools from those topics in a fully controlled manner.
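To build intuition for what tool selection means, here is a deliberately simplified sketch in Python. Copilot Studio actually delegates this decision to an LLM that reads the tool descriptions; the keyword-overlap scorer and every name below are invented for illustration only.

```python
# Toy illustration of generative orchestration: pick the tool whose
# description best overlaps with the user's request. The real product
# uses an LLM for this decision; this scorer and all names below are
# hypothetical stand-ins.

TOOLS = {
    "get_forecast": "Check the weather forecast for a given city",
    "send_email": "Send an email through Outlook 365",
    "post_teams_message": "Post a message in a Teams channel",
}

def pick_tool(user_message: str):
    """Return the tool whose description shares the most words with the request."""
    words = set(user_message.lower().split())
    best, best_score = None, 0
    for name, description in TOOLS.items():
        score = len(words & set(description.lower().split()))
        if score > best_score:
            best, best_score = name, score
    return best  # None means: fall back to topics or knowledge search

print(pick_tool("what is the weather forecast for Madrid?"))  # get_forecast
```

This is also why the description field matters so much: whatever signal the orchestrator uses, a vague description gives it nothing to match against.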
All of this makes Copilot Studio's tools a kind of "superpower" for your agents, because they no longer just chat: they perform real actions against your corporate systems, always respecting the authentication and policies you define.
Types of tools you can use as Copilot Actions
Copilot Studio offers several mechanisms for adding tools to an agent, and in practice, they can all be seen as actions that Copilot can trigger to accomplish a task. Each type of tool is designed for a different scenario, from connecting to existing APIs to automating the use of the computer itself.
The first major block is the Power Platform connectors, which let you connect to your own or third-party services. There are two types: prebuilt connectors, ready to use with hundreds of well-known services, and custom connectors, which let you define a connection to your own system or an internal API in your organization.
Another key mechanism is the agent flow, a type of tool that defines a sequence of linked actions. It is ideal when you want the agent to execute several steps in order, for example querying a system, transforming the data, and then returning the formatted result.
You also have Prompt tools, which are single-turn requests to a model, with the option of linking knowledge sources and generating code for data analysis. They are used, above all, when you need the model to perform a well-defined task with access to additional information, and they support modes such as Quick Response to tune the reply.
The set is completed by tools based on REST APIs, the Model Context Protocol (MCP), and Computer use. The first two focus on connecting to HTTP APIs or MCP servers to expose extra tools and resources; the third lets the agent control a graphical interface (web or desktop) by simulating clicks, menus, and typing, opening the door to automating applications that have no API.
Create a new tool in Copilot Studio step by step
For your agent to take full advantage of these capabilities, you need to create and configure the tools directly in Copilot Studio. The process is guided, but it helps to understand what each step does, to avoid misunderstandings and get the most out of generative orchestration.
The first thing to do is open your agent from the Agents section and go to its Tools page. From there, choose the option to add a tool and, in the corresponding panel, select New tool. Copilot Studio will show you the list of available types: Prompt, Agent flow, Computer use, Custom connector, Model Context Protocol, or REST API, depending on what you want to integrate.
Once the tool type is chosen, the setup steps specific to that type appear. For example, if you choose a Prompt, you will need to define the prompt template and model instructions, the input parameters, the knowledge sources it can consult, and the format and restrictions of the response.
Once you've finished that initial setup, press Save or Publish, as appropriate, to create the tool. The Add and configure option then becomes available; it adds the tool to the agent and opens a full configuration page where you can keep adjusting details and changing parameters as many times as you need.
At any time, from the agent's Tools page, you can edit that tool's settings again: its name, description, inputs, output behavior, and other fine details that greatly influence how generative orchestration decides when to invoke it.
Configuration sections: Details, Inputs and Completion
The configuration page of a standard tool is divided into three main blocks: Details, Inputs, and Completion. Understanding what each one does is key to ensuring your Copilot Actions behave reliably and predictably within conversations with users.
In the Details section you define the tool's basic data. Here you choose the name, which is what appears in the agent's tool list, and the description, which is even more important because generative orchestration relies heavily on that text to decide in which contexts the tool should be used and when it is not appropriate to invoke it.
The Details section also exposes advanced options: you can let the agent decide dynamically whether to use the tool, request user confirmation before executing the action, or configure the authentication type. You can specify whether the tool runs with end-user credentials or with the maker's (author's) credentials, and even add a description of what will be authenticated so the user understands what they are authorizing.
The Inputs section displays all the inputs the tool needs in a table, one row per input. By default, each input is set to "Dynamically fill with AI", meaning the agent will try to deduce the necessary value from the available context, such as recent messages in the conversation.
If the agent does not find a suitable value, it automatically generates questions to collect that information from the user. With the customization button, you can adjust the displayed name and description of each input, define how the response is interpreted (free text, predefined entity, etc.), the retry logic, and extra validations for the entered data.
If you want total control, you can change an input to Custom value and assign it a fixed value, a variable, or a Power FX formula. This way, the agent won't ask the user anything about that field, because it already knows exactly what to send to the tool when it runs.
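The fill-or-ask behavior just described boils down to a simple precedence rule: a custom value wins, then a value the AI can deduce from context, and only then does the agent ask the user. This Python sketch illustrates that logic; it is not Copilot Studio's internal code, and all names and data are hypothetical.

```python
# Illustrative sketch of input resolution: custom values first, then
# AI-deduced context values ("Dynamically fill with AI"), then questions
# to the user. Hypothetical names throughout; not product internals.

def resolve_inputs(declared_inputs, context, custom_values):
    """Return (resolved values, questions still needed from the user)."""
    resolved, questions = {}, []
    for name, question_text in declared_inputs.items():
        if name in custom_values:      # "Custom value": fixed value or formula result
            resolved[name] = custom_values[name]
        elif name in context:          # deduced from recent conversation context
            resolved[name] = context[name]
        else:                          # nothing found: ask the user
            questions.append(question_text)
    return resolved, questions

declared = {"city": "Which city do you want the forecast for?",
            "units": "Celsius or Fahrenheit?"}
resolved, pending = resolve_inputs(declared,
                                   context={"city": "Madrid"},
                                   custom_values={"units": "celsius"})
print(resolved)   # {'city': 'Madrid', 'units': 'celsius'}
print(pending)    # []
```

Because "units" has a custom value, the user is never asked about it, exactly as described above.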
In the Completion section you decide what happens when the tool finishes its work. You can let the agent generate a contextual response based on the received result, or create a custom-formatted response, with the option to insert output variables and apply Power FX formulas to adapt it.
How the tool responds: exit options and adaptive cards
Within Completion, the "After running" option lets you choose between several response strategies. The simplest is "Don't respond", where the agent internally incorporates the tool's output into its next message without the tool sending anything to the user directly.
Another alternative is to activate "Write the response with generative AI", letting the model write a well-structured message from the output data. It is very practical when you want rich answers but don't feel like writing a complex template.
If you need precise control, you can select "Send specific response" and write the text yourself with placeholders for variables, giving the user a uniform format each time the tool runs, which usually works very well in more formal environments.
Finally, there is the option to "Send an adaptive card", which lets you generate interactive responses with buttons and actions. These are very useful when you want the user to click, confirm, or choose something after the tool's output. In parallel, you decide which output variables will be available to the agent itself or to other tools that follow.
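An adaptive card is ultimately just a JSON payload. The following sketch builds a minimal one in Python: the top-level fields (type, version, body, actions) follow the public Adaptive Cards schema, while the confirmation scenario, texts, and data keys are invented for the example.

```python
import json

# Minimal Adaptive Card payload: a text block plus two submit buttons.
# The "type"/"version"/"body"/"actions" structure follows the public
# Adaptive Cards schema; the scenario and field values are made up.
card = {
    "type": "AdaptiveCard",
    "version": "1.5",
    "body": [
        {"type": "TextBlock",
         "text": "Forecast for Madrid: sunny, 24 °C. Create a reminder?",
         "wrap": True}
    ],
    "actions": [
        {"type": "Action.Submit", "title": "Yes", "data": {"confirm": True}},
        {"type": "Action.Submit", "title": "No", "data": {"confirm": False}},
    ],
}

payload = json.dumps(card, ensure_ascii=False, indent=2)
print(payload)
```

When the user taps a button, the "data" object comes back to the agent, which is how the confirm-or-cancel pattern after a tool run is typically wired up.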
In the specific case of MCP servers connected as tools, the configuration screen is somewhat different: the Details block remains, but Inputs and Completion are replaced by Tools and Resources sections, which list the tools and resources available on that MCP server and give a quick overview of everything Copilot can do through that connection.
Tool selection and automatic input collection
When you define a tool in Copilot Studio, you don't just tell it what to do, but also when and how it should be used. The name, description, and information associated with the inputs guide the generative orchestrator to reserve the tool for the appropriate scenarios and prevent it from being triggered prematurely.
Orchestration takes into account the current context of the conversation: the intent detected in the user's message, the available inputs, previous outputs from other tools, and the history of recent invocations. With all this information, the agent decides whether it makes sense to run a tool, which one, and with what parameters.
One of the advantages is that the agent itself takes care of input collection. You don't need to manually design question nodes to cover each required piece of data, which can be very tedious in complex flows. The orchestrator analyzes what's missing to call the tool and asks the user specific questions to complete those fields.
When working in generative mode, tools typically return their output directly to the agent, which incorporates it into the final response the user sees. However, if you prefer, you can configure the tool to always produce an explicit answer, whether generative or based on a fixed template.
In any case, you still have the option to invoke a tool explicitly from a topic. This lets you compose hybrid experiences: classic topics with branches, conditions, and nodes, combined with tools that perform specific actions, such as checking the weather or creating a record in an external system.
Calling a tool from a topic: practical example
Imagine you want to build a simple topic like "Get the weather". In Copilot Studio you would go to the Topics page, create a new topic with that name, and define a series of trigger phrases, such as "Is it going to rain?", "today's forecast", "what's the weather like", or "give me the forecast".
Within that topic, you would add a new node using the Add node button and choose the "Add a tool" option. In the selection box you will see tabs for basic tools, connectors, and tools in general, and there you would locate the tool you previously configured for that query.
Once the action node has been added to the topic, your flow knows to call the tool at the right time. You would just need to adjust the outputs, save the topic, and test it in the emulator to make sure the input questions and final answer are what you're looking for.
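To make the example concrete, here is the kind of logic such a weather tool might wrap, mocked with static data. A real tool would call a connector or REST API; the function name, city list, and forecast strings here are all invented.

```python
# Mocked backend for the "Get the weather" tool from the example topic.
# A real tool would call a connector or REST API; this static table and
# the function name are purely illustrative.

FAKE_FORECASTS = {
    "madrid": "Sunny, 24 °C, no rain expected",
    "london": "Cloudy, 16 °C, 60% chance of rain",
}

def get_weather(city: str) -> str:
    forecast = FAKE_FORECASTS.get(city.strip().lower())
    if forecast is None:
        # The agent could surface this as a follow-up question to the user.
        return f"Sorry, I have no forecast for {city}."
    return f"Forecast for {city.title()}: {forecast}"

print(get_weather("Madrid"))  # Forecast for Madrid: Sunny, 24 °C, no rain expected
```

The "city" parameter is exactly the kind of input the orchestrator would fill from context or ask the user for, as described earlier.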
This pattern can be replicated with any other tool: from sending an email or reporting an incident to reading data from a table or invoking a Power Automate flow that performs a more complex task, leveraging the platform's integrations.
Furthermore, if you combine this with generative orchestration, the agent itself can decide when to use that topic or tool without the user having to follow a rigid script, which translates into a much more natural, less robotic conversational experience.
Specific information about MCP and associated resources
In the case of tools based on the Model Context Protocol, the interface displays additional, very useful information. A table shows the names of all available MCP tools and the associated resources the server exposes to the agent, each in its own row for quick identification of its function.
This approach allows a single MCP server to group several capabilities (for example, Playwright for end-to-end testing or GitHub's own tools) and lets Copilot access them as if they were native actions. The advantage is that you don't depend on a single vendor, but on an open standard for sharing context and tools with LLMs.
These servers are usually configured using JSON files in the repository or in the environment configuration, which fits very well with Git-based CI/CD workflows, where configuration changes are reviewed with the same rigor as code.
Once the MCP server is declared and accessible, your agent can autonomously decide which MCP tool to call and when, greatly expanding the range of tasks it can handle without direct human intervention.
Authentication and security of tools in Copilot Studio
Many tools need some kind of authentication to function securely, especially if they touch sensitive data or internal APIs: typical cases are dynamic prompts, tools that talk to Dataverse, or protected corporate services.
Tools always run in the agent runtime in the user's context, and they cannot run without a configured authentication mechanism. Copilot Studio supports two main types of credentials: those of the end user (End user) and those of the solution's creator or administrator (Maker-provided).
With end-user credentials, each person only has access to the data they have permission for, respecting the security boundaries already defined by the organization. With maker credentials, by contrast, the author's identity is used to access shared resources, which is useful when you don't want everyone to have individual, direct access to the system.
In the tool's settings, you can also enable or disable the option to request user confirmation before running the action, which adds an extra layer of transparency so that people know what the agent is about to do and what data will be accessed.
Additionally, from the same settings screen you can turn a tool on or off for the agent. If you deactivate the tool, the agent stops using it, but it can be reactivated later without losing the previous configuration.
If you need a more thorough cleanup, you can always remove the tool from the agent: go to the tools list, open the more-options menu, and select the delete option; after confirming, the tool disappears from the list and is permanently unavailable to that agent.
GitHub Copilot coding agent and its relationship with GitHub Actions
Beyond Copilot Studio, GitHub has launched its own Copilot coding agent: a software engineering agent that functions as an asynchronous partner and is deeply integrated with GitHub Actions. In practice, it acts as another developer on your team to whom you can delegate specific tasks.
The coding agent starts when you assign it a task, usually through an issue, the agent dashboard, or Copilot Chat in the IDE. From there, it spins up a temporary, configurable development environment based on GitHub Actions, examines the repository for context (related issues, pull request discussions, custom instructions), and starts working.
It's designed to handle low- to medium-complexity tasks, such as fixing bugs, improving test coverage, or refactoring time-consuming sections of code. Its goal is to let you focus on the most interesting parts of development while it takes care of the most tedious ones.
Once running, the coding agent opens a pull request in draft mode, labels it as work in progress, and pushes commits as it goes. Every key step is recorded, and you can follow its progress almost as if you were watching a colleague work live, but without needing to monitor it constantly.
Although the agent does the work, you always keep control: you review the code, request changes, add comments, and decide whether or not to approve the proposal. The goal is a collaborative, transparent experience, not an opaque "autopilot" that pushes code to production without review.
Differences between coding agents and traditional code assistants
The classic autocomplete or help tools in the IDE are, basically, real-time assistants that suggest lines or blocks of code as you type, but everything stays on your machine, in your local session, and under your immediate control.
With that model, you're still the one who creates the branch, writes the commit messages, pushes the changes, opens the pull request, addresses review comments, iterates... and that whole process consumes a good amount of time and attention that could go to more creative tasks.
The coding agent, on the other hand, is oriented towards automating the entire development workflow within GitHub itself: creating branches, generating commits, opening and updating pull requests, running tests and linters in the Actions environment, and leaving everything ready so you only have to review and approve.
Furthermore, there is a clear difference from the so-called agent mode in the IDE. Agent mode works synchronously with you, in your favorite editor (VS Code, JetBrains, Eclipse, Xcode, etc.), while the coding agent operates asynchronously in the background, as if it were "another person" on the team who handles issues while you do other things.
Both consume Copilot premium requests, although the coding agent only needs one per task and relies on GitHub Actions minutes to run its work; it's worth taking this into account when planning costs and usage in large teams.
Coding agent security: sandbox, permissions, and auditing
GitHub has designed the coding agent with security by default, running it in an isolated environment (sandbox) with limited internet access and reduced permissions on the repository. This minimizes the attack surface and protects both the code and the CI/CD infrastructure.
The agent can only push to branches it creates itself, usually with a prefix like copilot/*, so it never directly touches the main branch or other branches managed by the team. This prevents an agent error from breaking the project's main branch.
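The copilot/* restriction is enforced by GitHub itself, but the rule is simple enough to state in code. Here is an illustrative sketch of the same policy as a pattern check, the kind of extra guard a CI script might add; the helper function is hypothetical and not part of any GitHub API.

```python
from fnmatch import fnmatch

# Illustrative sketch of the branch rule described above: the coding
# agent may only push to branches matching a prefix like "copilot/*".
# GitHub enforces this natively; this hypothetical helper just mirrors
# the policy, e.g. for an additional CI guard.

AGENT_BRANCH_PATTERN = "copilot/*"

def agent_may_push(branch: str) -> bool:
    return fnmatch(branch, AGENT_BRANCH_PATTERN)

print(agent_may_push("copilot/fix-null-check"))  # True
print(agent_may_push("main"))                    # False
```

Keeping the agent on its own namespace means existing branch protections on main and release branches apply unchanged.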
Another important aspect is that the coding agent cannot approve or merge its own pull requests. All proposals go through independent human review. Furthermore, CI/CD workflows in Actions do not run until someone authorizes them, adding another layer of protection.
Each commit generated by the agent is marked as co-authored, which improves traceability and clearly shows in the history which changes were driven by Copilot and which by team members. On top of this come the audit logs and branch protections already in place in the organization, which continue to apply as usual.
Taken together, this design ensures that the coding agent works under the same rules and policies as the rest of the team, with clear controls and limits that fit enterprise security practices.
How to use the GitHub Copilot coding agent day to day
The workflow for using the coding agent is quite similar to delegating a task to a colleague. You begin by assigning an issue to Copilot on GitHub.com, GitHub Mobile, or via the CLI, or by creating a task from the agent panel accessible from virtually any repository page.
You can also launch it from Copilot Chat in your favorite IDE, using Hey Copilot, or from any tool that supports the Model Context Protocol. Thanks to MCP, you can even pass it screenshots or mockups in issues when you have vision-capable MCP servers configured, expanding the ways you can describe what you want the agent to do.
When the task starts, the coding agent opens a draft pull request with a work-in-progress label. From that moment on, you'll see it record its progress through commits and PR updates, always within the standard GitHub flow.
When finished, it updates the pull request's title and description, mentions you for review, and awaits your feedback. If you need changes, you can tag @copilot again in the PR itself, and the agent will use that feedback to iterate on the code until it achieves the desired result.
Behind the scenes, all of this happens in a secure environment powered by GitHub Actions, where the agent can run tests, linters, external tools, and any action you've included from the catalog of more than 25,000 community actions, making the environment fully customizable to your project's needs.
Enhancing Copilot with MCP: expanded tools and context
If you combine Copilot with the Model Context Protocol and with Copilot Labs, the agent gains access to a vastly broader ecosystem of external tools and data. MCP is an open standard that defines how to share context and capabilities between applications and language models.
The coding agent already includes MCP servers for Playwright and GitHub, allowing it, for example, to launch end-to-end tests or interact with GitHub APIs without reinventing the wheel. You can also define your own MCP servers tailored to your specific systems and workflows.
Configuration is typically done at the repository level, using a JSON file that describes the servers and exposed tools. Once active, the agent can use them autonomously to perform tasks, query data, generate artifacts, and ultimately reduce the team's manual workload.
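The general shape of such a declaration can be sketched as follows, here built as a Python dict and serialized to JSON. Caution: the keys, server names, URL, and tool names below are illustrative assumptions, not a documented schema; check GitHub's MCP documentation for the exact file location and format your organization should use.

```python
import json

# Hypothetical sketch of an MCP server declaration for the coding agent.
# CAUTION: the keys, server names, and values here are illustrative
# assumptions, not a documented schema; consult GitHub's MCP docs for
# the real format and file location.
mcp_config = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
        },
        "internal-inventory": {  # made-up internal server
            "url": "https://mcp.example.internal/inventory",
            "tools": ["lookup_part", "reserve_stock"],
        },
    }
}

print(json.dumps(mcp_config, indent=2))
```

Because the file lives in the repository, changes to it go through the same pull-request review as code, which is exactly the CI/CD fit mentioned earlier.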
It is important to keep in mind that the coding agent's internet access passes through a firewall whose default rules only allow certain hosts required by GitHub and for downloading dependencies. If you need additional access, you'll have to adjust it according to your organization's policies.
With this approach, MCP turns Copilot into a much more context-aware development partner, capable of orchestrating diverse tools as if they were Copilot Actions, but with a standard, extensible design that doesn't tie you to a single vendor or technology.
By combining Copilot Studio's declarative tools with the coding agent and MCP on GitHub, you get an ecosystem where your agents can talk to users, connect to APIs, use the computer, open pull requests, and go through CI/CD almost frictionlessly, while maintaining control, security, and traceability at all times.