diff --git a/01-intro-to-ai-agents/README.md b/01-intro-to-ai-agents/README.md
index 18085e3d..4e93358c 100644
--- a/01-intro-to-ai-agents/README.md
+++ b/01-intro-to-ai-agents/README.md
@@ -2,7 +2,7 @@
Welcome to the "AI Agents for Beginners" course! This course gives you fundamental knowledge and applied samples for building with AI Agents.
-Join the [Azure AI Discord Community](https://discord.gg/kzRShWzttr){target="_blank"} to meet other learners, and AI Agent Builders and ask any questions you have on this course.
+Join the Azure AI Discord Community to meet other learners and AI Agent Builders, and to ask any questions you have about this course.
To start this course, we begin by getting a better understanding of what AI Agents are and how we can use them in the applications and workflows we build.
diff --git a/02-explore-agentic-frameworks/README.md b/02-explore-agentic-frameworks/README.md
index 864cbe5c..a0e44008 100644
--- a/02-explore-agentic-frameworks/README.md
+++ b/02-explore-agentic-frameworks/README.md
@@ -4,13 +4,14 @@ AI agent frameworks are software platforms designed to simplify the creation, de
These frameworks help developers focus on the unique aspects of their applications by providing standardized approaches to common challenges in AI agent development. They enhance scalability, accessibility, and efficiency in building AI systems.
-## Introduction
+## Introduction
This lesson will cover:
- What are AI Agent Frameworks and what do they enable developers to do?
- How can teams use these to quickly prototype, iterate, and improve my agent’s capabilities?
-- What are the difference between the frameworks and tools created by Microsoft ( [AutoGen](https://aka.ms/ai-agents/autogen){target="_blank"} / [Semantic Kernel](https://aka.ms/ai-agents-beginners/semantic-kernel){target="_blank"} / [Azure AI Agent Service](https://aka.ms/ai-agents-beginners/ai-agent-service){target="_blank"})
+- What are the differences between the frameworks and tools created by Microsoft: AutoGen, Semantic Kernel, and Azure AI Agent Service?
- Can I integrate my existing Azure ecosystem tools directly, or do I need standalone solutions?
- What is Azure AI Agents service and how is this helping me?
@@ -139,7 +140,7 @@ Open-source framework developed by Microsoft Research's AI Frontiers Lab. Focuse
AutoGen is built around the core concept of agents, which are autonomous entities that can perceive their environment, make decisions, and take actions to achieve specific goals. Agents communicate through asynchronous messages, allowing them to work independently and in parallel, enhancing system scalability and responsiveness.
-Agents are based on the [actor model](https://en.wikipedia.org/wiki/Actor_model){target="_blank"}. Which according to Wikipedia states that an actor is _the basic building block of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received_.
+Agents are based on the actor model. According to Wikipedia, an actor is _the basic building block of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received_.
**Use Cases**: Automating code generation, data analysis tasks, and building custom agents for planning and research functions.
@@ -243,14 +244,14 @@ Here's some important core concepts of AutoGen:
- **Agent Runtime**. The framework provides a runtime environment, enabling communication between agents, manages their identities and lifecycles, and enforce security and privacy boundaries. This means that you can run your agents in a secure and controlled environment, ensuring that they can interact safely and efficiently. There are two runtimes of interest:
- **Stand-alone runtime**. This is a good choice for single-process applications where all agents are implemented in the same programming language and runs in the same process. Here's an illustration of how it works:
- {target="_blank"}
+ Stand-alone runtime
Application stack
*agents communicate via messages through the runtime, and the runtime manages the lifecycle of agents*
- **Distributed agent runtime**, is suitable for multi-process applications where agents may be implemented in different programming languages and running on different machine. Here's an illustration of how it works:
- {target="_blank"}
+ Distributed runtime
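+
+To make the stand-alone runtime concrete, here is a minimal sketch of an actor-style agent registered on it. This is illustrative only: it assumes the `autogen_core` package (AutoGen 0.4+), and the names follow its quickstart, so they may shift between preview releases.
+
+```python
+import asyncio
+from dataclasses import dataclass
+
+from autogen_core import AgentId, MessageContext, RoutedAgent, SingleThreadedAgentRuntime, message_handler
+
+
+@dataclass
+class Greeting:
+    content: str
+
+
+class EchoAgent(RoutedAgent):
+    """An actor: in response to a message it makes a local decision and could send more messages."""
+
+    @message_handler
+    async def on_greeting(self, message: Greeting, ctx: MessageContext) -> None:
+        print(f"EchoAgent received: {message.content}")
+
+
+async def main() -> None:
+    runtime = SingleThreadedAgentRuntime()
+    await EchoAgent.register(runtime, "echo", lambda: EchoAgent("Echoes greetings"))
+    runtime.start()  # start processing messages in the background
+    await runtime.send_message(Greeting("Hello from the runtime"), AgentId("echo", "default"))
+    await runtime.stop_when_idle()
+
+
+asyncio.run(main())
+```
+
+The distributed runtime follows the same message-passing model, but agents can live in different processes and be written in different languages.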
## Semantic Kernel + Agent Framework
@@ -476,8 +477,8 @@ For AutoGen and Semantic Kernel, you can also integrate with Azure services, but
## References
-- [1] - [Azure Agent Service](https://techcommunity.microsoft.com/blog/azure-ai-services-blog/introducing-azure-ai-agent-service/4298357){target="_blank"}
-- [2] - [Semantic Kernel and AutoGen](https://devblogs.microsoft.com/semantic-kernel/microsofts-agentic-ai-frameworks-autogen-and-semantic-kernel/){target="_blank"}
-- [3] - [Semantic Kernel Agent Framework](https://learn.microsoft.com/semantic-kernel/frameworks/agent/?pivots=programming-language-csharp){target="_blank"}
-- [4] - [Azure AI Agent service](https://learn.microsoft.com/azure/ai-services/agents/overview){target="_blank"}
-- [5] - [Using Azure AI Agent Service with AutoGen / Semantic Kernel to build a multi-agent's solution](https://techcommunity.microsoft.com/blog/educatordeveloperblog/using-azure-ai-agent-service-with-autogen--semantic-kernel-to-build-a-multi-agen/4363121){target="_blank"}
\ No newline at end of file
+- Azure Agent Service
+- Semantic Kernel and AutoGen
+- Semantic Kernel Agent Framework
+- Azure AI Agent service
+- Using Azure AI Agent Service with AutoGen / Semantic Kernel to build a multi-agent's solution
\ No newline at end of file
diff --git a/03-agentic-design-patterns/README.md b/03-agentic-design-patterns/README.md
index 7e179c28..87b3483e 100644
--- a/03-agentic-design-patterns/README.md
+++ b/03-agentic-design-patterns/README.md
@@ -84,7 +84,8 @@ Imagine you are designing a Travel Agent, here is how you could think about usin
3. **Consistency** – Make sure the icons for Share Prompt, add a file or photo and tag someone or something are standard and recognizable. Use the paperclip icon to indicate file upload/sharing with the Agent, and an image icon to indicate graphics upload.
## Additional Resources
-- [Practices for Governing Agentic AI Systems | OpenAI](https://openai.com){target="_blank"}
-- [The HAX Toolkit Project - Microsoft Research](https://microsoft.com){target="_blank"}
-- [Responsible AI Toolbox](https://responsibleaitoolbox.ai){target="_blank"}
+
+- Practices for Governing Agentic AI Systems | OpenAI
+- The HAX Toolkit Project - Microsoft Research
+- Responsible AI Toolbox
diff --git a/04-tool-use/README.md b/04-tool-use/README.md
index 63bdb828..4ce3a718 100644
--- a/04-tool-use/README.md
+++ b/04-tool-use/README.md
@@ -1,5 +1,7 @@
# Tool Use Design Pattern
+Tools are interesting because they give AI agents a broader range of capabilities. Instead of being limited to a fixed set of actions, an agent that can call tools can perform a much wider variety of tasks. In this chapter, we will look at the Tool Use Design Pattern, which describes how AI agents can use specific tools to achieve their goals.
+
## Introduction
In this lesson, we're looking to answer the following questions:
@@ -34,6 +36,22 @@ AI Agents can leverage tools to complete complex tasks, retrieve information, or
## What are the elements/building blocks needed to implement the tool use design pattern?
+These building blocks allow the AI agent to perform a wide range of tasks. Let's look at the key elements needed to implement the Tool Use Design Pattern:
+
+- **Function/Tool Calling**: This is the primary way to enable LLMs to interact with tools. Functions or tools are blocks of reusable code that agents use to carry out tasks. These can range from simple functions like a calculator to API calls to third-party services such as stock price lookups or weather forecasts.
+
+- **Dynamic Information Retrieval**: Agents can query external APIs or databases to fetch up-to-date data. This is useful for tasks like data analysis, fetching stock prices, or weather information.
+
+- **Code Execution and Interpretation**: Agents can execute code or scripts to solve mathematical problems, generate reports, or perform simulations.
+
+- **Workflow Automation**: This involves automating repetitive or multi-step workflows by integrating tools like task schedulers, email services, or data pipelines.
+
+- **Customer Support**: Agents can interact with CRM systems, ticketing platforms, or knowledge bases to resolve user queries.
+
+- **Content Generation and Editing**: Agents can leverage tools like grammar checkers, text summarizers, or content safety evaluators to assist with content creation tasks.
+
+Next, let's look at Function/Tool Calling in more detail.
+
### Function/Tool Calling
Function calling is the primary way we enable Large Language Models (LLMs) to interact with tools. You will often see 'Function' and 'Tool' used interchangeably because 'functions' (blocks of reusable code) are the 'tools' agents use to carry out tasks. In order for a function's code to be invoked, an LLM must compare the users request against the functions description. To do this a schema containing the descriptions of all the available functions is sent to the LLM. The LLM then selects the most appropriate function for the task and returns its name and arguments. The selected function is invoked, it's response is sent back to the LLM, which uses the information to respond to the users request.
@@ -46,9 +64,9 @@ For developers to implement function calling for agents, you will need:
Let's use the example of getting the current time in a city to illustrate:
-- **Initialize an LLM that supports function calling:**
+1. **Initialize an LLM that supports function calling:**
- Not all models support function calling, so it's important to check that the LLM you are using does. [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/how-to/function-calling){target="_blank"} supports function calling. We can start by initiating the Azure OpenAI client.
+ Not all models support function calling, so it's important to check that the LLM you are using does. Azure OpenAI supports function calling. We can start by initializing the Azure OpenAI client.
```python
# Initialize the Azure OpenAI client
@@ -59,7 +77,7 @@ Let's use the example of getting the current time in a city to illustrate:
)
```
-- **Create a Function Schema**:
+1. **Create a Function Schema**:
Next we will define a JSON schema that contains the function name, description of what the function does, and the names and descriptions of the function parameters.
We will then take this schema and pass it to the client created above, along with the users request to find the time in San Francisco. Whats important to note is that a **tool call** is what is returned, **not** the final answer to the question. As mentioned earlier, the LLM returns the name of the function it selected for the task, and the arguments that will be passed to it.
@@ -115,7 +133,7 @@ Let's use the example of getting the current time in a city to illustrate:
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_pOsKdUlqvdyttYB67MOj434b', function=Function(arguments='{"location":"San Francisco"}', name='get_current_time'), type='function')])
```
-- **The function code required to carry out the task:**
+1. **The function code required to carry out the task:**
Now that the LLM has chosen which function needs to be run the code that carries out the task needs to be implemented and executed.
We can implement the code to get the current time in Python. We will also need to write the code to extract the name and arguments from the response_message to get the final result.
@@ -178,106 +196,108 @@ Let's use the example of getting the current time in a city to illustrate:
Function Calling is at the heart of most, if not all agent tool use design, however implementing it from scratch can sometimes be challenging.
As we learned in [Lesson 2](../02-explore-agentic-frameworks/) agentic frameworks provide us with pre-built building blocks to implement tool use.
-### Tool Use Examples with Agentic Frameworks
+## Tool Use Examples with Agentic Frameworks
-- ### **[Semantic Kernel](https://learn.microsoft.com/azure/ai-services/agents/overview){target="_blank"}**
+Here are some examples of how you can implement the Tool Use Design Pattern using different agentic frameworks:
- Semantic Kernel is an open-source AI framework for .NET, Python, and Java developers working with Large Language Models (LLMs). It simplifies the process of using function calling by automatically describing your functions and their parameters to the model through a process called [serializing](https://learn.microsoft.com/semantic-kernel/concepts/ai-services/chat-completion/function-calling/?pivots=programming-language-python#1-serializing-the-functions){target="_blank"}. It also handles the back-and-forth communication between the model and your code. Another advantage of using an agentic framework like Semantic Kernel, is that it allows you to access pre-built tools like [File Search](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/getting_started_with_agents/openai_assistant/step4_assistant_tool_file_search.py){target="_blank"} and [Code Interpreter](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/getting_started_with_agents/openai_assistant/step3_assistant_tool_code_interpreter.py){target="_blank"}.
+### Semantic Kernel
- The following diagram illustrates the process of function calling with Semantic Kernel:
+Semantic Kernel is an open-source AI framework for .NET, Python, and Java developers working with Large Language Models (LLMs). It simplifies the process of using function calling by automatically describing your functions and their parameters to the model through a process called serializing. It also handles the back-and-forth communication between the model and your code. Another advantage of using an agentic framework like Semantic Kernel is that it allows you to access pre-built tools like File Search and Code Interpreter.
- 
+The following diagram illustrates the process of function calling with Semantic Kernel:
+
- In Semantic Kernel functions/tools are called [Plugins](https://learn.microsoft.com/semantic-kernel/concepts/plugins/?pivots=programming-language-python){target="_blank"}. We can convert the `get_current_time` function we saw earlier into a plugin by turning it into a class with the function in it. We can also import the `kernel_function` decorator, which takes in the description of the function. When you then create a kernel with the GetCurrentTimePlugin, the kernel will automatically serialize the function and its parameters, creating the schema to send to the LLM in the process.
+In Semantic Kernel, functions/tools are called Plugins. We can convert the `get_current_time` function we saw earlier into a plugin by turning it into a class with the function inside it. We can also import the `kernel_function` decorator, which takes in the description of the function. When you then create a kernel with the GetCurrentTimePlugin, the kernel will automatically serialize the function and its parameters, creating the schema to send to the LLM in the process.
- ```python
- from semantic_kernel.functions import kernel_function
+```python
+from semantic_kernel.functions import kernel_function
- class GetCurrentTimePlugin:
- async def __init__(self, location):
- self.location = location
+class GetCurrentTimePlugin:
+    def __init__(self, location):
+        self.location = location
- @kernel_function(
- description="Get the current time for a given location"
- )
- def get_current_time(location: str = ""):
- ...
+    @kernel_function(
+        description="Get the current time for a given location"
+    )
+    def get_current_time(self, location: str = ""):
+        ...
- ```
+```
- ```python
- from semantic_kernel import Kernel
+```python
+from semantic_kernel import Kernel
- # Create the kernel
- kernel = Kernel()
+# Create the kernel
+kernel = Kernel()
- # Create the plugin
- get_current_time_plugin = GetCurrentTimePlugin(location)
+# Create the plugin
+get_current_time_plugin = GetCurrentTimePlugin(location="San Francisco")  # example location from the earlier scenario
- # Add the plugin to the kernel
- kernel.add_plugin(get_current_time_plugin)
- ```
+# Add the plugin to the kernel
+kernel.add_plugin(get_current_time_plugin)
+```
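+
+From here, the usual flow is to register a chat completion service on the kernel and let it invoke the plugin automatically. The snippet below is a rough sketch of that flow rather than code from this lesson: it assumes your Azure OpenAI endpoint, key, and deployment name are available as environment variables, and the import paths and parameter names follow recent Semantic Kernel Python releases, so they may differ in your version.
+
+```python
+import asyncio
+
+from semantic_kernel import Kernel
+from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
+from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, AzureChatPromptExecutionSettings
+from semantic_kernel.functions import KernelArguments
+
+
+async def main() -> None:
+    kernel = Kernel()
+    # The chat service reads the endpoint, key, and deployment name from environment variables
+    kernel.add_service(AzureChatCompletion(service_id="chat"))
+    kernel.add_plugin(GetCurrentTimePlugin(location="San Francisco"), plugin_name="time")
+
+    # Let the model decide when to call the serialized plugin function
+    settings = AzureChatPromptExecutionSettings(function_choice_behavior=FunctionChoiceBehavior.Auto())
+    answer = await kernel.invoke_prompt(
+        prompt="What time is it in San Francisco?",
+        arguments=KernelArguments(settings=settings),
+    )
+    print(answer)
+
+
+asyncio.run(main())
+```
+
+With `FunctionChoiceBehavior.Auto()`, the schema-out, tool-call-back, result-in loop described earlier is handled by the kernel for you.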
-- ### **[Azure AI Agent Service](https://learn.microsoft.com/azure/ai-services/agents/overview){target="_blank"}**
+### Azure AI Agent Service
- Azure AI Agent Service is a newer agentic framework that is designed to empower developers to securely build, deploy, and scale high-quality, and extensible AI agents without needing to manage the underlying compute and storage resources. It is particularly useful for enterprise applications since it is a fully managed service with enterprise grade security.
+Azure AI Agent Service is a newer agentic framework designed to empower developers to securely build, deploy, and scale high-quality, extensible AI agents without needing to manage the underlying compute and storage resources. It is particularly useful for enterprise applications since it is a fully managed service with enterprise-grade security.
- When compared to developing with the LLM API directly, Azure AI Agent Service provides some advantages, including:
- - Automatic tool calling – no need to parse a tool call, invoke the tool, and handle the response; all of this is now done server-side
- - Securely managed data – instead of managing your own conversation state, you can rely on threads to store all the information you need
- - Out-of-the-box tools – Tools that you can use to interact with your data sources, such as Bing, Azure AI Search, and Azure Functions.
+When compared to developing with the LLM API directly, Azure AI Agent Service provides some advantages, including:
- The tools available in Azure AI Agent Service can be divided into two categories:
+- Automatic tool calling – no need to parse a tool call, invoke the tool, and handle the response; all of this is now done server-side
+- Securely managed data – instead of managing your own conversation state, you can rely on threads to store all the information you need
+- Out-of-the-box tools – Tools that you can use to interact with your data sources, such as Bing, Azure AI Search, and Azure Functions.
- 1. Knowledge Tools:
- - [Grounding with Bing Search](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/bing-grounding?tabs=python&pivots=overview){target="_blank"}
- - [File Search](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/file-search?tabs=python&pivots=overview){target="_blank"}
- - [Azure AI Search](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/azure-ai-search?tabs=azurecli%2Cpython&pivots=overview-azure-ai-search){target="_blank"}
+The tools available in Azure AI Agent Service can be divided into two categories:
- 2. Action Tools:
- - [Function Calling](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/function-calling?tabs=python&pivots=overview){target="_blank"}
- - [Code Interpreter](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/code-interpreter?tabs=python&pivots=overview){target="_blank"}
- - [OpenAI defined tools](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/openapi-spec?tabs=python&pivots=overview){target="_blank"}
- - [Azure Functions](https://learn.microsoft.com/azure/ai-services/agents/how-to/tools/azure-functions?pivots=overview){target="_blank"}
+1. Knowledge Tools:
+ - Grounding with Bing Search
+ - File Search
+ - Azure AI Search
- The Agent Service allows us to be able to use these tools together as a `toolset`. It also utilizes `threads` which keep track of the history of messages from a particular conversation.
+2. Action Tools:
+ - Function Calling
+ - Code Interpreter
+ - OpenAI defined tools
+ - Azure Functions
- Imagine you are a sales agent at a company called Contoso. You want to develop a conversational agent that can answer questions about your sales data.
+The Agent Service allows us to use these tools together as a `toolset`. It also utilizes `threads`, which keep track of the history of messages from a particular conversation.
- The image below illustrates how you could use Azure AI Agent Service to analyze your sales data:
+Imagine you are a sales agent at a company called Contoso. You want to develop a conversational agent that can answer questions about your sales data.
- 
+The image below illustrates how you could use Azure AI Agent Service to analyze your sales data:
- To use any of these tools with the service we can create a client and define a tool or toolset. To implement this practically we can use the Python code below. The LLM will be able to look at the toolset and decide whether to use the user created function, `fetch_sales_data_using_sqlite_query`, or the pre-built Code Interpreter depending on the user request.
+
- ```python
- import os
- from azure.ai.projects import AIProjectClient
- from azure.identity import DefaultAzureCredential
- from fecth_sales_data_functions import fetch_sales_data_using_sqlite_query # fetch_sales_data_using_sqlite_query function which can be found in a fetch_sales_data_functions.py file.
- from azure.ai.projects.models import ToolSet, FunctionTool, CodeInterpreterTool
+To use any of these tools with the service, we can create a client and define a tool or toolset. To implement this practically, we can use the Python code below. The LLM will be able to look at the toolset and decide whether to use the user-created function, `fetch_sales_data_using_sqlite_query`, or the pre-built Code Interpreter, depending on the user request.
- project_client = AIProjectClient.from_connection_string(
- credential=DefaultAzureCredential(),
- conn_str=os.environ["PROJECT_CONNECTION_STRING"],
- )
+```python
+import os
+from azure.ai.projects import AIProjectClient
+from azure.identity import DefaultAzureCredential
+from fetch_sales_data_functions import fetch_sales_data_using_sqlite_query # fetch_sales_data_using_sqlite_query function which can be found in a fetch_sales_data_functions.py file.
+from azure.ai.projects.models import ToolSet, FunctionTool, CodeInterpreterTool
- # Initialize function calling agent with the fetch_sales_data_using_sqlite_query function and adding it to the toolset
- fetch_data_function = FunctionTool(fetch_sales_data_using_sqlite_query)
- toolset = ToolSet()
- toolset.add(fetch_data_function)
+project_client = AIProjectClient.from_connection_string(
+    credential=DefaultAzureCredential(),
+    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
+)
- # Initialize Code Interpreter tool and adding it to the toolset.
- code_interpreter = code_interpreter = CodeInterpreterTool()
- toolset = ToolSet()
- toolset.add(code_interpreter)
+# Wrap the fetch_sales_data_using_sqlite_query function as a FunctionTool, create the toolset, and add the tool to it
+fetch_data_function = FunctionTool(fetch_sales_data_using_sqlite_query)
+toolset = ToolSet()
+toolset.add(fetch_data_function)
- agent = project_client.agents.create_agent(
- model="gpt-4o-mini", name="my-agent", instructions="You are helpful agent",
- toolset=toolset
- )
- ```
+# Initialize the Code Interpreter tool and add it to the same toolset.
+code_interpreter = CodeInterpreterTool()
+toolset.add(code_interpreter)
+
+agent = project_client.agents.create_agent(
+ model="gpt-4o-mini", name="my-agent", instructions="You are helpful agent",
+ toolset=toolset
+)
+```
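+
+To connect this back to `threads`, a conversation could then be run against this agent roughly as follows. Treat this as a sketch: the prompt is an invented example for the Contoso scenario, and the method and parameter names follow the preview `azure-ai-projects` SDK, which has changed between releases (for example, `assistant_id` may be named `agent_id` in newer versions).
+
+```python
+# Create a thread to hold the conversation state server-side
+thread = project_client.agents.create_thread()
+
+# Add the user's question to the thread
+project_client.agents.create_message(
+    thread_id=thread.id,
+    role="user",
+    content="What were the total sales per region last quarter?",
+)
+
+# Run the agent; tool calls (the SQL function or Code Interpreter) are resolved server-side
+run = project_client.agents.create_and_process_run(thread_id=thread.id, assistant_id=agent.id)
+
+# Read back the agent's reply from the thread
+messages = project_client.agents.list_messages(thread_id=thread.id)
+print(messages)
+```
+
+Because the thread stores the conversation state, you can keep adding messages and re-running the agent without managing the history yourself.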
## What are the special considerations for using the Tool Use Design Pattern to build trustworthy AI agents?
@@ -287,8 +307,8 @@ Running the app in a secure environment further enhances protection. In enterpri
## Additional Resources
-- [Azure AI Agents Service Workshop](https://microsoft.github.io/build-your-first-agent-with-azure-ai-agent-service-workshop/){target="_blank"}
-- [Contoso Creative Writer Multi-Agent Workshop](https://github.com/Azure-Samples/contoso-creative-writer/tree/main/docs/workshop){target="_blank"}
-- [Semantic Kernel Function Calling Tutorial](https://learn.microsoft.com/semantic-kernel/concepts/ai-services/chat-completion/function-calling/?pivots=programming-language-python#1-serializing-the-functions){target="_blank"}
-- [Semantic Kernel Code Interpreter](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/getting_started_with_agents/openai_assistant/step3_assistant_tool_code_interpreter.py){target="_blank"}
-- [Autogen Tools](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/components/tools.html){target="_blank"}
+- Azure AI Agents Service Workshop
+- Contoso Creative Writer Multi-Agent Workshop
+- Semantic Kernel Function Calling Tutorial
+- Semantic Kernel Code Interpreter
+- Autogen Tools
diff --git a/05-agentic-rag/README.md b/05-agentic-rag/README.md
index eff7c29e..e8eebe44 100644
--- a/05-agentic-rag/README.md
+++ b/05-agentic-rag/README.md
@@ -114,18 +114,18 @@ Agentic RAG represents a natural evolution in how AI systems handle complex, dat
## Additional Resources
-- Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service: Learn how to use your own data with the Azure OpenAI Service.[This Microsoft Learn module provides a comprehensive guide on implementing RAG](https://learn.microsoft.com/training/modules/use-own-data-azure-openai){target="_blank"}
-- Evaluation of generative AI applications with Azure AI Foundry: This article covers the evaluation and comparison of models on publicly available datasets, including [Agentic AI applications and RAG architectures](https://learn.microsoft.com/azure/ai-studio/concepts/evaluation-approach-gen-ai){target="_blank"}
-- [What is Agentic RAG | Weaviate](https://weaviate.io/blog/what-is-agentic-rag){target="_blank"}
-- [Agentic RAG: A Complete Guide to Agent-Based Retrieval Augmented Generation – News from generation RAG](https://ragaboutit.com/agentic-rag-a-complete-guide-to-agent-based-retrieval-augmented-generation/){target="_blank"}
-- [Agentic RAG: turbocharge your RAG with query reformulation and self-query! Hugging Face Open-Source AI Cookbook](https://huggingface.co/learn/cookbook/agent_rag){target="_blank"}
-- [Adding Agentic Layers to RAG](https://youtu.be/aQ4yQXeB1Ss?si=2HUqBzHoeB5tR04U){target="_blank"}
-- [The Future of Knowledge Assistants: Jerry Liu](https://www.youtube.com/watch?v=zeAyuLc_f3Q&t=244s){target="_blank"}
-- [How to Build Agentic RAG Systems](https://www.youtube.com/watch?v=AOSjiXP1jmQ){target="_blank"}
-- [Using Azure AI Foundry Agent Service to scale your AI agents](https://ignite.microsoft.com/sessions/BRK102?source=sessions){target="_blank"}
+- Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service: Learn how to use your own data with the Azure OpenAI Service. This Microsoft Learn module provides a comprehensive guide on implementing RAG
+- Evaluation of generative AI applications with Azure AI Foundry: This article covers the evaluation and comparison of models on publicly available datasets, including Agentic AI applications and RAG architectures
+- What is Agentic RAG | Weaviate
+- Agentic RAG: A Complete Guide to Agent-Based Retrieval Augmented Generation – News from generation RAG
+- Agentic RAG: turbocharge your RAG with query reformulation and self-query! Hugging Face Open-Source AI Cookbook
+- Adding Agentic Layers to RAG
+- The Future of Knowledge Assistants: Jerry Liu
+- How to Build Agentic RAG Systems
+- Using Azure AI Foundry Agent Service to scale your AI agents
### Academic Papers
-- [2303.17651 Self-Refine: Iterative Refinement with Self-Feedback](https://arxiv.org/abs/2303.17651){target="_blank"}
-- [2303.11366 Reflexion: Language Agents with Verbal Reinforcement Learning](https://arxiv.org/abs/2303.11366){target="_blank"}
-- [2305.11738 CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing](https://arxiv.org/abs/2305.11738){target="_blank"}
+- 2303.17651 Self-Refine: Iterative Refinement with Self-Feedback
+- 2303.11366 Reflexion: Language Agents with Verbal Reinforcement Learning
+- 2305.11738 CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing
\ No newline at end of file
diff --git a/06-building-trustworthy-agents/README.md b/06-building-trustworthy-agents/README.md
index ec72fc3d..42ebedf7 100644
--- a/06-building-trustworthy-agents/README.md
+++ b/06-building-trustworthy-agents/README.md
@@ -183,7 +183,7 @@ Building trustworthy AI agents requires careful design, robust security measures
## Additional Resources
-- [Responsible AI overview](https://learn.microsoft.com/azure/ai-studio/responsible-use-of-ai-overview){target="_blank"}
--[Evaluation of generative AI models and AI applications](https://learn.microsoft.com/azure/ai-studio/concepts/evaluation-approach-gen-ai){target="_blank"}
-- [Safety system messages](https://learn.microsoft.com/azure/ai-services/openai/concepts/system-message?context=%2Fazure%2Fai-studio%2Fcontext%2Fcontext&tabs=top-techniques){target="_blank"}
-- [Risk Assessment Template](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-RAI-Impact-Assessment-Template.pdf?culture=en-us&country=us){target="_blank"}
+- Responsible AI overview
+- Evaluation of generative AI models and AI applications
+- Safety system messages
+- Risk Assessment Template
diff --git a/07-planning-design/README.md b/07-planning-design/README.md
index 39536311..8ac1fb06 100644
--- a/07-planning-design/README.md
+++ b/07-planning-design/README.md
@@ -43,7 +43,7 @@ This modular approach also allows for incremental enhancements. For instance, yo
### Structured output
-Large Language Models (LLMs) can generate structured output (e.g. JSON) that is easier for downstream agents or services to parse and process. This is especially useful in a multi-agent context, where we can action these tasks after the planning output is received. Refer to this [blogpost](https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/cookbook/structured-output-agent.html){target="_blank"} for a quick overview.
+Large Language Models (LLMs) can generate structured output (e.g. JSON) that is easier for downstream agents or services to parse and process. This is especially useful in a multi-agent context, where we can action these tasks after the planning output is received. Refer to the AutoGen structured output cookbook for a quick overview.
Below is an example Python snippet that demonstrates a simple planning agent decomposing a goal into subtasks and generating a structured plan:
@@ -192,7 +192,7 @@ e.g sample code
# .. re-plan and send the tasks to respective agents
```
-For a more comprehensive planning do checkout Magnetic One [Blogpost](https://www.microsoft.com/research/articles/magentic-one-a-generalist-multi-agent-system-for-solving-complex-tasks){target="_blank"} for solving complex tasks.
+For a more comprehensive approach to planning, check out the Magentic-One blog post on solving complex tasks.
## Summary
@@ -200,4 +200,4 @@ In this article we have looked at an example of how we can create a planner that
## Additional Resources
-* AutoGen Magentic One - A Generalist multi agent system for solving complex task and has achieved impressive results on multiple challenging agentic benchmarks. Reference: [autogen-magentic-one](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-magentic-one){target="_blank"}. In this implementation the orchestrator create task specific plan and delegates these tasks to the available agents. In addition to planning the orchestrator also employs a tracking mechanism to monitor the progress of the task and re-plans as required.
+* AutoGen Magentic-One - A generalist multi-agent system for solving complex tasks that has achieved impressive results on multiple challenging agentic benchmarks. Reference: autogen-magentic-one. In this implementation, the orchestrator creates a task-specific plan and delegates these tasks to the available agents. In addition to planning, the orchestrator also employs a tracking mechanism to monitor the progress of the task and re-plans as required.
diff --git a/08-multi-agent/README.md b/08-multi-agent/README.md
index 3df4ddd3..40dbfe7c 100644
--- a/08-multi-agent/README.md
+++ b/08-multi-agent/README.md
@@ -169,5 +169,5 @@ In this lesson, we've looked at the multi-agent design pattern, including the sc
## Additional resources
-- [AutoGen design patterns](https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/design-patterns/intro.html){target="_blank"}
-- [Agentic design patterns](https://www.analyticsvidhya.com/blog/2024/10/agentic-design-patterns/){target="_blank"}
\ No newline at end of file
+- AutoGen design patterns
+- Agentic design patterns
\ No newline at end of file
diff --git a/10-ai-agents-production/README.md b/10-ai-agents-production/README.md
index 59ffe20c..21b9fd8f 100644
--- a/10-ai-agents-production/README.md
+++ b/10-ai-agents-production/README.md
@@ -58,7 +58,7 @@ This is currently the last lesson of "AI Agents for Beginners".
We plan to continue to add lessons based on feedback and changes in this ever growing industry so stop by again in the near future.
-If you want to continue your learning and building with AI Agents, join the [Azure AI Community Discord](https://discord.gg/kzRShWzttr){target="_blank"}.
+If you want to continue your learning and building with AI Agents, join the Azure AI Community Discord.
We host workshops, community roundtables and "ask me anything" sessions there.