diff --git a/.openpublishing.redirection.ai.json b/.openpublishing.redirection.ai.json index c22ca5170dd6b..2590e0ea7f3b4 100644 --- a/.openpublishing.redirection.ai.json +++ b/.openpublishing.redirection.ai.json @@ -1,9 +1,17 @@ { "redirections": [ + { + "source_path_from_root": "/docs/ai/ai-extensions.md", + "redirect_url": "/dotnet/ai/microsoft-extensions-ai" + }, { "source_path_from_root": "/docs/ai/conceptual/agents.md", "redirect_url": "/dotnet/ai" }, + { + "source_path_from_root": "/docs/ai/get-started/dotnet-ai-overview.md", + "redirect_url": "/dotnet/ai/overview" + }, { "source_path_from_root": "/docs/ai/how-to/app-service-db-auth.md", "redirect_url": "/dotnet/ai" diff --git a/.openpublishing.redirection.core.json b/.openpublishing.redirection.core.json index 2542c8aef65af..5f3df9aae20b7 100644 --- a/.openpublishing.redirection.core.json +++ b/.openpublishing.redirection.core.json @@ -692,6 +692,10 @@ "source_path_from_root": "/docs/core/dotnet-five.md", "redirect_url": "/dotnet/core/whats-new/dotnet-5" }, + { + "source_path_from_root": "/docs/core/extensions/artificial-intelligence.md", + "redirect_url": "/dotnet/ai/microsoft-extensions-ai" + }, { "source_path_from_root": "/docs/core/extensions/culture-insensitive-string-operations.md", "redirect_url": "/dotnet/core/extensions/performing-culture-insensitive-string-operations" diff --git a/docs/ai/ai-extensions.md b/docs/ai/ai-extensions.md deleted file mode 100644 index f17860b02e845..0000000000000 --- a/docs/ai/ai-extensions.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: Unified AI building blocks for .NET -description: Learn how to develop with unified AI building blocks for .NET using Microsoft.Extensions.AI and Microsoft.Extensions.AI.Abstractions libraries -ms.date: 12/16/2024 -ms.topic: quickstart -ms.custom: devx-track-dotnet, devx-track-dotnet-ai -author: alexwolfmsft -ms.author: alexwolf ---- - -# Unified AI building blocks for .NET using Microsoft.Extensions.AI - -The .NET ecosystem provides 
abstractions for integrating AI services into .NET applications and libraries using the libraries. The .NET team has also enhanced the core `Microsoft.Extensions` libraries with these abstractions for use in generative AI .NET applications and libraries. In the sections ahead, you learn: - -- Core concepts and capabilities of the `Microsoft.Extensions.AI` libraries. -- How to work with AI abstractions in your apps and the benefits they offer. -- Essential AI middleware concepts. - -For more information, see [Introduction to Microsoft.Extensions.AI](../core/extensions/artificial-intelligence.md). - -## What are the Microsoft.Extensions.AI libraries? - -The `Microsoft.Extensions.AI` libraries provides core exchange types and abstractions for interacting with AI services, such as small and large language models (SLMs and LLMs). They also provide the ability to register services like logging and caching in your dependency injection (DI) container. - -:::image type="content" source="media/ai-extensions/meai-architecture-diagram.png" lightbox="media/ai-extensions/meai-architecture-diagram.png" alt-text="An architectural diagram of the AI extensions libraries."::: - -The `Microsoft.Extensions.AI` namespaces provide abstractions that can be implemented by various services, all adhering to the same core concepts. This library is not intended to provide APIs tailored to any specific provider's services. The goal of `Microsoft.Extensions.AI` is to act as a unifying layer within the .NET ecosystem, enabling developers to choose their preferred frameworks and libraries while ensuring seamless integration and collaboration across the ecosystem. - -## Work with abstractions for common AI services - -AI capabilities are rapidly evolving, with patterns emerging for common functionality: - -- Chat features to conversationally prompt an AI for information or data analysis. -- Embedding generation to integrate with vector search capabilities. 
-- Tool calling to integrate with other services, platforms, or code. - -The `Microsoft.Extensions.AI.Abstractions` package provides abstractions for these types of tasks, so developers can focus on coding against conceptual AI capabilities rather than specific platforms or provider implementations. Unified abstractions are crucial for developers to work effectively across different sources. - -For example, the interface allows consumption of language models from various providers, such as an Azure OpenAI service or a local Ollama installation. Any .NET package that provides an AI client can implement the `IChatClient` interface to enable seamless integration with consuming .NET code: - -```csharp -IChatClient client = -    environment.IsDevelopment ? -    new OllamaChatClient(...) : -    new AzureAIInferenceChatClient(...); -``` - -Then, regardless of the provider you're using, you can send requests by calling , as follows: - -```csharp -var response = await chatClient.GetResponseAsync( -      "Translate the following text into Pig Latin: I love .NET and AI"); - -Console.WriteLine(response.Message); -``` - -These abstractions allow for idiomatic C# code for various scenarios with minimal code changes. They make it easy to use different services for development and production, addressing hybrid scenarios, or exploring other service providers. - -Library authors who implement these abstractions make their clients interoperable with the broader `Microsoft.Extensions.AI` ecosystem. Service-specific APIs remain accessible if needed, allowing consumers to code against the standard abstractions and pass through to proprietary APIs only when required. 
- -`Microsoft.Extensions.AI` provides implementations for the following services through additional packages: - -- [OpenAI](https://aka.ms/meai-openai-nuget) -- [Azure OpenAI](https://aka.ms/meai-openai-nuget) -- [Azure AI Inference](https://aka.ms/meai-azaiinference-nuget) -- [Ollama](https://aka.ms/meai-ollama-nuget) - -In the future, implementations of these `Microsoft.Extensions.AI` abstractions will be part of the respective client libraries rather than requiring installation of additional packages. - -## Middleware implementations for AI services - -Connecting to and using AI services is just one aspect of building robust applications. Production-ready applications require additional features like telemetry, logging, caching, and tool-calling capabilities. The `Microsoft.Extensions.AI` packages provides APIs that enable you to easily integrate these components into your applications using familiar dependency injection and middleware patterns. - -The following sample demonstrates how to register an OpenAI `IChatClient`. You can attach capabilities in a consistent way across various providers by calling methods such as on a . - -```csharp -app.Services.AddChatClient(builder => builder -    .UseLogging() - .UseFunctionInvocation() - .UseDistributedCache()    - .UseOpenTelemetry() -    .Use(new OpenAIClient(...)).AsChatClient(...)); -``` - -The capabilities demonstrated in this snippet are included in the `Microsoft.Extensions.AI` library, but they're only a small subset of the capabilities that can be layered in with this approach. .NET developers are able to expose many types of middleware to create powerful AI functionality. - -## Build with Microsoft.Extensions.AI - -You can start building with `Microsoft.Extensions.AI` in the following ways: - -- **Library developers**: If you own libraries that provide clients for AI services, consider implementing the interfaces in your libraries. 
This allows users to easily integrate your NuGet package via the abstractions. -- **Service consumers**: If you're developing libraries that consume AI services, use the abstractions instead of hardcoding to a specific AI service. This approach gives your consumers the flexibility to choose their preferred service. -- **Application developers**: Use the abstractions to simplify integration into your apps. This enables portability across models and services, facilitates testing and mocking, leverages middleware provided by the ecosystem, and maintains a consistent API throughout your app, even if you use different services in different parts of your application. -- **Ecosystem contributors**: If you're interested in contributing to the ecosystem, consider writing custom middleware components. - -To get started, see the samples in the [dotnet/ai-samples](https://aka.ms/meai-samples) GitHub repository. - -For an end-to-end sample using `Microsoft.Extensions.AI`, see [eShopSupport](https://github.com/dotnet/eShopSupport). - -## Next steps - -- [Build an AI chat app with .NET](quickstarts/build-chat-app.md) -- [Quickstart - Summarize text using Azure AI chat app with .NET](quickstarts/prompt-model.md) diff --git a/docs/ai/conceptual/evaluation-libraries.md b/docs/ai/conceptual/evaluation-libraries.md index 2d94f60468c31..c15f91fadb946 100644 --- a/docs/ai/conceptual/evaluation-libraries.md +++ b/docs/ai/conceptual/evaluation-libraries.md @@ -8,7 +8,7 @@ ms.date: 03/18/2025 The Microsoft.Extensions.AI.Evaluation libraries (currently in preview) simplify the process of evaluating the quality and accuracy of responses generated by AI models in .NET intelligent apps. Various metrics measure aspects like relevance, truthfulness, coherence, and completeness of the responses. Evaluations are crucial in testing, because they help ensure that the AI model performs as expected and provides reliable and accurate results. 
-The evaluation libraries, which are built on top of the [Microsoft.Extensions.AI abstractions](../ai-extensions.md), are composed of the following NuGet packages: +The evaluation libraries, which are built on top of the [Microsoft.Extensions.AI abstractions](../microsoft-extensions-ai.md), are composed of the following NuGet packages: - [📦 Microsoft.Extensions.AI.Evaluation](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) – Defines the core abstractions and types for supporting evaluation. - [📦 Microsoft.Extensions.AI.Evaluation.Quality](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Quality) – Contains evaluators that assess the quality of LLM responses in an app according to metrics such as relevance, fluency, coherence, and truthfulness. diff --git a/docs/ai/dotnet-ai-ecosystem.md b/docs/ai/dotnet-ai-ecosystem.md index 4e7b6db204328..911b75612c410 100644 --- a/docs/ai/dotnet-ai-ecosystem.md +++ b/docs/ai/dotnet-ai-ecosystem.md @@ -1,21 +1,21 @@ --- -title: Overview of the .NET + AI ecosystem +title: .NET + AI ecosystem tools and SDKs description: This article provides an overview of the ecosystem of SDKs and tools available to .NET developers integrating AI into their applications. ms.date: 11/24/2024 ms.topic: overview ms.custom: devx-track-dotnet, devx-track-dotnet-ai --- -# Overview of the .NET + AI ecosystem +# .NET + AI ecosystem tools and SDKs The .NET ecosystem provides many powerful tools, libraries, and services to develop AI applications. .NET supports both cloud and local AI model connections, many different SDKs for various AI and vector database services, and other tools to help you build intelligent apps of varying scope and complexity. > [!IMPORTANT] -> Not all of the SDKs and services presented in this doc are maintained by Microsoft. When considering an SDK, make sure to evaluate its quality, licensing, support, and compatibility to ensure they meet your requirements. 
+> Not all of the SDKs and services presented in this article are maintained by Microsoft. When considering an SDK, make sure to evaluate its quality, licensing, support, and compatibility to ensure they meet your requirements. ## Microsoft.Extensions.AI libraries -[`Microsoft.Extensions.AI`](ai-extensions.md) is a set of core .NET libraries that provide a unified layer of C# abstractions for interacting with AI services, such as small and large language models (SLMs and LLMs), embeddings, and middleware. These APIs were created in collaboration with developers across the .NET ecosystem, including Semantic Kernel. The low-level APIs, such as and , were extracted from Semantic Kernel and moved into the namespace. +[`Microsoft.Extensions.AI`](microsoft-extensions-ai.md) is a set of core .NET libraries that provide a unified layer of C# abstractions for interacting with AI services, such as small and large language models (SLMs and LLMs), embeddings, and middleware. These APIs were created in collaboration with developers across the .NET ecosystem, including Semantic Kernel. The low-level APIs, such as and , were extracted from Semantic Kernel and moved into the namespace. `Microsoft.Extensions.AI` provides abstractions that can be implemented by various services, all adhering to the same core concepts. This library is not intended to provide APIs tailored to any specific provider's services. The goal of `Microsoft.Extensions.AI` is to act as a unifying layer within the .NET ecosystem, enabling developers to choose their preferred frameworks and libraries while ensuring seamless integration and collaboration across the ecosystem. 
diff --git a/docs/ai/how-to/app-service-aoai-auth.md b/docs/ai/how-to/app-service-aoai-auth.md index 1774c71503432..7b70bfc763eca 100644 --- a/docs/ai/how-to/app-service-aoai-auth.md +++ b/docs/ai/how-to/app-service-aoai-auth.md @@ -11,7 +11,7 @@ zone_pivot_groups: azure-interface # Authenticate to Azure OpenAI from an Azure hosted app using Microsoft Entra ID -This article demonstrates how to use [Microsoft Entra ID managed identities](/azure/app-service/overview-managed-identity) and the [Microsoft.Extensions.AI library](../ai-extensions.md) to authenticate an Azure hosted app to an Azure OpenAI resource. +This article demonstrates how to use [Microsoft Entra ID managed identities](/azure/app-service/overview-managed-identity) and the [Microsoft.Extensions.AI library](../microsoft-extensions-ai.md) to authenticate an Azure hosted app to an Azure OpenAI resource. A managed identity from Microsoft Entra ID allows your app to easily access other Microsoft Entra protected resources such as Azure OpenAI. The identity is managed by the Azure platform and doesn't require you to provision, manage, or rotate any secrets. 
diff --git a/docs/ai/index.yml b/docs/ai/index.yml index c509f36218af9..fefbd4ebfdecf 100644 --- a/docs/ai/index.yml +++ b/docs/ai/index.yml @@ -23,15 +23,17 @@ landingContent: linkLists: - linkListType: get-started links: - - text: Develop .NET applications - url: get-started/dotnet-ai-overview.md - - text: Learning resources and samples - url: azure-ai-for-dotnet-developers.md - - text: Build an Azure AI chat app with .NET + - text: Develop .NET apps with AI features + url: overview.md + - text: Connect to and prompt an AI model + url: quickstarts/prompt-model.md + - text: Microsoft.Extensions.AI libraries + url: microsoft-extensions-ai.md + - text: Build an Azure AI chat app url: quickstarts/get-started-openai.md - text: Summarize text using an Azure OpenAI chat app url: quickstarts/quickstart-openai-summarize-text.md - - text: Generate images using Azure AI with .NET + - text: Generate images using Azure AI url: quickstarts/quickstart-openai-generate-images.md # Card diff --git a/docs/ai/microsoft-extensions-ai.md b/docs/ai/microsoft-extensions-ai.md new file mode 100644 index 0000000000000..ca4465bf29d7d --- /dev/null +++ b/docs/ai/microsoft-extensions-ai.md @@ -0,0 +1,265 @@ +--- +title: Microsoft.Extensions.AI libraries (Preview) +description: Learn how to use the Microsoft.Extensions.AI libraries to integrate and interact with various AI services in your .NET applications. +author: IEvangelist +ms.author: dapine +ms.date: 04/29/2025 +--- + +# Microsoft.Extensions.AI libraries (Preview) + +.NET developers need a way to integrate and interact with a growing variety of artificial intelligence (AI) services in their apps. The `Microsoft.Extensions.AI` libraries provide a unified approach for representing generative AI components and enable seamless integration and interoperability with various AI services. This article introduces the libraries and provides in-depth usage examples to help you get started.
+ +## The packages + +The [📦 Microsoft.Extensions.AI.Abstractions](https://www.nuget.org/packages/Microsoft.Extensions.AI.Abstractions) package provides the core exchange types: `IChatClient` and `IEmbeddingGenerator<TInput, TEmbedding>`. Any .NET library that provides an AI client can implement the `IChatClient` interface to enable seamless integration with consuming code. + +The [📦 Microsoft.Extensions.AI](https://www.nuget.org/packages/Microsoft.Extensions.AI) package has an implicit dependency on the `Microsoft.Extensions.AI.Abstractions` package. This package enables you to easily integrate components such as telemetry and caching into your applications using familiar dependency injection and middleware patterns. For example, it provides the `UseOpenTelemetry` extension method, which adds OpenTelemetry support to the chat client pipeline. + +### Which package to reference + +Libraries that provide implementations of the abstractions typically reference only `Microsoft.Extensions.AI.Abstractions`. + +To also have access to higher-level utilities for working with generative AI components, reference the `Microsoft.Extensions.AI` package instead (which itself references `Microsoft.Extensions.AI.Abstractions`). Most consuming applications and services should reference the `Microsoft.Extensions.AI` package along with one or more libraries that provide concrete implementations of the abstractions. + +### Install the packages + +For information about how to install NuGet packages, see [dotnet package add](../core/tools/dotnet-package-add.md) or [Manage package dependencies in .NET applications](../core/tools/dependencies.md).
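For example, a consuming app might add the top-level package with the .NET CLI. This is a sketch, not part of the referenced install articles; the `--prerelease` flag is assumed to be needed while the libraries are in preview:

```shell
dotnet add package Microsoft.Extensions.AI --prerelease
```

A concrete provider package (such as one of the implementation packages listed later in this article) is typically added alongside it.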
+ +## API usage examples + +The following subsections show specific [IChatClient](#the-ichatclient-interface) usage examples: + +- [Request a chat response](#request-a-chat-response) +- [Request a streaming chat response](#request-a-streaming-chat-response) +- [Tool calling](#tool-calling) +- [Cache responses](#cache-responses) +- [Use telemetry](#use-telemetry) +- [Provide options](#provide-options) +- [Pipelines of functionality](#functionality-pipelines) +- [Custom `IChatClient` middleware](#custom-ichatclient-middleware) +- [Dependency injection](#dependency-injection) +- [Stateless vs. stateful clients](#stateless-vs-stateful-clients) + +The following sections show specific [IEmbeddingGenerator](#the-iembeddinggenerator-interface) usage examples: + +- [Sample implementation](#sample-implementation) +- [Create embeddings](#create-embeddings) +- [Pipelines of functionality](#pipelines-of-functionality) + +### The `IChatClient` interface + +The `IChatClient` interface defines a client abstraction responsible for interacting with AI services that provide chat capabilities. It includes methods for sending and receiving messages with multi-modal content (such as text, images, and audio), either as a complete set or streamed incrementally. Additionally, it allows for retrieving strongly typed services provided by the client or its underlying services. + +.NET libraries that provide clients for language models and services can provide an implementation of the `IChatClient` interface. Any consumers of the interface are then able to interoperate seamlessly with these models and services via the abstractions. + +#### Request a chat response + +With an instance of `IChatClient`, you can call the `GetResponseAsync` method to send a request and get a response. The request is composed of one or more messages, each of which is composed of one or more pieces of content. Accelerator methods exist to simplify common cases, such as constructing a request for a single piece of text content.
+ +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI/Program.cs"::: + +The core `IChatClient.GetResponseAsync` method accepts a list of messages. This list represents the history of all messages that are part of the conversation. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.GetResponseAsyncArgs/Program.cs" id="Snippet1"::: + +The `ChatResponse` that's returned from `GetResponseAsync` exposes a list of `ChatMessage` instances that represent one or more messages generated as part of the operation. In common cases, there is only one response message, but in some situations, there can be multiple messages. The message list is ordered, such that the last message in the list represents the final message to the request. To provide all of those response messages back to the service in a subsequent request, you can add the messages from the response back into the messages list. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.AddMessages/Program.cs" id="Snippet1"::: + +#### Request a streaming chat response + +The inputs to `GetStreamingResponseAsync` are identical to those of `GetResponseAsync`. However, rather than returning the complete response as part of a `ChatResponse` object, the method returns an `IAsyncEnumerable<T>` where `T` is `ChatResponseUpdate`, providing a stream of updates that collectively form the single response. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.GetStreamingResponseAsync/Program.cs" id="Snippet1"::: + +> [!TIP] +> Streaming APIs are nearly synonymous with AI user experiences. C# enables compelling scenarios with its `IAsyncEnumerable` support, allowing for a natural and efficient way to stream data. + +As with `GetResponseAsync`, you can add the updates from `GetStreamingResponseAsync` back into the messages list. Because the updates are individual pieces of a response, you can use helpers like `ToChatResponse` to compose one or more updates back into a single `ChatResponse` instance.
Helpers like `AddMessages` compose a `ChatResponse` and then extract the composed messages from the response and add them to a list. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.AddMessages/Program.cs" id="Snippet2"::: + +#### Tool calling + +Some models and services support _tool calling_. To gather additional information, you can configure the `ChatOptions` with information about tools (usually .NET methods) that the model can request the client to invoke. Instead of sending a final response, the model requests a function invocation with specific arguments. The client then invokes the function and sends the results back to the model with the conversation history. The `Microsoft.Extensions.AI` library includes abstractions for various message content types, including function call requests and results. While `IChatClient` consumers can interact with this content directly, `Microsoft.Extensions.AI` automates these interactions. It provides the following types: + +- `AIFunction`: Represents a function that can be described to an AI model and invoked. +- `AIFunctionFactory`: Provides factory methods for creating `AIFunction` instances that represent .NET methods. +- `FunctionInvokingChatClient`: Wraps an `IChatClient` to add automatic function-invocation capabilities. + +The following example demonstrates a random function invocation (this example depends on the [📦 Microsoft.Extensions.AI.Ollama](https://www.nuget.org/packages/Microsoft.Extensions.AI.Ollama) NuGet package): + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.ToolCalling/Program.cs"::: + +The preceding code: + +- Defines a function named `GetCurrentWeather` that returns a random weather forecast. +- Instantiates a `ChatClientBuilder` with an `OllamaChatClient` and configures it to use function invocation. +- Calls `GetStreamingResponseAsync` on the client, passing a prompt and a list of tools that includes a function created with `AIFunctionFactory.Create`. +- Iterates over the response, printing each update to the console.
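Because the referenced snippet isn't shown inline, the following condensed sketch illustrates the pattern the preceding bullets describe. The Ollama endpoint, model name, and weather logic are illustrative placeholders:

```csharp
using Microsoft.Extensions.AI;

// Illustrative tool: returns a random "forecast" the model can request.
string GetCurrentWeather() =>
    Random.Shared.NextDouble() > 0.5 ? "It's sunny" : "It's raining";

// Wrap an Ollama client with automatic function invocation.
IChatClient client = new ChatClientBuilder(
        new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1"))
    .UseFunctionInvocation()
    .Build();

var options = new ChatOptions
{
    Tools = [AIFunctionFactory.Create(GetCurrentWeather)]
};

await foreach (var update in client.GetStreamingResponseAsync(
    "Should I wear a rain coat?", options))
{
    Console.Write(update);
}
```

When the model requests the `GetCurrentWeather` tool, the `FunctionInvokingChatClient` layer invokes it and feeds the result back automatically.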
+ +#### Cache responses + +If you're familiar with [Caching in .NET](../core/extensions/caching.md), it's good to know that `Microsoft.Extensions.AI` provides other such delegating `IChatClient` implementations. The `DistributedCachingChatClient` is an `IChatClient` that layers caching around another arbitrary `IChatClient` instance. When a novel chat history is submitted to the `DistributedCachingChatClient`, it forwards it to the underlying client and then caches the response before sending it back to the consumer. The next time the same history is submitted and a cached response can be found in the cache, the `DistributedCachingChatClient` returns the cached response rather than forwarding the request along the pipeline. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.CacheResponses/Program.cs"::: + +This example depends on the [📦 Microsoft.Extensions.Caching.Memory](https://www.nuget.org/packages/Microsoft.Extensions.Caching.Memory) NuGet package. For more information, see [Caching in .NET](../core/extensions/caching.md). + +#### Use telemetry + +Another example of a delegating chat client is the `OpenTelemetryChatClient`. This implementation adheres to the [OpenTelemetry Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/). Similar to other `IChatClient` delegators, it layers metrics and spans around other arbitrary `IChatClient` implementations. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.UseTelemetry/Program.cs"::: + +(The preceding example depends on the [📦 OpenTelemetry.Exporter.Console](https://www.nuget.org/packages/OpenTelemetry.Exporter.Console) NuGet package.) + +Alternatively, the `LoggingChatClient` and corresponding `UseLogging` method provide a simple way to write log entries to an `ILogger` for every request and response. + +#### Provide options + +Every call to `GetResponseAsync` or `GetStreamingResponseAsync` can optionally supply a `ChatOptions` instance containing additional parameters for the operation.
The most common parameters among AI models and services show up as strongly typed properties on the `ChatOptions` type, such as `ChatOptions.Temperature`. Other parameters can be supplied by name in a weakly typed manner, via the `ChatOptions.AdditionalProperties` dictionary. + +You can also specify options when building an `IChatClient` with the fluent API by chaining a call to the `ConfigureOptions` extension method. This delegating client wraps another client and invokes the supplied delegate to populate a `ChatOptions` instance for every call. For example, to ensure that the `ChatOptions.ModelId` property defaults to a particular model name, you can use code like the following: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.ProvideOptions/Program.cs"::: + +#### Functionality pipelines + +`IChatClient` instances can be layered to create a pipeline of components that each add additional functionality. These components can come from `Microsoft.Extensions.AI`, other NuGet packages, or custom implementations. This approach allows you to augment the behavior of the `IChatClient` in various ways to meet your specific needs. Consider the following code snippet that layers a distributed cache, function invocation, and OpenTelemetry tracing around a sample chat client: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.FunctionalityPipelines/Program.cs" id="Snippet1"::: + +#### Custom `IChatClient` middleware + +To add additional functionality, you can implement `IChatClient` directly or use the `DelegatingChatClient` class. This class serves as a base for creating chat clients that delegate operations to another `IChatClient` instance. It simplifies chaining multiple clients, allowing calls to pass through to an underlying client. + +The `DelegatingChatClient` class provides default implementations for methods like `GetResponseAsync`, `GetStreamingResponseAsync`, and `Dispose`, which forward calls to the inner client.
A derived class can then override only the methods it needs to augment the behavior, while delegating other calls to the base implementation. This approach is useful for creating flexible and modular chat clients that are easy to extend and compose. + +The following is an example class derived from `DelegatingChatClient` that uses the [System.Threading.RateLimiting](https://www.nuget.org/packages/System.Threading.RateLimiting) library to provide rate-limiting functionality. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClient.cs"::: + +As with other `IChatClient` implementations, the `RateLimitingChatClient` can be composed: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.CustomClientMiddle/Program.cs"::: + +To simplify the composition of such components with others, component authors should create a `Use*` extension method for registering the component into a pipeline. For example, consider the following `UseRateLimiting` extension method: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClientExtensions.cs" id="one"::: + +Such extensions can also query for relevant services from the DI container; the `IServiceProvider` used by the pipeline is passed in as an optional parameter: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClientExtensions.OptionalOverload.cs" id="two"::: + +Now it's easy for the consumer to use this in their pipeline, for example: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.ConsumeClientMiddleware/Program.cs" id="SnippetUse"::: + +The previous extension methods demonstrate using a `Use` method on `ChatClientBuilder`. `ChatClientBuilder` also provides overloads that make it easier to write such delegating handlers.
For example, in the earlier `RateLimitingChatClient` example, the overrides of `GetResponseAsync` and `GetStreamingResponseAsync` only need to do work before and after delegating to the next client in the pipeline. To achieve the same thing without writing a custom class, you can use an overload of `Use` that accepts a delegate that's used for both `GetResponseAsync` and `GetStreamingResponseAsync`, reducing the boilerplate required: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.UseExample/Program.cs"::: + +For scenarios where you need a different implementation for `GetResponseAsync` and `GetStreamingResponseAsync` in order to handle their unique return types, you can use the overload that accepts a delegate for each. + +#### Dependency injection + +`IChatClient` implementations are often provided to an application via [dependency injection (DI)](../core/extensions/dependency-injection.md). In this example, an `IDistributedCache` is added into the DI container, as is an `IChatClient`. The registration for the `IChatClient` uses a builder that creates a pipeline containing a caching client (which then uses an `IDistributedCache` retrieved from DI) and the sample client. The injected `IChatClient` can be retrieved and used elsewhere in the app. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.DependencyInjection/Program.cs"::: + +Which instance and configuration are injected can differ based on the current needs of the application, and multiple pipelines can be injected with different keys. + +#### Stateless vs. stateful clients + +_Stateless_ services require all relevant conversation history to be sent back on every request. In contrast, _stateful_ services keep track of the history and require only additional messages to be sent with a request. The `IChatClient` interface is designed to handle both stateless and stateful AI services. + +When working with a stateless service, callers maintain a list of all messages.
They add in all received response messages and provide the list back on subsequent interactions. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/Program.cs" id="Snippet1"::: + +For stateful services, you might already know the identifier used for the relevant conversation. You can put that identifier into `ChatOptions.ChatThreadId`. Usage then follows the same pattern, except there's no need to maintain a history manually. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/Program.cs" id="Snippet2"::: + +Some services might support automatically creating a thread ID for a request that doesn't have one. In such cases, you can transfer the `ChatResponse.ChatThreadId` over to the `ChatOptions.ChatThreadId` for subsequent requests. For example: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/Program.cs" id="Snippet3"::: + +If you don't know ahead of time whether the service is stateless or stateful, you can check the response `ChatThreadId` and act based on its value. If it's set, then that value is propagated to the options and the history is cleared so as to not resend the same history again. If the response `ChatThreadId` isn't set, then the response message is added to the history so that it's sent back to the service on the next turn. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/Program.cs" id="Snippet4"::: + +### The `IEmbeddingGenerator` interface + +The `IEmbeddingGenerator<TInput, TEmbedding>` interface represents a generic generator of embeddings. Here, `TInput` is the type of input values being embedded, and `TEmbedding` is the type of generated embedding, which inherits from the `Embedding` class. + +The `Embedding` class serves as a base class for embeddings generated by an `IEmbeddingGenerator`. It's designed to store and manage the metadata and data associated with embeddings. Derived types, like `Embedding<T>`, provide the concrete embedding vector data.
For example, an `Embedding<float>` exposes a `ReadOnlyMemory<float> Vector { get; }` property for access to its embedding data. + +The `IEmbeddingGenerator` interface defines a method to asynchronously generate embeddings for a collection of input values, with optional configuration and cancellation support. It also provides metadata describing the generator and allows for the retrieval of strongly typed services that can be provided by the generator or its underlying services. + +#### Sample implementation + +The following sample implementation of `IEmbeddingGenerator` shows the general structure. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/AI.Shared/SampleEmbeddingGenerator.cs"::: + +The preceding code: + +- Defines a class named `SampleEmbeddingGenerator` that implements the `IEmbeddingGenerator<string, Embedding<float>>` interface. +- Has a primary constructor that accepts an endpoint and model ID, which are used to identify the generator. +- Implements the `GenerateAsync` method to generate embeddings for a collection of input values. + +The sample implementation just generates random embedding vectors. You can find actual concrete implementations in the following packages: + +- [📦 Microsoft.Extensions.AI.OpenAI](https://www.nuget.org/packages/Microsoft.Extensions.AI.OpenAI) +- [📦 Microsoft.Extensions.AI.Ollama](https://www.nuget.org/packages/Microsoft.Extensions.AI.Ollama) + +#### Create embeddings + +The primary operation performed with an `IEmbeddingGenerator` is embedding generation, which is accomplished with its `GenerateAsync` method. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.CreateEmbeddings/Program.cs" id="Snippet1"::: + +Accelerator extension methods also exist to simplify common cases, such as generating an embedding vector from a single input.
+ +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.CreateEmbeddings/Program.cs" id="Snippet2"::: + +#### Pipelines of functionality + +As with `IChatClient`, `IEmbeddingGenerator` implementations can be layered. `Microsoft.Extensions.AI` provides a delegating implementation for `IEmbeddingGenerator` for caching and telemetry. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.CustomEmbeddingsMiddle/Program.cs"::: + +The `IEmbeddingGenerator` enables building custom middleware that extends the functionality of an `IEmbeddingGenerator`. The `DelegatingEmbeddingGenerator<TInput, TEmbedding>` class is an implementation of the `IEmbeddingGenerator<TInput, TEmbedding>` interface that serves as a base class for creating embedding generators that delegate their operations to another `IEmbeddingGenerator` instance. It allows for chaining multiple generators in any order, passing calls through to an underlying generator. The class provides default implementations for methods such as `GenerateAsync` and `Dispose`, which forward the calls to the inner generator instance, enabling flexible and modular embedding generation. + +The following is an example implementation of such a delegating embedding generator that rate-limits embedding generation requests: + +:::code language="csharp" source="snippets/microsoft-extensions-ai/AI.Shared/RateLimitingEmbeddingGenerator.cs"::: + +This can then be layered around an arbitrary `IEmbeddingGenerator<string, Embedding<float>>` to rate limit all embedding generation operations. + +:::code language="csharp" source="snippets/microsoft-extensions-ai/ConsoleAI.ConsumeRateLimitingEmbedding/Program.cs"::: + +In this way, the `RateLimitingEmbeddingGenerator` can be composed with other `IEmbeddingGenerator<string, Embedding<float>>` instances to provide rate-limiting functionality.
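As another illustration of the same delegating pattern, the following is a minimal sketch of a custom middleware generator. It isn't part of the article's snippet set; the class name and the logging behavior are illustrative assumptions.

```csharp
using System.Linq;
using Microsoft.Extensions.AI;

// A hypothetical delegating generator that logs how many values each request
// embeds, then passes the call through to the inner IEmbeddingGenerator.
public sealed class CountLoggingEmbeddingGenerator(
    IEmbeddingGenerator<string, Embedding<float>> innerGenerator)
    : DelegatingEmbeddingGenerator<string, Embedding<float>>(innerGenerator)
{
    public override async Task<GeneratedEmbeddings<Embedding<float>>> GenerateAsync(
        IEnumerable<string> values,
        EmbeddingGenerationOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // Log before delegating; the inner generator does the real work.
        Console.WriteLine($"Generating embeddings for {values.Count()} values.");
        return await base.GenerateAsync(values, options, cancellationToken)
            .ConfigureAwait(false);
    }
}
```

Like `RateLimitingEmbeddingGenerator`, such a component can be layered around any inner generator, and chained with other delegating generators in any order.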
+ +## Build with Microsoft.Extensions.AI + +You can start building with `Microsoft.Extensions.AI` in the following ways: + +- **Library developers**: If you own libraries that provide clients for AI services, consider implementing the interfaces in your libraries. This allows users to easily integrate your NuGet package via the abstractions. +- **Service consumers**: If you're developing libraries that consume AI services, use the abstractions instead of hardcoding to a specific AI service. This approach gives your consumers the flexibility to choose their preferred service. +- **Application developers**: Use the abstractions to simplify integration into your apps. This enables portability across models and services, facilitates testing and mocking, leverages middleware provided by the ecosystem, and maintains a consistent API throughout your app, even if you use different services in different parts of your application. +- **Ecosystem contributors**: If you're interested in contributing to the ecosystem, consider writing custom middleware components. + +For more samples, see the [dotnet/ai-samples](https://aka.ms/meai-samples) GitHub repository. For an end-to-end sample, see [eShopSupport](https://github.com/dotnet/eShopSupport). 
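To make the portability point above concrete, here's a small sketch of app code written against the abstraction rather than a specific provider. The helper name and prompt are illustrative assumptions, not part of the article.

```csharp
using Microsoft.Extensions.AI;

// Because this method depends only on IChatClient, it works unchanged
// whether the client wraps OpenAI, Ollama, or a custom middleware pipeline.
static async Task<string> SummarizeAsync(IChatClient chatClient, string text)
{
    ChatResponse response =
        await chatClient.GetResponseAsync($"Summarize in one sentence: {text}");
    return response.Text;
}
```

Swapping services then becomes a change to how the `IChatClient` is constructed or registered, not to the code that uses it.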
+ +## See also + +- [Build an AI chat app with .NET](./quickstarts/build-chat-app.md) +- [.NET dependency injection](../core/extensions/dependency-injection.md) +- [Rate limit an HTTP handler in .NET](../core/extensions/http-ratelimiter.md) +- [Caching in .NET](../core/extensions/caching.md) diff --git a/docs/ai/get-started/dotnet-ai-overview.md b/docs/ai/overview.md similarity index 50% rename from docs/ai/get-started/dotnet-ai-overview.md rename to docs/ai/overview.md index 497c40cec1d43..9f7f6dc118046 100644 --- a/docs/ai/get-started/dotnet-ai-overview.md +++ b/docs/ai/overview.md @@ -1,30 +1,30 @@ --- -title: Develop .NET applications with AI features +title: Develop .NET apps with AI features description: Learn how you can build .NET applications that include AI features. ms.date: 04/09/2025 ms.topic: overview ms.custom: devx-track-dotnet, devx-track-dotnet-ai --- -# Develop .NET applications with AI features +# Develop .NET apps with AI features With .NET, you can use artificial intelligence (AI) to automate and accomplish complex tasks in your applications using the tools, platforms, and services that are familiar to you. -## Why choose .NET to build AI applications? +## Why choose .NET to build AI apps? Millions of developers use .NET to create applications that run on the web, on mobile and desktop devices, or in the cloud. By using .NET to integrate AI into your applications, you can take advantage of all that .NET has to offer: * A unified story for building web UIs, APIs, and applications. -* Develop on Windows, macOS, and Linux. -* Open-source and community-focused. +* Supported on Windows, macOS, and Linux. +* Is open source and community-focused. * Runs on top of the most popular web servers and cloud platforms. -* Powerful tooling to edit, debug, test, and deploy. +* Provides powerful tooling to edit, debug, test, and deploy. ## What can you build with AI and .NET? The opportunities with AI are nearly endless.
Here are a few examples of solutions you can build using AI in your .NET applications: -* Language processing: Create virtual agents / chatbots to talk with your data and generate content and images. +* Language processing: Create virtual agents or chatbots to talk with your data and generate content and images. * Computer vision: Identify objects in an image or video. * Audio generation: Use synthesized voices to interact with customers. * Classification: Label the severity of a customer-reported issue. @@ -34,18 +34,18 @@ The opportunities with AI are near endless. Here are a few examples of solutions We recommend the following sequence of tutorials and articles for an introduction to developing applications with AI and .NET: -| Scenario | Tutorial | -|----------|----------| -| Create a chat application | [Build an Azure AI chat app with .NET](../quickstarts/build-chat-app.md)| -| Summarize text | [Summarize text using Azure AI chat app with .NET](../quickstarts/prompt-model.md) | -| Chat with your data | [Get insight about your data from an .NET Azure AI chat app](../quickstarts/build-vector-search-app.md) | -| Call .NET functions with AI | [Extend Azure AI using tools and execute a local function with .NET](../quickstarts/use-function-calling.md) | -| Generate images | [Generate images using Azure AI with .NET](../quickstarts/generate-images.md) | -| Train your own model |[ML.NET tutorial](https://dotnet.microsoft.com/learn/ml-dotnet/get-started-tutorial/intro) | +| Scenario | Tutorial | +|-----------------------------|-------------------------------------------------------------------------| +| Create a chat application | [Build an Azure AI chat app with .NET](./quickstarts/build-chat-app.md) | +| Summarize text | [Summarize text using Azure AI chat app with .NET](./quickstarts/prompt-model.md) | +| Chat with your data | [Get insight about your data from a .NET Azure AI chat app](./quickstarts/build-vector-search-app.md) | +| Call .NET functions with AI | 
[Extend Azure AI using tools and execute a local function with .NET](./quickstarts/use-function-calling.md) | +| Generate images | [Generate images using Azure AI with .NET](./quickstarts/generate-images.md) | +| Train your own model | [ML.NET tutorial](https://dotnet.microsoft.com/learn/ml-dotnet/get-started-tutorial/intro) | -Browse the table of contents to learn more about the core concepts, starting with [How generative AI and LLMs work](../conceptual/how-genai-and-llms-work.md). +Browse the table of contents to learn more about the core concepts, starting with [How generative AI and LLMs work](./conceptual/how-genai-and-llms-work.md). ## Next steps -- [Quickstart: Build an Azure AI chat app with .NET](../quickstarts/build-chat-app.md) -- [Video series: Machine Learning and AI with .NET](/shows/machine-learning-and-ai-with-dotnet-for-beginners) +* [Quickstart: Build an Azure AI chat app with .NET](./quickstarts/build-chat-app.md) +* [Video series: Machine Learning and AI with .NET](/shows/machine-learning-and-ai-with-dotnet-for-beginners) diff --git a/docs/ai/quickstarts/evaluate-ai-response.md b/docs/ai/quickstarts/evaluate-ai-response.md index 387f80454714b..97630751d4e30 100644 --- a/docs/ai/quickstarts/evaluate-ai-response.md +++ b/docs/ai/quickstarts/evaluate-ai-response.md @@ -24,7 +24,7 @@ To provision an Azure OpenAI service and model using the Azure portal, complete ## Create the test app -Complete the following steps to create an MSTest project that connects to your local `phi3:mini` AI model. +Complete the following steps to create an MSTest project that connects to the `gpt-4o` AI model. 1. 
In a terminal window, navigate to the directory where you want to create your app, and create a new MSTest app with the `dotnet new` command: diff --git a/docs/ai/quickstarts/snippets/structured-output/Program.cs b/docs/ai/quickstarts/snippets/structured-output/Program.cs new file mode 100644 index 0000000000000..ae5f6a125a24f --- /dev/null +++ b/docs/ai/quickstarts/snippets/structured-output/Program.cs @@ -0,0 +1,66 @@ +using Azure.AI.OpenAI; +using Azure.Identity; +using Microsoft.Extensions.AI; +using Microsoft.Extensions.Configuration; + +// +IConfigurationRoot config = new ConfigurationBuilder() + .AddUserSecrets<Program>() + .Build(); + +string endpoint = config["AZURE_OPENAI_ENDPOINT"]; +string model = config["AZURE_OPENAI_GPT_NAME"]; +string tenantId = config["AZURE_TENANT_ID"]; + +// Get a chat client for the Azure OpenAI endpoint. +AzureOpenAIClient azureClient = + new( + new Uri(endpoint), + new DefaultAzureCredential(new DefaultAzureCredentialOptions() { TenantId = tenantId })); +IChatClient chatClient = azureClient + .GetChatClient(deploymentName: model) + .AsIChatClient(); +// + +// +string review = "I'm happy with the product!"; +var response = await chatClient.GetResponseAsync<Sentiment>($"What's the sentiment of this review? {review}"); +Console.WriteLine($"Sentiment: {response.Result}"); +// + +// +string[] inputs = [ + "Best purchase ever!", + "Returned it immediately.", + "Hello", + "It works as advertised.", + "The packaging was damaged but otherwise okay." +]; + +foreach (var i in inputs) +{ + var response2 = await chatClient.GetResponseAsync<Sentiment>($"What's the sentiment of this review? {i}"); + Console.WriteLine($"Review: {i} | Sentiment: {response2.Result}"); +} +// + +// +var review3 = "This product worked okay."; +var response3 = await chatClient.GetResponseAsync<SentimentRecord>($"What's the sentiment of this review? 
{review3}"); + +Console.WriteLine($"Response text: {response3.Result.ResponseText}"); +Console.WriteLine($"Sentiment: {response3.Result.ReviewSentiment}"); +// + +// +record SentimentRecord(string ResponseText, Sentiment ReviewSentiment); +// + +// +public enum Sentiment +{ + Positive, + Negative, + Neutral +} +// diff --git a/docs/ai/quickstarts/snippets/structured-output/SOChat.csproj b/docs/ai/quickstarts/snippets/structured-output/SOChat.csproj new file mode 100644 index 0000000000000..ba893a63002a3 --- /dev/null +++ b/docs/ai/quickstarts/snippets/structured-output/SOChat.csproj @@ -0,0 +1,20 @@ + + + + Exe + net9.0 + enable + enable + f28ec9ea-e017-46d7-9865-73550c9ec06b + + + + + + + + + + + + diff --git a/docs/ai/quickstarts/structured-output.md b/docs/ai/quickstarts/structured-output.md new file mode 100644 index 0000000000000..dc9586121f8c4 --- /dev/null +++ b/docs/ai/quickstarts/structured-output.md @@ -0,0 +1,115 @@ +--- +title: Quickstart - Request a response with structured output +description: Learn how to create a chat app that responds with structured output, that is, output that conforms to a type that you specify. +ms.date: 04/30/2025 +ms.topic: quickstart +ms.custom: devx-track-dotnet, devx-track-dotnet-ai +--- + +# Request a response with structured output + +In this quickstart, you create a chat app that requests a response with *structured output*. A structured output response is a chat response that's of a type you specify instead of just plain text. The chat app you create in this quickstart analyzes sentiment of various product reviews, categorizing each review according to the values of a custom enumeration. 
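Before the step-by-step walkthrough, the core call shape this quickstart builds up to can be sketched as follows. This is a condensed illustration, assuming an already-configured `IChatClient` named `chatClient`; the enum mirrors the one defined later in the article.

```csharp
using Microsoft.Extensions.AI;

// Passing a type argument to GetResponseAsync requests structured output:
// response.Result is a Sentiment value rather than free-form text.
// Assumes chatClient is an already-configured IChatClient.
var response = await chatClient.GetResponseAsync<Sentiment>(
    "What's the sentiment of this review? Best purchase ever!");
Console.WriteLine(response.Result);

public enum Sentiment { Positive, Negative, Neutral }
```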
+ +## Prerequisites + +- [.NET 8 or a later version](https://dotnet.microsoft.com/download) +- [Visual Studio Code](https://code.visualstudio.com/) (optional) + +## Configure the AI service + +To provision an Azure OpenAI service and model using the Azure portal, complete the steps in the [Create and deploy an Azure OpenAI Service resource](/azure/ai-services/openai/how-to/create-resource?pivots=web-portal) article. In the "Deploy a model" step, select the `gpt-4o` model. + +## Create the chat app + +Complete the following steps to create a console app that connects to the `gpt-4o` AI model. + +1. In a terminal window, navigate to the directory where you want to create your app, and create a new console app with the `dotnet new` command: + + ```dotnetcli + dotnet new console -o SOChat + ``` + +1. Navigate to the `SOChat` directory, and add the necessary packages to your app: + + ```dotnetcli + dotnet add package Azure.AI.OpenAI + dotnet add package Azure.Identity + dotnet add package Microsoft.Extensions.AI --prerelease + dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease + dotnet add package Microsoft.Extensions.Configuration + dotnet add package Microsoft.Extensions.Configuration.UserSecrets + ``` + +1. Run the following commands to add [app secrets](/aspnet/core/security/app-secrets) for your Azure OpenAI endpoint, model name, and tenant ID: + + ```bash + dotnet user-secrets init + dotnet user-secrets set AZURE_OPENAI_ENDPOINT <your-Azure-OpenAI-endpoint> + dotnet user-secrets set AZURE_OPENAI_GPT_NAME gpt-4o + dotnet user-secrets set AZURE_TENANT_ID <your-tenant-ID> + ``` + + > [!NOTE] + > Depending on your environment, the tenant ID might not be needed. In that case, remove it from the code that instantiates the `DefaultAzureCredential`. + +1. Open the new app in your editor of choice. + +## Add the code + +1. Define the enumeration that describes the different sentiments. + + :::code language="csharp" source="./snippets/structured-output/Program.cs" id="SentimentEnum"::: + +1. 
Create the `IChatClient` that will communicate with the model. + + :::code language="csharp" source="./snippets/structured-output/Program.cs" id="GetChatClient"::: + + > [!NOTE] + > `DefaultAzureCredential` searches for authentication credentials from your environment or local tooling. You'll need to assign the `Azure AI Developer` role to the account you used to sign in to Visual Studio or the Azure CLI. For more information, see [Authenticate to Azure AI services with .NET](../azure-ai-services-authentication.md). + +1. Send a request to the model with a single product review, and then print the analyzed sentiment to the console. You declare the requested structured output type by passing it as the type argument to the `GetResponseAsync<T>` extension method. + + :::code language="csharp" source="./snippets/structured-output/Program.cs" id="SimpleRequest"::: + + This code produces output similar to: + + ```output + Sentiment: Positive + ``` + +1. Instead of just analyzing a single review, you can analyze a collection of reviews. + + :::code language="csharp" source="./snippets/structured-output/Program.cs" id="MultipleReviews"::: + + This code produces output similar to: + + ```output + Review: Best purchase ever! | Sentiment: Positive + Review: Returned it immediately. | Sentiment: Negative + Review: Hello | Sentiment: Neutral + Review: It works as advertised. | Sentiment: Neutral + Review: The packaging was damaged but otherwise okay. | Sentiment: Neutral + ``` + +1. Instead of requesting just the analyzed enumeration value, you can also request the text response along with the analyzed value. 
+ + Define a record type to contain the text response and analyzed sentiment: + + :::code language="csharp" source="./snippets/structured-output/Program.cs" id="InputOutputRecord"::: + + Send the request using the record type as the type argument to `GetResponseAsync`: + + :::code language="csharp" source="./snippets/structured-output/Program.cs" id="RecordRequest"::: + + This code produces output similar to: + + ```output + Response text: Certainly, I have analyzed the sentiment of the review you provided. + Sentiment: Neutral + ``` + +## See also + +- [Structured outputs (Azure OpenAI Service)](/azure/ai-services/openai/how-to/structured-outputs) +- [Using JSON schema for structured output in .NET for OpenAI models](https://devblogs.microsoft.com/semantic-kernel/using-json-schema-for-structured-output-in-net-for-openai-models) +- [Introducing Structured Outputs in the API (OpenAI)](https://openai.com/index/introducing-structured-outputs-in-the-api/) diff --git a/docs/core/extensions/snippets/ai/AI.Shared/AI.Shared.csproj b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/AI.Shared.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/AI.Shared/AI.Shared.csproj rename to docs/ai/snippets/microsoft-extensions-ai/AI.Shared/AI.Shared.csproj diff --git a/docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClient.cs b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClient.cs similarity index 83% rename from docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClient.cs rename to docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClient.cs index d913e2a20f86a..f4cf2b2cd8df8 100644 --- a/docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClient.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClient.cs @@ -7,36 +7,30 @@ public sealed class RateLimitingChatClient( : DelegatingChatClient(innerClient) { public override async Task GetResponseAsync( - IEnumerable 
chatMessages, + IEnumerable<ChatMessage> messages, ChatOptions? options = null, CancellationToken cancellationToken = default) { using var lease = await rateLimiter.AcquireAsync(permitCount: 1, cancellationToken) .ConfigureAwait(false); - if (!lease.IsAcquired) - { throw new InvalidOperationException("Unable to acquire lease."); - } - return await base.GetResponseAsync(chatMessages, options, cancellationToken) + return await base.GetResponseAsync(messages, options, cancellationToken) .ConfigureAwait(false); } public override async IAsyncEnumerable<ChatResponseUpdate> GetStreamingResponseAsync( - IEnumerable<ChatMessage> chatMessages, + IEnumerable<ChatMessage> messages, ChatOptions? options = null, [EnumeratorCancellation] CancellationToken cancellationToken = default) { using var lease = await rateLimiter.AcquireAsync(permitCount: 1, cancellationToken) .ConfigureAwait(false); - if (!lease.IsAcquired) - { throw new InvalidOperationException("Unable to acquire lease."); - } - await foreach (var update in base.GetStreamingResponseAsync(chatMessages, options, cancellationToken) + await foreach (var update in base.GetStreamingResponseAsync(messages, options, cancellationToken) .ConfigureAwait(false)) { yield return update; @@ -46,9 +40,7 @@ public override async IAsyncEnumerable GetStreamingResponseA protected override void Dispose(bool disposing) { if (disposing) - { rateLimiter.Dispose(); - } base.Dispose(disposing); } diff --git a/docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClientExtensions.OptionalOverload.cs b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClientExtensions.OptionalOverload.cs similarity index 71% rename from docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClientExtensions.OptionalOverload.cs rename to docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClientExtensions.OptionalOverload.cs index 066cf22f6ee44..8973a3391e482 100644 --- a/docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClientExtensions.OptionalOverload.cs +++ 
b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClientExtensions.OptionalOverload.cs @@ -8,10 +8,12 @@ public static class RateLimitingChatClientExtensions { public static ChatClientBuilder UseRateLimiting( - this ChatClientBuilder builder, RateLimiter? rateLimiter = null) => + this ChatClientBuilder builder, + RateLimiter? rateLimiter = null) => builder.Use((innerClient, services) => new RateLimitingChatClient( innerClient, - rateLimiter ?? services.GetRequiredService<RateLimiter>())); + rateLimiter ?? services.GetRequiredService<RateLimiter>()) + ); } // diff --git a/docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClientExtensions.cs b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClientExtensions.cs similarity index 54% rename from docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClientExtensions.cs rename to docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClientExtensions.cs index 5f0fe5765b193..9ab89f6227e44 100644 --- a/docs/core/extensions/snippets/ai/AI.Shared/RateLimitingChatClientExtensions.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingChatClientExtensions.cs @@ -7,7 +7,10 @@ public static class RateLimitingChatClientExtensions { public static ChatClientBuilder UseRateLimiting( - this ChatClientBuilder builder, RateLimiter rateLimiter) => - builder.Use(innerClient => new RateLimitingChatClient(innerClient, rateLimiter)); + this ChatClientBuilder builder, + RateLimiter rateLimiter) => + builder.Use(innerClient => + new RateLimitingChatClient(innerClient, rateLimiter) + ); } // diff --git a/docs/core/extensions/snippets/ai/AI.Shared/RateLimitingEmbeddingGenerator.cs b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingEmbeddingGenerator.cs similarity index 100% rename from docs/core/extensions/snippets/ai/AI.Shared/RateLimitingEmbeddingGenerator.cs rename to docs/ai/snippets/microsoft-extensions-ai/AI.Shared/RateLimitingEmbeddingGenerator.cs diff --git 
a/docs/core/extensions/snippets/ai/AI.Shared/SampleChatClient.cs b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/SampleChatClient.cs similarity index 100% rename from docs/core/extensions/snippets/ai/AI.Shared/SampleChatClient.cs rename to docs/ai/snippets/microsoft-extensions-ai/AI.Shared/SampleChatClient.cs diff --git a/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/SampleEmbeddingGenerator.cs b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/SampleEmbeddingGenerator.cs new file mode 100644 index 0000000000000..ddf1e6b53aa28 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/AI.Shared/SampleEmbeddingGenerator.cs @@ -0,0 +1,35 @@ +using Microsoft.Extensions.AI; + +public sealed class SampleEmbeddingGenerator( + Uri endpoint, string modelId) + : IEmbeddingGenerator<string, Embedding<float>> +{ + private readonly EmbeddingGeneratorMetadata _metadata = + new("SampleEmbeddingGenerator", endpoint, modelId); + + public async Task<GeneratedEmbeddings<Embedding<float>>> GenerateAsync( + IEnumerable<string> values, + EmbeddingGenerationOptions? options = null, + CancellationToken cancellationToken = default) + { + // Simulate some async operation. + await Task.Delay(100, cancellationToken); + + // Create random embeddings. + return new GeneratedEmbeddings<Embedding<float>>( + from value in values + select new Embedding<float>( + Enumerable.Range(0, 384).Select(_ => Random.Shared.NextSingle()).ToArray())); + } + + public object? GetService(Type serviceType, object? serviceKey) => + serviceKey is not null + ? null + : serviceType == typeof(EmbeddingGeneratorMetadata) + ? _metadata + : serviceType?.IsInstanceOfType(this) is true + ? 
this + : null; + + void IDisposable.Dispose() { } +} diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.UseExample/ConsoleAI.UseExample.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.AddMessages/ConsoleAI.AddMessages.csproj similarity index 89% rename from docs/core/extensions/snippets/ai/ConsoleAI.UseExample/ConsoleAI.UseExample.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.AddMessages/ConsoleAI.AddMessages.csproj index b615dd1b868c2..821fdd5c951db 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.UseExample/ConsoleAI.UseExample.csproj +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.AddMessages/ConsoleAI.AddMessages.csproj @@ -1,4 +1,4 @@ - + Exe diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.AddMessages/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.AddMessages/Program.cs new file mode 100644 index 0000000000000..9fc42e45199e4 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.AddMessages/Program.cs @@ -0,0 +1,37 @@ +using Microsoft.Extensions.AI; + +IChatClient client = new SampleChatClient( + new Uri("http://coolsite.ai"), "target-ai-model"); + +// +List<ChatMessage> history = []; +while (true) +{ + Console.Write("Q: "); + history.Add(new(ChatRole.User, Console.ReadLine())); + + ChatResponse response = await client.GetResponseAsync(history); + Console.WriteLine(response); + + history.AddMessages(response); +} +// + +// +List<ChatMessage> chatHistory = []; +while (true) +{ + Console.Write("Q: "); + chatHistory.Add(new(ChatRole.User, Console.ReadLine())); + + List<ChatResponseUpdate> updates = []; + await foreach (ChatResponseUpdate update in + client.GetStreamingResponseAsync(chatHistory)) + { + Console.Write(update); + updates.Add(update); + } + Console.WriteLine(); + + chatHistory.AddMessages(updates); +} +// diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.CacheResponses/ConsoleAI.CacheResponses.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CacheResponses/ConsoleAI.CacheResponses.csproj similarity index 
82% rename from docs/core/extensions/snippets/ai/ConsoleAI.CacheResponses/ConsoleAI.CacheResponses.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CacheResponses/ConsoleAI.CacheResponses.csproj index 6d87d65be1031..94db5114b14ba 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.CacheResponses/ConsoleAI.CacheResponses.csproj +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CacheResponses/ConsoleAI.CacheResponses.csproj @@ -8,6 +8,7 @@ + diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.CacheResponses/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CacheResponses/Program.cs similarity index 85% rename from docs/core/extensions/snippets/ai/ConsoleAI.CacheResponses/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CacheResponses/Program.cs index eac78def1b9e0..6e15bf3da0788 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.CacheResponses/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CacheResponses/Program.cs @@ -3,8 +3,7 @@ using Microsoft.Extensions.Caching.Memory; using Microsoft.Extensions.Options; -var sampleChatClient = new SampleChatClient( - new Uri("http://coolsite.ai"), "target-ai-model"); +var sampleChatClient = new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1"); IChatClient client = new ChatClientBuilder(sampleChatClient) .UseDistributedCache(new MemoryDistributedCache( @@ -19,6 +18,5 @@ { Console.Write(update); } - Console.WriteLine(); } diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.ConsumeClientMiddleware/ConsoleAI.ConsumeClientMiddleware.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeClientMiddleware/ConsoleAI.ConsumeClientMiddleware.csproj similarity index 81% rename from docs/core/extensions/snippets/ai/ConsoleAI.ConsumeClientMiddleware/ConsoleAI.ConsumeClientMiddleware.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeClientMiddleware/ConsoleAI.ConsumeClientMiddleware.csproj 
index bec56340237d7..6da26fd2e1c47 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.ConsumeClientMiddleware/ConsoleAI.ConsumeClientMiddleware.csproj +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeClientMiddleware/ConsoleAI.ConsumeClientMiddleware.csproj @@ -8,6 +8,7 @@ + diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.ConsumeClientMiddleware/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeClientMiddleware/Program.cs similarity index 69% rename from docs/core/extensions/snippets/ai/ConsoleAI.ConsumeClientMiddleware/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeClientMiddleware/Program.cs index 082ee7821cdc2..d59952ee6f485 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.ConsumeClientMiddleware/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeClientMiddleware/Program.cs @@ -1,11 +1,11 @@ using Example.Two; -// using Microsoft.Extensions.AI; using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Hosting; -var builder = Host.CreateApplicationBuilder(args); +// +HostApplicationBuilder builder = Host.CreateApplicationBuilder(args); builder.Services.AddChatClient(services => new SampleChatClient(new Uri("http://localhost"), "test") @@ -14,13 +14,12 @@ .UseRateLimiting() .UseOpenTelemetry() .Build(services)); - -using var app = builder.Build(); +// // Elsewhere in the app -var chatClient = app.Services.GetRequiredService<IChatClient>(); +using IHost app = builder.Build(); +IChatClient chatClient = app.Services.GetRequiredService<IChatClient>(); Console.WriteLine(await chatClient.GetResponseAsync("What is AI?")); app.Run(); -// diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.ConsumeRateLimitingEmbedding/ConsoleAI.ConsumeRateLimitingEmbedding.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeRateLimitingEmbedding/ConsoleAI.ConsumeRateLimitingEmbedding.csproj similarity index 100% rename from 
docs/core/extensions/snippets/ai/ConsoleAI.ConsumeRateLimitingEmbedding/ConsoleAI.ConsumeRateLimitingEmbedding.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeRateLimitingEmbedding/ConsoleAI.ConsumeRateLimitingEmbedding.csproj diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.ConsumeRateLimitingEmbedding/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeRateLimitingEmbedding/Program.cs similarity index 80% rename from docs/core/extensions/snippets/ai/ConsoleAI.ConsumeRateLimitingEmbedding/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeRateLimitingEmbedding/Program.cs index d7987319e07ee..b6bb41eb7e5af 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.ConsumeRateLimitingEmbedding/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ConsumeRateLimitingEmbedding/Program.cs @@ -10,7 +10,8 @@ QueueLimit = int.MaxValue })); -foreach (var embedding in await generator.GenerateAsync(["What is AI?", "What is .NET?"])) +foreach (Embedding<float> embedding in + await generator.GenerateAsync(["What is AI?", "What is .NET?"])) { Console.WriteLine(string.Join(", ", embedding.Vector.ToArray())); } diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.CreateEmbeddings/ConsoleAI.CreateEmbeddings.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CreateEmbeddings/ConsoleAI.CreateEmbeddings.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/ConsoleAI.CreateEmbeddings/ConsoleAI.CreateEmbeddings.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CreateEmbeddings/ConsoleAI.CreateEmbeddings.csproj diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CreateEmbeddings/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CreateEmbeddings/Program.cs new file mode 100644 index 0000000000000..a9b9c3adacc34 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CreateEmbeddings/Program.cs @@ -0,0 +1,17 @@ +// 
+using Microsoft.Extensions.AI; + +IEmbeddingGenerator<string, Embedding<float>> generator = + new SampleEmbeddingGenerator( + new Uri("http://coolsite.ai"), "target-ai-model"); + +foreach (Embedding<float> embedding in + await generator.GenerateAsync(["What is AI?", "What is .NET?"])) +{ + Console.WriteLine(string.Join(", ", embedding.Vector.ToArray())); +} +// + +// +ReadOnlyMemory<float> vector = await generator.GenerateEmbeddingVectorAsync("What is AI?"); +// diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.CustomClientMiddle/ConsoleAI.CustomClientMiddle.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomClientMiddle/ConsoleAI.CustomClientMiddle.csproj similarity index 76% rename from docs/core/extensions/snippets/ai/ConsoleAI.CustomClientMiddle/ConsoleAI.CustomClientMiddle.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomClientMiddle/ConsoleAI.CustomClientMiddle.csproj index b7875a3582d78..5a2a4cd8162cc 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.CustomClientMiddle/ConsoleAI.CustomClientMiddle.csproj +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomClientMiddle/ConsoleAI.CustomClientMiddle.csproj @@ -8,7 +8,8 @@ - + + diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomClientMiddle/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomClientMiddle/Program.cs new file mode 100644 index 0000000000000..bc71d9bb9e897 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomClientMiddle/Program.cs @@ -0,0 +1,8 @@ +using Microsoft.Extensions.AI; +using System.Threading.RateLimiting; + +var client = new RateLimitingChatClient( + new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1"), + new ConcurrencyLimiter(new() { PermitLimit = 1, QueueLimit = int.MaxValue })); + +Console.WriteLine(await client.GetResponseAsync("What color is the sky?")); diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.CustomEmbeddingsMiddle/ConsoleAI.CustomEmbeddingsMiddle.csproj 
b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomEmbeddingsMiddle/ConsoleAI.CustomEmbeddingsMiddle.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/ConsoleAI.CustomEmbeddingsMiddle/ConsoleAI.CustomEmbeddingsMiddle.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomEmbeddingsMiddle/ConsoleAI.CustomEmbeddingsMiddle.csproj diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.CustomEmbeddingsMiddle/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomEmbeddingsMiddle/Program.cs similarity index 90% rename from docs/core/extensions/snippets/ai/ConsoleAI.CustomEmbeddingsMiddle/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomEmbeddingsMiddle/Program.cs index d03272c2024ac..f35794faf6dff 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.CustomEmbeddingsMiddle/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.CustomEmbeddingsMiddle/Program.cs @@ -11,8 +11,8 @@ .AddConsoleExporter() .Build(); -// Explore changing the order of the intermediate "Use" calls to see that impact -// that has on what gets cached, traced, etc. +// Explore changing the order of the intermediate "Use" calls to see +// what impact that has on what gets cached and traced. IEmbeddingGenerator<string, Embedding<float>> generator = new EmbeddingGeneratorBuilder<string, Embedding<float>>( new SampleEmbeddingGenerator(new Uri("http://coolsite.ai"), "target-ai-model")) .UseDistributedCache( @@ -28,7 +28,7 @@ "What is AI?" 
]); -foreach (var embedding in embeddings) +foreach (Embedding<float> embedding in embeddings) { Console.WriteLine(string.Join(", ", embedding.Vector.ToArray())); } diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.DependencyInjection/ConsoleAI.DependencyInjection.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.DependencyInjection/ConsoleAI.DependencyInjection.csproj similarity index 84% rename from docs/core/extensions/snippets/ai/ConsoleAI.DependencyInjection/ConsoleAI.DependencyInjection.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.DependencyInjection/ConsoleAI.DependencyInjection.csproj index 4ac04759c2475..621ab0bd78e45 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.DependencyInjection/ConsoleAI.DependencyInjection.csproj +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.DependencyInjection/ConsoleAI.DependencyInjection.csproj @@ -8,6 +8,7 @@ + diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.DependencyInjection/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.DependencyInjection/Program.cs new file mode 100644 index 0000000000000..67b58783b56fa --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.DependencyInjection/Program.cs @@ -0,0 +1,14 @@ +using Microsoft.Extensions.AI; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Hosting; + +// App setup. +var builder = Host.CreateApplicationBuilder(); +builder.Services.AddDistributedMemoryCache(); +builder.Services.AddChatClient(new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1")) + .UseDistributedCache(); +var host = builder.Build(); + +// Elsewhere in the app. 
+var chatClient = host.Services.GetRequiredService<IChatClient>(); +Console.WriteLine(await chatClient.GetResponseAsync("What is AI?")); diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.FunctionalityPipelines/ConsoleAI.FunctionalityPipelines.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.FunctionalityPipelines/ConsoleAI.FunctionalityPipelines.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/ConsoleAI.FunctionalityPipelines/ConsoleAI.FunctionalityPipelines.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.FunctionalityPipelines/ConsoleAI.FunctionalityPipelines.csproj diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.FunctionalityPipelines/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.FunctionalityPipelines/Program.cs new file mode 100644 index 0000000000000..9d9c913d0d481 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.FunctionalityPipelines/Program.cs @@ -0,0 +1,40 @@ +using Microsoft.Extensions.AI; +using Microsoft.Extensions.Caching.Distributed; +using Microsoft.Extensions.Caching.Memory; +using Microsoft.Extensions.Options; +using OpenTelemetry.Trace; + +// Configure OpenTelemetry exporter. +var sourceName = Guid.NewGuid().ToString(); +var tracerProvider = OpenTelemetry.Sdk.CreateTracerProviderBuilder() + .AddSource(sourceName) + .AddConsoleExporter() + .Build(); + +// +// Explore changing the order of the intermediate "Use" calls. +IChatClient client = new ChatClientBuilder(new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1")) + .UseDistributedCache(new MemoryDistributedCache(Options.Create(new MemoryDistributedCacheOptions()))) + .UseFunctionInvocation() + .UseOpenTelemetry(sourceName: sourceName, configure: c => c.EnableSensitiveData = true) + .Build(); +// + +ChatOptions options = new() +{ + Tools = [AIFunctionFactory.Create( + () => Random.Shared.NextDouble() > 0.5 ? 
"It's sunny" : "It's raining", + name: "GetCurrentWeather", + description: "Gets the current weather")] +}; + +for (int i = 0; i < 3; i++) +{ + List<ChatMessage> history = + [ + new ChatMessage(ChatRole.System, "You are a helpful AI assistant"), + new ChatMessage(ChatRole.User, "Do I need an umbrella?") + ]; + + Console.WriteLine(await client.GetResponseAsync(history, options)); +} diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.GetResponseAsyncArgs/ConsoleAI.GetResponseAsyncArgs.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetResponseAsyncArgs/ConsoleAI.GetResponseAsyncArgs.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/ConsoleAI.GetResponseAsyncArgs/ConsoleAI.GetResponseAsyncArgs.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetResponseAsyncArgs/ConsoleAI.GetResponseAsyncArgs.csproj diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.GetResponseAsyncArgs/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetResponseAsyncArgs/Program.cs similarity index 90% rename from docs/core/extensions/snippets/ai/ConsoleAI.GetResponseAsyncArgs/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetResponseAsyncArgs/Program.cs index b33fe5f1a3d80..92bb0e9f6891b 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.GetResponseAsyncArgs/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetResponseAsyncArgs/Program.cs @@ -3,8 +3,10 @@ IChatClient client = new SampleChatClient( new Uri("http://coolsite.ai"), "target-ai-model"); +// Console.WriteLine(await client.GetResponseAsync( [ new(ChatRole.System, "You are a helpful AI assistant"), new(ChatRole.User, "What is AI?"), ])); +// diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.GetStreamingResponseAsync/ConsoleAI.GetStreamingResponseAsync.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetStreamingResponseAsync/ConsoleAI.GetStreamingResponseAsync.csproj similarity index 100% rename from 
docs/core/extensions/snippets/ai/ConsoleAI.GetStreamingResponseAsync/ConsoleAI.GetStreamingResponseAsync.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetStreamingResponseAsync/ConsoleAI.GetStreamingResponseAsync.csproj diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.GetStreamingResponseAsync/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetStreamingResponseAsync/Program.cs similarity index 89% rename from docs/core/extensions/snippets/ai/ConsoleAI.GetStreamingResponseAsync/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetStreamingResponseAsync/Program.cs index 67cc73d828867..37f80109796ce 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.GetStreamingResponseAsync/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.GetStreamingResponseAsync/Program.cs @@ -3,7 +3,9 @@ IChatClient client = new SampleChatClient( new Uri("http://coolsite.ai"), "target-ai-model"); +// await foreach (ChatResponseUpdate update in client.GetStreamingResponseAsync("What is AI?")) { Console.Write(update); } +// diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.ProvideOptions/ConsoleAI.ProvideOptions.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ProvideOptions/ConsoleAI.ProvideOptions.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/ConsoleAI.ProvideOptions/ConsoleAI.ProvideOptions.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ProvideOptions/ConsoleAI.ProvideOptions.csproj diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ProvideOptions/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ProvideOptions/Program.cs new file mode 100644 index 0000000000000..bbf1bf5c385e2 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ProvideOptions/Program.cs @@ -0,0 +1,11 @@ +using Microsoft.Extensions.AI; + +IChatClient client = new OllamaChatClient(new Uri("http://localhost:11434")) + .AsBuilder() + 
.ConfigureOptions(options => options.ModelId ??= "phi3") + .Build(); + +// Will request "phi3". +Console.WriteLine(await client.GetResponseAsync("What is AI?")); +// Will request "llama3.1". +Console.WriteLine(await client.GetResponseAsync("What is AI?", new() { ModelId = "llama3.1" })); diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/ConsoleAI.StatelessStateful.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/ConsoleAI.StatelessStateful.csproj new file mode 100644 index 0000000000000..64aeca66a5074 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/ConsoleAI.StatelessStateful.csproj @@ -0,0 +1,18 @@ + + + + Exe + net9.0 + enable + enable + + + + + + + + + + + diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/Program.cs new file mode 100644 index 0000000000000..2e50df5071158 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.StatelessStateful/Program.cs @@ -0,0 +1,66 @@ +using Microsoft.Extensions.AI; + +IChatClient client = new SampleChatClient( + new Uri("http://coolsite.ai"), "target-ai-model"); + +// +List<ChatMessage> history = []; +while (true) +{ + Console.Write("Q: "); + history.Add(new(ChatRole.User, Console.ReadLine())); + + var response = await client.GetResponseAsync(history); + Console.WriteLine(response); + + history.AddMessages(response); +} +// + +// +ChatOptions statefulOptions = new() { ChatThreadId = "my-conversation-id" }; +while (true) +{ + Console.Write("Q: "); + ChatMessage message = new(ChatRole.User, Console.ReadLine()); + + Console.WriteLine(await client.GetResponseAsync(message, statefulOptions)); +} +// + +// +ChatOptions options = new(); +while (true) +{ + Console.Write("Q: "); + ChatMessage message = new(ChatRole.User, Console.ReadLine()); + + ChatResponse response = await client.GetResponseAsync(message, options); + 
Console.WriteLine(response); + + options.ChatThreadId = response.ChatThreadId; +} +// + +// +List<ChatMessage> chatHistory = []; +ChatOptions chatOptions = new(); +while (true) +{ + Console.Write("Q: "); + chatHistory.Add(new(ChatRole.User, Console.ReadLine())); + + ChatResponse response = await client.GetResponseAsync(chatHistory); + Console.WriteLine(response); + + chatOptions.ChatThreadId = response.ChatThreadId; + if (response.ChatThreadId is not null) + { + chatHistory.Clear(); + } + else + { + chatHistory.AddMessages(response); + } +} +// diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.ToolCalling/ConsoleAI.ToolCalling.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ToolCalling/ConsoleAI.ToolCalling.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/ConsoleAI.ToolCalling/ConsoleAI.ToolCalling.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ToolCalling/ConsoleAI.ToolCalling.csproj diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ToolCalling/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ToolCalling/Program.cs new file mode 100644 index 0000000000000..438cd7a4bd7dd --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.ToolCalling/Program.cs @@ -0,0 +1,16 @@ +using Microsoft.Extensions.AI; + +string GetCurrentWeather() => Random.Shared.NextDouble() > 0.5 ? 
"It's sunny" : "It's raining"; + +IChatClient client = new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1") + .AsBuilder() + .UseFunctionInvocation() + .Build(); + +ChatOptions options = new() { Tools = [AIFunctionFactory.Create(GetCurrentWeather)] }; + +var response = client.GetStreamingResponseAsync("Should I wear a rain coat?", options); +await foreach (var update in response) +{ + Console.Write(update); +} diff --git a/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseExample/ConsoleAI.UseExample.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseExample/ConsoleAI.UseExample.csproj new file mode 100644 index 0000000000000..deb4106409231 --- /dev/null +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseExample/ConsoleAI.UseExample.csproj @@ -0,0 +1,18 @@ + + + + Exe + net9.0 + enable + enable + + + + + + + + + + + diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.UseExample/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseExample/Program.cs similarity index 58% rename from docs/core/extensions/snippets/ai/ConsoleAI.UseExample/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseExample/Program.cs index 5918576009ffe..333d9290e31f9 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.UseExample/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseExample/Program.cs @@ -7,22 +7,16 @@ QueueLimit = int.MaxValue }); -IChatClient client = new SampleChatClient(new Uri("http://localhost"), "test") +IChatClient client = new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1") .AsBuilder() .UseDistributedCache() - .Use(async (chatMessages, options, nextAsync, cancellationToken) => + .Use(async (messages, options, nextAsync, cancellationToken) => { - using var lease = await rateLimiter.AcquireAsync(permitCount: 1, cancellationToken) - .ConfigureAwait(false); - + using var lease = await rateLimiter.AcquireAsync(permitCount: 1, 
cancellationToken).ConfigureAwait(false); if (!lease.IsAcquired) - { throw new InvalidOperationException("Unable to acquire lease."); - } - await nextAsync(chatMessages, options, cancellationToken); + await nextAsync(messages, options, cancellationToken); }) .UseOpenTelemetry() .Build(); - -// Use client diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.UseTelemetry/ConsoleAI.UseTelemetry.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseTelemetry/ConsoleAI.UseTelemetry.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/ConsoleAI.UseTelemetry/ConsoleAI.UseTelemetry.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseTelemetry/ConsoleAI.UseTelemetry.csproj diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.UseTelemetry/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseTelemetry/Program.cs similarity index 85% rename from docs/core/extensions/snippets/ai/ConsoleAI.UseTelemetry/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseTelemetry/Program.cs index fc7e483f8118f..db1c7a2502712 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI.UseTelemetry/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI.UseTelemetry/Program.cs @@ -1,7 +1,7 @@ using Microsoft.Extensions.AI; using OpenTelemetry.Trace; -// Configure OpenTelemetry exporter +// Configure OpenTelemetry exporter. 
string sourceName = Guid.NewGuid().ToString(); TracerProvider tracerProvider = OpenTelemetry.Sdk.CreateTracerProviderBuilder() .AddSource(sourceName) @@ -14,7 +14,7 @@ IChatClient client = new ChatClientBuilder(sampleChatClient) .UseOpenTelemetry( sourceName: sourceName, - configure: static c => c.EnableSensitiveData = true) + configure: c => c.EnableSensitiveData = true) .Build(); Console.WriteLine((await client.GetResponseAsync("What is AI?")).Text); diff --git a/docs/core/extensions/snippets/ai/ConsoleAI/ConsoleAI.csproj b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI/ConsoleAI.csproj similarity index 100% rename from docs/core/extensions/snippets/ai/ConsoleAI/ConsoleAI.csproj rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI/ConsoleAI.csproj diff --git a/docs/core/extensions/snippets/ai/ConsoleAI/Program.cs b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI/Program.cs similarity index 55% rename from docs/core/extensions/snippets/ai/ConsoleAI/Program.cs rename to docs/ai/snippets/microsoft-extensions-ai/ConsoleAI/Program.cs index dafbef8ee3ae1..f92735dee5e72 100644 --- a/docs/core/extensions/snippets/ai/ConsoleAI/Program.cs +++ b/docs/ai/snippets/microsoft-extensions-ai/ConsoleAI/Program.cs @@ -3,6 +3,4 @@ IChatClient client = new SampleChatClient( new Uri("http://coolsite.ai"), "target-ai-model"); -var response = await client.GetResponseAsync("What is AI?"); - -Console.WriteLine(response.Messages.Single()); +Console.WriteLine(await client.GetResponseAsync("What is AI?")); diff --git a/docs/ai/toc.yml b/docs/ai/toc.yml index e070935f5a93a..68229d946e1e0 100644 --- a/docs/ai/toc.yml +++ b/docs/ai/toc.yml @@ -2,21 +2,23 @@ items: - name: AI for .NET developers href: index.yml - name: Overview - href: get-started/dotnet-ai-overview.md + href: overview.md - name: "Quickstart: Connect to and prompt an AI model" href: quickstarts/prompt-model.md -- name: AI frameworks and SDKs +- name: AI tools and SDKs items: - name: Overview href: 
dotnet-ai-ecosystem.md - name: Microsoft.Extensions.AI - href: ai-extensions.md + href: microsoft-extensions-ai.md - name: Semantic Kernel href: semantic-kernel-dotnet-overview.md - name: Quickstarts items: - name: Build a chat app href: quickstarts/build-chat-app.md + - name: Request structured output + href: quickstarts/structured-output.md - name: Build a .NET AI vector search app href: quickstarts/build-vector-search-app.md - name: Execute a local .NET function diff --git a/docs/azure/includes/dotnet-all.md b/docs/azure/includes/dotnet-all.md index 0b9780b4244a0..9ee6338cbb203 100644 --- a/docs/azure/includes/dotnet-all.md +++ b/docs/azure/includes/dotnet-all.md @@ -397,6 +397,7 @@ | Speech Extension ONNX Runtime | NuGet [1.43.0](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.ONNX.Runtime/1.43.0) | | | | Speech Extension Telemetry | NuGet [1.43.0](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.Extension.Telemetry/1.43.0) | | | | System Net Client Model | NuGet [1.0.0-beta.1](https://www.nuget.org/packages/System.Net.ClientModel/1.0.0-beta.1) | | | +| Unknown Display Name | NuGet [1.1.0-preview](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.AzureCosmosDb.Mongo/1.1.0-preview) | | | | WebJobs Extension MySQL | NuGet [1.0.129](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.MySql/1.0.129) | | | | Anomaly Detector | NuGet [1.0.0](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.AnomalyDetector/1.0.0) | | GitHub [1.0.0](https://github.com/Azure/azure-sdk-for-net/tree/Microsoft.Azure.CognitiveServices.AnomalyDetector_1.0.0-preview.1/sdk/cognitiveservices/AnomalyDetector) | | App Service | NuGet [0.2.2-alpha](https://www.nuget.org/packages/Microsoft.Azure.AppService/0.2.2-alpha) | | | diff --git a/docs/azure/index.yml b/docs/azure/index.yml index db0306af33cdb..fbb9e38f2aedb 100644 --- a/docs/azure/index.yml +++ b/docs/azure/index.yml @@ -12,7 +12,7 @@ metadata: 
ms.date: 08/15/2024 highlightedContent: -# itemType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | whats-new + # itemType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | whats-new items: - itemType: overview title: Introduction to Azure and .NET @@ -42,7 +42,7 @@ highlightedContent: conceptualContent: title: Featured content summary: Learn to develop .NET apps leveraging a variety of Azure services. -# itemType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | video | whats-new + # itemType: architecture | concept | deploy | download | get-started | how-to-guide | learn | overview | quickstart | reference | tutorial | video | whats-new items: - title: Create web apps links: @@ -96,7 +96,7 @@ conceptualContent: - title: Create intelligent apps with AI links: - itemType: overview - url: ../ai/get-started/dotnet-ai-overview.md + url: ../ai/overview.md text: AI for .NET overview - itemType: quickstart url: ../ai/quickstarts/get-started-openai.md diff --git a/docs/azure/TOC.yml b/docs/azure/toc.yml similarity index 98% rename from docs/azure/TOC.yml rename to docs/azure/toc.yml index 92284500bf699..fb85b4f900b16 100644 --- a/docs/azure/TOC.yml +++ b/docs/azure/toc.yml @@ -49,8 +49,8 @@ href: ./migration/vm.md - name: Migrate a SQL Server database to Azure href: ./migration/sql.md -- name: Azure AI for .NET - href: ../ai/get-started/dotnet-ai-overview.md +- name: AI for .NET + href: ../ai/overview.md?toc=/dotnet/azure/toc.json&bc=/dotnet/breadcrumb/toc.json - name: Azure SDK for .NET items: - name: What is the Azure SDK for .NET? 
diff --git a/docs/core/extensions/artificial-intelligence.md b/docs/core/extensions/artificial-intelligence.md deleted file mode 100644 index b97c27c2b6fdd..0000000000000 --- a/docs/core/extensions/artificial-intelligence.md +++ /dev/null @@ -1,280 +0,0 @@ ---- -title: Artificial Intelligence in .NET (Preview) -description: Learn how to use the Microsoft.Extensions.AI libraries to integrate and interact with various AI services in your .NET applications. -author: IEvangelist -ms.author: dapine -ms.date: 01/06/2025 -ms.collection: ce-skilling-ai-copilot ---- - -# Artificial intelligence in .NET (Preview) - -With a growing variety of artificial intelligence (AI) services available, developers need a way to integrate and interact with these services in their .NET applications. The `Microsoft.Extensions.AI` libraries provide a unified approach for representing generative AI components, which enables seamless integration and interoperability with various AI services. This article introduces the libraries and provides installation instructions and usage examples to help you get started. - -The [📦 Microsoft.Extensions.AI.Abstractions](https://www.nuget.org/packages/Microsoft.Extensions.AI.Abstractions) package provides the core exchange types, including `IChatClient` and `IEmbeddingGenerator<TInput, TEmbedding>`. Any .NET library that provides an AI client can implement the `IChatClient` interface to enable seamless integration with consuming code. - -The [📦 Microsoft.Extensions.AI](https://www.nuget.org/packages/Microsoft.Extensions.AI) package has an implicit dependency on the `Microsoft.Extensions.AI.Abstractions` package. This package enables you to easily integrate components such as telemetry and caching into your applications using familiar dependency injection and middleware patterns. For example, it provides the `UseOpenTelemetry` extension method, which adds OpenTelemetry support to the chat client pipeline. 
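The two paragraphs above describe the library's central design: a single exchange interface that providers implement and that middleware such as caching and telemetry can wrap without knowing which provider sits underneath. That decorator idea can be sketched without the real packages; `ISimpleChatClient`, `EchoChatClient`, and `CachingChatClient` below are invented stand-ins, not the Microsoft.Extensions.AI types:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Demo: the caching decorator only hits the inner client once per prompt.
var echo = new EchoChatClient();
var client = new CachingChatClient(echo);
Console.WriteLine(await client.GetResponseAsync("What is AI?"));
Console.WriteLine(await client.GetResponseAsync("What is AI?"));
Console.WriteLine($"Inner calls: {echo.CallCount}");

// Hypothetical stand-in for the IChatClient exchange type.
public interface ISimpleChatClient
{
    Task<string> GetResponseAsync(string prompt);
}

// A provider implements the exchange type...
public sealed class EchoChatClient : ISimpleChatClient
{
    public int CallCount { get; private set; }

    public Task<string> GetResponseAsync(string prompt)
    {
        CallCount++;
        return Task.FromResult($"echo: {prompt}");
    }
}

// ...and middleware (caching, telemetry, rate limiting) wraps any
// implementation, because it depends only on the shared interface.
public sealed class CachingChatClient : ISimpleChatClient
{
    private readonly ISimpleChatClient _inner;
    private readonly ConcurrentDictionary<string, string> _cache = new();

    public CachingChatClient(ISimpleChatClient inner) => _inner = inner;

    public async Task<string> GetResponseAsync(string prompt) =>
        _cache.TryGetValue(prompt, out string? hit)
            ? hit
            : _cache[prompt] = await _inner.GetResponseAsync(prompt);
}
```

Because the decorator holds an `ISimpleChatClient` rather than a concrete provider, decorators compose: a caching wrapper around a telemetry wrapper around a provider is just nested construction, which is the same shape the real `ChatClientBuilder` pipeline produces.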
- -## Install the package - -To install the [📦 Microsoft.Extensions.AI](https://www.nuget.org/packages/Microsoft.Extensions.AI) and [📦 Microsoft.Extensions.AI.Abstractions](https://www.nuget.org/packages/Microsoft.Extensions.AI.Abstractions) NuGet packages, use the .NET CLI or add package references directly to your C# project file: - -### [.NET CLI](#tab/dotnet-cli) - -```dotnetcli -dotnet add package Microsoft.Extensions.AI --prerelease -``` - -Or, if you're using .NET 10+ SDK: - -```dotnetcli -dotnet package add Microsoft.Extensions.AI --prerelease -``` - -### [PackageReference](#tab/package-reference) - -```xml - -``` - ---- - -For more information, see [dotnet package add](../tools/dotnet-package-add.md) or [Manage package dependencies in .NET applications](../tools/dependencies.md). - -## The `IChatClient` interface - -The `IChatClient` interface defines a client abstraction responsible for interacting with AI services that provide chat capabilities. It includes methods for sending and receiving messages with multi-modal content (such as text, images, and audio), either as a complete set or streamed incrementally. Additionally, it provides metadata information about the client and allows retrieving strongly typed services. - -> [!IMPORTANT] -> For more usage examples and real-world scenarios, see [AI for .NET developers](../../ai/index.yml). - -The following sample implements `IChatClient` to show the general structure. - -:::code language="csharp" source="snippets/ai/AI.Shared/SampleChatClient.cs"::: - -You can find other concrete implementations of `IChatClient` in the following NuGet packages: - -- [📦 Microsoft.Extensions.AI.AzureAIInference](https://www.nuget.org/packages/Microsoft.Extensions.AI.AzureAIInference): Implementation backed by [Azure AI Model Inference API](/azure/ai-studio/reference/reference-model-inference-api). 
-- [📦 Microsoft.Extensions.AI.Ollama](https://www.nuget.org/packages/Microsoft.Extensions.AI.Ollama): Implementation backed by [Ollama](https://ollama.com/). -- [📦 Microsoft.Extensions.AI.OpenAI](https://www.nuget.org/packages/Microsoft.Extensions.AI.OpenAI): Implementation backed by either [OpenAI](https://openai.com/) or OpenAI-compatible endpoints (such as [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service)). - -The following subsections show specific `IChatClient` usage examples: - -- [Request chat completion](#request-chat-completion) - [Request chat completion with streaming](#request-chat-completion-with-streaming) - [Tool calling](#tool-calling) - [Cache responses](#cache-responses) - [Use telemetry](#use-telemetry) - [Provide options](#provide-options) - [Functionality pipelines](#functionality-pipelines) - [Custom `IChatClient` middleware](#custom-ichatclient-middleware) - [Dependency injection](#dependency-injection) - -### Request chat completion - -To request a completion, call the `GetResponseAsync` method. The request is composed of one or more messages, each of which is composed of one or more pieces of content. Accelerator methods exist to simplify common cases, such as constructing a request for a single piece of text content. - -:::code language="csharp" source="snippets/ai/ConsoleAI/Program.cs"::: - -The core `IChatClient.GetResponseAsync` method accepts a list of messages. This list represents the history of all messages that are part of the conversation. - -:::code language="csharp" source="snippets/ai/ConsoleAI.GetResponseAsyncArgs/Program.cs"::: - -Each message in the history is represented by a `ChatMessage` object. The `ChatMessage` class provides a `Role` property that indicates the role of the message. By default, the `ChatRole.User` role is used. The following roles are available: - -- `ChatRole.System`: Instructs or sets the behavior of the assistant. -- `ChatRole.Assistant`: Provides responses to system-instructed, user-prompted input. 
-- `ChatRole.Tool`: Provides additional information and references for chat completions. -- `ChatRole.User`: Provides input for chat completions. - -Each chat message is instantiated, assigning to its `Contents` property a new `AIContent`. There are various [types of content](xref:Microsoft.Extensions.AI.AIContent) that can be represented, such as a simple string or a more complex object that represents a multi-modal message with text, images, and audio: - -- `TextContent` -- `DataContent` -- `FunctionCallContent` -- `FunctionResultContent` - -### Request chat completion with streaming - -The inputs to `GetStreamingResponseAsync` are identical to those of `GetResponseAsync`. However, rather than returning the complete response as part of a `ChatResponse` object, the method returns an `IAsyncEnumerable<T>` where `T` is `ChatResponseUpdate`, providing a stream of updates that collectively form the single response. - -:::code language="csharp" source="snippets/ai/ConsoleAI.GetStreamingResponseAsync/Program.cs"::: - -> [!TIP] -> Streaming APIs are nearly synonymous with AI user experiences. C# enables compelling scenarios with its `IAsyncEnumerable` support, allowing for a natural and efficient way to stream data. - -### Tool calling - -Some models and services support _tool calling_, where requests can include tools for the model to invoke functions to gather additional information. Instead of sending a final response, the model requests a function invocation with specific arguments. The client then invokes the function and sends the results back to the model along with the conversation history. The `Microsoft.Extensions.AI` library includes abstractions for various message content types, including function call requests and results. While consumers can interact with this content directly, `Microsoft.Extensions.AI` automates these interactions and provides: - -- `AIFunction`: Represents a function that can be described to an AI service and invoked. -- `AIFunctionFactory`: Provides factory methods for creating commonly used implementations of `AIFunction`. -- `FunctionInvokingChatClient`: Wraps an `IChatClient` to add automatic function invocation capabilities. 
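The invoke-and-resubmit loop that `FunctionInvokingChatClient` automates can be sketched independently of the real types. Everything here, including `FakeModel` and the string-based `CALL:` convention, is invented for illustration; the real client exchanges structured function-call and function-result content rather than strings:

```csharp
using System;
using System.Collections.Generic;

var tools = new Dictionary<string, Func<string>>
{
    // Mirrors the article's sample tool.
    ["GetCurrentWeather"] = () => "It's raining",
};

Console.WriteLine(Run("Do I need an umbrella?", tools));

// The loop the middleware automates: if the "model" requests a tool call
// instead of answering, invoke the tool, append the result to the
// conversation history, and ask the model again.
static string Run(string prompt, Dictionary<string, Func<string>> tools)
{
    var history = new List<string> { $"user: {prompt}" };
    while (true)
    {
        string output = FakeModel(history);
        if (output.StartsWith("CALL:"))
        {
            string name = output["CALL:".Length..];
            history.Add($"tool({name}): {tools[name]()}");
            continue; // resubmit with the tool result in the history
        }
        return output;
    }
}

// Invented stand-in for a model: requests the tool once, then answers
// based on the tool result it finds in the history.
static string FakeModel(List<string> history)
{
    foreach (string message in history)
    {
        if (message.StartsWith("tool(GetCurrentWeather):"))
        {
            return message.Contains("raining")
                ? "Yes, bring an umbrella."
                : "No umbrella needed.";
        }
    }
    return "CALL:GetCurrentWeather";
}
```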
- -Consider the following example that demonstrates a random function invocation: - -:::code language="csharp" source="snippets/ai/ConsoleAI.ToolCalling/Program.cs"::: - -The preceding example depends on the [📦 Microsoft.Extensions.AI.Ollama](https://www.nuget.org/packages/Microsoft.Extensions.AI.Ollama) NuGet package. - -The preceding code: - -- Defines a function named `GetCurrentWeather` that returns a random weather forecast. - - This function is decorated with a `DescriptionAttribute`, which is used to provide a description of the function to the AI service. -- Instantiates a `ChatClientBuilder` with an `OllamaChatClient` and configures it to use function invocation. -- Calls `GetStreamingResponseAsync` on the client, passing a prompt and a list of tools that includes a function created with `AIFunctionFactory.Create`. -- Iterates over the response, printing each update to the console. - -### Cache responses - -If you're familiar with [Caching in .NET](caching.md), it's good to know that `Microsoft.Extensions.AI` provides other such delegating `IChatClient` implementations. The `DistributedCachingChatClient` is an `IChatClient` that layers caching around another arbitrary `IChatClient` instance. When a unique chat history is submitted to the `DistributedCachingChatClient`, it forwards it to the underlying client and then caches the response before sending it back to the consumer. The next time the same prompt is submitted and a cached response is found, the `DistributedCachingChatClient` returns the cached response rather than forwarding the request along the pipeline. - -:::code language="csharp" source="snippets/ai/ConsoleAI.CacheResponses/Program.cs"::: - -The preceding example depends on the [📦 Microsoft.Extensions.Caching.Memory](https://www.nuget.org/packages/Microsoft.Extensions.Caching.Memory) NuGet package. For more information, see [Caching in .NET](caching.md). - -### Use telemetry - -Another example of a delegating chat client is the `OpenTelemetryChatClient`. 
This implementation adheres to the [OpenTelemetry Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/). Similar to other `IChatClient` delegators, it layers metrics and spans around any underlying `IChatClient` implementation, providing enhanced observability. - -:::code language="csharp" source="snippets/ai/ConsoleAI.UseTelemetry/Program.cs"::: - -The preceding example depends on the [📦 OpenTelemetry.Exporter.Console](https://www.nuget.org/packages/OpenTelemetry.Exporter.Console) NuGet package. - -### Provide options - -Every call to or can optionally supply a instance containing additional parameters for the operation. The most common parameters among AI models and services show up as strongly typed properties on the type, such as . Other parameters can be supplied by name in a weakly typed manner via the dictionary. - -You can also specify options when building an `IChatClient` with the fluent API and chaining a call to the `ConfigureOptions` extension method. This delegating client wraps another client and invokes the supplied delegate to populate a `ChatOptions` instance for every call. For example, to ensure that the property defaults to a particular model name, you can use code like the following: - -:::code language="csharp" source="snippets/ai/ConsoleAI.ProvideOptions/Program.cs"::: - -The preceding example depends on the [📦 Microsoft.Extensions.AI.Ollama](https://www.nuget.org/packages/Microsoft.Extensions.AI.Ollama) NuGet package. - -### Functionality pipelines - -`IChatClient` instances can be layered to create a pipeline of components, each adding specific functionality. These components can come from `Microsoft.Extensions.AI`, other NuGet packages, or custom implementations. This approach allows you to augment the behavior of the `IChatClient` in various ways to meet your specific needs. 
Consider the following example code that layers a distributed cache, function invocation, and OpenTelemetry tracing around a sample chat client:
-
-:::code language="csharp" source="snippets/ai/ConsoleAI.FunctionalityPipelines/Program.cs":::
-
-The preceding example depends on the following NuGet packages:
-
-- [📦 Microsoft.Extensions.Caching.Memory](https://www.nuget.org/packages/Microsoft.Extensions.Caching.Memory)
-- [📦 Microsoft.Extensions.AI.Ollama](https://www.nuget.org/packages/Microsoft.Extensions.AI.Ollama)
-- [📦 OpenTelemetry.Exporter.Console](https://www.nuget.org/packages/OpenTelemetry.Exporter.Console)
-
-### Custom `IChatClient` middleware
-
-To add functionality, you can implement `IChatClient` directly or use the class. This class serves as a base for creating chat clients that delegate operations to another `IChatClient` instance. It simplifies chaining multiple clients, allowing calls to pass through to an underlying client.
-
-The `DelegatingChatClient` class provides default implementations for methods like `GetResponseAsync`, `GetStreamingResponseAsync`, and `Dispose`, which forward calls to the inner client. You can derive from this class and override only the methods you need to enhance behavior, while delegating other calls to the base implementation. This approach helps create flexible and modular chat clients that are easy to extend and compose.
-
-The following is an example class derived from `DelegatingChatClient` that provides rate-limiting functionality, utilizing the :
-
-:::code language="csharp" source="snippets/ai/AI.Shared/RateLimitingChatClient.cs":::
-
-The preceding example depends on the [📦 System.Threading.RateLimiting](https://www.nuget.org/packages/System.Threading.RateLimiting) NuGet package.
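Conceptually, such a rate-limiting client wraps an inner `IChatClient` and gates each call on a `RateLimiter`. The following is a simplified sketch rather than the snippet's exact contents; the override signature is an assumption based on the abstractions this article describes, and a full implementation would also override `GetStreamingResponseAsync` and dispose the limiter:

```csharp
using System.Threading.RateLimiting;
using Microsoft.Extensions.AI;

public sealed class RateLimitingChatClient(
    IChatClient innerClient, RateLimiter rateLimiter)
    : DelegatingChatClient(innerClient)
{
    public override async Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // Acquire a permit before delegating to the inner client.
        using RateLimitLease lease = await rateLimiter.AcquireAsync(
            permitCount: 1, cancellationToken).ConfigureAwait(false);

        if (!lease.IsAcquired)
        {
            throw new InvalidOperationException("Client is being rate limited.");
        }

        // All other behavior comes from the inner client via the base class.
        return await base.GetResponseAsync(messages, options, cancellationToken)
            .ConfigureAwait(false);
    }
}
```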
Composition of the `RateLimitingChatClient` with another client is straightforward:
-
-:::code language="csharp" source="snippets/ai/ConsoleAI.CustomClientMiddle/Program.cs":::
-
-To simplify the composition of such components with others, component authors should create a `Use*` extension method for registering the component into a pipeline. For example, consider the following extension method:
-
-:::code language="csharp" source="snippets/ai/AI.Shared/RateLimitingChatClientExtensions.cs" id="one":::
-
-Such extensions can also query for relevant services from the DI container; the used by the pipeline is passed in as an optional parameter:
-
-:::code language="csharp" source="snippets/ai/AI.Shared/RateLimitingChatClientExtensions.OptionalOverload.cs" id="two":::
-
-The consumer can then easily use this in their pipeline, for example:
-
-:::code language="csharp" source="snippets/ai/ConsoleAI.ConsumeClientMiddleware/Program.cs" id="program":::
-
-This example demonstrates a [hosted scenario](generic-host.md), where the consumer relies on [dependency injection](dependency-injection.md) to provide the `RateLimiter` instance. The preceding extension methods demonstrate using a `Use` method on . The `ChatClientBuilder` also provides overloads that make it easier to write such delegating handlers.
-
-For example, in the earlier `RateLimitingChatClient` example, the overrides of `GetResponseAsync` and `GetStreamingResponseAsync` only need to do work before and after delegating to the next client in the pipeline. To achieve the same thing without writing a custom class, you can use an overload of `Use` that accepts a delegate that's used for both `GetResponseAsync` and `GetStreamingResponseAsync`, reducing the boilerplate required:
-
-:::code language="csharp" source="snippets/ai/ConsoleAI.UseExample/Program.cs":::
-
-The preceding overload internally uses an `AnonymousDelegatingChatClient`, which enables more complicated patterns with only a little additional code.
- -For scenarios where you'd like to specify delegating implementations of `GetResponseAsync` and `GetStreamingResponseAsync` inline, and where it's important to be able to write a different implementation for each in order to handle their unique return types specially, you can use the overload that accepts a delegate for each. - -### Dependency injection - - implementations will typically be provided to an application via [dependency injection (DI)](dependency-injection.md). In this example, an is added into the DI container, as is an `IChatClient`. The registration for the `IChatClient` employs a builder that creates a pipeline containing a caching client (which will then use an `IDistributedCache` retrieved from DI) and the sample client. The injected `IChatClient` can be retrieved and used elsewhere in the app. - -:::code language="csharp" source="snippets/ai/ConsoleAI.DependencyInjection/Program.cs"::: - -The preceding example depends on the following NuGet packages: - -- [📦 Microsoft.Extensions.Hosting](https://www.nuget.org/packages/Microsoft.Extensions.Hosting) -- [📦 Microsoft.Extensions.Caching.Memory](https://www.nuget.org/packages/Microsoft.Extensions.Caching.Memory) - -What instance and configuration is injected can differ based on the current needs of the application, and multiple pipelines can be injected with different keys. - -## The `IEmbeddingGenerator` interface - -The interface represents a generic generator of embeddings. Here, `TInput` is the type of input values being embedded, and `TEmbedding` is the type of generated embedding, which inherits from the class. - -The `Embedding` class serves as a base class for embeddings generated by an `IEmbeddingGenerator`. It's designed to store and manage the metadata and data associated with embeddings. Derived types like `Embedding` provide the concrete embedding vector data. For instance, an embedding exposes a property to access its embedding data. 
-
-The `IEmbeddingGenerator` interface defines a method to asynchronously generate embeddings for a collection of input values, with optional configuration and cancellation support. It also provides metadata describing the generator and allows for the retrieval of strongly typed services that can be provided by the generator or its underlying services.
-
-The following sample implementation of `IEmbeddingGenerator` shows the general structure (though it just generates random embedding vectors).
-
-:::code language="csharp" source="snippets/ai/AI.Shared/SampleEmbeddingGenerator.cs":::
-
-The preceding code:
-
-- Defines a class named `SampleEmbeddingGenerator` that implements the `IEmbeddingGenerator<string, Embedding<float>>` interface.
-- Has a primary constructor that accepts an endpoint and model ID, which are used to identify the generator.
-- Exposes a `Metadata` property that provides metadata about the generator.
-- Implements the `GenerateAsync` method to generate embeddings for a collection of input values:
-  - Simulates an asynchronous operation by delaying for 100 milliseconds.
-  - Returns random embeddings for each input value.
-
-You can find actual concrete implementations in the following packages:
-
-- [📦 Microsoft.Extensions.AI.OpenAI](https://www.nuget.org/packages/Microsoft.Extensions.AI.OpenAI)
-- [📦 Microsoft.Extensions.AI.Ollama](https://www.nuget.org/packages/Microsoft.Extensions.AI.Ollama)
-
-The following sections show specific `IEmbeddingGenerator` usage examples:
-
-- [Create embeddings](#create-embeddings)
-- [Custom `IEmbeddingGenerator` middleware](#custom-iembeddinggenerator-middleware)
-
-### Create embeddings
-
-The primary operation performed with an is embedding generation, which is accomplished with its method.
-
-:::code language="csharp" source="snippets/ai/ConsoleAI.CreateEmbeddings/Program.cs":::
-
-### Custom `IEmbeddingGenerator` middleware
-
-As with `IChatClient`, `IEmbeddingGenerator` implementations can be layered.
Just as `Microsoft.Extensions.AI` provides delegating implementations of `IChatClient` for caching and telemetry, it provides an implementation for `IEmbeddingGenerator` as well.
-
-:::code language="csharp" source="snippets/ai/ConsoleAI.CustomEmbeddingsMiddle/Program.cs":::
-
-The `IEmbeddingGenerator` enables building custom middleware that extends the functionality of an `IEmbeddingGenerator`. The class is an implementation of the `IEmbeddingGenerator` interface that serves as a base class for creating embedding generators that delegate their operations to another `IEmbeddingGenerator` instance. It allows for chaining multiple generators in any order, passing calls through to an underlying generator. The class provides default implementations for methods such as and `Dispose`, which forward the calls to the inner generator instance, enabling flexible and modular embedding generation.
-
-The following is an example implementation of such a delegating embedding generator that rate limits embedding generation requests:
-
-:::code language="csharp" source="snippets/ai/AI.Shared/RateLimitingEmbeddingGenerator.cs":::
-
-This can then be layered around an arbitrary `IEmbeddingGenerator<string, Embedding<float>>` to rate limit all embedding generation operations performed.
-
-:::code language="csharp" source="snippets/ai/ConsoleAI.ConsumeRateLimitingEmbedding/Program.cs":::
-
-In this way, the `RateLimitingEmbeddingGenerator` can be composed with other `IEmbeddingGenerator<string, Embedding<float>>` instances to provide rate limiting functionality.
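As a sketch, such composition might look like the following. The `(inner generator, rate limiter)` constructor shape is an assumption mirroring the earlier chat-client example, and the endpoint and model name are placeholders:

```csharp
using System.Threading.RateLimiting;
using Microsoft.Extensions.AI;

// Wrap the sample generator with the rate-limiting decorator so every
// GenerateAsync call first acquires a permit from the limiter.
IEmbeddingGenerator<string, Embedding<float>> generator =
    new RateLimitingEmbeddingGenerator(
        new SampleEmbeddingGenerator(new Uri("http://coolsite.ai"), "target-ai-model"),
        new ConcurrencyLimiter(new ConcurrencyLimiterOptions
        {
            PermitLimit = 1,
            QueueLimit = int.MaxValue
        }));

foreach (Embedding<float> embedding in
    await generator.GenerateAsync(["What is AI?", "What is .NET?"]))
{
    Console.WriteLine(string.Join(", ", embedding.Vector.ToArray()));
}
```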
- -## See also - -- [Develop .NET applications with AI features](../../ai/get-started/dotnet-ai-overview.md) -- [Unified AI building blocks for .NET using Microsoft.Extensions.AI](../../ai/ai-extensions.md) -- [Build an AI chat app with .NET](../../ai/quickstarts/build-chat-app.md) -- [.NET dependency injection](dependency-injection.md) -- [Rate limit an HTTP handler in .NET](http-ratelimiter.md) -- [.NET Generic Host](generic-host.md) -- [Caching in .NET](caching.md) diff --git a/docs/core/extensions/log-sampling.md b/docs/core/extensions/log-sampling.md new file mode 100644 index 0000000000000..5d45b65563c65 --- /dev/null +++ b/docs/core/extensions/log-sampling.md @@ -0,0 +1,171 @@ +--- +title: Log sampling +description: Learn how to fine-tune the volume of logs emitted by your application using log sampling. +ms.date: 04/29/2025 +--- + +# Log sampling in .NET + +.NET provides log sampling capabilities that allow you to control the volume of logs your application emits without losing important information. The following sampling strategies are available: + +- Trace-based sampling: Sample logs based on the sampling decision of the current trace. +- Random probabilistic sampling: Sample logs based on configured probability rules. +- Custom sampling: Implement your own custom sampling strategy. For more information, see [Implement custom sampling](#implement-custom-sampling). + +> [!NOTE] +> Only one sampler can be used at a time. If you register multiple samplers, the last one is used. + +Log sampling extends [filtering capabilities](logging.md#configure-logging-with-code) by giving you more fine-grained control over which logs are emitted by your application. Instead of simply enabling or disabling logs, you can configure sampling to emit only a fraction of them. 
+
+For example, while filtering typically uses probabilities like `0` (emit no logs) or `1` (emit all logs), sampling lets you choose any value in between—such as `0.1` to emit 10% of logs, or `0.25` to emit 25%.
+
+## Get started
+
+To get started, install the [📦 Microsoft.Extensions.Telemetry](https://www.nuget.org/packages/Microsoft.Extensions.Telemetry) NuGet package:
+
+### [.NET CLI](#tab/dotnet-cli)
+
+```dotnetcli
+dotnet add package Microsoft.Extensions.Telemetry
+```
+
+### [PackageReference](#tab/package-reference)
+
+```xml
+<PackageReference Include="Microsoft.Extensions.Telemetry"
+                  Version="*" />
+```
+
+---
+
+For more information, see [dotnet add package](../tools/dotnet-package-add.md) or [Manage package dependencies in .NET applications](../tools/dependencies.md).
+
+## Configure trace-based sampling
+
+Trace-based sampling ensures that logs are sampled consistently with the underlying . This is useful when you want to maintain correlation between traces and logs. You can enable trace sampling (as described in the [guide](../diagnostics/distributed-tracing-concepts.md#sampling)), and then configure trace-based log sampling accordingly:
+
+:::code language="csharp" source="snippets/logging/log-sampling/trace-based/Program.cs" range="20":::
+
+When trace-based sampling is enabled, logs are only emitted if the underlying is sampled. The sampling decision comes from the current value.
+
+## Configure random probabilistic sampling
+
+Random probabilistic sampling allows you to sample logs based on configured probability rules. You can define rules specific to:
+
+- Log category
+- Log level
+- Event ID
+
+There are several ways to configure random probabilistic sampling with its rules:
+
+### File-based configuration
+
+Create a configuration section in your _appsettings.json_, for example:
+
+:::code language="json" source="snippets/logging/log-sampling/file-config/appsettings.json" :::
+
+The preceding configuration:
+
+- Samples 10% of logs from categories starting with `System.` of all levels.
+- Samples 25% of logs from categories starting with `Microsoft.AspNetCore.` of the .
+- Samples 5% of logs with event ID 1001 of all categories and levels.
+- Samples 100% of all other logs.
+
+> [!IMPORTANT]
+> The value represents a probability from 0 to 1. For example, 0.25 means 25% of logs are sampled, 0 means no logs are sampled, and 1 means all logs are sampled. Values of 0 and 1 can be used to effectively disable or enable all logs for a specific rule. A probability less than 0 or greater than 1 isn't valid and causes an exception to be thrown.
+
+To register the sampler with the configuration, consider the following code:
+
+:::code language="csharp" source="snippets/logging/log-sampling/file-config/Program.cs" range="16":::
+
+#### Change sampling rules in a running app
+
+Random probabilistic sampling supports runtime configuration updates via the interface. If you're using a configuration provider that supports reloads—such as the [File Configuration Provider](configuration-providers.md#file-configuration-provider)—you can update sampling rules at runtime without restarting the application.
+
+For example, you can start your application with the following _appsettings.json_, which effectively acts as a no-op:
+
+:::code language="json" source="snippets/logging/log-sampling/appsettings.noop.json" :::
+
+While the app is running, you can update the _appsettings.json_ with the following configuration:
+
+:::code language="json" source="snippets/logging/log-sampling/appsettings.updated.json" :::
+
+The new rules are applied automatically. For instance, with the preceding configuration, 1% of logs at the configured level are sampled.
+
+#### How sampling rules are applied
+
+The algorithm is very similar to [log filtering](logging.md#how-filtering-rules-are-applied), yet there are some differences.
+
+Log sampling rules are evaluated for each log record; however, there are performance optimizations in place, such as caching. The following algorithm is used for each log record for a given category:
+
+- Select rules with `LogLevel` equal to or higher than the log level of the logger.
+- Select rules with `EventId` not defined, or defined and equal to the log event ID.
+- Select rules with the longest matching category prefix. If no match is found, select all rules that don't specify a category.
+- If multiple rules are selected, take the **last** one.
+- If no rules are selected, sampling isn't applied; that is, the log record is emitted as usual.
+
+### Inline code configuration
+
+:::code language="csharp" source="snippets/logging/log-sampling/code-config/Program.cs" range="16-22":::
+
+The preceding configuration:
+
+- Samples 5% of logs with event ID 1001 of all categories and levels.
+- Samples 100% of all other logs.
+
+### Simple probability configuration
+
+For basic scenarios, you can configure a single probability value that applies to all logs at or below a specified level:
+
+:::code language="csharp" source="snippets/logging/log-sampling/Program.cs" range="14-15":::
+
+The preceding code registers the sampler, which samples 10% of logs and 1% of (and below) logs.
+If the configuration didn't have the rule for , it would sample 10% of logs and all levels below, including .
+
+## Implement custom sampling
+
+You can create a custom sampling strategy by deriving from the abstract class and overriding its abstract members. This allows you to tailor the sampling behavior to your specific requirements. For example, a custom sampler could:
+
+- Make sampling decisions based on the presence and value of specific key/value pairs in the log state.
+- Apply rate-limiting logic, such as emitting logs only if the number of logs within a predefined time interval stays below a certain threshold.
+
+To implement a custom sampler, follow these steps:
+
+1. Create a class that inherits from .
+1. Override the method to define your custom sampling logic.
+1. Register your custom sampler in the logging pipeline using the extension method.
+
+For each log record that isn't filtered out, the method is called exactly once. Its return value determines whether the log record should be emitted.
+
+## Performance considerations
+
+Log sampling is designed to reduce storage costs, with a trade-off of slightly increased CPU usage. If your application generates a high volume of logs that are expensive to store, sampling can help reduce that volume. When configured appropriately, sampling can lower storage costs without losing information that's critical for diagnosing incidents.
+
+For benchmarks of the built-in samplers, see the [Microsoft.Extensions.Telemetry performance tests](https://github.com/dotnet/extensions/blob/main/bench/Libraries/Microsoft.Extensions.Telemetry.PerformanceTests/README.md).
+
+## Log level guidance on when to use sampling
+
+| Log level | Recommendation |
+|--|--|
+| | Don't apply sampling, because normally you disable these logs in production |
+| | Don't apply sampling, because normally you disable these logs in production |
+| | Do apply sampling |
+| | Consider applying sampling |
+| | Don't apply sampling |
+| | Don't apply sampling |
+
+## Best practices
+
+- Begin with higher sampling rates and adjust them downwards as necessary.
+- Use category-based rules to target specific components.
+- If you're using distributed tracing, consider implementing trace-based sampling.
+- Monitor the effectiveness of your sampling rules collectively.
+- Find the right balance for your application—too low a sampling rate can reduce observability, while too high a rate can increase costs.
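Putting the pieces together, a minimal host that registers the random probabilistic sampler looks like the following sketch; the rates mirror this article's sample and are illustrative:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.Sampling;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var builder = Host.CreateApplicationBuilder(args);

// Keep 10% of Warning logs and 1% of Information (and lower) logs.
builder.Logging.AddRandomProbabilisticSampler(0.01, LogLevel.Information);
builder.Logging.AddRandomProbabilisticSampler(0.1, LogLevel.Warning);

using var app = builder.Build();

var logger = app.Services.GetRequiredService<ILoggerFactory>()
    .CreateLogger("SamplingDemo");

// Roughly 1 in 100 of these messages is emitted.
logger.LogInformation("Noisy log message.");
```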
+ +## See also + +- [Logging in .NET](logging.md) +- [High-performance logging in .NET](high-performance-logging.md) +- [OpenTelemetry Tracing Sampling](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/sdk.md#sampling) diff --git a/docs/core/extensions/snippets/ai/AI.Shared/SampleEmbeddingGenerator.cs b/docs/core/extensions/snippets/ai/AI.Shared/SampleEmbeddingGenerator.cs deleted file mode 100644 index 8cf53982d2cb1..0000000000000 --- a/docs/core/extensions/snippets/ai/AI.Shared/SampleEmbeddingGenerator.cs +++ /dev/null @@ -1,35 +0,0 @@ -using Microsoft.Extensions.AI; - -public sealed class SampleEmbeddingGenerator( - Uri endpoint, string modelId) - : IEmbeddingGenerator> -{ - public EmbeddingGeneratorMetadata Metadata { get; } = - new(nameof(SampleEmbeddingGenerator), endpoint, modelId); - - public async Task>> GenerateAsync( - IEnumerable values, - EmbeddingGenerationOptions? options = null, - CancellationToken cancellationToken = default) - { - // Simulate some async operation - await Task.Delay(100, cancellationToken); - - // Create random embeddings - return - [ - .. from value in values - select new Embedding( - Enumerable.Range(0, 384) - .Select(_ => Random.Shared.NextSingle()) - .ToArray()) - ]; - } - - public object? GetService(Type serviceType, object? serviceKey) => this; - - public TService? GetService(object? 
key = null) - where TService : class => this as TService; - - void IDisposable.Dispose() { } -} diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.CreateEmbeddings/Program.cs b/docs/core/extensions/snippets/ai/ConsoleAI.CreateEmbeddings/Program.cs deleted file mode 100644 index c3d8ece9410fb..0000000000000 --- a/docs/core/extensions/snippets/ai/ConsoleAI.CreateEmbeddings/Program.cs +++ /dev/null @@ -1,10 +0,0 @@ -using Microsoft.Extensions.AI; - -IEmbeddingGenerator> generator = - new SampleEmbeddingGenerator( - new Uri("http://coolsite.ai"), "target-ai-model"); - -foreach (var embedding in await generator.GenerateAsync(["What is AI?", "What is .NET?"])) -{ - Console.WriteLine(string.Join(", ", embedding.Vector.ToArray())); -} diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.CustomClientMiddle/Program.cs b/docs/core/extensions/snippets/ai/ConsoleAI.CustomClientMiddle/Program.cs deleted file mode 100644 index 31b73e10c8f53..0000000000000 --- a/docs/core/extensions/snippets/ai/ConsoleAI.CustomClientMiddle/Program.cs +++ /dev/null @@ -1,12 +0,0 @@ -using Microsoft.Extensions.AI; -using System.Threading.RateLimiting; - -var client = new RateLimitingChatClient( - new SampleChatClient(new Uri("http://localhost"), "test"), - new ConcurrencyLimiter(new() - { - PermitLimit = 1, - QueueLimit = int.MaxValue - })); - -await client.GetResponseAsync("What color is the sky?"); diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.DependencyInjection/Program.cs b/docs/core/extensions/snippets/ai/ConsoleAI.DependencyInjection/Program.cs deleted file mode 100644 index 255791a824dda..0000000000000 --- a/docs/core/extensions/snippets/ai/ConsoleAI.DependencyInjection/Program.cs +++ /dev/null @@ -1,20 +0,0 @@ -using Microsoft.Extensions.AI; -using Microsoft.Extensions.DependencyInjection; -using Microsoft.Extensions.Hosting; - -// App setup -HostApplicationBuilder builder = Host.CreateApplicationBuilder(); - -builder.Services.AddDistributedMemoryCache(); 
-builder.Services.AddChatClient(new SampleChatClient( - new Uri("http://coolsite.ai"), "target-ai-model")) - .UseDistributedCache(); - -using IHost app = builder.Build(); - -// Elsewhere in the app -IChatClient chatClient = app.Services.GetRequiredService(); - -Console.WriteLine(await chatClient.GetResponseAsync("What is AI?")); - -app.Run(); diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.FunctionalityPipelines/Program.cs b/docs/core/extensions/snippets/ai/ConsoleAI.FunctionalityPipelines/Program.cs deleted file mode 100644 index db6660ab5d5a1..0000000000000 --- a/docs/core/extensions/snippets/ai/ConsoleAI.FunctionalityPipelines/Program.cs +++ /dev/null @@ -1,46 +0,0 @@ -using Microsoft.Extensions.AI; -using Microsoft.Extensions.Caching.Distributed; -using Microsoft.Extensions.Caching.Memory; -using Microsoft.Extensions.Options; -using OpenTelemetry.Trace; - -// Configure OpenTelemetry exporter -string sourceName = Guid.NewGuid().ToString(); -TracerProvider tracerProvider = OpenTelemetry.Sdk.CreateTracerProviderBuilder() - .AddSource(sourceName) - .AddConsoleExporter() - .Build(); - -// Explore changing the order of the intermediate "Use" calls to see that impact -// that has on what gets cached, traced, etc. -IChatClient client = new ChatClientBuilder( - new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1")) - .UseDistributedCache(new MemoryDistributedCache( - Options.Create(new MemoryDistributedCacheOptions()))) - .UseFunctionInvocation() - .UseOpenTelemetry( - sourceName: sourceName, - configure: static c => c.EnableSensitiveData = true) - .Build(); - -ChatOptions options = new() -{ - Tools = - [ - AIFunctionFactory.Create( - () => Random.Shared.NextDouble() > 0.5 ? 
"It's sunny" : "It's raining", - name: "GetCurrentWeather", - description: "Gets the current weather") - ] -}; - -for (int i = 0; i < 3; ++i) -{ - List history = - [ - new ChatMessage(ChatRole.System, "You are a helpful AI assistant"), - new ChatMessage(ChatRole.User, "Do I need an umbrella?") - ]; - - Console.WriteLine(await client.GetResponseAsync(history, options)); -} diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.ProvideOptions/Program.cs b/docs/core/extensions/snippets/ai/ConsoleAI.ProvideOptions/Program.cs deleted file mode 100644 index 77098afb4b8ba..0000000000000 --- a/docs/core/extensions/snippets/ai/ConsoleAI.ProvideOptions/Program.cs +++ /dev/null @@ -1,13 +0,0 @@ -using Microsoft.Extensions.AI; - -IChatClient client = new ChatClientBuilder( - new OllamaChatClient(new Uri("http://localhost:11434"))) - .ConfigureOptions(options => options.ModelId ??= "phi3") - .Build(); - -// will request "phi3" -Console.WriteLine(await client.GetResponseAsync("What is AI?")); - -// will request "llama3.1" -Console.WriteLine(await client.GetResponseAsync( - "What is AI?", new() { ModelId = "llama3.1" })); diff --git a/docs/core/extensions/snippets/ai/ConsoleAI.ToolCalling/Program.cs b/docs/core/extensions/snippets/ai/ConsoleAI.ToolCalling/Program.cs deleted file mode 100644 index 7c2abd8d1e5d6..0000000000000 --- a/docs/core/extensions/snippets/ai/ConsoleAI.ToolCalling/Program.cs +++ /dev/null @@ -1,21 +0,0 @@ -using System.ComponentModel; -using Microsoft.Extensions.AI; - -[Description("Gets the current weather")] -string GetCurrentWeather() => Random.Shared.NextDouble() > 0.5 - ? 
"It's sunny" - : "It's raining"; - -IChatClient client = new ChatClientBuilder( - new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1")) - .UseFunctionInvocation() - .Build(); - -IAsyncEnumerable response = client.GetStreamingResponseAsync( - "Should I wear a rain coat?", - new() { Tools = [AIFunctionFactory.Create(GetCurrentWeather)] }); - -await foreach (ChatResponseUpdate update in response) -{ - Console.Write(update); -} diff --git a/docs/core/extensions/snippets/logging/log-sampling/Program.cs b/docs/core/extensions/snippets/logging/log-sampling/Program.cs new file mode 100644 index 0000000000000..84112201e224c --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/Program.cs @@ -0,0 +1,29 @@ +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Diagnostics.Sampling; +using Microsoft.Extensions.Hosting; +using Microsoft.Extensions.Logging; + +var builder = Host.CreateApplicationBuilder(); +builder.Logging.AddSimpleConsole(options => +{ + options.SingleLine = true; + options.TimestampFormat = "hh:mm:ss"; +}); + +// Add the Random probabilistic sampler to the logging pipeline. 
+builder.Logging.AddRandomProbabilisticSampler(0.01, LogLevel.Information); +builder.Logging.AddRandomProbabilisticSampler(0.1, LogLevel.Warning); + +using var app = builder.Build(); + +var loggerFactory = app.Services.GetRequiredService(); +var logger = loggerFactory.CreateLogger("SamplingDemo"); + +while (true) +{ + Log.EnteredWhileLoop(logger); + Log.NoisyLogMessage(logger); + Log.LeftWhileLoop(logger); + + await Task.Delay(100); +} diff --git a/docs/core/extensions/snippets/logging/log-sampling/appsettings.json b/docs/core/extensions/snippets/logging/log-sampling/appsettings.json new file mode 100644 index 0000000000000..442d5a486641a --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/appsettings.json @@ -0,0 +1,24 @@ +{ + "Logging": { + "LogLevel": { + "Default": "Debug" + } + }, + "RandomProbabilisticSampler": { + "Rules": [ + { + "CategoryName": "Microsoft.AspNetCore.*", + "Probability": 0.25, + "LogLevel": "Information" + }, + { + "CategoryName": "System.*", + "Probability": 0.1 + }, + { + "EventId": 1001, + "Probability": 0.05 + } + ] + } +} \ No newline at end of file diff --git a/docs/core/extensions/snippets/logging/log-sampling/appsettings.noop.json b/docs/core/extensions/snippets/logging/log-sampling/appsettings.noop.json new file mode 100644 index 0000000000000..15bd6be3af9a4 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/appsettings.noop.json @@ -0,0 +1,11 @@ +{ + "Logging": { + "RandomProbabilisticSampler": { + "Rules": [ + { + "Probability": 1 + } + ] + } + } +} \ No newline at end of file diff --git a/docs/core/extensions/snippets/logging/log-sampling/appsettings.updated.json b/docs/core/extensions/snippets/logging/log-sampling/appsettings.updated.json new file mode 100644 index 0000000000000..733c172efcb03 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/appsettings.updated.json @@ -0,0 +1,12 @@ +{ + "Logging": { + "RandomProbabilisticSampler": { + "Rules": [ + { + "Probability": 
0.01, + "LogLevel": "Information" + } + ] + } + } +} \ No newline at end of file diff --git a/docs/core/extensions/snippets/logging/log-sampling/code-config/Log.cs b/docs/core/extensions/snippets/logging/log-sampling/code-config/Log.cs new file mode 100644 index 0000000000000..12aa73ae37729 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/code-config/Log.cs @@ -0,0 +1,9 @@ +using Microsoft.Extensions.Logging; + +namespace LogSamplingCodeConfig; + +internal static partial class Log +{ + [LoggerMessage(EventId = 1001, Level = LogLevel.Information, Message = "Noisy log message in my application.")] + public static partial void NoisyMessage(this ILogger logger); +} diff --git a/docs/core/extensions/snippets/logging/log-sampling/code-config/LogSamplingCodeConfig.csproj b/docs/core/extensions/snippets/logging/log-sampling/code-config/LogSamplingCodeConfig.csproj new file mode 100644 index 0000000000000..6dc59747c56e4 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/code-config/LogSamplingCodeConfig.csproj @@ -0,0 +1,15 @@ + + + + Demonstrates how to use log sampling feature. + Exe + $(NoWarn);EXTEXP0003 + + + + + + + + + diff --git a/docs/core/extensions/snippets/logging/log-sampling/code-config/Program.cs b/docs/core/extensions/snippets/logging/log-sampling/code-config/Program.cs new file mode 100644 index 0000000000000..04c2132b24602 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/code-config/Program.cs @@ -0,0 +1,32 @@ +using LogSamplingCodeConfig; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Diagnostics.Sampling; +using Microsoft.Extensions.Hosting; +using Microsoft.Extensions.Logging; + +var builder = Host.CreateApplicationBuilder(args); + +builder.Logging.AddSimpleConsole(options => +{ + options.SingleLine = true; + options.TimestampFormat = "hh:mm:ss"; +}); + +// Add the Random probabilistic sampler to the logging pipeline. 
+builder.Logging.AddRandomProbabilisticSampler(options => +{ + options.Rules.Add( + new RandomProbabilisticSamplerFilterRule( + probability: 0.05d, + eventId: 1001)); +}); + +using var app = builder.Build(); + +var loggerFactory = app.Services.GetRequiredService<ILoggerFactory>(); +var logger = loggerFactory.CreateLogger("SamplingDemo"); + +for (int i = 0; i < 1_000_000; i++) +{ + logger.NoisyMessage(); +} diff --git a/docs/core/extensions/snippets/logging/log-sampling/file-config/Log.cs b/docs/core/extensions/snippets/logging/log-sampling/file-config/Log.cs new file mode 100644 index 0000000000000..a79e7fc79e737 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/file-config/Log.cs @@ -0,0 +1,15 @@ +using Microsoft.Extensions.Logging; + +namespace LogSamplingFileConfig; + +internal static partial class Log +{ + [LoggerMessage(Level = LogLevel.Debug, Message = "Entered While loop in my application.")] + public static partial void EnteredWhileLoop(this ILogger logger); + + [LoggerMessage(Level = LogLevel.Debug, Message = "Left While loop in my application.")] + public static partial void LeftWhileLoop(this ILogger logger); + + [LoggerMessage(EventId = 1001, Level = LogLevel.Information, Message = "Noisy log message in my application.")] + public static partial void NoisyMessage(this ILogger logger); +} diff --git a/docs/core/extensions/snippets/logging/log-sampling/file-config/LogSamplingFileConfig.csproj b/docs/core/extensions/snippets/logging/log-sampling/file-config/LogSamplingFileConfig.csproj new file mode 100644 index 0000000000000..de183e9b44ec8 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/file-config/LogSamplingFileConfig.csproj @@ -0,0 +1,21 @@ + + + + Demonstrates how to use log sampling feature. 
+ Exe + $(NoWarn);EXTEXP0003 + + + + + + + + + + + Always + + + + diff --git a/docs/core/extensions/snippets/logging/log-sampling/file-config/Program.cs b/docs/core/extensions/snippets/logging/log-sampling/file-config/Program.cs new file mode 100644 index 0000000000000..1b559c1bf23fc --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/file-config/Program.cs @@ -0,0 +1,29 @@ +using LogSamplingFileConfig; +using System.Threading.Tasks; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Hosting; +using Microsoft.Extensions.Logging; + +var builder = Host.CreateApplicationBuilder(); + +builder.Logging.AddSimpleConsole(options => +{ + options.SingleLine = true; + options.TimestampFormat = "hh:mm:ss"; +}); + +// Add the Random probabilistic sampler to the logging pipeline. +builder.Logging.AddRandomProbabilisticSampler(builder.Configuration); + +using var app = builder.Build(); + +var loggerFactory = app.Services.GetRequiredService<ILoggerFactory>(); +var logger = loggerFactory.CreateLogger("SamplingDemo"); + +while (true) +{ + logger.EnteredWhileLoop(); + logger.NoisyMessage(); + logger.LeftWhileLoop(); + await Task.Delay(100); +} diff --git a/docs/core/extensions/snippets/logging/log-sampling/file-config/appsettings.json b/docs/core/extensions/snippets/logging/log-sampling/file-config/appsettings.json new file mode 100644 index 0000000000000..06124e7b4bc02 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/file-config/appsettings.json @@ -0,0 +1,25 @@ +{ + "Logging": { + "LogLevel": { + "Default": "Debug" + } + }, + + "RandomProbabilisticSampler": { + "Rules": [ + { + "CategoryName": "Microsoft.AspNetCore.*", + "Probability": 0.25, + "LogLevel": "Information" + }, + { + "CategoryName": "System.*", + "Probability": 0.1 + }, + { + "EventId": 1001, + "Probability": 0.05 + } + ] + } +} diff --git a/docs/core/extensions/snippets/logging/log-sampling/trace-based/Log.cs 
b/docs/core/extensions/snippets/logging/log-sampling/trace-based/Log.cs new file mode 100644 index 0000000000000..8db2c560676b4 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/trace-based/Log.cs @@ -0,0 +1,9 @@ +using Microsoft.Extensions.Logging; + +namespace LogSamplingTraceBased; + +internal static partial class Log +{ + [LoggerMessage(EventId = 1001, Level = LogLevel.Information, Message = "Count: {count}. Noisy log message.")] + public static partial void NoisyMessage(this ILogger logger, int count); +} diff --git a/docs/core/extensions/snippets/logging/log-sampling/trace-based/LogSamplingTraceBased.csproj b/docs/core/extensions/snippets/logging/log-sampling/trace-based/LogSamplingTraceBased.csproj new file mode 100644 index 0000000000000..444864117d5bb --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/trace-based/LogSamplingTraceBased.csproj @@ -0,0 +1,18 @@ + + + + Demonstrates how to use log sampling feature. + Exe + $(NoWarn);EXTEXP0003 + + + + + + + + + + + + diff --git a/docs/core/extensions/snippets/logging/log-sampling/trace-based/Program.cs b/docs/core/extensions/snippets/logging/log-sampling/trace-based/Program.cs new file mode 100644 index 0000000000000..1b2a766fe6710 --- /dev/null +++ b/docs/core/extensions/snippets/logging/log-sampling/trace-based/Program.cs @@ -0,0 +1,44 @@ +using System.Diagnostics; +using LogSamplingTraceBased; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Hosting; +using Microsoft.Extensions.Logging; +using OpenTelemetry; +using OpenTelemetry.Trace; + +using ActivitySource demoSource = new("LogSamplingTraceBased"); + +var builder = Host.CreateApplicationBuilder(args); + +builder.Logging.AddSimpleConsole(options => +{ + options.SingleLine = true; + options.TimestampFormat = "hh:mm:ss"; +}); + +// Add the trace-based sampler to the logging pipeline. 
+builder.Logging.AddTraceBasedSampler(); + +using var tracerProvider = Sdk.CreateTracerProviderBuilder() + // Enable Tracing sampling configured with 50% probability: + .SetSampler(new TraceIdRatioBasedSampler(0.5)) + .AddSource("LogSamplingTraceBased") + .AddConsoleExporter() + .Build(); + +using var app = builder.Build(); + +var loggerFactory = app.Services.GetRequiredService<ILoggerFactory>(); +var logger = loggerFactory.CreateLogger("SamplingDemo"); + +// On average, 50% of Activities and logs will be sampled: +for (int i = 0; i < 10; i++) +{ + using var activity = demoSource.StartActivity("SayHello"); + activity?.SetTag("foo", "bar"); + activity?.SetStatus(ActivityStatusCode.Ok); + + // The parent activity is sampled with 50% probability, + // and the same sampling decision will be used for logging + logger.NoisyMessage(i); +} diff --git a/docs/csharp/toc.yml b/docs/csharp/toc.yml index b6129c1f03afb..735ed842fbc44 100644 --- a/docs/csharp/toc.yml +++ b/docs/csharp/toc.yml @@ -19,6 +19,8 @@ items: href: tour-of-csharp/tutorials/branches-and-loops.md - name: List collections href: tour-of-csharp/tutorials/list-collection.md + - name: Pattern matching + href: tour-of-csharp/tutorials/pattern-matching.md - name: C# language strategy href: tour-of-csharp/strategy.md - name: Learn C# for Java developers diff --git a/docs/csharp/tour-of-csharp/tutorials/index.md b/docs/csharp/tour-of-csharp/tutorials/index.md index 23d619b23026b..67c0ade444c0f 100644 --- a/docs/csharp/tour-of-csharp/tutorials/index.md +++ b/docs/csharp/tour-of-csharp/tutorials/index.md @@ -1,7 +1,7 @@ --- title: Interactive tutorials description: Learn C# in your browser, and get started with your own development environment -ms.date: 03/20/2025 +ms.date: 04/23/2025 --- # Introduction to C\# @@ -37,6 +37,10 @@ The [Branches and loops](branches-and-loops.md) tutorial teaches the basics of s The [List collection](list-collection.md) lesson gives you a tour of the List collection type that stores sequences of data. 
You'll learn how to add and remove items, search for items, and sort the lists. You'll explore different kinds of lists. +## Pattern matching + +The [Pattern matching](pattern-matching.md) lesson provides an introduction to *pattern matching*. Pattern matching enables you to compare an expression against a pattern. The success of the match determines which program logic to follow. Patterns can compare types, properties of a type, or contents of a list. You can combine multiple patterns using `and`, `or`, and `not` logic. Patterns provide a rich vocabulary to inspect data and make decisions in your program based on that inspection. + +## Set up your local environment After you finish these tutorials, set up a development environment. You'll want: diff --git a/docs/csharp/tour-of-csharp/tutorials/pattern-matching.md b/docs/csharp/tour-of-csharp/tutorials/pattern-matching.md new file mode 100644 index 0000000000000..50092e7b79148 --- /dev/null +++ b/docs/csharp/tour-of-csharp/tutorials/pattern-matching.md @@ -0,0 +1,75 @@ +--- +title: Pattern matching +description: In this tutorial about pattern matching, you use your browser to learn C# interactively. You're going to write C# code and see the results of compiling and running your code directly in the browser. +ms.date: 05/02/2025 +--- +# Match data against patterns + +This tutorial teaches you how to use pattern matching to inspect data in C#. You write small amounts of code, then you compile and run that code. The tutorial contains a series of lessons that explore different kinds of patterns in C#. These lessons teach you the fundamentals of pattern matching in the C# language. + +> [!TIP] +> When a code snippet block includes the "Run" button, that button opens the interactive window, or replaces the existing code in the interactive window. When the snippet doesn't include a "Run" button, you can copy the code and add it to the current interactive window. 
+ +The preceding tutorials demonstrated built-in types and types you define as tuples or records. Instances of these types can be checked against a *pattern*. Whether an instance matches a pattern determines the actions your program takes. Let's start to explore how you can use patterns. + +## Match a value + +All the examples in this tutorial use text input that represents a series of bank transactions as comma-separated values (CSV) input. In each of the samples, you can match the record against a pattern using either an `is` or `switch` expression. This first example splits each line on the `,` character and then *matches* the first string field against the value "DEPOSIT" or "WITHDRAWAL" using an `is` expression. When it matches, the transaction amount is added to or deducted from the current account balance. To see it work, press the "Run" button: + +:::code language="csharp" interactive="try-dotnet-method" source="./snippets/PatternMatching/Program.cs" id="FirstExample"::: + +Examine the output. You can see that each line is processed by comparing the value of the text in the first field. The preceding sample could be similarly constructed using the `==` operator to test that two `string` values are equal. Comparing a variable to a constant is a basic building block for pattern matching. Let's explore more of the building blocks that are part of pattern matching. + +## Enum matches + +Another common use for pattern matching is to match on the values of an `enum` type. This next sample processes the input records to create a *tuple* where the first value is an `enum` value that notes a deposit or a withdrawal. The second value is the amount of the transaction. To see it work, press the "Run" button: + +> [!WARNING] +> Don't copy and paste. The interactive window must be reset to run the following samples. If you make a mistake, the window hangs, and you need to refresh the page to continue. 
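Before moving on, the constant-value matching described earlier can be sketched in a standalone form. This is a minimal sketch, not the tutorial's snippet: the input lines and amounts are illustrative.

```csharp
using System;

// Illustrative input in the same shape as the tutorial's CSV records.
string[] lines =
{
    "DEPOSIT, 100, opening balance",
    "WITHDRAWAL, 40, groceries",
};

double balance = 0.0;

foreach (string line in lines)
{
    string[] parts = line.Split(',');
    string kind = parts[0].Trim();
    double amount = double.Parse(parts[1].Trim());

    // Constant pattern: `is` compares the field against a literal value.
    if (kind is "DEPOSIT")
        balance += amount;
    else if (kind is "WITHDRAWAL")
        balance -= amount;
}

Console.WriteLine(balance); // 60
```

With a constant, the `is` expression behaves like `==` here, but it composes with the richer patterns introduced later in the tutorial.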
+ +:::code language="csharp" interactive="try-dotnet" source="./snippets/PatternMatching/FirstEnumExample.cs" id="IsEnumValue"::: + +The preceding example also uses an `if` statement to check the value of an `enum` expression. Another form of pattern matching uses a `switch` expression. Let's explore that syntax and how you can use it. + +## Exhaustive matches with `switch` + +A series of `if` statements can test a series of conditions. But the compiler can't tell whether a series of `if` statements is *exhaustive* or if later `if` conditions are *subsumed* by earlier conditions. The `switch` expression ensures both of those characteristics are met, which results in fewer bugs in your apps. Let's try it and experiment. Copy the following code. Replace the two `if` statements in the interactive window with the `switch` expression you copied. After you've modified the code, press the "Run" button at the top of the interactive window to run the new sample. + +:::code language="csharp" interactive="try-dotnet" source="./snippets/PatternMatching/EnumSwitchExample.cs" id="SwitchEnumValue"::: + +When you run the code, you see that it works the same. To demonstrate *subsumption*, reorder the switch arms as shown in the following snippet: + +```csharp +currentBalance += transaction switch +{ + (TransactionType.Deposit, var amount) => amount, + _ => 0.0, + (TransactionType.Withdrawal, var amount) => -amount, +}; +``` + +After you reorder the switch arms, press the "Run" button. The compiler issues an error because the arm with `_` matches every value. As a result, that final arm with `TransactionType.Withdrawal` never runs. The compiler tells you that something's wrong in your code. + +The compiler issues a warning if the expression tested in a `switch` expression could contain values that don't match any switch arm. If some values could fail to match any condition, the `switch` expression isn't *exhaustive*. 
For example, if you remove the switch arm with `_ => 0.0,`, the compiler warns you that invalid input values don't match any arm. At run time, an unmatched value causes the `switch` expression to throw an exception. Once you install the .NET SDK and build programs in your environment, you can test this behavior. The online experience doesn't display warnings in the output window. + +## Type patterns + +To finish this tutorial, let's explore one more building block of pattern matching: the *type pattern*. A *type pattern* tests an expression at run time to see if it's the specified type. You can use a type test with either an `is` expression or a `switch` expression. Let's modify the current sample in two ways. First, instead of a tuple, let's build `Deposit` and `Withdrawal` record types that represent the transactions. Add the following declarations at the bottom of the interactive window: + +:::code language="csharp" interactive="try-dotnet" source="./snippets/PatternMatching/FinalExampleProgram.cs" id="RecordDeclarations"::: + +Next, add this method after the `Main` method to parse the text and return a series of records: + +:::code language="csharp" interactive="try-dotnet" source="./snippets/PatternMatching/FinalExampleProgram.cs" id="ParseToRecord"::: + +Finally, replace the `foreach` loop in the `Main` method with the following code: + +:::code language="csharp" interactive="try-dotnet" source="./snippets/PatternMatching/FinalExampleProgram.cs" id="TypePattern"::: + +Then, press the "Run" button to see the results. This final version tests the input against a *type*. + +Pattern matching provides a vocabulary to compare an expression against characteristics. Patterns can include the expression's type, values of types, property values, and combinations of them. Comparing expressions against a pattern can be clearer than multiple `if` comparisons. You explored some of the patterns you can use to match expressions. 
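As a recap, the type pattern described above can be sketched in a standalone form. This sketch uses built-in types instead of the tutorial's `Deposit` and `Withdrawal` records so that it runs on its own; the values are illustrative.

```csharp
using System;

// A heterogeneous collection; each element's runtime type drives the match.
object?[] values = { 42, 2.5, "hello", null };

double total = 0.0;

foreach (object? value in values)
{
    // Type patterns: each arm matches when the runtime type fits,
    // and binds the value to a variable of that type.
    total += value switch
    {
        int i => i,           // matches any boxed int
        double d => d,        // matches any boxed double
        string s => s.Length, // matches any string; use its length
        _ => 0.0,             // anything else, including null
    };
}

Console.WriteLine(total); // 49.5
```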
There are many more ways to use pattern matching in your applications. First, visit the [.NET site](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro) to download the .NET SDK, create a project on your machine, and keep coding. As you explore, you can learn more about pattern matching in C# in the following articles: + +- [Pattern matching in C#](../../fundamentals/functional/pattern-matching.md) +- [Explore pattern matching tutorial](../../tutorials/patterns-objects.md) +- [Pattern matching scenario](../../fundamentals/tutorials/pattern-matching.md) diff --git a/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/EnumSwitchExample.cs b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/EnumSwitchExample.cs new file mode 100644 index 0000000000000..2e0d59867a1ab --- /dev/null +++ b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/EnumSwitchExample.cs @@ -0,0 +1,76 @@ +namespace EnumSwitchExample; + +// +public static class ExampleProgram +{ + const string bankRecords = """ + DEPOSIT, 10000, Initial balance + DEPOSIT, 500, regular deposit + WITHDRAWAL, 1000, rent + DEPOSIT, 2000, freelance payment + WITHDRAWAL, 300, groceries + DEPOSIT, 700, gift from friend + WITHDRAWAL, 150, utility bill + DEPOSIT, 1200, tax refund + WITHDRAWAL, 500, car maintenance + DEPOSIT, 400, cashback reward + WITHDRAWAL, 250, dining out + DEPOSIT, 3000, bonus payment + WITHDRAWAL, 800, loan repayment + DEPOSIT, 600, stock dividends + WITHDRAWAL, 100, subscription fee + DEPOSIT, 1500, side hustle income + WITHDRAWAL, 200, fuel expenses + DEPOSIT, 900, refund from store + WITHDRAWAL, 350, shopping + DEPOSIT, 2500, project milestone payment + WITHDRAWAL, 400, entertainment + """; + + public static void Main() + { + double currentBalance = 0.0; + + foreach (var transaction in TransactionRecords(bankRecords)) + { + // + currentBalance += transaction switch + { + (TransactionType.Deposit, var amount) => amount, + (TransactionType.Withdrawal, 
var amount) => -amount, + _ => 0.0, + }; + // + Console.WriteLine($"{transaction.type} => Parsed Amount: {transaction.amount}, New Balance: {currentBalance}"); + } + } + + static IEnumerable<(TransactionType type, double amount)> TransactionRecords(string inputText) + { + var reader = new StringReader(inputText); + string? line; + while ((line = reader.ReadLine()) is not null) + { + string[] parts = line.Split(','); + + string? transactionType = parts[0]?.Trim(); + if (double.TryParse(parts[1].Trim(), out double amount)) + { + // Update the balance based on transaction type + if (transactionType?.ToUpper() is "DEPOSIT") + yield return (TransactionType.Deposit, amount); + else if (transactionType?.ToUpper() is "WITHDRAWAL") + yield return (TransactionType.Withdrawal, amount); + } + else + yield return (TransactionType.Invalid, 0.0); + } + } +} + +public enum TransactionType +{ + Deposit, + Withdrawal, + Invalid +} +// diff --git a/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/FinalExampleProgram.cs b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/FinalExampleProgram.cs new file mode 100644 index 0000000000000..da4b22834ae74 --- /dev/null +++ b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/FinalExampleProgram.cs @@ -0,0 +1,79 @@ +public static class ExampleProgram +{ + const string bankRecords = """ + DEPOSIT, 10000, Initial balance + DEPOSIT, 500, regular deposit + WITHDRAWAL, 1000, rent + DEPOSIT, 2000, freelance payment + WITHDRAWAL, 300, groceries + DEPOSIT, 700, gift from friend + WITHDRAWAL, 150, utility bill + DEPOSIT, 1200, tax refund + WITHDRAWAL, 500, car maintenance + DEPOSIT, 400, cashback reward + WITHDRAWAL, 250, dining out + DEPOSIT, 3000, bonus payment + WITHDRAWAL, 800, loan repayment + DEPOSIT, 600, stock dividends + WITHDRAWAL, 100, subscription fee + DEPOSIT, 1500, side hustle income + WITHDRAWAL, 200, fuel expenses + DEPOSIT, 900, refund from store + WITHDRAWAL, 350, shopping + DEPOSIT, 2500, project 
milestone payment + WITHDRAWAL, 400, entertainment + """; + + public static void Main() + { + double currentBalance = 0.0; + + // + foreach (var transaction in TransactionRecordType(bankRecords)) + { + currentBalance += transaction switch + { + Deposit d => d.Amount, + Withdrawal w => -w.Amount, + _ => 0.0, + }; + Console.WriteLine($" {transaction} => New Balance: {currentBalance}"); + } + // + } + + // + public static IEnumerable<object?> TransactionRecordType(string inputText) + { + var reader = new StringReader(inputText); + string? line; + while ((line = reader.ReadLine()) is not null) + { + string[] parts = line.Split(','); + + string? transactionType = parts[0]?.Trim(); + if (double.TryParse(parts[1].Trim(), out double amount)) + { + // Update the balance based on transaction type + if (transactionType?.ToUpper() is "DEPOSIT") + yield return new Deposit(amount, parts[2]); + else if (transactionType?.ToUpper() is "WITHDRAWAL") + yield return new Withdrawal(amount, parts[2]); + } + else + yield return default; + } + } + // +} + +public enum TransactionType +{ + Deposit, + Withdrawal, + Invalid +} + +// +public record Deposit(double Amount, string description); +public record Withdrawal(double Amount, string description); +// diff --git a/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/FirstEnumExample.cs b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/FirstEnumExample.cs new file mode 100644 index 0000000000000..455bc37279aa0 --- /dev/null +++ b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/FirstEnumExample.cs @@ -0,0 +1,72 @@ +namespace FirstEnumExample; + +// +public static class ExampleProgram +{ + const string bankRecords = """ + DEPOSIT, 10000, Initial balance + DEPOSIT, 500, regular deposit + WITHDRAWAL, 1000, rent + DEPOSIT, 2000, freelance payment + WITHDRAWAL, 300, groceries + DEPOSIT, 700, gift from friend + WITHDRAWAL, 150, utility bill + DEPOSIT, 1200, tax refund + WITHDRAWAL, 500, car maintenance + DEPOSIT, 400, 
cashback reward + WITHDRAWAL, 250, dining out + DEPOSIT, 3000, bonus payment + WITHDRAWAL, 800, loan repayment + DEPOSIT, 600, stock dividends + WITHDRAWAL, 100, subscription fee + DEPOSIT, 1500, side hustle income + WITHDRAWAL, 200, fuel expenses + DEPOSIT, 900, refund from store + WITHDRAWAL, 350, shopping + DEPOSIT, 2500, project milestone payment + WITHDRAWAL, 400, entertainment + """; + + public static void Main() + { + double currentBalance = 0.0; + + foreach (var transaction in TransactionRecords(bankRecords)) + { + if (transaction.type == TransactionType.Deposit) + currentBalance += transaction.amount; + else if (transaction.type == TransactionType.Withdrawal) + currentBalance -= transaction.amount; + Console.WriteLine($"{transaction.type} => Parsed Amount: {transaction.amount}, New Balance: {currentBalance}"); + } + } + + static IEnumerable<(TransactionType type, double amount)> TransactionRecords(string inputText) + { + var reader = new StringReader(inputText); + string? line; + while ((line = reader.ReadLine()) is not null) + { + string[] parts = line.Split(','); + + string? 
transactionType = parts[0]?.Trim(); + if (double.TryParse(parts[1].Trim(), out double amount)) + { + // Update the balance based on transaction type + if (transactionType?.ToUpper() is "DEPOSIT") + yield return (TransactionType.Deposit, amount); + else if (transactionType?.ToUpper() is "WITHDRAWAL") + yield return (TransactionType.Withdrawal, amount); + } + else + yield return (TransactionType.Invalid, 0.0); + } + } +} + +public enum TransactionType +{ + Deposit, + Withdrawal, + Invalid +} +// diff --git a/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/PatternMatching.csproj b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/PatternMatching.csproj new file mode 100644 index 0000000000000..fd4bd08da2987 --- /dev/null +++ b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/PatternMatching.csproj @@ -0,0 +1,10 @@ + + + + Exe + net9.0 + enable + enable + + + diff --git a/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/Program.cs b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/Program.cs new file mode 100644 index 0000000000000..abb2305b43bce --- /dev/null +++ b/docs/csharp/tour-of-csharp/tutorials/snippets/PatternMatching/Program.cs @@ -0,0 +1,57 @@ +// +string bankRecords = """ + DEPOSIT, 10000, Initial balance + DEPOSIT, 500, regular deposit + WITHDRAWAL, 1000, rent + DEPOSIT, 2000, freelance payment + WITHDRAWAL, 300, groceries + DEPOSIT, 700, gift from friend + WITHDRAWAL, 150, utility bill + DEPOSIT, 1200, tax refund + WITHDRAWAL, 500, car maintenance + DEPOSIT, 400, cashback reward + WITHDRAWAL, 250, dining out + DEPOSIT, 3000, bonus payment + WITHDRAWAL, 800, loan repayment + DEPOSIT, 600, stock dividends + WITHDRAWAL, 100, subscription fee + DEPOSIT, 1500, side hustle income + WITHDRAWAL, 200, fuel expenses + DEPOSIT, 900, refund from store + WITHDRAWAL, 350, shopping + DEPOSIT, 2500, project milestone payment + WITHDRAWAL, 400, entertainment + """; + +double currentBalance = 0.0; +var reader = new 
StringReader(bankRecords); + +string? line; +while ((line = reader.ReadLine()) is not null) +{ + if (string.IsNullOrWhiteSpace(line)) continue; + // Split the line based on comma delimiter and trim each part + string[] parts = line.Split(','); + + string? transactionType = parts[0]?.Trim(); + if (double.TryParse(parts[1].Trim(), out double amount)) + { + // Update the balance based on transaction type + if (transactionType?.ToUpper() is "DEPOSIT") + currentBalance += amount; + else if (transactionType?.ToUpper() is "WITHDRAWAL") + currentBalance -= amount; + + Console.WriteLine($"{line.Trim()} => Parsed Amount: {amount}, New Balance: {currentBalance}"); + } +} +// + +Console.WriteLine(); +FirstEnumExample.ExampleProgram.Main(); +Console.WriteLine(); +EnumSwitchExample.ExampleProgram.Main(); +Console.WriteLine(); +ExampleProgram.Main(); +Console.WriteLine(); + diff --git a/docs/csharp/whats-new/csharp-14.md b/docs/csharp/whats-new/csharp-14.md index 5c2620bea8fb5..a29a00fbd7a13 100644 --- a/docs/csharp/whats-new/csharp-14.md +++ b/docs/csharp/whats-new/csharp-14.md @@ -52,7 +52,7 @@ public static class Enumerable public static IEnumerable Combine(IEnumerable first, IEnumerable second) { ... 
} // static extension property: - public static IEnumerable<TSource> Identity => yield return default; + public static IEnumerable<TSource> Identity => Enumerable.Empty<TSource>(); } } ``` diff --git a/docs/fundamentals/toc.yml b/docs/fundamentals/toc.yml index 8ca869da34675..cbd051f5dba7d 100644 --- a/docs/fundamentals/toc.yml +++ b/docs/fundamentals/toc.yml @@ -1039,7 +1039,7 @@ items: href: runtime-libraries/system-random.md - name: Artificial intelligence (AI) displayName: microsoft.extensions.ai,ollama,ai,openai,azure inference,ichatclient - href: ../core/extensions/artificial-intelligence.md + href: ../ai/microsoft-extensions-ai.md?toc=/dotnet/fundamentals/toc.json&bc=/dotnet/breadcrumb/toc.json - name: Dependency injection items: - name: Overview @@ -1096,6 +1096,8 @@ items: - name: High-performance logging href: ../core/extensions/high-performance-logging.md displayName: high-performance logging,high-performance log,high-performance logging provider,high-performance log provider + - name: Log Sampling + href: ../core/extensions/log-sampling.md - name: Console log formatting href: ../core/extensions/console-log-formatter.md displayName: console log formatting,console log formatter,console log formatting provider,console log formatter provider diff --git a/docs/navigate/tools-diagnostics/toc.yml b/docs/navigate/tools-diagnostics/toc.yml index cdcd3b0d95679..8f20bbccb3c04 100644 --- a/docs/navigate/tools-diagnostics/toc.yml +++ b/docs/navigate/tools-diagnostics/toc.yml @@ -363,6 +363,8 @@ items: href: ../../core/diagnostics/logging-tracing.md - name: ILogger Logging href: ../../core/extensions/logging.md + - name: Log Sampling + href: ../../core/extensions/log-sampling.md - name: Observability with OpenTelemetry items: - name: Overview