From d0428662d1935798f139fdf8f79271a6f94649d7 Mon Sep 17 00:00:00 2001
From: JonasHelming
diff --git a/pr-previews/pr-718/page-data/docs/user_ai/page-data.json b/pr-previews/pr-718/page-data/docs/user_ai/page-data.json
index 6925b61a..e2e87e56 100644
--- a/pr-previews/pr-718/page-data/docs/user_ai/page-data.json
+++ b/pr-previews/pr-718/page-data/docs/user_ai/page-data.json
@@ -1 +1 @@
-{"componentChunkName":"component---src-templates-doc-js","path":"/docs/user_ai/","result":{"data":{"markdownRemark":{"frontmatter":{"title":"Using the AI Features in the Theia IDE as an End User"},"html":"Using the AI Features in the Theia IDE as an End User
\nThis section documents how to use AI features in the Theia IDE (available since version 1.54, see also this introduction). These features are based on Theia AI, a framework for building AI assistance in tools and IDEs. Theia AI is part of the Theia platform. If you're interested in building your own custom tool or IDE with Theia AI, please refer to the corresponding documentation.
\nPlease note that these features are in early access and experimental. This means they may be unstable, behave unexpectedly, or undergo significant changes. In particular, using your own LLM might incur costs that you need to monitor closely. We have not yet optimized the AI assistants in the Theia IDE for token usage. Use these features at your own risk, and we welcome any feedback, suggestions, and contributions!
\nTheia AI features within the Theia IDE are currently disabled by default. See the next section on how to enable them.
\nTable of Contents
\n\n- Set-Up\n\n
\n- Current Agents in the Theia IDE\n\n
\n- Chat
\n- AI Configuration\n\n
\n- Custom Agents
\n- MCP Integration
\n- SCANOSS
\n- AI History
\n- Learn more
\n
\nSet-Up
\nTo activate AI support in the Theia IDE, go to Preferences and enable the setting “AI-features => AI Enable.”
\nTo use Theia AI within the Theia IDE, you need to provide access to at least one LLM. Theia IDE comes with preinstalled support for OpenAI API-compatible models, either hosted by OpenAI or self-hosted via VLLM. Additionally, Theia IDE supports connecting to models via Ollama. See the corresponding sections below on how to configure these providers.
\nOther LLM providers, including local models, can be added easily. If you would like to see support for a specific LLM, please provide feedback or consider contributing.
\nEach LLM provider offers a configurable list of available models (see the screenshot below for Hugging Face Models models). To use a model in your IDE, configure it on a per-agent basis in the AI Configuration view.
\nLLM Providers Overview
\nNote: Theia IDE enables connections to various models (e.g., HuggingFace, custom OpenAPI models, LlamaFile). However, not all models may work out of the box, as they may require specific customizations or optimizations. If you encounter issues, please provide feedback, keeping in mind this is an early-phase feature.
\nMany models and providers support using an OpenAI compatible API. In this case, we recommend using the Theia AI provider for OpenAI Compatible Models
\nBelow is an overview of various Large Language Model (LLM) providers supported within the Theia IDE, highlighting their key features and current state.
\n\n \n Provider \n Streaming \n Tool Calls \n Structured Output \n State \n \n \n OpenAI Official \n ✅ \n ✅ \n ✅ \n Public \n \n \n OpenAI Compatible \n ✅ \n ✅ \n ✅ \n Public \n \n \n Azure \n ✅ \n ✅ \n ✅ \n Public \n \n \n Anthropic \n ✅ \n ✅ \n ❌ \n Beta \n \n \n Hugging Face \n ✅ \n ❌ \n ❌ \n Experimental \n \n \n LlamaFile \n ✅ \n ❌ \n ❌ \n Experimental \n \n \n Ollama \n ✅ \n ✅ \n ✅ \n Alpha \n \n
\n\n\nOpenAI (Hosted by OpenAI)
\nTo enable the use of OpenAI, you need to create an API key in your OpenAI account and enter it in the settings AI-features => OpenAiOfficial (see the screenshot below).\nPlease note: By using this preference the Open AI API key will be stored in clear text on the machine running Theia. Use the environment variable OPENAI_API_KEY
to set the key securely.\nPlease also note that creating an API key requires a paid subscription, and using these models may incur additional costs. Be sure to monitor your usage carefully to avoid unexpected charges. We have not yet optimized the AI assistants in the Theia IDE for token usage.
\n
\nThe OpenAI provider is preconfigured with a list of available models. You can easily add new models to this list, for example, if new options are released.
\nOpenAI Compatible Models (e.g. via VLLM)
\nAs an alternative to using an official OpenAI account, Theia IDE also supports arbitrary models compatible with the OpenAI API (e.g., hosted via VLLM). This enables you to connect to self-hosted models with ease. To add a custom model, click on the link in the settings section and add your configuration like this:
\n{\n \"ai-features.openAiCustom.customOpenAiModels\": [\n {\n \"model\": \"your-model-name\",\n \"url\": \"your-URL\",\n \"id\": \"your-unique-id\", // Optional: if not provided, the model name will be used as the ID\n \"apiKey\": \"your-api-key\", // Optional: use 'true' to apply the global OpenAI API key\n \"supportsDeveloperMessage\": false //Optional: whether your API supports the developer message (turn off when using OpenAI on Azure)\n }\n ]\n}
\nAzure
\nAll models hosted on Azure that are compatible with the OpenAI API are accessible via the Provider for OpenAI Compatible Models provider. Note that some models hosted on Azure may require different settings for the system message, which are detailed in the OpenAI Compatible Models section.
\nAnthropic
\nTo enable Anthropics AI models in the Theia IDE, create an API key in your Anthropics account and\nenter it in the Theia IDE settings under AI-features => Anthropics.
\nPlease note: The Anthropics API key will be stored in clear text. Use the environment variable ANTHROPIC_API_KEY
to set the key securely.
\nConfigure available models in the settings under AI-features => AnthropicsModels.\nDefault supported models include choices like claude-3-5-sonnet-latest.
\nHugging Face
\nMany hosting options and models on Hugging Face support using an OpenAI compatible API. In this case, we recommend using the Theia AI provider for OpenAI Compatible Models. The Hugging face provider only supports text generation at the moment for models not compatible with the OpenAI API.
\nTo enable Hugging Face as an AI provider, you need to create an API key in your Hugging Face account and enter it in the Theia IDE settings: AI-features => Hugging Face\nPlease note: By using this preference the Hugging Face API key will be stored in clear text on the machine running Theia. Use the environment variable HUGGINGFACE_API_KEY
to set the key securely.\nNote also that Hugging Face offers both paid and free-tier options (including \"serverless\"), and usage limits vary. Monitor your usage carefully to avoid unexpected costs, especially when using high-demand models.\nAdd or remove the desired Hugging Face models from the list of available models (see screenshot below). Please note that there is a copy button in the Hugging face UI to copy model IDs to the clipboard.
\n
\nLlamaFile Models
\nTo configure a LlamaFile LLM in the Theia IDE, add the necessary settings to your configuration (see example below)
\n{\n \"ai-features.llamafile.llamafiles\": [\n {\n \"name\": \"modelname\", //you can choose a name for your model\n \"uri\": \"file:///home/.../YourModel.llamafile\",\n \"port\": 30000 //you can choose a port to be used by llamafile\n }\n ]\n}
\nReplace \"name\", \"uri\", and \"port\" with your specific LlamaFile details.
\nThe Theia IDE also offers convenience commands to start and stop your LlamaFiles:
\n\n- Start a LlamaFile: Use the command \"Start Llamafile\", then select the model you want to start.
\n- Stop a LlamaFile: Use the \"Stop Llamafile\" command, then select the running Llamafile which you want to terminate.
\n
\nPlease make sure that your LlamaFiles are executable.\nFor more details on LlamaFiles, including a quickstart, see the official Mozilla LlamaFile documentation.
\nOllama
\nTo connect to models hosted via Ollama, enter the corresponding URL, along with the available models, in the settings (as shown below).
\n
\nSome models on Ollama support using an OpenAI compatible API. In this case, we recommend using the Theia AI provider for OpenAI Compatible Models
\nCustom Request Settings
\nYou can define custom request settings for specific language models in the Theia IDE to tailor how models handle requests, based on their provider.
\nAdd the settings in settings.json
:
\n\"ai-features.modelSettings.requestSettings\": [\n {\n \"modelId\": \"Qwen/Qwen2.5-Coder-32B-Instruct\",\n \"requestSettings\": { \"max_new_tokens\": 2048 },\n \"providerId\": \"huggingface\"\n },\n {\n \"modelId\": \"gemma2\",\n \"requestSettings\": { \"stop\": [\"<file_sep>\"] },\n \"providerId\": \"ollama\"\n }\n]
\nOr navigate in the settings view to ModelSettings
=> Request Settings
.
\nKey Fields
\n\nmodelId
: The unique identifier of the model. \nrequestSettings
: Provider-specific options, such as token limits or stopping criteria. \nproviderId
: (Optional) Specifies the provider for the settings (e.g., huggingface
, ollama
, openai
). If omitted, settings apply to all providers that match the modelId
. \n
\nValid options for requestSettings
depend on the model provider.
\nCurrent Agents in the Theia IDE
\nThis section provides an overview of the currently available agents in the Theia IDE. Agents marked as “Chat Agents” are available in the global chat, while others are directly integrated into UI elements, such as code completion. You can configure and deactivate agents in the AI Configuration view.
\nUniversal (Chat Agent)
\nThis agent helps developers by providing concise and accurate answers to general programming and software development questions. It also serves as a fallback for generic user questions. By default, this agent does not have access to the current user context or workspace. However, you can add variables, such as #selectedText
, to your requests to provide additional context.
\nOrchestrator (Chat Agent)
\nThis agent analyzes user requests against the descriptions of all available chat agents and selects the best-fitting agent to respond (using AI). The user's request is delegated to the selected agent without further confirmation. The Orchestrator is currently the default agent in the Theia IDE for all chat requests. You can deactivate it in the AI Configuration View.
\nCommand (Chat Agent)
\nThis agent is aware of all commands available in the Theia IDE and the current tool the user is working with. Based on the user request, it can find the appropriate command and let the user execute it.
\nWorkspace (Chat Agent)
\nThis agent can access the user's workspace, retrieve a list of all available files, and view their content. It can answer questions about the current project, including project files and source code in the workspace, such as how to build the project, where to place source code, or where to find specific files or configurations.
\nCode Completion (Agent)
\nThis agent provides inline code completion within the Theia IDE's code editor. By default, automatic inline completion is disabled to give users greater control over how AI code suggestions are presented. Users can manually trigger inline completion via the default key binding Ctrl+Alt+Space (adaptable). Requests are canceled when moving the cursor.
\nUsers who prefer continuous suggestions can enable 'Automatic Code Completion' in the settings ('AIFeatures'=>'CodeCompletion'). This agent makes continuous requests to the underlying LLM while coding if automatic suggestions are enabled.
\nPlease note that there are two prompt variants available for the code completion agent, you can select them in the 'AI Configuration view' => 'Code Completion' => 'Prompt Templates'.
\nYou can also adapt the used prompt template to your personal preferences or to the LLM you want to use, see for example how to use the Theia IDE with StarCoder.
\nIn the settings, you can specify 'Excluded File Extensions' for which the AI-powered code completion will be deactivated.
\nThe setting 'Strip Backticks' will remove surrounding backticks that some LLMs might produce (depending on the prompt).
\nFinally, the setting 'Max Context Lines' allows you to configure the maximum number of lines used for AI code completion context. This setting can be adjusted to customize the size of the context provided to the model, which is especially useful when using smaller models with limited token capacity.
\nTerminal Assistance (Agent)
\nThis agent assists with writing and executing terminal commands. Based on the user's request, it suggests commands and allows them to be directly pasted and executed in the terminal. It can access the current directory, environment, and recent terminal output to provide context-aware assistance. You can open the terminal assistance agent via Ctrl+I in the terminal view.
\nChat
\nThe Theia IDE provides a global chat interface where users can interact with all chat agents. The Orchestrator automatically delegates user requests to the most appropriate agent. To send a request directly to a specific agent, mention the agent's name using '@', for example, '@Command'. Press '@' in the chat to see a list of available chat agents.
\n
\nSome agents produce special results, such as buttons (shown in the screenshot above) or code that can be directly inserted. You can augment your requests in the chat with context by using variables. For example, to refer to the currently selected text, use #selectedText
in your request. Pressing '#' in the chat will show a list of available variables.
\nAI Configuration
\nThe AI Configuration View allows you to review and adapt agent-specific settings. Select an agent on the left side and review its properties on the right:
\n\n- Enable Agent: Disabled agents will no longer be available in the chat or UI elements. Disabled agents also won't make any requests to LLMs.
\n- Edit Prompts: Click \"Edit\" to open the prompt template editor, where you can customize the agent's prompts (see the section below). \"Reset\" will revert the prompt to its default.
\n- Language Model: Select which language model the agent sends its requests to. Some agents have multiple \"purposes,\" allowing you to select a model for each purpose.
\n- Variables and Functions: Review the variables and functions used by an agent. Global variables are shared across agents, and they are listed in the second tab of the AI Configuration View. Agent-specific variables are declared and used exclusively by one agent.
\n
\n
\nView and Modify Prompts
\nIn the Theia IDE, you can open and edit prompts for all agents from the AI Configuration View. Prompts are shown in a text editor (see the screenshot below). Changes saved in the prompt editor will take effect with the next request made to the corresponding agent. You can reset a prompt to its default using the \"Reset\" button in the AI configuration view or the \"Revert\" toolbar item in the prompt editor (top-right corner).
\n
\nVariables and functions can be used in prompts. Variables are replaced with context-specific information at the time of the request (e.g., the currently selected text), while functions can trigger actions or retrieve additional information. You can find an overview of all global variables in the \"Variables\" tab of the AI Configuration View and agent-specific variables in the agent's configuration.
\nVariables are used with the following syntax:
\n{{variableName}}
\nTool functions are used with the following syntax:
\n~{functionName}
\nCustom Agents
\nCustom agents enable users to define new chat agents with custom prompts on the fly, allowing the creation of custom workflows and extending the Theia IDE with new capabilities. These agents are immediately available in the default chat.
\nTo define a new custom agent, navigate to the AI Configuration View and click on \"Add Custom Agent\".
\n
\nThis action opens a YAML file where all available custom agents are defined. Below is an example configuration:
\n- id: obfuscator\n name: Obfuscator\n description: This is an example agent. Please adapt the properties to fit your needs.\n prompt: Obfuscate the following code so that no human can understand it anymore. Preserve the functionality.\n defaultLLM: openai/gpt-4o
\n\n- id: A unique identifier for the agent.
\n- name: The display name of the agent.
\n- description: A brief explanation of what the agent does.
\n- prompt: The default prompt that the agent will use for processing requests.
\n- defaultLLM: The language model used by default.
\n
\nCustom agents can be configured in the AI Configuration View just like other chat agents. You can enable/disable them, modify their prompt templates, and integrate variables and functions within these templates to enhance functionality.
\nHere is the updated MCP Integration section with the requested changes:
\nMCP Integration
\nThe Theia IDE now supports an with the Model Context Protocol (MCP), enabling users to configure and utilize external services in their AI workflows.\nPlease note: While this integration does not yet include MCP servers in any standard prompts, it already allows end users to explore the MCP ecosystem and discover interesting new use cases. In the future, we plan to provide ready-to-use prompts using MCP servers and support auto-starting configured servers.
\nTo learn more about MCP, see the official announcement from Anthropic.
\nFor a list of available MCP servers, visit the MCP Servers Repository.
\nConfiguring MCP Servers
\nTo configure MCP servers, open the preferences and add entries to the MCP Servers Configuration
section. Each server requires a unique identifier (e.g., \"brave-search\"
or \"filesystem\"
) and configuration details such as the command, arguments, and optional environment variables. For Windows users, please see the additional information below. 'autostart' will automatically start the respective MCP server the next time you restart your IDE, you will still need to manually start it the first time (see below).
\nExample Configuration:
\n{\n \"brave-search\": {\n \"command\": \"npx\",\n \"args\": [\n \"-y\",\n \"@modelcontextprotocol/server-brave-search\"\n ],\n \"env\": {\n \"BRAVE_API_KEY\": \"YOUR_API_KEY\"\n },\n \"autostart\": true\n },\n \"filesystem\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@modelcontextprotocol/server-filesystem\", \"/Users/YOUR_USERNAME/Desktop\"],\n \"env\": {\n \"CUSTOM_ENV_VAR\": \"custom-value\"\n }\n }\n}
\nThe configuration options include:
\n\ncommand
: The executable used to start the server (e.g., npx
). \nargs
: An array of arguments passed to the command. \nenv
: An optional set of environment variables for the server. \n
\nNote for Windows users: On Windows, you need to start a command interpreter (e.g. cmd.exe) as the server command in order for path lookups to work as expected. The effective command line is then passed as an argument. For example:
\n\"filesystem\": {\n \"command\": \"cmd\",\n \"args\": [\"/C\", \"npx -y @modelcontextprotocol/server-filesystem /Users/YOUR_USERNAME/Desktop\"],\n \"env\": {\n \"CUSTOM_ENV_VAR\": \"custom-value\"\n }\n }
\nStarting and Stopping MCP Servers
\nTheia provides commands to manage MCP servers:
\n\n- Start MCP Server: Use the command
\"MCP: Start MCP Server\"
to start a server. The system displays a list of available servers to select from. \n- Stop MCP Server: Use the command
\"MCP: Stop MCP Server\"
to stop a running server. \n
\nWhen a server starts, a notification is displayed confirming the operation, and the functions made available.\nYou can also set a MCP server to 'autostart' in the settings, this will take effect on the next restart of your IDE.\nPlease note that in a browser deployment MCP servers are scoped per connection, i.e. if you manually start them, you need to start them once per browser tab.
\nUsing MCP Server Functions
\nOnce a server is running, its functions can be invoked in prompts using the following syntax:
\n~{mcp_<server-name>_<function-name>}
\n\nmcp
: Prefix for all MCP commands. \n<server-name>
: The unique identifier of the server (e.g., brave-search
). \n<function-name>
: The specific function exposed by the server (e.g., brave_web_search
). \n
\nExample:
\nTo use the brave_web_search
function of the brave-search
server, you can write:
\n~{mcp_brave-search_brave_web_search}
\nThis allows you to seamlessly integrate external services into your AI workflows within the Theia IDE.
\nSCANOSS
\nThe Theia IDE (and Theia AI) integrates a code scanner powered by SCANOSS, enabling developers to analyze generated code snippets for open-source compliance and licensing. This feature helps developers understand potential licensing implications when using generated code in the Chat view.
\nPlease note: This feature sends a hash of suggested code snippets to the SCANOSS service hosted by the Software Transparency Foundation for analysis. While the service is free to use, very high usage may trigger rate limiting (unlikely for individual developers). Additionally, neither Theia nor SCANOSS can guarantee that no license implications exist, even if no issues are detected during the scan.
\nConfigure SCANOSS in the Theia IDE
\n\n- Open the Settings panel in the Theia IDE.
\n- Navigate to SCANOSS Mode under the AI Features section.
\n- Select the desired mode:\n
\n- Off: Disables SCANOSS completely.
\n- Manual: Allows users to trigger scans manually via the SCANOSS button on generated code snippets in the Chat view.
\n- Automatic: Automatically scans generated code snippets in the Chat view.
\n
\n \n
\nManual Scanning
\nTo manually scan a code snippet:
\n\n- Generate a code snippet in the AI Chat view.
\n- Click the SCANOSS button in the toolbar of the code renderer embedded in the Chat view.
\n- A result icon will appear:\n
\n- A warning icon if a match is found.
\n- A green check mark if no matches are found.
\n
\n \n- If a warning icon is displayed, click the SCANOSS button again to view detailed scan results in a popup window.
\n
\n
\nAutomatic Scanning
\nIn Automatic mode, SCANOSS scans code snippets in the background whenever they are generated in the Chat view. Results are displayed immediately, indicating whether any matches are found.
\nUnderstanding SCANOSS Results
\nAfter a scan is completed, SCANOSS provides a detailed summary, including:
\n\n- Match Percentage: The degree of similarity between the scanned snippet and the code in the database.
\n- Matched File: The file or project containing the matched code.
\n- Licenses: A list of licenses associated with the matched code, including links to detailed license terms.
\n
\nAI History
\nThe AI History view allows you to review all communications between agents and underlying LLMs. Select the corresponding agent at the top to see all its requests in the section below.
\n
\nLearn more
\nIf want to learn more about the AI support in the Theia AI, please see this introduction, our article on the vision of Theia AI and the demonstrations in Sneak Preview Series about Theia AI
","fields":{"slug":"user_ai"}}},"pageContext":{"slug":"user_ai"}},"staticQueryHashes":["2468095761"],"slicesMap":{}}
\ No newline at end of file
+{"componentChunkName":"component---src-templates-doc-js","path":"/docs/user_ai/","result":{"data":{"markdownRemark":{"frontmatter":{"title":"Using the AI Features in the Theia IDE as an End User"},"html":"Using the AI Features in the Theia IDE as an End User
\nThis section documents how to use AI features in the Theia IDE (available since version 1.54, see also this introduction). These features are based on Theia AI, a framework for building AI assistance in tools and IDEs. Theia AI is part of the Theia platform. If you're interested in building your own custom tool or IDE with Theia AI, please refer to the corresponding documentation.
\nPlease note that these features are in early access and experimental. This means they may be unstable, behave unexpectedly, or undergo significant changes. In particular, using your own LLM might incur costs that you need to monitor closely. We have not yet optimized the AI assistants in the Theia IDE for token usage. Use these features at your own risk, and we welcome any feedback, suggestions, and contributions!
\nTheia AI features within the Theia IDE are currently disabled by default. See the next section on how to enable them.
\nTable of Contents
\n\n- Set-Up\n\n
\n- Current Agents in the Theia IDE\n\n
\n- Chat
\n- AI Configuration\n\n
\n- Custom Agents
\n- MCP Integration
\n- SCANOSS
\n- AI History
\n- Learn more
\n
\nSet-Up
\nTo activate AI support in the Theia IDE, go to Preferences and enable the setting “AI-features => AI Enable.”
\nTo use Theia AI within the Theia IDE, you need to provide access to at least one LLM. Theia IDE comes with preinstalled support for OpenAI API-compatible models, either hosted by OpenAI or self-hosted via VLLM. Additionally, Theia IDE supports connecting to models via Ollama. See the corresponding sections below on how to configure these providers.
\nOther LLM providers, including local models, can be added easily. If you would like to see support for a specific LLM, please provide feedback or consider contributing.
\nEach LLM provider offers a configurable list of available models (see the screenshot below for Hugging Face models). To use a model in your IDE, configure it on a per-agent basis in the AI Configuration view.
\nLLM Providers Overview
\nNote: The Theia IDE enables connections to various models (e.g., Hugging Face, custom OpenAI models, LlamaFile). However, not all models may work out of the box, as they may require specific customizations or optimizations. If you encounter issues, please provide feedback, keeping in mind this is an early-phase feature.
\nMany models and providers support an OpenAI-compatible API. In this case, we recommend using the Theia AI provider for OpenAI Compatible Models.
\nBelow is an overview of various Large Language Model (LLM) providers supported within the Theia IDE, highlighting their key features and current state.
\nProvider | Streaming | Tool Calls | Structured Output | State\nOpenAI Official | ✅ | ✅ | ✅ | Public\nOpenAI Compatible | ✅ | ✅ | ✅ | Public\nAzure | ✅ | ✅ | ✅ | Public\nAnthropic | ✅ | ✅ | ❌ | Beta\nHugging Face | ✅ | ❌ | ❌ | Experimental\nLlamaFile | ✅ | ❌ | ❌ | Experimental\nOllama | ✅ | ✅ | ✅ | Alpha
\n\n\nOpenAI (Hosted by OpenAI)
\nTo enable the use of OpenAI, you need to create an API key in your OpenAI account and enter it in the settings AI-features => OpenAiOfficial (see the screenshot below).\nPlease note: When using this preference, the OpenAI API key will be stored in clear text on the machine running Theia. Use the environment variable OPENAI_API_KEY
to set the key securely.\nPlease also note that creating an API key requires a paid subscription, and using these models may incur additional costs. Be sure to monitor your usage carefully to avoid unexpected charges. We have not yet optimized the AI assistants in the Theia IDE for token usage.
\n
\nThe OpenAI provider is preconfigured with a list of available models. You can easily add new models to this list, for example, if new options are released.
\nOpenAI Compatible Models (e.g. via VLLM)
\nAs an alternative to using an official OpenAI account, Theia IDE also supports arbitrary models compatible with the OpenAI API (e.g., hosted via VLLM). This enables you to connect to self-hosted models with ease. To add a custom model, click on the link in the settings section and add your configuration like this:
\n{\n \"ai-features.openAiCustom.customOpenAiModels\": [\n {\n \"model\": \"your-model-name\",\n \"url\": \"your-URL\",\n \"id\": \"your-unique-id\", // Optional: if not provided, the model name will be used as the ID\n \"apiKey\": \"your-api-key\", // Optional: use 'true' to apply the global OpenAI API key\n \"supportsDeveloperMessage\": false //Optional: whether your API supports the developer message (turn off when using OpenAI on Azure)\n }\n ]\n}
\nAzure
\nAll models hosted on Azure that are compatible with the OpenAI API are accessible via the OpenAI Compatible Models provider. Note that some models hosted on Azure may require different settings for the system message, which are detailed in the OpenAI Compatible Models section.
\nAnthropic
\nTo enable Anthropic's AI models in the Theia IDE, create an API key in your Anthropic account and\nenter it in the Theia IDE settings under AI-features => Anthropics.
\nPlease note: The Anthropic API key will be stored in clear text. Use the environment variable ANTHROPIC_API_KEY
to set the key securely.
\nConfigure available models in the settings under AI-features => AnthropicsModels.\nDefault supported models include choices like claude-3-5-sonnet-latest.
\nHugging Face
\nMany hosting options and models on Hugging Face support an OpenAI-compatible API. In this case, we recommend using the Theia AI provider for OpenAI Compatible Models. For models that are not compatible with the OpenAI API, the Hugging Face provider currently supports text generation only.
\nTo enable Hugging Face as an AI provider, you need to create an API key in your Hugging Face account and enter it in the Theia IDE settings: AI-features => Hugging Face.\nPlease note: When using this preference, the Hugging Face API key will be stored in clear text on the machine running Theia. Use the environment variable HUGGINGFACE_API_KEY
to set the key securely.\nNote also that Hugging Face offers both paid and free-tier options (including \"serverless\"), and usage limits vary. Monitor your usage carefully to avoid unexpected costs, especially when using high-demand models.\nAdd or remove the desired Hugging Face models from the list of available models (see the screenshot below). Please note that there is a copy button in the Hugging Face UI to copy model IDs to the clipboard.
\n
\nLlamaFile Models
\nTo configure a LlamaFile LLM in the Theia IDE, add the necessary settings to your configuration (see the example below).
\n{\n \"ai-features.llamafile.llamafiles\": [\n {\n \"name\": \"modelname\", //you can choose a name for your model\n \"uri\": \"file:///home/.../YourModel.llamafile\",\n \"port\": 30000 //you can choose a port to be used by llamafile\n }\n ]\n}
\nReplace \"name\", \"uri\", and \"port\" with your specific LlamaFile details.
\nThe Theia IDE also offers convenience commands to start and stop your LlamaFiles:
\n\n- Start a LlamaFile: Use the command \"Start Llamafile\", then select the model you want to start.
\n- Stop a LlamaFile: Use the \"Stop Llamafile\" command, then select the running Llamafile which you want to terminate.
\n
\nPlease make sure that your LlamaFiles are executable.\nFor more details on LlamaFiles, including a quickstart, see the official Mozilla LlamaFile documentation.
\nOllama
\nTo connect to models hosted via Ollama, enter the corresponding URL, along with the available models, in the settings (as shown below).
\n
\nSome models on Ollama support an OpenAI-compatible API. In this case, we recommend using the Theia AI provider for OpenAI Compatible Models.
\nCustom Request Settings
\nYou can define custom request settings for specific language models in the Theia IDE to tailor how models handle requests based on their provider.
\nAdd the settings in settings.json
:
\n\"ai-features.modelSettings.requestSettings\": [\n {\n \"modelId\": \"Qwen/Qwen2.5-Coder-32B-Instruct\",\n \"requestSettings\": { \"max_new_tokens\": 2048 },\n \"providerId\": \"huggingface\"\n },\n {\n \"modelId\": \"gemma2\",\n \"requestSettings\": { \"stop\": [\"<file_sep>\"] },\n \"providerId\": \"ollama\"\n }\n]
\nOr navigate in the settings view to ModelSettings
=> Request Settings
.
\nKey Fields
\n\nmodelId
: The unique identifier of the model. \nrequestSettings
: Provider-specific options, such as token limits or stopping criteria. \nproviderId
: (Optional) Specifies the provider for the settings (e.g., huggingface
, ollama
, openai
). If omitted, settings apply to all providers that match the modelId
. \n
\nValid options for requestSettings
depend on the model provider.
\nCurrent Agents in the Theia IDE
\nThis section provides an overview of the currently available agents in the Theia IDE. Agents marked as “Chat Agents” are available in the global chat, while others are directly integrated into UI elements, such as code completion. You can configure and deactivate agents in the AI Configuration view.
\nUniversal (Chat Agent)
\nThis agent helps developers by providing concise and accurate answers to general programming and software development questions. It also serves as a fallback for generic user questions. By default, this agent does not have access to the current user context or workspace. However, you can add variables, such as #selectedText
, to your requests to provide additional context.
\nOrchestrator (Chat Agent)
\nThis agent analyzes user requests against the descriptions of all available chat agents and selects the best-fitting agent to respond (using AI). The user's request is delegated to the selected agent without further confirmation. The Orchestrator is currently the default agent in the Theia IDE for all chat requests. You can deactivate it in the AI Configuration View.
\nCommand (Chat Agent)
\nThis agent is aware of all commands available in the Theia IDE and the current tool the user is working with. Based on the user request, it can find the appropriate command and let the user execute it.
\nWorkspace (Chat Agent)
\nThis agent can access the user's workspace, retrieve a list of all available files, and view their content. It can answer questions about the current project, including project files and source code in the workspace, such as how to build the project, where to place source code, or where to find specific files or configurations.
\nCode Completion (Agent)
\nThis agent provides inline code completion within the Theia IDE's code editor. By default, automatic inline completion is disabled to give users greater control over how AI code suggestions are presented. Users can manually trigger inline completion via the default key binding Ctrl+Alt+Space (adaptable). Requests are canceled when moving the cursor.
\nUsers who prefer continuous suggestions can enable 'Automatic Code Completion' in the settings ('AIFeatures'=>'CodeCompletion'). This agent makes continuous requests to the underlying LLM while coding if automatic suggestions are enabled.
\nPlease note that there are two prompt variants available for the code completion agent; you can select between them in the 'AI Configuration view' => 'Code Completion' => 'Prompt Templates'.
\nYou can also adapt the prompt template to your personal preferences or to the LLM you want to use; see, for example, how to use the Theia IDE with StarCoder.
\nIn the settings, you can specify 'Excluded File Extensions' for which the AI-powered code completion will be deactivated.
\nThe setting 'Strip Backticks' will remove surrounding backticks that some LLMs might produce (depending on the prompt).
\nFinally, the setting 'Max Context Lines' allows you to configure the maximum number of lines used for AI code completion context. This setting can be adjusted to customize the size of the context provided to the model, which is especially useful when using smaller models with limited token capacity.
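\nAs a rough sketch only: the preference keys below are illustrative and may not match the actual identifiers, so please verify them in the Settings UI. A code completion setup in settings.json could then look roughly like this.
\n{\n \"ai-features.codeCompletion.automaticCodeCompletion\": false, // illustrative key: keep continuous suggestions off\n \"ai-features.codeCompletion.excludedFileExtensions\": [\".md\", \".txt\"], // illustrative key: no AI completion in these files\n \"ai-features.codeCompletion.maxContextLines\": 100 // illustrative key: cap the context size for smaller models\n}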
\nTerminal Assistance (Agent)
\nThis agent assists with writing and executing terminal commands. Based on the user's request, it suggests commands and allows them to be directly pasted and executed in the terminal. It can access the current directory, environment, and recent terminal output to provide context-aware assistance. You can open the terminal assistance agent via Ctrl+I in the terminal view.
\nChat
\nThe Theia IDE provides a global chat interface where users can interact with all chat agents. The Orchestrator automatically delegates user requests to the most appropriate agent. To send a request directly to a specific agent, mention the agent's name using '@', for example, '@Command'. Press '@' in the chat to see a list of available chat agents.
\n
\nSome agents produce special results, such as buttons (shown in the screenshot above) or code that can be directly inserted. You can augment your requests in the chat with context by using variables. For example, to refer to the currently selected text, use #selectedText
in your request. Pressing '#' in the chat will show a list of available variables.
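\nFor example, a request that addresses an agent directly and passes the current selection as context could look like this (the wording is just an illustration):
\n@Universal Please explain what #selectedText does.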
\nAI Configuration
\nThe AI Configuration View allows you to review and adapt agent-specific settings. Select an agent on the left side and review its properties on the right:
\n\n- Enable Agent: Disabled agents will no longer be available in the chat or UI elements. Disabled agents also won't make any requests to LLMs.
\n- Edit Prompts: Click \"Edit\" to open the prompt template editor, where you can customize the agent's prompts (see the section below). \"Reset\" will revert the prompt to its default.
\n- Language Model: Select which language model the agent sends its requests to. Some agents have multiple \"purposes,\" allowing you to select a model for each purpose.
\n- Variables and Functions: Review the variables and functions used by an agent. Global variables are shared across agents, and they are listed in the second tab of the AI Configuration View. Agent-specific variables are declared and used exclusively by one agent.
\n
\n
\nView and Modify Prompts
\nIn the Theia IDE, you can open and edit prompts for all agents from the AI Configuration View. Prompts are shown in a text editor (see the screenshot below). Changes saved in the prompt editor will take effect with the next request made to the corresponding agent. You can reset a prompt to its default using the \"Reset\" button in the AI configuration view or the \"Revert\" toolbar item in the prompt editor (top-right corner).
\n
\nVariables and functions can be used in prompts. Variables are replaced with context-specific information at the time of the request (e.g., the currently selected text), while functions can trigger actions or retrieve additional information. You can find an overview of all global variables in the \"Variables\" tab of the AI Configuration View and agent-specific variables in the agent's configuration.
\nVariables are used with the following syntax:
\n{{variableName}}
\nTool functions are used with the following syntax:
\n~{functionName}
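\nAs a sketch of how the two can appear together in a prompt template (the variable and function names below are purely illustrative, not actual Theia identifiers):
\nAnswer the user's question about the following selection: {{selectedText}}\nIf you need more context, you can retrieve the list of workspace files with ~{getWorkspaceFileList}.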
\nCustom Agents
\nCustom agents enable users to define new chat agents with custom prompts on the fly, allowing the creation of custom workflows and extending the Theia IDE with new capabilities. These agents are immediately available in the default chat.
\nTo define a new custom agent, navigate to the AI Configuration View and click on \"Add Custom Agent\".
\n
\nThis action opens a YAML file where all available custom agents are defined. Below is an example configuration:
\n- id: obfuscator\n name: Obfuscator\n description: This is an example agent. Please adapt the properties to fit your needs.\n prompt: Obfuscate the following code so that no human can understand it anymore. Preserve the functionality.\n defaultLLM: openai/gpt-4o
\n\n- id: A unique identifier for the agent.
\n- name: The display name of the agent.
\n- description: A brief explanation of what the agent does.
\n- prompt: The default prompt that the agent will use for processing requests.
\n- defaultLLM: The language model used by default.
\n
\nCustom agents can be configured in the AI Configuration View just like other chat agents. You can enable/disable them, modify their prompt templates, and integrate variables and functions within these templates to enhance functionality.
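\nFollowing the same schema as the example above, a second, hypothetical custom agent could be defined like this (id, name, description, prompt, and defaultLLM are placeholders to adapt):
\n- id: commenter\n name: Commenter\n description: Adds explanatory comments to code provided in the chat.\n prompt: Add concise, helpful comments to the following code without changing its behavior.\n defaultLLM: openai/gpt-4o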
\nMCP Integration
\nThe Theia IDE now supports integration with the Model Context Protocol (MCP), enabling users to configure and use external services in their AI workflows.\nPlease note: While this integration does not yet include MCP servers in any standard prompts, it already allows end users to explore the MCP ecosystem and discover interesting new use cases. In the future, we plan to provide ready-to-use prompts using MCP servers and support auto-starting configured servers.
\nTo learn more about MCP, see the official announcement from Anthropic.
\nFor a list of available MCP servers, visit the MCP Servers Repository.
\nConfiguring MCP Servers
\nTo configure MCP servers, open the preferences and add entries to the MCP Servers Configuration
section. Each server requires a unique identifier (e.g., \"brave-search\"
or \"filesystem\"
) and configuration details such as the command, arguments, and optional environment variables. For Windows users, please see the additional information below.
\n'autostart' will automatically start the respective MCP server the next time you restart your IDE; you will still need to start it manually the first time (see below).
\nExample Configuration:
\n{\n \"brave-search\": {\n \"command\": \"npx\",\n \"args\": [\n \"-y\",\n \"@modelcontextprotocol/server-brave-search\"\n ],\n \"env\": {\n \"BRAVE_API_KEY\": \"YOUR_API_KEY\"\n },\n \"autostart\": true\n },\n \"filesystem\": {\n \"command\": \"npx\",\n \"args\": [\"-y\", \"@modelcontextprotocol/server-filesystem\", \"/Users/YOUR_USERNAME/Desktop\"],\n \"env\": {\n \"CUSTOM_ENV_VAR\": \"custom-value\"\n }\n }\n}
\nThe configuration options include:
\n\ncommand
: The executable used to start the server (e.g., npx
). \nargs
: An array of arguments passed to the command. \nenv
: An optional set of environment variables for the server. \n
\nNote for Windows users: On Windows, you need to start a command interpreter (e.g. cmd.exe) as the server command in order for path lookups to work as expected. The effective command line is then passed as an argument. For example:
\n\"filesystem\": {\n \"command\": \"cmd\",\n \"args\": [\"/C\", \"npx -y @modelcontextprotocol/server-filesystem /Users/YOUR_USERNAME/Desktop\"],\n \"env\": {\n \"CUSTOM_ENV_VAR\": \"custom-value\"\n }\n }
\nStarting and Stopping MCP Servers
\nTheia provides commands to manage MCP servers:
\n\n- Start MCP Server: Use the command
\"MCP: Start MCP Server\"
to start a server. The system displays a list of available servers to select from. \n- Stop MCP Server: Use the command
\"MCP: Stop MCP Server\"
to stop a running server. \n
\nWhen a server starts, a notification is displayed confirming the operation and listing the functions it makes available.\nYou can also set an MCP server to 'autostart' in the settings; this will take effect on the next restart of your IDE.\nPlease note that in a browser deployment, MCP servers are scoped per connection, i.e., if you start servers manually, you need to start them once per browser tab.
\nUsing MCP Server Functions
\nOnce a server is running, its functions can be invoked in prompts using the following syntax:
\n~{mcp_<server-name>_<function-name>}
\n\nmcp
: Prefix for all MCP commands. \n<server-name>
: The unique identifier of the server (e.g., brave-search
). \n<function-name>
: The specific function exposed by the server (e.g., brave_web_search
). \n
\nExample:
\nTo use the brave_web_search
function of the brave-search
server, you can write:
\n~{mcp_brave-search_brave_web_search}
\nThis allows you to seamlessly integrate external services into your AI workflows within the Theia IDE.
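\nFor instance, a prompt template line that encourages the model to use this function could read as follows (the surrounding wording is illustrative):
\nYou can search the web for up-to-date information with ~{mcp_brave-search_brave_web_search}.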
\nSCANOSS
\nThe Theia IDE (and Theia AI) integrates a code scanner powered by SCANOSS, enabling developers to analyze generated code snippets for open-source compliance and licensing. This feature helps developers understand potential licensing implications when using generated code in the Chat view.
\nPlease note: This feature sends a hash of suggested code snippets to the SCANOSS service hosted by the Software Transparency Foundation for analysis. While the service is free to use, very high usage may trigger rate limiting (unlikely for individual developers). Additionally, neither Theia nor SCANOSS can guarantee that no license implications exist, even if no issues are detected during the scan.
\nConfigure SCANOSS in the Theia IDE
\n\n- Open the Settings panel in the Theia IDE.
\n- Navigate to SCANOSS Mode under the AI Features section.
\n- Select the desired mode:\n
\n- Off: Disables SCANOSS completely.
\n- Manual: Allows users to trigger scans manually via the SCANOSS button on generated code snippets in the Chat view.
\n- Automatic: Automatically scans generated code snippets in the Chat view.
\n
\n \n
\nManual Scanning
\nTo manually scan a code snippet:
\n\n- Generate a code snippet in the AI Chat view.
\n- Click the SCANOSS button in the toolbar of the code renderer embedded in the Chat view.
\n- A result icon will appear:\n
\n- A warning icon if a match is found.
\n- A green check mark if no matches are found.
\n
\n \n- If a warning icon is displayed, click the SCANOSS button again to view detailed scan results in a popup window.
\n
\n
\nAutomatic Scanning
\nIn Automatic mode, SCANOSS scans code snippets in the background whenever they are generated in the Chat view. Results are displayed immediately, indicating whether any matches are found.
\nUnderstanding SCANOSS Results
\nAfter a scan is completed, SCANOSS provides a detailed summary, including:
\n\n- Match Percentage: The degree of similarity between the scanned snippet and the code in the database.
\n- Matched File: The file or project containing the matched code.
\n- Licenses: A list of licenses associated with the matched code, including links to detailed license terms.
\n
\nAI History
\nThe AI History view allows you to review all communications between agents and underlying LLMs. Select the corresponding agent at the top to see all its requests in the section below.
\n
\nLearn more
\nIf you want to learn more about the AI support in the Theia IDE, please see this introduction, our article on the vision of Theia AI, and the demonstrations in our Sneak Preview Series about Theia AI.
","fields":{"slug":"user_ai"}}},"pageContext":{"slug":"user_ai"}},"staticQueryHashes":["2468095761"],"slicesMap":{}}
\ No newline at end of file