diff --git a/README.md b/README.md
index c05b4c4e6..c12f0ac8f 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@

-A Client/Server Framework for Building Guided LLM Agents
+Parlant: A behavioral control system for customer-facing LLM agents
Website |
Introduction |
@@ -20,9 +20,13 @@
## ✨ What is Parlant?
-Parlant is a fully open-source (Apache 2.0) client/server framework for building and serving guided customer-facing agents based on LLMs (Large Language Models).
+Parlant is a framework that transforms how AI agents make decisions in customer-facing scenarios.
-It comes pre-built with responsive session (conversation) management, content-filtering, jailbreak protection, an integrated sandbox UI for behavioral testing, and other goodies.
+Unlike traditional approaches that rely on prompt engineering or conversational flow charts, Parlant implements a dynamic control system that ensures agents follow your specific business rules by matching and activating the appropriate combination of behavioral guidelines for each specific context.
+
+When an agent needs to respond to a customer, Parlant's engine evaluates the situation, checks relevant guidelines, gathers necessary information through your tools, and continuously re-evaluates its approach as new information emerges. When it's time to generate a message, Parlant applies self-critique mechanisms to ensure that the agent's responses precisely align with your intended behavior as defined by the contextually matched guidelines.
+
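+To make that loop concrete, here is a minimal, self-contained sketch of the guideline-matching step. This is illustrative pseudocode only, not Parlant's actual API; every class and function name below is made up:
+
+```python
+from dataclasses import dataclass
+
+@dataclass
+class Guideline:
+    condition: str  # when this guideline applies
+    action: str     # what the agent should do when it applies
+
+@dataclass
+class Situation:
+    description: str
+
+def matches(guideline: Guideline, situation: Situation) -> bool:
+    # Parlant's engine evaluates this contextually with an LLM;
+    # a naive substring check stands in for it here.
+    return guideline.condition in situation.description
+
+def plan_response(situation: Situation, guidelines: list[Guideline]) -> list[str]:
+    # Activate only the guidelines relevant to this specific context. The real
+    # engine also runs your tools, re-matches as new information arrives, and
+    # self-critiques the drafted message against the active guidelines.
+    return [g.action for g in guidelines if matches(g, situation)]
+
+guidelines = [
+    Guideline("asking how to use our product", "first seek to understand what they're trying to achieve"),
+    Guideline("asking for a refund", "check the refund policy before promising anything"),
+]
+print(plan_response(Situation("a free-tier customer is asking how to use our product"), guidelines))
+```
+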
+Parlant comes pre-built with responsive session (conversation) management, a detection mechanism for incoherence and contradictions in guidelines, content-filtering, jailbreak protection, an integrated sandbox UI for behavioral testing, native API clients in Python and TypeScript, and other goodies.
## 📦 Quickstart
```bash
@@ -36,10 +40,10 @@ $ # Open the sandbox UI at http://localhost:8000 and play
## 🙋♂️🙋♀️ Who Is Parlant For?
Parlant is the right tool for the job if you're building an LLM-based chat agent, and:
-1. You require a high degree of behavioral precision and consistency
+1. You require a high degree of behavioral precision and consistency, particularly in customer-facing scenarios
1. Your agent is expected to undergo many behavioral refinements and changes, and you need a way to implement those changes efficiently and confidently
-1. You would benefit from assistance in maintaining coherence and disentangling subtleties in numerous agent instructions
-1. Conversational UX and user-engagmeent is an important concern for your use case
+1. You're expected to maintain a large set of behavioral guidelines, and you want to keep them coherent and version-tracked
+1. Conversational UX and user engagement are important concerns for your use case, and you want to easily control the flow and language of conversations
## 🤔 What Makes Parlant Different?
@@ -94,7 +98,7 @@ By giving structure to behavioral guidelines, and _granularizing_ guidelines (i.
To start learning and building with Parlant, visit our [documentation portal](https://parlant.io/docs/quickstart/introduction).
-Need help? Send us a message on [Discord](https://discord.gg/duxWqxKk6J). We're happy to answer questions and help you get up and running!
+Need help? Ask us anything on [Discord](https://discord.gg/duxWqxKk6J). We're happy to answer questions and help you get up and running!
## 💻 Usage Example
Adding a guideline for an agent—for example, to ask a counter-question to get more info when a customer asks a question:
@@ -102,7 +106,7 @@ Adding a guideline for an agent—for example, to ask a counter-question to get
parlant guideline create \
--agent-id CUSTOMER_SUCCESS_AGENT_ID \
--condition "a free-tier customer is asking how to use our product" \
- --action "first seek to understsand what they're trying to achieve"
+ --action "first seek to understand what they're trying to achieve"
```
In Parlant, customer-agent interaction happens asynchronously, enabling more natural conversations rather than forcing a strict and unnatural request-reply mode.
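+
+The sketch below shows the general shape of that event-based model. It is not Parlant's client API (all names here are invented for illustration): the client triggers processing, immediately receives a correlation id, and later collects whatever events the agent emitted under that id.
+
+```python
+import asyncio
+import itertools
+from dataclasses import dataclass
+
+@dataclass
+class Event:
+    correlation_id: int
+    kind: str  # e.g. "status" or "message"
+    data: str
+
+class FakeSession:
+    """A stand-in for a server-side session that emits events asynchronously."""
+
+    def __init__(self) -> None:
+        self._ids = itertools.count(1)
+        self._events: list[Event] = []
+        self._changed = asyncio.Event()
+        self._tasks: list[asyncio.Task[None]] = []
+
+    def dispatch(self, customer_message: str) -> int:
+        # Kick off background processing and return a correlation id right away.
+        correlation_id = next(self._ids)
+        task = asyncio.get_running_loop().create_task(self._process(correlation_id, customer_message))
+        self._tasks.append(task)  # keep a reference so the task isn't garbage-collected
+        return correlation_id
+
+    async def _process(self, correlation_id: int, message: str) -> None:
+        self._emit(Event(correlation_id, "status", "typing"))
+        await asyncio.sleep(0.1)  # simulate the engine thinking and calling tools
+        self._emit(Event(correlation_id, "message", f"About '{message}': what are you trying to achieve?"))
+
+    def _emit(self, event: Event) -> None:
+        self._events.append(event)
+        self._changed.set()
+
+    async def wait_for_events(self, correlation_id: int, kind: str) -> list[Event]:
+        # Suspend only this caller; other tasks keep running while we wait.
+        while True:
+            matching = [e for e in self._events if e.correlation_id == correlation_id and e.kind == kind]
+            if matching:
+                return matching
+            self._changed.clear()
+            await self._changed.wait()
+
+async def main() -> None:
+    session = FakeSession()
+    correlation_id = session.dispatch("How do I use the product?")
+    # The client is free to do other work here; the reply arrives as an event.
+    reply, *_ = await session.wait_for_events(correlation_id, kind="message")
+    print(reply.data)
+
+asyncio.run(main())
+```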
diff --git a/src/parlant/api/sessions.py b/src/parlant/api/sessions.py
index 2a917b341..015c7145e 100644
--- a/src/parlant/api/sessions.py
+++ b/src/parlant/api/sessions.py
@@ -1340,27 +1340,32 @@ async def _add_agent_message(
if params.actions:
actions = [utterance_request_dto_to_utterance_request(a) for a in params.actions]
correlation_id = await application.utter(session, actions)
-
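+ # Utterance generation completes within application.utter(), so the correlated message event can be read back immediately.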
+ event, *_ = await session_store.list_events(
+ session_id=session_id,
+ correlation_id=correlation_id,
+ kinds=["message"],
+ )
+ return event_to_dto(event)
else:
correlation_id = await application.dispatch_processing_task(session)
- await session_listener.wait_for_events(
- session_id=session_id,
- correlation_id=correlation_id,
- timeout=Timeout(60),
- )
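+ # Processing runs in the background; wait (up to 60 seconds) for events correlated with this dispatch before reading back the status event.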
+ await session_listener.wait_for_events(
+ session_id=session_id,
+ correlation_id=correlation_id,
+ timeout=Timeout(60),
+ )
- event = next(
- iter(
- await session_store.list_events(
- session_id=session_id,
- correlation_id=correlation_id,
- kinds=["status"],
+ event = next(
+ iter(
+ await session_store.list_events(
+ session_id=session_id,
+ correlation_id=correlation_id,
+ kinds=["status"],
+ )
)
)
- )
- return event_to_dto(event)
+ return event_to_dto(event)
async def _add_human_agent_message_on_behalf_of_ai_agent(
session_id: SessionIdPath,
diff --git a/src/parlant/core/engines/alpha/engine.py b/src/parlant/core/engines/alpha/engine.py
index 09e4c3632..a94ef48ac 100644
--- a/src/parlant/core/engines/alpha/engine.py
+++ b/src/parlant/core/engines/alpha/engine.py
@@ -149,19 +149,19 @@ async def utter(
interaction_state = await self._load_interaction_state(context)
try:
with self._logger.operation(
- f"Uttering actions '{[r.action for r in requests]}' for session {context.session_id}"
+ f"Uttering in session {context.session_id} using actions '{[r.action for r in requests]}'"
):
await self._do_utter(context, interaction_state, event_emitter, requests)
return True
except asyncio.CancelledError:
- self._logger.warning(f"Uttering for session {context.session_id} was cancelled.")
+ self._logger.warning(f"Uttering in session {context.session_id} was cancelled.")
return False
except Exception as exc:
formatted_exception = traceback.format_exception(type(exc), exc, exc.__traceback__)
self._logger.error(
- f"Error during uttering for session {context.session_id}: {formatted_exception}"
+ f"Error during uttering in session {context.session_id}: {formatted_exception}"
)
await event_emitter.emit_status_event(
@@ -176,7 +176,7 @@ async def utter(
except BaseException as exc:
self._logger.critical(
- f"Critical error during uttering for session {context.session_id}: "
+ f"Critical error during uttering in session {context.session_id}: "
f"{traceback.format_exception(type(exc), exc, exc.__traceback__)}"
)
raise
diff --git a/tests/api/test_sessions.py b/tests/api/test_sessions.py
index e4f1535c9..7810e4891 100644
--- a/tests/api/test_sessions.py
+++ b/tests/api/test_sessions.py
@@ -1033,4 +1033,5 @@ async def test_that_an_agent_message_can_be_generated_from_utterance_requests(
)
assert len(events) == 1
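+ # The message event listed for the session should be the same event returned when it was generated.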
+ assert events[0]["id"] == event["id"]
assert "thinking" in events[0]["data"]["message"].lower()
diff --git a/tests/core/common/engines/alpha/steps/engines.py b/tests/core/common/engines/alpha/steps/engines.py
index 2703accb9..5bfff00fa 100644
--- a/tests/core/common/engines/alpha/steps/engines.py
+++ b/tests/core/common/engines/alpha/steps/engines.py
@@ -14,7 +14,7 @@
import asyncio
from typing import cast
-from pytest_bdd import given, when
+from pytest_bdd import given, when, parsers
from unittest.mock import AsyncMock
from parlant.core.agents import Agent, AgentId, AgentStore
@@ -22,7 +22,7 @@
from parlant.core.engines.alpha.engine import AlphaEngine
from parlant.core.engines.alpha.message_event_generator import MessageEventGenerator
from parlant.core.emissions import EmittedEvent
-from parlant.core.engines.types import Context
+from parlant.core.engines.types import Context, UtteranceReason, UtteranceRequest
from parlant.core.emission.event_buffer import EventBuffer
from parlant.core.sessions import SessionId, SessionStore
@@ -45,6 +45,26 @@ def given_a_faulty_message_production_mechanism(
generator.generate_events = AsyncMock(side_effect=Exception()) # type: ignore
+@step(
+ given,
+ parsers.parse('an utterance request "{action}", to {do_something}'),
+)
+def given_an_utterance_request(
+ context: ContextOfTest, action: str, do_something: str
+) -> UtteranceRequest:
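+ # Map the scenario phrasing to its UtteranceReason and queue the request for when uttering is triggered.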
+ utterance_request = UtteranceRequest(
+ action=action,
+ reason={
+ "follow up with the customer": UtteranceReason.FOLLOW_UP,
+ "buy time": UtteranceReason.BUY_TIME,
+ }[do_something],
+ )
+
+ context.actions.append(utterance_request)
+
+ return utterance_request
+
+
@step(when, "processing is triggered", target_fixture="emitted_events")
def when_processing_is_triggered(
context: ContextOfTest,
diff --git a/tests/core/common/engines/alpha/steps/events.py b/tests/core/common/engines/alpha/steps/events.py
index 6561da4c4..b948e5c26 100644
--- a/tests/core/common/engines/alpha/steps/events.py
+++ b/tests/core/common/engines/alpha/steps/events.py
@@ -20,7 +20,6 @@
from parlant.core.customers import CustomerStore
from parlant.core.engines.alpha.utils import emitted_tool_event_to_dict
from parlant.core.emissions import EmittedEvent
-from parlant.core.engines.types import UtteranceReason, UtteranceRequest
from parlant.core.nlp.moderation import ModerationTag
from parlant.core.sessions import (
MessageEventData,
@@ -195,36 +194,6 @@ def given_a_flagged_customer_message(
return session.id
-@step(
- given,
- parsers.parse('an utterance with action of "{action}", to follow up with the customer'),
-)
-def given_a_follow_up_utterance_request(
- context: ContextOfTest,
- action: str,
-) -> UtteranceRequest:
- utterance_request = UtteranceRequest(action=action, reason=UtteranceReason.FOLLOW_UP)
-
- context.actions.append(utterance_request)
-
- return utterance_request
-
-
-@step(
- given,
- parsers.parse('an utterance with action of "{action}", to buy time'),
-)
-def given_a_buy_time_utterance_request(
- context: ContextOfTest,
- action: str,
-) -> UtteranceRequest:
- utterance_request = UtteranceRequest(action=action, reason=UtteranceReason.BUY_TIME)
-
- context.actions.append(utterance_request)
-
- return utterance_request
-
-
@step(
when,
parsers.parse("the last {num_messages:d} messages are deleted"),
diff --git a/tests/core/stable/engines/alpha/features/baseline/utterances.feature b/tests/core/stable/engines/alpha/features/baseline/utterances.feature
index 0f9545524..5f0bf7b3a 100644
--- a/tests/core/stable/engines/alpha/features/baseline/utterances.feature
+++ b/tests/core/stable/engines/alpha/features/baseline/utterances.feature
@@ -4,20 +4,14 @@ Feature: Utterances
And an agent
And an empty session
- Scenario: A buy-time message is determined by the actions sent from utter engine operation
- Given an utterance with action of "inform the user that more information is coming", to buy time
+ Scenario: The agent utters a message aligned with an action to buy time
+ Given an utterance request "inform the customer that more information is coming", to buy time
When uttering is triggered
Then a single message event is emitted
- And the message contains that more information is coming
+ And the message mentions that more information is coming
- Scenario: A buy-time message of thinking as an utter action
- Given an utterance with action of "tell the user 'Thinking...'", to buy time
+ Scenario: The agent utters a message aligned with an action to follow up with the customer
+ Given an utterance request "suggest proceeding to checkout", to follow up with the customer
When uttering is triggered
Then a single message event is emitted
- And the message contains thinking
-
- Scenario: A follow-up message is determined by the actions sent from utter engine operation
- Given an utterance with action of "ask the user if he need assistant with the blue-yellow feature", to follow up with the customer
- When uttering is triggered
- Then a single message event is emitted
- And the message contains asking the user if he need help with the blue-yellow feature
\ No newline at end of file
+ And the message mentions proceeding to checkout