Chat Protocol Integration - ChatAgent object Guide (Experimental)
Overview
In the uAgents Framework, the ChatAgent class is an experimental, high-level abstraction built on top of the Agent class. It is designed to make it easy to turn an existing uAgents agent into a conversational, tool-using LLM agent with minimal extra wiring.
At a glance, the ChatAgent class allows you to:
- Integrate an LLM (ASI:One, OpenAI, Anthropic, etc.).
- Automatically expose protocol handlers as LLM tools.
- Maintain conversation history for multi-turn interactions by default.
If you don’t need an LLM or conversational behavior, a standard Agent is still the right choice.
If you already understand protocols, agent communication, and handlers in uAgents, ChatAgent lets you reuse that logic directly in an LLM-powered chat interface. The ChatAgent class makes it easy to integrate the Chat Protocol, but remember that this is still an experimental feature. For a deeper overview of how to create a Chat Protocol compatible uAgent manually, see the step-by-step example in the documentation. Essentially, you can build the same kind of ASI:One compatible uAgents either way, but ChatAgent is a higher-level abstraction that removes most of the manual work required when integrating the Chat Protocol into an Agent yourself. In fact, ChatAgent works by:
- Taking your existing protocol handlers.
- Converting them into LLM tools.
- Letting the LLM choose which tool to call.
- Executing the corresponding handler.
- Feeding the result back into the conversation.
You focus on defining what your uAgent can do, not how the LLM calls it.
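To make the handler-to-tool idea concrete, here is a minimal sketch of how a request model could be mapped to an OpenAI-style tool schema. This is an illustration only, not ChatAgent's real internals — the function name build_tool_schema and the field-spec format are hypothetical:

```python
# Hypothetical sketch: turning a message model into an LLM tool definition.
# This mirrors the idea behind ChatAgent, not its actual implementation.

def build_tool_schema(name: str, description: str, fields: dict) -> dict:
    """Build an OpenAI-style function-tool schema from simple field specs."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": {
                    field: {"type": ftype, "description": fdesc}
                    for field, (ftype, fdesc) in fields.items()
                },
                "required": list(fields),
            },
        },
    }

# A WordCountRequest-like model becomes a callable tool definition:
schema = build_tool_schema(
    "WordCountRequest",
    "Count the words in a piece of text",
    {"text": ("string", "Text to count words in")},
)
print(schema["function"]["name"])  # WordCountRequest
```

Once handlers are exposed in a shape like this, the LLM can select among them by name and supply arguments that match the model's fields.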
Installation
Install uAgents with LLM support enabled:

```
pip install uagents[llm]
```

Or, if you want everything included:

```
pip install uagents[all]
```

Example
Below is a minimal word counter chat uAgent:
```python
from uagents import Context, Field, Model, Protocol
from uagents.experimental.chat_agent import ChatAgent

agent = ChatAgent(name="chat-bob", mailbox=True)

proto = Protocol(name="WordCounter", version="0.1.0")

class WordCountRequest(Model):
    text: str = Field(..., description="Text to count words in")

class WordCountResponse(Model):
    count: int = Field(..., description="Number of words")

def count_words(s: str) -> int:
    return len([w for w in s.split() if w.strip()])

@proto.on_message(WordCountRequest)
async def handle_word_count_request(ctx: Context, sender: str, msg: WordCountRequest):
    ctx.logger.info(f"Received word count request from {sender}: {msg.text[:50]}...")
    word_count = count_words(msg.text)
    ctx.logger.info(f"Word count: {word_count}")
    await ctx.send(sender, WordCountResponse(count=word_count))

agent.include(proto)

if __name__ == "__main__":
    agent.run()
```

The handler is automatically exposed to the LLM as a tool through the ChatAgent class. By including the protocol, the agent's manifest is published and registered, allowing other agents and ASI:One to discover its word-counting capability. At runtime, the LLM can then decide to invoke the WordCountRequest tool whenever it needs to determine the number of words in a given piece of text.
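The runtime flow — the model picks a tool, the matching handler runs, and the result is fed back into the conversation — can be pictured with a small dispatch loop. This is a hypothetical sketch of the cycle, not ChatAgent's real internals:

```python
# Hypothetical dispatch loop mirroring the tool-call cycle described above.
def count_words(s: str) -> int:
    return len([w for w in s.split() if w.strip()])

# Map tool names to the handlers that service them.
TOOLS = {"WordCountRequest": lambda args: {"count": count_words(args["text"])}}

def run_turn(tool_call: dict, history: list) -> list:
    """Execute the tool the model chose and append the result to the chat."""
    result = TOOLS[tool_call["name"]](tool_call["arguments"])
    history.append({"role": "tool", "name": tool_call["name"], "content": result})
    return history

# Simulate the model choosing the word-count tool for a user message.
history = run_turn(
    {"name": "WordCountRequest", "arguments": {"text": "hello agent world"}},
    [],
)
print(history[-1]["content"])  # {'count': 3}
```

In ChatAgent this loop, the tool registry, and the history bookkeeping are all handled for you; the sketch only shows where your handler sits in the cycle.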
LLM Configuration
ChatAgent connects to a language model through an LLMConfig. This configuration defines which provider is used and how the model behaves. In most cases, you can rely on the default configuration. The LLM behavior is controlled using two objects:
- LLMConfig: provider, model, endpoint, and credentials.
- LLMParams: generation and tool-calling behavior.
For each LLM you want to integrate, you will need to set several parameters, including an API key for the model provider you select.
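Rather than hard-coding API keys as in the placeholder examples below, you may want to read them from the environment. A minimal sketch — the variable name ASI1_API_KEY is an example, not one mandated by uAgents:

```python
import os

def get_api_key(var_name: str = "ASI1_API_KEY") -> str:
    """Read the provider API key from the environment, failing loudly if absent."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before starting the agent")
    return key
```

You can then pass get_api_key() wherever the examples below use a "YOUR_..._API_KEY" placeholder.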
Default Configuration (ASI:One)
This configuration uses ASI:One out of the box and requires no additional setup beyond an ASI:One API key, which you can obtain from the ASI:One platform.
```python
from uagents.experimental.chat_agent import ChatAgent, LLMConfig

asione_config = LLMConfig.asi1(
    api_key="YOUR_ASIONE_API_KEY",
)

agent = ChatAgent(
    name="MathChat",
    llm_config=asione_config,
)
```

OpenAI Configuration Example
```python
from uagents.experimental.chat_agent import LLMParams, LLMConfig

openai_config = LLMConfig(
    provider="openai",
    api_key="YOUR_OPENAI_API_KEY",
    model="gpt-5-mini",
    url="https://api.openai.com/v1",
    parameters=LLMParams(temperature=1),
)

agent = ChatAgent(name="MathChat", llm_config=openai_config)
```

Anthropic Configuration Example
```python
from uagents.experimental.chat_agent import LLMParams, LLMConfig

claude_config = LLMConfig(
    provider="anthropic",
    api_key="YOUR_ANTHROPIC_API_KEY",
    model="claude-4-5-haiku",
    url="https://api.anthropic.com/v1/messages",
    parameters=LLMParams(),
)

agent = ChatAgent(name="MathChat", llm_config=claude_config)
```

Note: You can swap configurations without changing any agent logic.
LLM Default Parameters and Behaviour
By default, ChatAgent is configured for tool-first and deterministic behavior.
Default Parameters
- temperature = 0.0 -> makes responses as deterministic as possible.
- max_tokens = 1024 -> maximum length of each model response.
- tool_choice = "required" -> if tools are available, the model must choose one instead of responding directly on the first step.
- parallel_tool_calls = False -> only one tool call is allowed per turn.
- System prompt: a built-in prompt enforces rules such as:
  - Use tools when available;
  - Select exactly one tool;
  - Base reasoning on the latest user message.
- LLMParams uses extra="allow", which means you can pass provider-specific parameters (e.g. top_p, frequency_penalty, stop) through to tailor behavior for your model without changing the core code.
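The pass-through behavior of extra="allow" can be pictured with a minimal stand-in class. This is plain Python, not the real LLMParams implementation — it only illustrates the semantics of keeping unknown fields and forwarding them in the provider payload:

```python
# Minimal stand-in for a params object with extra="allow" semantics:
# known fields get defaults, unknown keyword arguments are kept and
# forwarded to the provider payload unchanged.
class ParamsSketch:
    def __init__(self, temperature: float = 0.0, max_tokens: int = 1024, **extra):
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.extra = extra  # e.g. top_p, frequency_penalty, stop

    def to_payload(self) -> dict:
        payload = {"temperature": self.temperature, "max_tokens": self.max_tokens}
        payload.update(self.extra)  # provider-specific fields ride along
        return payload

params = ParamsSketch(temperature=0.2, top_p=0.9, stop=["\n\n"])
print(params.to_payload())
# {'temperature': 0.2, 'max_tokens': 1024, 'top_p': 0.9, 'stop': ['\n\n']}
```

With the real LLMParams, you simply pass the extra keyword arguments at construction time and they reach the provider request untouched.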
Conclusions
ChatAgent provides a streamlined way to build ASI:One compatible uAgents that are both discoverable and conversational, without requiring you to manually implement the Chat Protocol or the LLM integration. By publishing standard protocols and registering through Agentverse, a ChatAgent-based uAgent participates fully in the existing discovery pipeline while gaining built-in memory and tool-based reasoning. This lets you focus on defining clear, useful capabilities, confident that your uAgent can be found, understood, and used effectively by ASI:One, Agentverse, and the wider agents ecosystem. For additional information, check out our step-by-step example of creating a Chat Protocol compatible uAgent without the ChatAgent class.