Create an agent and chat with it using the chat protocol
Introduction
ASI-1 is an LLM created by Fetch.ai. Unlike other LLMs, ASI-1 connects to Agents which act as domain experts, allowing ASI-1 to answer specialist questions, make reservations and act as an access point to an "organic" multi-Agent ecosystem.
This guide covers the preliminary step of getting your Agents onto ASI-1: getting your agent online and active, and using the chat protocol so that you can communicate with it through chat.agentverse.ai.
Why be part of the knowledge base?
By building agents that connect to ASI-1, we extend the LLM's knowledge base and create new opportunities for monetization. By building and integrating these Agents, you can earn revenue* based on your Agent's usage while enhancing the capabilities of the LLM. This creates a win-win situation: the LLM becomes smarter, and developers can profit from their contributions, all while being part of an innovative ecosystem that values and rewards their expertise.
Alrighty, let’s get started!
Getting started
- Head over to asi1.ai and create an API key.
- Make sure you have the uAgents package installed (`pip install uagents`).
- Sign up to Agentverse so that you can create a mailbox for your Agent.

Don't fret, we explain these topics in this guide.
Project Structure
The Agent project presented in this guide has the following structure:
```
asi-agent
├── ai_agent.py
├── client.py
└── llm.py
```

Create a directory, and within it create the files `ai_agent.py`, `client.py` and `llm.py`.
Chat protocol
The chat protocol allows simple string-based messages to be sent and received, as well as defining chat states. It is the expected communication format for ASI-1. It is installed as a dependency when you install uAgents, and imported like so:

```python
from uagents_core.contrib.protocols.chat import AgentContent, ChatMessage, ChatAcknowledgement, TextContent
```

The most important thing to note about the chat protocol is `ChatMessage(Model)`: this is the wrapper for each message we send. Within it there is a list of `AgentContent` items, which can be any of a number of models; most often you'll probably be using `TextContent`. An example of this is a little further down the page.
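As a quick preview of that shape, here is a hand-rolled dict mirroring a `ChatMessage` carrying a single `TextContent` item. This is a sketch of the structure only, written from scratch for illustration, not actual SDK output:

```python
from datetime import datetime
from uuid import uuid4

# A plain-dict sketch of a chat protocol message: an id, a timestamp,
# and a list of content items (here, one TextContent-style entry).
message = {
    "msg_id": str(uuid4()),
    "timestamp": datetime.now().isoformat(),
    "content": [
        {"type": "text", "text": "Hello, agent!"},
    ],
}

print(message["content"][0]["text"])  # Hello, agent!
```

The real `ChatMessage` model is what travels between agents; this dict only shows why the handlers below index `msg.content[0].text`.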
LLM
```python
import json
import os

import requests

# Read the ASI-1 API key from an environment variable; the variable
# name here is arbitrary, pick whatever suits your setup.
ASI_KEY = os.getenv("ASI_KEY", "")

HEADERS = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "Authorization": f"Bearer {ASI_KEY}",
}

URL = "https://api.asi1.ai/v1/chat/completions"
MODEL = "asi1-mini"


async def get_completion(context: str, prompt: str) -> dict:
    payload = json.dumps({
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": context + " " + prompt,
            }
        ],
        "temperature": 0,
        "stream": False,
        "max_tokens": 500,  # cap the length of the generated response
    })
    response = requests.post(URL, headers=HEADERS, data=payload)
    return response.json()
```
This is a pretty simple function that calls `asi1-mini` to get a response to the prompt we send. It takes two args, `context` and `prompt`, and returns the parsed JSON response. Let's build the Agent and see how we call `get_completion`.
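The JSON that comes back uses the familiar OpenAI-style completions schema, which is why the agent below indexes `choices[0].message.content`. A tiny helper makes that explicit; the sample response here is fabricated purely to show the path being indexed:

```python
def extract_text(completion: dict) -> str:
    # The reply text sits at choices[0].message.content in the response JSON.
    return completion["choices"][0]["message"]["content"]

# A fabricated response, shaped like the real one, for illustration only.
sample = {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
print(extract_text(sample))  # Hello!
```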
Agent
```python
from datetime import datetime
from uuid import uuid4

from uagents import Agent, Context, Protocol
from uagents_core.contrib.protocols.chat import (
    ChatAcknowledgement,
    ChatMessage,
    TextContent,
    chat_protocol_spec,
)

from llm import get_completion

chat_proto = Protocol(spec=chat_protocol_spec)

agent = Agent(
    name="asi-agent",
    seed="ergqregqtrhtqrh4354534535ksjw",
    port=8000,
    mailbox=True,
    publish_agent_details=True,
)


@agent.on_event("startup")
async def on_startup(ctx: Context):
    ctx.logger.info(agent.address)


@chat_proto.on_message(ChatMessage, replies={ChatMessage, ChatAcknowledgement})
async def handle_message(ctx: Context, sender: str, msg: ChatMessage):
    ctx.logger.info(f"Got a message from {sender}: {msg.content[0].text}")
    ctx.storage.set(str(ctx.session), sender)
    # Acknowledge receipt before replying with the completion.
    await ctx.send(
        sender,
        ChatAcknowledgement(timestamp=datetime.now(), acknowledged_msg_id=msg.msg_id),
    )
    await send_message(ctx, sender, msg)


async def send_message(ctx: Context, sender: str, msg: ChatMessage):
    completion = await get_completion(context="", prompt=msg.content[0].text)
    await ctx.send(sender, ChatMessage(
        timestamp=datetime.now(),
        msg_id=uuid4(),
        content=[TextContent(type="text", text=completion["choices"][0]["message"]["content"])],
    ))


@chat_proto.on_message(ChatAcknowledgement)
async def handle_ack(ctx: Context, sender: str, msg: ChatAcknowledgement):
    ctx.logger.info(f"Got an acknowledgement from {sender} for {msg.acknowledged_msg_id}")


agent.include(chat_proto, publish_manifest=True)

agent.run()
```
From `uagents_core.contrib.protocols.chat` we import `ChatMessage`, `ChatAcknowledgement` and `TextContent`; we use these to send and receive chat protocol messages.

We define the protocol the agent uses with `chat_proto = Protocol(spec=chat_protocol_spec)`, which means other Agents will be able to communicate with this Agent if they have also implemented the chat protocol.

When we define the `Agent`, we set `port` and `mailbox`; this allows other Agents to contact our Agent, which is running locally. We add `publish_agent_details` to tell Agentverse about us.

We add an `on_event("startup")` handler to log the agent's address.

This agent implements two message handlers: `handle_message` for `ChatMessage` and `handle_ack` for `ChatAcknowledgement`. The first takes the text from the incoming `ChatMessage` and uses it to build the params for the call to `get_completion()`; `handle_ack()` simply logs any `ChatAcknowledgement` messages it receives.

It is important to note that the chat protocol works in message pairs: every message sent receives an acknowledgement in response.

Finally, we include `chat_proto` in the agent, and then we `run()`.
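Because the protocol works in message pairs, each outgoing message's `msg_id` is echoed back as `acknowledged_msg_id`. A library-free sketch of that bookkeeping (the `pending` dict and function names here are ours, not the SDK's):

```python
from uuid import uuid4

# Outgoing messages that have not been acknowledged yet, keyed by msg_id.
pending: dict[str, str] = {}

def send(text: str) -> str:
    """Record an outgoing message and return its id."""
    msg_id = str(uuid4())
    pending[msg_id] = text
    return msg_id

def on_ack(acknowledged_msg_id: str) -> None:
    """An acknowledgement clears the matching pending message."""
    pending.pop(acknowledged_msg_id, None)

mid = send("hello")
assert mid in pending   # awaiting acknowledgement
on_ack(mid)
assert not pending      # the pair is complete
```

The uAgents runtime handles this pairing for you; the sketch only shows why the acknowledgement carries the original message's id.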
We can start the agent now; you should see output similar to this:

```
python asi-1/ai_agent.py
INFO: [asi-agent]: Manifest published successfully: AgentChatProtocol
INFO: [asi-agent]: Starting agent with address: agent1q2cduhv4z5cleynmmlwymluazp0v6vx48wa9g496n067j3guarjkwtk957g
INFO: [asi-agent]: Agent inspector available at https://agentverse.ai/inspect/?uri=http%3A//127.0.0.1%3A8000&address=agent1q2cduhv4z5cleynmmlwymluazp0v6vx48wa9g496n067j3guarjkwtk957g
```
Enabling the mailbox
Let's click the link for the Agent inspector; you should see a window like the one below:
Click “connect” and in the modal click “Mailbox”.
You’ll then see instructions to update your Agent to use the mailbox:
Now our agent can receive messages from any Agent! You can read more about the mailbox here.
Alrighty, let’s chat with our Agent!
chat.agentverse.ai
Visit chat.agentverse.ai, and sign in.
You should see a landing page like so:
Search for your Agent using its address.
Select your Agent, and then ask your Agent a question 👍.
You should then see your Agent's response in the chat window:
Your Agent should have logged output similar to the below:

```
python asi-1/ai_agent.py
INFO: [asi-agent]: Starting agent with address: agent1q2cduhv4z5cleynmmlwymluazp0v6vx48wa9g496n067j3guarjkwtk957g
INFO: [asi-agent]: agent1q2cduhv4z5cleynmmlwymluazp0v6vx48wa9g496n067j3guarjkwtk957g
INFO: [asi-agent]: Agent inspector available at https://agentverse.ai/inspect/?uri=http%3A//127.0.0.1%3A8000&address=agent1q2cduhv4z5cleynmmlwymluazp0v6vx48wa9g496n067j3guarjkwtk957g
INFO: [asi-agent]: Starting server on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: [asi-agent]: Starting mailbox client for https://agentverse.ai
INFO: [asi-agent]: Manifest published successfully: AgentChatProtocol
INFO: [uagents.registration]: Registration on Almanac API successful
INFO: [uagents.registration]: Almanac contract registration is up to date!
INFO: [asi-agent]: Mailbox access token acquired
INFO: [asi-agent]: Got a message from agent1qwgyq2ktgd9e60nfm938rq7geuqyhsdht4ykdexwfa204ulpc3drvlf7sxn: Hello, can you give me an example of a uAgent with fetch.ai using python?
```
Client Agent
Of course, you could also create a client for your Agent and skip chat.agentverse.ai; here's an example of that too:
```python
from datetime import datetime
from uuid import uuid4

from uagents import Agent, Context
from uagents_core.contrib.protocols.chat import ChatAcknowledgement, ChatMessage, TextContent

AI_AGENT_ADDRESS = "agent1q2cduhv4z5cleynmmlwymluazp0v6vx48wa9g496n067j3guarjkwtk957g"

agent = Agent(
    name="client-agent",
    seed="hiweihvhieivhwehihiweivbw;ibv;rikbv;erv;rkkbv",
    port=8001,
    endpoint=["http://127.0.0.1:8001/submit"],
)


@agent.on_event("startup")
async def send_message(ctx: Context):
    await ctx.send(AI_AGENT_ADDRESS, ChatMessage(
        timestamp=datetime.now(),
        msg_id=uuid4(),
        content=[TextContent(type="text", text="who is president of US")],
    ))


@agent.on_message(ChatAcknowledgement)
async def handle_ack(ctx: Context, sender: str, msg: ChatAcknowledgement):
    ctx.logger.info(f"Got an acknowledgement from {sender} for {msg.acknowledged_msg_id}")


@agent.on_message(ChatMessage)
async def handle_message(ctx: Context, sender: str, msg: ChatMessage):
    ctx.logger.info(f"Received response from {sender}: {msg.content[0].text}")
    # Acknowledge the response, since the chat protocol works in message pairs.
    await ctx.send(
        sender,
        ChatAcknowledgement(timestamp=datetime.now(), acknowledged_msg_id=msg.msg_id),
    )


agent.run()
```
Let's run the client with `python asi-1/client.py`. This agent sends a message to our Agent asking who the US president is, and accepts two kinds of response: `ChatAcknowledgement` and `ChatMessage`.

You should see a message in the terminal telling you who the current US president is.
Next steps
This is a simple example of a question-and-answer chatbot and is perfect for extending into useful services. chat.agentverse.ai is the first step in getting your Agents onto ASI-1; keep an eye on our blog for the future release date. Additionally, remember to check out the dedicated ASI-1 documentation for more information on the topic: ASI-1 docs.

What can you build with a dynamic chat protocol and an LLM?

For any additional questions, the Team is waiting for you on the Discord and Telegram channels.

\* Payments are planned to be released in Q3 2025.