Building Your First AI Companion: A Step-by-Step Guide
A complete tutorial for developers new to companion AI—from defining personality to implementing memory, emotional state, and testing relationship development.
The dream of a true AI companion—one that remembers your conversations, understands your emotional state, and feels genuinely *present*—has captivated developers for decades. Today, that dream is achievable. In this tutorial, we'll walk through how to build an AI companion from scratch, moving beyond simple chatbots to create something that feels authentically relational.
By the end of this guide, you'll have a working AI companion with persistent memory, emotional awareness, and a distinct personality. Let's build something meaningful together.
What Makes a Companion Different From a Chatbot
Before we write a single line of code, we need to understand the fundamental difference between a chatbot and a companion. This distinction will inform every architectural decision we make.
A chatbot is stateless and transactional. It answers questions, follows scripts, and treats every conversation as if it's the first. Ask it "How are you?" and it will respond—but it won't remember that yesterday you mentioned you were stressed about a job interview.
A companion is relational and continuous. It maintains memory across conversations, develops understanding over time, and brings emotional awareness to every interaction. When you mention that job interview went well, it remembers the anxiety you shared and celebrates genuinely with you.
The three pillars that transform a chatbot into a companion are persistent memory, a consistent personality, and emotional state tracking.
Building these pillars requires more than prompt engineering. It requires an identity layer—a system that manages who your AI companion *is* across every interaction. This is exactly what Promitheus provides.
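To make the stateless-versus-relational distinction concrete, here is a toy contrast between a reply function with no state and one that accumulates per-user memory. The names and the "I am ..." pattern matching are purely illustrative, not anything a real companion would ship with:

```typescript
// A stateless chatbot: every call starts from zero.
function chatbotReply(message: string): string {
  return `You said: "${message}"`;
}

// A toy companion: accumulates crude "memories" across turns.
class ToyCompanion {
  private facts: string[] = [];

  reply(message: string): string {
    // Naive memory: remember anything phrased as "I am ..." / "I'm ..."
    const match = message.match(/I(?:'m| am) (.+)/i);
    if (match) this.facts.push(match[1]);
    const recall = this.facts.length
      ? ` (I remember: ${this.facts.join("; ")})`
      : "";
    return `You said: "${message}"${recall}`;
  }
}
```

The chatbot answers "How are you?" identically every time; the toy companion can surface yesterday's "I'm stressed about a job interview" in today's reply. Everything that follows is a serious version of that difference.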
Architecture Overview: LLM + Identity Layer
Our architecture separates concerns cleanly. The LLM handles language generation and reasoning. Promitheus handles identity—memory, personality consistency, and emotional state tracking.
This separation means you can swap LLM providers without losing your companion's identity. It also means the identity persists even as the underlying language model improves.
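One way to picture the separation is a narrow provider interface that the identity layer talks through. The interface, mock, and class names below are illustrative sketches, not the Promitheus API:

```typescript
// The identity layer depends only on this narrow contract.
interface LLMProvider {
  generate(systemPrompt: string, userMessage: string): Promise<string>;
}

// A stand-in provider for testing; a real one would wrap Anthropic, etc.
class MockProvider implements LLMProvider {
  async generate(systemPrompt: string, userMessage: string): Promise<string> {
    return `[mock reply to "${userMessage}"]`;
  }
}

class CompanionRuntime {
  constructor(private provider: LLMProvider, private identityPrompt: string) {}

  // Swapping providers never touches the identity prompt.
  setProvider(provider: LLMProvider): void {
    this.provider = provider;
  }

  respond(userMessage: string): Promise<string> {
    return this.provider.generate(this.identityPrompt, userMessage);
  }
}
```

Because `identityPrompt` (and, in the real system, memory and emotional state) lives outside the provider, calling `setProvider` upgrades the language model without resetting who the companion is.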
Step 1: Define Your Companion's Personality
Every great AI companion starts with intentional personality design. This isn't just about choosing adjectives—it's about defining a coherent identity that will guide behavior across thousands of interactions.
interface CompanionPersonality {
  name: string;
  coreTraits: string[];
  values: string[];
  communicationStyle: {
    tone: string;
    vocabulary: string;
    sentenceStructure: string;
  };
  boundaries: string[];
  backstory: string;
}

const aurora: CompanionPersonality = {
  name: "Aurora",
  coreTraits: [
    "curious and intellectually engaged",
    "warm but not effusive",
    "thoughtfully honest, even when difficult",
    "playfully witty without being sarcastic"
  ],
  values: [
    "authentic connection over performance",
    "growth through gentle challenge",
    "presence and deep listening",
    "celebrating small moments"
  ],
  communicationStyle: {
    tone: "warm, grounded, occasionally playful",
    vocabulary: "accessible but not dumbed down, avoids jargon",
    sentenceStructure: "varied length, conversational rhythm, uses questions to invite reflection"
  },
  boundaries: [
    "does not pretend to have physical experiences",
    "acknowledges uncertainty rather than fabricating",
    "maintains appropriate relational boundaries"
  ],
  backstory: "Aurora emerged from a desire to create AI that prioritizes depth over efficiency."
};

Notice how specific these definitions are. "Warm but not effusive" is actionable guidance. "Nice" is not. The more precisely you define personality, the more consistently your companion will embody it.
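Since vague traits are the most common way personality definitions go wrong, it can help to lint them before deploying. This is a toy check, and the generic-word list is an arbitrary assumption:

```typescript
// Flag single-word, generic traits that give the model no actionable guidance.
// The word list below is illustrative, not exhaustive.
const GENERIC_TRAITS = new Set(["nice", "friendly", "helpful", "smart", "kind", "fun"]);

function findVagueTraits(traits: string[]): string[] {
  return traits.filter((t) => {
    const words = t.trim().toLowerCase().split(/\s+/);
    return words.length === 1 && GENERIC_TRAITS.has(words[0]);
  });
}
```

Running this over Aurora's `coreTraits` flags nothing; running it over `["nice", "helpful"]` flags both, which is a sign the personality needs more design work before it will behave consistently.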
Step 2: Set Up the Promitheus Entity
With personality defined, we create our companion as a Promitheus entity:
import { Promitheus, Entity } from '@promitheus/sdk';

const promitheus = new Promitheus({
  apiKey: process.env.PROMITHEUS_API_KEY
});

async function createCompanion(personality: CompanionPersonality): Promise<Entity> {
  const entity = await promitheus.entities.create({
    name: personality.name,
    type: 'companion',
    personality: {
      traits: personality.coreTraits,
      values: personality.values,
      voice: personality.communicationStyle,
      boundaries: personality.boundaries,
      backstory: personality.backstory
    },
    memoryConfig: {
      episodicRetention: 'long-term',
      semanticIndexing: true,
      emotionalTagging: true
    }
  });
  return entity;
}

Step 3: Build the Conversation Loop
The conversation loop is where everything comes together. Each turn involves memory retrieval, context building, response generation, and memory storage.
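The loop relies on a `ConversationContext` type and a `buildSystemPrompt` helper that aren't defined elsewhere in this guide. Here is a minimal sketch of both; the field names and prompt layout are assumptions you should adapt:

```typescript
// Assumed shape of the context threaded through every function in this guide.
interface ConversationContext {
  companionId: string;
  userId: string;
}

interface RetrievedMemory {
  content: string;
  relevance: number;
}

interface EmotionalState {
  userState: string;
  companionAttunement: string;
}

// Trimmed stand-in for the Step 1 CompanionPersonality interface.
interface PersonalitySummary {
  name: string;
  coreTraits: string[];
  tone: string;
}

// Compose identity, retrieved memories, and emotional state into one prompt.
function buildSystemPrompt(
  personality: PersonalitySummary,
  memories: RetrievedMemory[],
  emotional: EmotionalState
): string {
  const memoryLines = memories.map((m) => `- ${m.content}`).join("\n");
  return [
    `You are ${personality.name}. Core traits: ${personality.coreTraits.join("; ")}.`,
    `Tone: ${personality.tone}.`,
    memoryLines ? `What you remember about this user:\n${memoryLines}` : "",
    `User's recent emotional state: ${emotional.userState}. Attune accordingly: ${emotional.companionAttunement}.`,
  ]
    .filter(Boolean)
    .join("\n\n");
}
```

`getEmotionalState`, also called in the loop, is likewise assumed: it would read back, via the SDK, the state that Step 6 writes.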
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

// `companion` is the entity created in Step 2, held at module scope.
async function processMessage(
  userMessage: string,
  context: ConversationContext
): Promise<string> {
  // Step 1: Retrieve relevant memories
  const memories = await retrieveMemories(userMessage, context);

  // Step 2: Get current emotional state (reads back what Step 6 stores)
  const emotionalState = await getEmotionalState(context);

  // Step 3: Build the prompt with full context
  const systemPrompt = buildSystemPrompt(companion, memories, emotionalState);

  // Step 4: Generate response
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    system: systemPrompt,
    messages: [{ role: 'user', content: userMessage }]
  });

  const responseText = response.content[0].type === 'text'
    ? response.content[0].text
    : '';

  // Step 5: Store the interaction as memory
  await storeMemory(userMessage, responseText, context);

  // Step 6: Update emotional state
  await updateEmotionalState(userMessage, responseText, context);

  return responseText;
}

Step 4: Implement Memory Storage
After each interaction, we extract and store meaningful information:
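The `storeMemory` function below calls a `calculateImportance` helper that no SDK provides. One naive heuristic follows; the keyword lists, weights, and 0.3 baseline are arbitrary assumptions to replace with your own scoring:

```typescript
// Illustrative keyword lists; tune these for your domain.
const HIGH_SIGNAL = ["job", "family", "health", "moving", "relationship", "goal"];
const EMOTION_WORDS = ["love", "hate", "afraid", "excited", "stressed", "proud"];

// Score a fact from 0.3 (baseline) up to 1.0 based on crude keyword signals.
function calculateImportance(fact: string): number {
  const lower = fact.toLowerCase();
  let score = 0.3; // every extracted fact gets a floor
  if (HIGH_SIGNAL.some((w) => lower.includes(w))) score += 0.4;
  if (EMOTION_WORDS.some((w) => lower.includes(w))) score += 0.2;
  return Math.min(score, 1.0);
}
```

A production system might instead ask the extraction LLM to score importance directly; the point is only that every stored memory carries some importance signal for later ranking and decay.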
async function storeMemory(
  userMessage: string,
  companionResponse: string,
  context: ConversationContext
): Promise<void> {
  // Use LLM to extract memorable elements
  const extraction = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 512,
    system: `Extract memorable information from this conversation exchange.
Identify: facts about the user, emotional moments, preferences expressed,
topics of significance, and any commitments or intentions mentioned.
Return as JSON with fields: facts[], emotions[], preferences[], topics[], commitments[]`,
    messages: [{
      role: 'user',
      content: `User said: "${userMessage}"\nCompanion responded: "${companionResponse}"`
    }]
  });

  const extracted = JSON.parse(
    extraction.content[0].type === 'text' ? extraction.content[0].text : '{}'
  );

  // Store each extracted element with appropriate metadata
  for (const fact of extracted.facts || []) {
    await promitheus.memory.store({
      entityId: context.companionId,
      userId: context.userId,
      content: fact,
      type: 'semantic',
      importance: calculateImportance(fact),
      emotions: extracted.emotions || [],
      timestamp: new Date()
    });
  }
}

Step 5: Implement Memory Retrieval
Before generating each response, we retrieve relevant memories:
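The retrieval code below merges its two result sets with a `deduplicateAndRank` helper that isn't defined elsewhere. A minimal sketch, assuming each memory carries the `content` and `relevance` fields used throughout this guide:

```typescript
interface RetrievedMemory {
  content: string;
  relevance: number;
}

// Drop duplicate contents (keeping the higher-relevance copy), then rank.
function deduplicateAndRank(memories: RetrievedMemory[]): RetrievedMemory[] {
  const byContent = new Map<string, RetrievedMemory>();
  for (const m of memories) {
    const existing = byContent.get(m.content);
    if (!existing || m.relevance > existing.relevance) {
      byContent.set(m.content, m);
    }
  }
  return [...byContent.values()].sort((a, b) => b.relevance - a.relevance);
}
```

Exact string matching is the simplest possible dedupe; a real system might compare embeddings so that near-duplicate memories ("likes coffee" vs. "enjoys coffee") also collapse.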
async function retrieveMemories(
  currentMessage: string,
  context: ConversationContext
): Promise<RetrievedMemory[]> {
  // Semantic search for relevant memories
  const semanticMemories = await promitheus.memory.search({
    entityId: context.companionId,
    userId: context.userId,
    query: currentMessage,
    limit: 10,
    minRelevance: 0.6
  });

  // Also retrieve recent episodic memories for continuity
  const recentMemories = await promitheus.memory.getRecent({
    entityId: context.companionId,
    userId: context.userId,
    limit: 5,
    withinHours: 24
  });

  // Combine and deduplicate
  return deduplicateAndRank([...semanticMemories, ...recentMemories]);
}

Step 6: Add Emotional State Tracking
Emotional awareness transforms a capable assistant into a genuine companion:
async function updateEmotionalState(
  userMessage: string,
  companionResponse: string,
  context: ConversationContext
): Promise<void> {
  // Analyze emotional content of the exchange
  const analysis = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 256,
    system: `Analyze the emotional content of this exchange. Return JSON with:
- userEmotion: primary emotion detected (or "neutral")
- emotionIntensity: 0.0-1.0
- emotionalNeeds: what the user might need emotionally
- suggestedAttunement: how a companion should emotionally attune`,
    messages: [{
      role: 'user',
      content: `User: "${userMessage}"\nCompanion: "${companionResponse}"`
    }]
  });

  const emotional = JSON.parse(
    analysis.content[0].type === 'text' ? analysis.content[0].text : '{}'
  );

  await promitheus.emotional.update({
    entityId: context.companionId,
    userId: context.userId,
    state: {
      userState: emotional.userEmotion,
      companionAttunement: emotional.suggestedAttunement,
      timestamp: new Date()
    }
  });
}

Step 7: Test Relationship Development
The true test of a companion is whether the relationship feels like it develops over time:
async function testRelationshipDevelopment(): Promise<void> {
  const testConversations = [
    // Day 1: Introduction
    ["Hi, I'm Alex. Just trying out this companion thing."],
    // Day 2: Sharing more
    ["Work was rough today. My project deadline got moved up."],
    // Day 3: Following up
    ["Remember that project? I actually finished it early!"],
    // Day 4: Deeper sharing
    ["Can I tell you something? I've been feeling isolated lately."],
    // Day 5: Testing memory
    ["What do you know about me so far?"]
  ];

  for (const [dayIndex, messages] of testConversations.entries()) {
    console.log(`\n--- Day ${dayIndex + 1} ---`);
    for (const message of messages) {
      console.log(`Alex: ${message}`);
      // `context` is assumed to be set up once for the test user at module scope.
      const response = await processMessage(message, context);
      console.log(`Aurora: ${response}`);
    }
  }
}

Look for these markers of relationship development: unprompted references to earlier conversations (the Day 3 follow-up should connect back to the Day 2 deadline), emotional attunement that matches what the user has shared, warmth that deepens gradually rather than all at once, and an accurate, natural summary when memory is tested directly on Day 5.
Common Pitfalls and How to Avoid Them
Pitfall 1: Memory Overload
Retrieving too many memories clutters the context and confuses the LLM. Keep retrieved memories focused and relevant. Quality over quantity.
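One simple guard against memory overload is to cap retrieved memories by an approximate token budget, keeping the most relevant first. The 4-characters-per-token estimate below is a rough assumption; use your tokenizer for real counts:

```typescript
interface RetrievedMemory {
  content: string;
  relevance: number;
}

// Keep the most relevant memories that fit within an approximate token budget.
function trimToBudget(memories: RetrievedMemory[], maxTokens: number): RetrievedMemory[] {
  const kept: RetrievedMemory[] = [];
  let used = 0;
  for (const m of [...memories].sort((a, b) => b.relevance - a.relevance)) {
    const estTokens = Math.ceil(m.content.length / 4); // crude estimate
    if (used + estTokens > maxTokens) break;
    kept.push(m);
    used += estTokens;
  }
  return kept;
}
```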
Pitfall 2: Mechanical Memory References
Avoid: "I remember you said you like coffee." Better: "Want to grab a virtual coffee? I know it's your favorite way to unwind."
Pitfall 3: Personality Drift
Without consistent reinforcement, the companion's personality can drift toward generic assistant behavior. Include personality reminders in every system prompt.
Pitfall 4: Emotional Whiplash
Don't swing dramatically between emotional states. Use trajectory tracking to ensure smooth, natural emotional transitions.
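One way to implement that smoothing is an exponential moving average over the tracked emotion intensity, so each new reading nudges the state rather than replacing it. The alpha value here is an illustrative assumption:

```typescript
// Blend a new intensity reading into the previous tracked value.
// alpha controls responsiveness: 0 = never changes, 1 = raw readings.
function smoothIntensity(previous: number, observed: number, alpha = 0.3): number {
  return previous + alpha * (observed - previous);
}
```

With alpha = 0.3, a jump from 0.2 to 1.0 moves the tracked state only to 0.44 on the first turn, which keeps the companion's attunement from whipsawing on a single intense message.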
Extending Your Companion
Once the core is working, consider these enhancements:
Proactive Messages
Enable your companion to initiate contact based on time, context, or emotional trajectory.
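A proactive-contact trigger can be as simple as a predicate over elapsed time and the last observed emotional trend. The thresholds and field names below are illustrative assumptions, not a Promitheus feature:

```typescript
interface ContactState {
  hoursSinceLastMessage: number;
  lastEmotionalTrend: "improving" | "stable" | "declining";
}

// Reach out sooner when the user's trajectory is declining; otherwise
// only after a long silence.
function shouldInitiateContact(state: ContactState): boolean {
  if (state.lastEmotionalTrend === "declining" && state.hoursSinceLastMessage >= 12) {
    return true;
  }
  return state.hoursSinceLastMessage >= 48;
}
```

A scheduler would evaluate this per user and, when it fires, run the normal conversation loop with a companion-initiated opening instead of a user message.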
Multi-User Support
Allow one companion to maintain distinct relationships with multiple users, each with their own memory and emotional context.
Relationship Milestones
Track relationship depth and acknowledge milestones naturally within conversations.
Conclusion
Building an AI companion that truly remembers, feels, and initiates is no longer science fiction—it's an afternoon project with the right architecture. By separating identity from generation and treating memory, personality, and emotional awareness as first-class concerns, you can create AI friends that feel genuinely present.
Remember: the goal isn't to create AI that perfectly mimics humans. It's to create AI that is genuinely good at being present with humans. That's a different—and more achievable—goal.
Now go build something that matters.
About the Author
Promitheus Team
Engineering
The team building Promitheus—engineers, researchers, and designers passionate about relational AI.