Configuring Memory

Agents can dock external knowledge bases (PostgreSQL, Pinecone, and IPFS) and use them at runtime.

The Memory Engine in the AXES framework empowers agents with the capability to retain, retrieve, and utilize information for intelligent, context-aware interactions. It serves as a core component for creating both long-term memory (persistent knowledge) and short-term memory (session-specific or temporary context). This section explains how developers can implement external memory systems for agents using PostgreSQL, Pinecone, and IPFS, along with mechanisms to maintain user-specific interaction histories. The example below shows an agent docking all three sources and combining them at runtime.

import { Client as PgClient } from "pg";
import { Pinecone } from "@pinecone-database/pinecone";
import axios from "axios";

// PostgreSQL Configuration
const postgresConfig = {
  host: "localhost",
  port: 5432,
  user: "your_user",
  password: "your_password",
  database: "agent_knowledge",
};

// Pinecone Configuration
// (recent Pinecone SDKs need only an API key; the legacy "environment"
// setting applied to the old pod-based PineconeClient and is omitted here)
const pineconeConfig = {
  apiKey: "your-pinecone-api-key",
  indexName: "agent-knowledge",
};

// IPFS Gateway
const ipfsGateway = "https://ipfs.io/ipfs/";

// Agent Knowledge Docking and Runtime
class Agent {
  private postgresClient: PgClient;
  private pineconeClient: Pinecone;
  private ipfsGateway: string;

  constructor() {
    // Create the PostgreSQL client (the connection is opened in connectKnowledgeBases)
    this.postgresClient = new PgClient(postgresConfig);

    // Create the Pinecone client; the current SDK is configured with the API key
    // and does not require an async init() call
    this.pineconeClient = new Pinecone({ apiKey: pineconeConfig.apiKey });

    this.ipfsGateway = ipfsGateway;
  }

  // Connect to external knowledge sources
  async connectKnowledgeBases() {
    try {
      await this.postgresClient.connect();
      console.log("Connected to PostgreSQL.");

      // Pinecone's client is stateless (each request goes over HTTPS),
      // so no explicit connection step is required here.
      console.log("Pinecone client ready.");
    } catch (error) {
      console.error("Error connecting to knowledge bases:", error);
    }
  }

  // Query PostgreSQL for structured knowledge
  async queryPostgres(query: string, params: any[] = []): Promise<any[]> {
    try {
      const result = await this.postgresClient.query(query, params);
      return result.rows;
    } catch (error) {
      console.error("Error querying PostgreSQL:", error);
      return [];
    }
  }

  // Query Pinecone for semantic search
  async queryPinecone(vector: number[], topK: number = 5): Promise<any[]> {
    try {
      const index = this.pineconeClient.index(pineconeConfig.indexName);
      const queryResult = await index.query({
        vector,
        topK,
        includeMetadata: true,
      });
      return queryResult.matches ?? [];
    } catch (error) {
      console.error("Error querying Pinecone:", error);
      return [];
    }
  }

  // Fetch data from IPFS
  async fetchFromIPFS(hash: string): Promise<any> {
    try {
      const response = await axios.get(`${this.ipfsGateway}${hash}`);
      return response.data;
    } catch (error) {
      console.error("Error fetching from IPFS:", error);
    }
  }

  // Runtime logic to use external knowledge
  async useKnowledge(input: string) {
    console.log(`Agent received input: ${input}`);

    // Example: Query PostgreSQL
    console.log("Querying PostgreSQL...");
    const postgresData = await this.queryPostgres(
      "SELECT * FROM knowledge WHERE topic = $1",
      [input]
    );
    console.log("PostgreSQL Data:", postgresData);

    // Example: Query Pinecone (mock embedding vector; in practice, generate it with
    // your embedding model and match the dimension of the Pinecone index)
    console.log("Querying Pinecone...");
    const pineconeData = await this.queryPinecone([0.1, 0.2, 0.3, 0.4, 0.5]);
    console.log("Pinecone Data:", pineconeData);

    // Example: Fetch from IPFS
    console.log("Fetching from IPFS...");
    const ipfsData = await this.fetchFromIPFS("QmExampleHashForKnowledge");
    console.log("IPFS Data:", ipfsData);

    // Combine knowledge and generate response
    const combinedKnowledge = {
      postgres: postgresData,
      pinecone: pineconeData,
      ipfs: ipfsData,
    };
    console.log("Combined Knowledge:", combinedKnowledge);

    // Use combined knowledge for a response
    return `Based on my knowledge, here is the response for "${input}": ${JSON.stringify(
      combinedKnowledge
    )}`;
  }

  // Disconnect knowledge sources
  async disconnectKnowledgeBases() {
    try {
      await this.postgresClient.end();
      console.log("Disconnected from PostgreSQL.");
    } catch (error) {
      console.error("Error disconnecting from PostgreSQL:", error);
    }
  }
}

// Example usage
(async () => {
  const agent = new Agent();

  await agent.connectKnowledgeBases();

  const response = await agent.useKnowledge("Artificial Intelligence");
  console.log("Agent Response:", response);

  await agent.disconnectKnowledgeBases();
})();

Key Components of the Memory Engine

  1. Long-Term Memory

    • Purpose: Stores persistent knowledge that agents can access and use across multiple sessions or tasks.

    • Implementation:

      • PostgreSQL: Used for structured data storage, enabling agents to retrieve historical records, structured datasets, and facts efficiently.

      • Pinecone: Provides a vector database for storing and searching embeddings of unstructured data (e.g., text, documents). This is particularly useful for semantic search and retrieval-augmented generation (RAG).

      • IPFS (InterPlanetary File System): A decentralized storage system for large files, documents, and media. This allows agents to access and manage immutable, content-addressable knowledge bases.

  2. Short-Term Memory

    • Purpose: Maintains temporary context or interaction data relevant to the current session. This ensures the agent responds intelligently within the scope of the conversation or task.

    • Implementation:

      • In-Memory Databases: Temporary storage solutions (e.g., Redis or in-memory objects) can be used for fast and transient data access (a minimal sketch follows this list).

      • Session Management: Short-term memory is cleared or reset after a session ends, ensuring agents only retain data as long as necessary.

  3. User Memory

    • Purpose: Stores user-specific data, preferences, and interaction histories to enable personalized and dynamic agent responses.

    • Implementation:

      • PostgreSQL: Stores structured user data like preferences, past queries, and interaction logs.

      • Pinecone: Stores user-specific embeddings for retrieving semantically similar past interactions or preferences.

      • IPFS: Can store user-specific documents or media shared during interactions for future reference.

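Short-term memory needs no external store in the simplest case. The following sketch is a plain TypeScript illustration, not a framework API; the SessionMemory class and its method names are assumptions. It keeps a bounded per-session history in memory and is discarded when the session ends, matching the session-reset behavior described above.

// Minimal session-scoped short-term memory (illustrative, not part of AXES)
type Turn = { role: "user" | "agent"; content: string; at: number };

class SessionMemory {
  private turns: Turn[] = [];

  constructor(private maxTurns: number = 20) {}

  // Record one exchange, discarding the oldest turns beyond the window
  add(role: Turn["role"], content: string) {
    this.turns.push({ role, content, at: Date.now() });
    if (this.turns.length > this.maxTurns) {
      this.turns.splice(0, this.turns.length - this.maxTurns);
    }
  }

  // Return the most recent context to include in the agent's next prompt
  recent(n: number = 5): Turn[] {
    return this.turns.slice(-n);
  }

  // Called when the session ends so nothing persists beyond its scope
  clear() {
    this.turns = [];
  }
}

// Usage: one SessionMemory instance per active session
const session = new SessionMemory();
session.add("user", "Tell me about Artificial Intelligence.");
session.add("agent", "Here is what I found...");
console.log(session.recent());
session.clear();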

How Memory Works in the Agent Framework

  1. Data Ingestion:

    • External knowledge (documents, datasets, media) is ingested into the memory engine using PostgreSQL for structured data, Pinecone for embeddings, and IPFS for large or immutable files (a sketch follows this list).

    • User interaction data is dynamically added to the memory engine as agents interact with users.

  2. Memory Retrieval:

    • Structured Data: SQL queries are used to retrieve historical records from PostgreSQL.

    • Semantic Search: Pinecone enables agents to search for similar content based on embeddings, supporting natural language queries.

    • Immutable Data Access: IPFS allows agents to fetch files or documents using content-based hashes.

  3. Memory Updates:

    • Agents can update their long-term memory by appending new knowledge or interaction summaries into the database and vector store.

    • Short-term memory resets automatically after the session concludes.

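Ingestion and updates follow the same pattern regardless of the source: write the structured record to PostgreSQL and the corresponding embedding to Pinecone. The sketch below assumes the knowledge table and agent-knowledge index from the earlier example; the embedText parameter stands in for whatever embedding model you use and is not part of the framework.

import { Client as PgClient } from "pg";
import { Pinecone } from "@pinecone-database/pinecone";

// Illustrative ingestion/update routine. The table layout, index name, and the
// embedText() parameter are assumptions for this sketch, not framework APIs.
async function ingestKnowledge(
  pg: PgClient,
  pinecone: Pinecone,
  topic: string,
  content: string,
  embedText: (text: string) => Promise<number[]> // your embedding model
): Promise<number> {
  // 1. Store the structured copy in PostgreSQL for exact, queryable retrieval
  const inserted = await pg.query(
    "INSERT INTO knowledge (topic, content) VALUES ($1, $2) RETURNING id",
    [topic, content]
  );
  const id: number = inserted.rows[0].id;

  // 2. Store the embedding in Pinecone for semantic retrieval; the vector's
  //    dimension must match the index configuration
  const vector = await embedText(content);
  await pinecone.index("agent-knowledge").upsert([
    { id: String(id), values: vector, metadata: { topic } },
  ]);

  // Large or immutable files would additionally be added to IPFS (for example
  // through a local node or a pinning service) and referenced by their hash.
  return id;
}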

Benefits of the Memory Engine

  • Persistent Knowledge: Agents can access an ever-growing repository of structured and unstructured knowledge to improve their performance over time.

  • Personalized Interactions: User-specific memories ensure agents provide tailored experiences based on historical interactions and preferences.

  • Scalability: Leveraging PostgreSQL, Pinecone, and IPFS ensures the memory engine can handle large datasets and scale seamlessly with agent needs.

  • Decentralization and Security: IPFS provides decentralized, content-addressed storage that reduces reliance on centralized systems; because public IPFS content is readable by anyone with its hash, sensitive data should be encrypted before it is stored (see Best Practices below).


Best Practices

  • Optimize Pinecone Embeddings: Use domain-specific embeddings to ensure accurate semantic search results.

  • Leverage Postgres Indexing: Utilize indexing and efficient query patterns for quick retrieval of structured data.

  • Ensure Privacy: Encrypt sensitive user data before storing it in PostgreSQL, Pinecone, or IPFS to maintain user trust and compliance with data privacy regulations (an encryption and indexing sketch follows this list).

  • Periodic Cleanup: Regularly clean or archive unused data to maintain the memory engine's efficiency and relevance.

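The indexing and encryption points can be made concrete with a short sketch. The index statement assumes the knowledge table from the example above, and the encryptForStorage helper (AES-256-GCM via Node's built-in crypto module) is illustrative; key management is left to your secrets infrastructure.

import { Client as PgClient } from "pg";
import { createCipheriv, randomBytes } from "crypto";

// Index the column the agent filters on most often (here: knowledge.topic)
async function addTopicIndex(pg: PgClient): Promise<void> {
  await pg.query(
    "CREATE INDEX IF NOT EXISTS idx_knowledge_topic ON knowledge (topic)"
  );
}

// Encrypt a sensitive value with AES-256-GCM before writing it to any store.
// The 32-byte key should come from a secrets manager, never from source code.
function encryptForStorage(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // unique nonce per value
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  const tag = cipher.getAuthTag();
  // Store iv + auth tag + ciphertext together so the value can be decrypted later
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}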

The Memory Engine is integral to the AXES framework, providing agents with the ability to store and retrieve information, adapt to user preferences, and offer intelligent, contextually aware responses. With a combination of cutting-edge storage technologies, developers can create robust and scalable memory systems for their agents.
