
rag-routes

Table of Contents

  • post: Retrieves relevant chunks from the knowledge base based on a query (see the request sketch after this list).
  • post: Runs a specific monitoring query to retrieve relevant knowledge base chunks for proactive alerts.
  • post: Retrieves the full Chain-of-Thought reasoning for a given insight.
  • post: Generates a dynamic page layout with insights based on a given context.
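For orientation, here is a minimal request sketch for the first entry above, in TypeScript. The route path /api/rag/retrieve, the request fields, and the topK option are assumptions for illustration only; the actual paths and payloads are defined by rag-routes.

```typescript
// A minimal request sketch; path and body shape are assumed, not the actual route contract.
interface RetrieveRequest {
  query: string;      // natural-language query to match against the knowledge base
  tenantId: string;   // assumed tenant scoping, mirroring parameters used elsewhere in this doc
  topK?: number;      // hypothetical limit on the number of returned chunks
}

async function retrieveChunks(req: RetrieveRequest): Promise<unknown> {
  const res = await fetch("/api/rag/retrieve", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Retrieval failed: ${res.status}`);
  return res.json(); // relevant chunks; shape is defined by the route implementation
}
```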

ai-manager

Manages AI-driven functionalities for the ChainAlign application. This includes processing transcripts, generating dynamic UI pages based on alerts or context, enriching news articles with AI insights, and handling transcription requests. It integrates with various external services like Gemini, Zep, RAG, and Firebase.

dataSchema

This file defines the "universe" of data that the AI can query and chart. It serves as the primary "knowledge source" for the RAG (Retrieval-Augmented Generation) system, ensuring the LLM only attempts to access data that actually exists.
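As a sketch of what such a knowledge source might contain, the TypeScript shape below is illustrative only; the entry names, dimensions, and metrics are assumptions, not the contents of the actual dataSchema file.

```typescript
// Illustrative sketch of a data "universe" entry; field names are assumed.
interface DataSchemaEntry {
  description: string;   // what the dataset represents
  dimensions: string[];  // fields the AI may group or filter by
  metrics: string[];     // fields the AI may aggregate or chart
}

export const dataSchema: Record<string, DataSchemaEntry> = {
  demandForecast: {
    description: "Weekly demand forecast by product and region.",
    dimensions: ["productId", "region", "week"],
    metrics: ["forecastUnits", "actualUnits", "forecastError"],
  },
};

// The RAG prompt can be constrained to these names so the LLM never
// references tables or fields that do not exist.
```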

AIManager

Manages AI-driven functionalities for the ChainAlign application. This class provides methods for processing user input, generating dynamic content, and integrating with various AI and data services.

Parameters

  • sessionId string The session ID for conversation history persistence (e.g., with Zep). (optional, default "default-sop-session")
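A minimal instantiation sketch; only sessionId is documented here, so the declaration below assumes no other required constructor arguments.

```typescript
// Only the sessionId parameter is documented, so the declaration assumes nothing else is required.
declare class AIManager {
  constructor(sessionId?: string);
}

// Scope conversation history to a specific session (hypothetical ID).
const aiManager = new AIManager("sop-session-tenant-123");

// Omitting the argument falls back to the default "default-sop-session".
const defaultManager = new AIManager();
```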

initZepSession

Initializes a Zep session for conversation history persistence. This method is called fire-and-forget during AIManager instantiation. It handles cases where the session already exists gracefully.

Returns Promise<void> A promise that resolves when the Zep session is initialized or confirmed to exist.
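The fire-and-forget pattern described above might look roughly like the sketch below; zepClient and its addSession method are stand-ins for whichever Zep SDK client the project uses, not the real SDK surface.

```typescript
// Sketch of fire-and-forget Zep session initialization; the client shape is assumed.
declare const zepClient: {
  addSession(opts: { sessionId: string }): Promise<void>;
};

async function initZepSession(sessionId: string): Promise<void> {
  try {
    await zepClient.addSession({ sessionId });
  } catch (err: unknown) {
    // Treat "session already exists" as success so repeated instantiation is harmless.
    const message = err instanceof Error ? err.message : String(err);
    if (!/already exists/i.test(message)) {
      console.warn(`Zep session init failed for ${sessionId}:`, message);
    }
  }
}

// Called without awaiting during construction ("fire-and-forget"):
// void this.initZepSession(this.sessionId);
```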

processTranscript

Processes a transcribed utterance to generate a structured chart query.

Parameters

  • transcript string The user's transcribed speech.
  • tenantId string The ID of the tenant.
  • userId string The ID of the user.

Returns Promise<object> A structured JSON object for the charting engine.
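A usage sketch; the ChartQuery fields are assumptions based on the description of "a structured JSON object for the charting engine".

```typescript
// Assumed shape of the structured chart query; the real fields may differ.
interface ChartQuery {
  chartType?: string;   // e.g. "line" or "bar" (assumed)
  dataset?: string;     // a key from dataSchema (assumed)
  dimensions?: string[];
  metrics?: string[];
}

declare const aiManager: {
  processTranscript(transcript: string, tenantId: string, userId: string): Promise<ChartQuery>;
};

async function example(): Promise<void> {
  const query = await aiManager.processTranscript(
    "Show forecast error by region for the last quarter",
    "tenant-123",   // hypothetical tenant ID
    "user-456",     // hypothetical user ID
  );
  console.log(query);
}
```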

addMessageToZep

Adds a message to the Zep session.

Parameters

  • content string The message content.
  • role string The role of the sender (e.g., "user", "ai").

constructAugmentedPrompt

Constructs the detailed, context-aware prompt for the Gemini LLM.

Parameters

  • transcript string The user's current request.
  • relevantDocuments Array<string> An array of relevant document snippets from the general vector DB.
  • insightContext string Formatted string of relevant S&OP insights from the knowledge base.

Returns string The full prompt to be sent to the LLM.
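A minimal sketch of how such a prompt might be assembled; the section headings and instruction wording are illustrative, not the prompt AIManager actually uses.

```typescript
// Illustrative prompt assembly: combine the request with retrieved context.
function constructAugmentedPrompt(
  transcript: string,
  relevantDocuments: string[],
  insightContext: string,
): string {
  return [
    "You are an assistant that answers S&OP questions using only the context below.",
    "## Relevant documents",
    relevantDocuments.map((doc, i) => `${i + 1}. ${doc}`).join("\n"),
    "## S&OP insights",
    insightContext,
    "## User request",
    transcript,
  ].join("\n\n");
}
```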

generateAlertPage

Generates a dynamic UI page JSON based on a detected alert. It fetches alert data from a backend endpoint and uses Gemini to construct a structured JSON object that describes the layout and components of an alert page.

Parameters

  • tenantId string The ID of the tenant.
  • userId string The ID of the user.

Returns Promise<(object | null)> A JSON object representing the dynamic page layout, or null if no alert is detected or an error occurs.
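A usage sketch showing the null handling described above; the commented layout fields are assumptions about the structured JSON Gemini is asked to produce.

```typescript
declare const aiManager: {
  generateAlertPage(tenantId: string, userId: string): Promise<object | null>;
};

async function renderAlertPage(): Promise<void> {
  const page = await aiManager.generateAlertPage("tenant-123", "user-456");
  if (page === null) {
    // No active alert, or the backend/Gemini call failed; fall back to the default view.
    return;
  }
  // `page` describes the layout and components of the alert page,
  // e.g. { layout: ..., components: [...] } (an assumed shape).
  console.log(JSON.stringify(page, null, 2));
}
```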

generateDynamicPageFromContext

Generates a dynamic UI page JSON based on unstructured and structured context. This method first checks for active alerts and, if none, constructs a prompt for Gemini to generate a structured JSON object for dynamic UI rendering, typically for insights display.

Parameters

  • unstructuredContext object Unstructured context from sources like uploaded documents.
  • structuredContext object Structured context from databases.
  • tenantId string The ID of the tenant.
  • userId string The ID of the user.

Returns Promise<(object | null)> A promise that resolves with a JSON object representing the dynamic page layout, or null if an error occurs.

getTranscriptions

Retrieves all stored transcriptions from Firestore, ordered by timestamp.

  • Throws Error If an error occurs during retrieval.

Returns Promise<Array<object>> A promise that resolves with an array of transcription objects.
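A sketch of the Firestore read described above using the firebase-admin modular API (assuming the app has already been initialized); the collection name "transcriptions" and the "timestamp" field name are assumptions.

```typescript
import { getFirestore } from "firebase-admin/firestore";

// Fetch all stored transcriptions, newest first; collection and field names are assumed.
async function getTranscriptions(): Promise<Array<Record<string, unknown>>> {
  try {
    const snapshot = await getFirestore()
      .collection("transcriptions")
      .orderBy("timestamp", "desc")
      .get();
    return snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() }));
  } catch (err) {
    // Surface retrieval failures to the caller, matching the documented Throws behaviour.
    throw new Error(`Failed to retrieve transcriptions: ${(err as Error).message}`);
  }
}
```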

handleNLPRequest

Handles a Natural Language Processing (NLP) request by processing text and generating a structured query. It uses processTranscript internally and optionally stores the result in Firestore.

Parameters

  • text string The input text for NLP processing.
  • Throws Error If redaction fails or the LLM call encounters an issue.

Returns Promise<object> A promise that resolves with the structured query generated from the input text.

getGoogleGenerativeAI

Initializes and returns a singleton instance of GoogleGenerativeAI.

Returns GoogleGenerativeAI The initialized GoogleGenerativeAI instance.
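A minimal singleton sketch using the @google/generative-ai package; the environment variable name is an assumption.

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

let genAI: GoogleGenerativeAI | undefined;

// Lazily create and cache a single client instance.
export function getGoogleGenerativeAI(): GoogleGenerativeAI {
  if (!genAI) {
    genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
  }
  return genAI;
}
```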

getFullReasoning

Fetches the full Chain-of-Thought reasoning and sources for a given insight. Orchestrates calls to the RAG microservice and structures the response.

Parameters

  • insightId string The ID of the insight.
  • userContext object Context about the current user (e.g., userId, role).
  • pageContext object Context about the current page (e.g., pageType, metadata).

Returns Promise<object> The structured cotData.
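A sketch of the orchestration described above; the RAG microservice URL, the request body, and the cotData fields are all illustrative assumptions.

```typescript
// Assumed shape of the structured cotData returned to the UI.
interface CoTData {
  insightId: string;
  reasoningSteps: string[];                             // assumed: Chain-of-Thought steps
  sources: Array<{ title: string; snippet: string }>;   // assumed: supporting chunks
}

async function getFullReasoning(
  insightId: string,
  userContext: { userId: string; role: string },
  pageContext: { pageType: string; metadata?: Record<string, unknown> },
): Promise<CoTData> {
  // Hypothetical RAG microservice endpoint; the real URL and payload may differ.
  const res = await fetch("http://rag-service/reasoning", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ insightId, userContext, pageContext }),
  });
  if (!res.ok) throw new Error(`RAG service error: ${res.status}`);
  const raw = await res.json();
  // Structure the raw microservice response into the shape the UI expects.
  return {
    insightId,
    reasoningSteps: raw.steps ?? [],
    sources: raw.sources ?? [],
  };
}
```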

trackCoTUsage

Tracks CoT usage for learning purposes.

Parameters

  • insightId string The ID of the insight.
  • eventType string The type of CoT interaction (e.g., 'full_reasoning_requested').