
Socratic Inquiry Engine (SIE)

Document Version History

| Version | Date | Changes | Author |
|---|---|---|---|
| 2.0 | 2025-11-14 | Major update: added Sections 6-15 (Technical Architecture, Database Schema with Junction Tables, LLM Integration, GraphRAG Integration, Performance, Security, Testing, Migration). Integrated 8 junction tables and 6 supporting entity tables for proper many-to-many relationships. | Pramod Prasanth |
| 1.0 | 2024-XX-XX | Initial version: Sections 1-5 (Executive Summary, Architectural Placement, SIE Components, Multi-Perspective Synthesis, UI/UX Requirements) | Pramod Prasanth |

1. Executive Summary

| Field | Description |
|---|---|
| Feature Name | Socratic Inquiry Engine (SIE) & Multi-Perspective Synthesis |
| Architectural Goal | Implement the philosophical mandate of autonomy-preserving AI [1] by replacing definitive "answers" with strategic questions and explicit trade-off analysis. |
| System Impact | Transforms the Dynamic Reasoning Layer from an answer generator into a strategic thinking partner. Reduces inference cost by minimizing ambiguous, exhaustive reasoning cycles. |
| Target Workflow | High-stakes decision workflows (e.g., S&OP alignment, regulatory compliance response, MRP II overrides). |
| MVP Success Metric | Mandatory completion of the "Questions Considered" screen before scenario evaluation is permitted. Logging of the user's justification in the Audit Trail. |

2. Architectural Placement and Flow

The SIE is a low-latency, lightweight agent that sits upstream of the computationally expensive Scenario Synthesis engine within the Dynamic Reasoning Layer.

The workflow is enforced as three sequential stages, preventing the user from bypassing the critical inquiry phase:

  1. Stage 1: Complexity Trigger & Inquiry Generation (The Block): When a complex decision (e.g., "PFAS Ban Response") is initiated, the system detects ambiguity, blocks the display of scenarios, and instantiates the SIE.
  2. Stage 2: Human Judgment & State Update (The Dialogue): The SIE presents the Socratic questions. The user must provide input. This input updates the Problem State Vector with confirmed strategic priorities.
  3. Stage 3: Focused Scenario Synthesis (The Answer): The fully refined Problem State Vector is passed to the powerful LLM for scenario generation, resulting in a streamlined, lower-cost, and more accurate final output.
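
A minimal sketch of this gating logic follows; the helper names (detectComplexity, runSocraticDialogue, refineProblemState, synthesizeScenarios) are illustrative assumptions, not part of the specification:

// Minimal gating sketch; all collaborators are injected and purely illustrative.
async function runDecisionWorkflow(deps, decisionContext) {
  const { detectComplexity, sie, runSocraticDialogue, refineProblemState, synthesizeScenarios } = deps;

  // Stage 1: detect ambiguity; low-complexity decisions skip straight to synthesis.
  if (!detectComplexity(decisionContext)) {
    return synthesizeScenarios(decisionContext);
  }
  const questions = await sie.generateQuestions(decisionContext); // scenario display stays blocked

  // Stage 2: the dialogue resolves only after every question has a recorded justification.
  const judgments = await runSocraticDialogue(questions);
  const problemStateVector = refineProblemState(decisionContext, judgments);

  // Stage 3: only the refined Problem State Vector reaches the expensive synthesis LLM.
  return synthesizeScenarios(problemStateVector);
}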

3. Layer 1: Socratic Inquiry Engine (SIE) - Question Generation

The SIE is composed of the Question Generation Module (QGM) and the Contextual Alignment Buffer (CAB).

3.1. Question Generation Module (QGM)

The QGM generates a portfolio of high-leverage questions based on GraphRAG context before querying the user.

| ID | Functionality | Requirement Details | Alignment |
|---|---|---|---|
| 1.1 | Context Retrieval | QGM must retrieve organizational context from the GraphRAG based on the decision type. Strategic North: board memos, strategic priorities [3]. Historical Failures: past incidents, failed projects ("What did we learn?"). Constraints: real-time capacity utilization, budget limits. | Structural Foundation: Grounds the inquiry in verifiable enterprise truth, not general platitudes. [4] |
| 1.2 | Socratic Prompt Engineering | The LLM prompt must strictly adhere to the five Socratic principles (Assumptions, Perspectives, Trade-offs, Failure Modes, Second-Order Effects). The output must be questions only. [5] | Autonomy Preservation: Prevents the AI from providing suggestive "nudges" or answers, enforcing genuine decentralized truth-seeking. [1] |
| 1.3 | Trade-off Exposure | QGM must include at least one question that forces the user to choose between competing organizational goals (e.g., "margin over volume" vs. "compliance over efficiency"). | Strategic Alignment: Shifts S&OP focus from feasibility ("Can we supply?") to value optimization ("What plan optimizes ROIC?"). [3] |

3.2. Critic and Validation Module (CVM) - Phase 2 Implementation

(Note: While the CVM (Challenge Framework) is philosophically necessary, its implementation should be deferred to Phase 2 to focus the MVP on the core UI and logic of the SIE.)

3.3. Contextual Alignment Buffer (CAB)

The CAB is the database layer that stores the inputs and outcomes of the Socratic dialogue.

| ID | Functionality | Requirement Details | Alignment |
|---|---|---|---|
| 1.4 | Human Judgment Capture | After the user considers the questions, capture the full text of the user's responses into a dedicated field in the Audit Trail. | Trust Layer / Auditable Guardrails: Logs the human judgment (the explicit reason for the decision), fulfilling the justification requirement. [3] |
| 1.5 | Adaptive Learning Flag | Flag the specific questions that the user identified as the most impactful (e.g., if the user marked a question as "Insightful"). Track whether that question led to a subsequent change in the final decision. | Economic Moat: Captures proprietary human reasoning patterns, allowing the system to learn which questions maximize Decision Value per Token (DVT). [2] |

4. Layer 2: Multi-Perspective Scenario Synthesis

Once the questions are answered, the system proceeds to generate scenarios, but the output structure is fundamentally different from a traditional "answer machine."

| ID | Functionality | Requirement Details | Alignment |
|---|---|---|---|
| 2.1 | Perspectives Synthesis | Scenarios must include a mandatory synthesis view from at least four distinct domains (e.g., Finance, Operations, Compliance, Historical Context/Lessons Learned). | Trade-off Visibility: Prevents siloed decision-making by forcing consideration of downstream and cross-functional consequences. [8] |
| 2.2 | Unresolved Questions Flag | The LLM must analyze the scenario and explicitly list 3-5 outstanding uncertainties (e.g., "Assumes Metco qualification timeline is 12 weeks—unconfirmed"). | Due Diligence Enforcement: Prevents premature commitment by forcing the user to chase down the missing, high-risk data (e.g., "Who haven't we consulted?") before approving the execution. |
| 2.3 | Explicit Trade-Offs | Every scenario must contain a mandatory, concise Trade-Off Section that answers: "What we optimize for" and "What we sacrifice." | Decision Clarity: Forces the user to explicitly acknowledge the strategic cost of their choice, embedding the trade-off analysis into the final auditable record. |
| 2.4 | No Single Recommendation | The final output must not include a "Recommended Scenario" or "Optimal Answer" tag. Scenarios are presented as equally viable options with different consequence profiles. | Autonomy Preservation: Maintains human agency by leaving the final weighing of strategic priorities (e.g., long-term market access vs. short-term margin hit) to the executive. [1] |

5. UI/UX Requirements (Screen Flow)

The UI must enforce the Socratic flow, preventing users from accessing Stage 2 until Stage 1 is complete.

5.1. Screen 1: Strategic Question Dashboard (Mandatory Block)

| ID | Requirement | Rationale |
|---|---|---|
| 3.1 | Mandatory Block | Must appear immediately after the user initiates a high-complexity decision. The "Show Me Scenarios" button is disabled until the user has clicked on (acknowledged) all questions. |
| 3.2 | Input Capture | Provide a mandatory text area beneath each question for the user to document their "thoughts" or "justification." (This input is the Human Judgment Input for the Audit Trail.) |
| 3.3 | Context Snippets | Questions must be accompanied by relevant, retrieved GraphRAG snippets (e.g., the specific clause from the "Board Memo" or the data from the "2024 Incident Report") to ground the inquiry in reality. [10] |

5.2. Screen 2: Multi-Perspective Scenario Evaluation

| ID | Requirement | Rationale |
|---|---|---|
| 3.4 | Comparative View | Scenarios must be presented in a side-by-side or tabbed comparative view that highlights the differences in the Trade-Offs Section and the Unresolved Questions. |
| 3.5 | Decision Capture | The final "Approve Execution" button logs the selected Scenario ID, the Human Judgment Input (from Screen 1), and the final Unresolved Questions state into the Trust Layer for the immutable audit trail. |

6. Technical Architecture

6.1. Service Layer Design

SocraticInquiryService (new service)

Core Methods:

  • generateQuestions(decisionContext, tenantId) - Orchestrates question generation via LLM and GraphRAG
  • captureHumanJudgment(questionId, userResponse, userId) - Stores user input in CAB
  • calculateProblemStateVector(decisionId) - Aggregates judgments into refined state
  • flagImpactfulQuestion(questionId, userId, impactRating) - Tracks learning signals

Integration Points:

  • GraphRAG/Cognee: Context retrieval for question generation (Strategic North, Historical Failures, Constraints)
  • AIManager: LLM orchestration for dynamic question generation
  • ReasoningBankService: Store reasoning chains from Socratic dialogue
  • NotificationService: Alert users when questions are ready
  • AuditTrailService: Immutable logging of human judgments
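
A minimal sketch of the service interface described above, assuming a Node.js service layer with injected clients for GraphRAG, the LLM, the audit trail, and the database (constructor parameters and helper method names such as retrieveContext, insertHumanJudgment, and log are illustrative assumptions):

// SocraticInquiryService — interface sketch only; persistence and error handling omitted.
class SocraticInquiryService {
  constructor({ cogneeService, aiManager, auditTrailService, db }) {
    this.cogneeService = cogneeService;       // GraphRAG context retrieval
    this.aiManager = aiManager;               // LLM orchestration
    this.auditTrailService = auditTrailService;
    this.db = db;
  }

  async generateQuestions(decisionContext, tenantId) {
    const context = await this.cogneeService.retrieveContext(decisionContext, tenantId); // hypothetical wrapper around cogneeService.query()
    return this.aiManager.generateSocraticPrompt({ decisionContext, context });
  }

  async captureHumanJudgment(questionId, userResponse, userId) {
    const judgment = await this.db.insertHumanJudgment({ questionId, userResponse, userId }); // hypothetical data-access helper
    return this.auditTrailService.log({ type: 'human_judgment', judgmentId: judgment.id, userId }); // hypothetical AuditTrailService API
  }

  async calculateProblemStateVector(decisionId) {
    const judgments = await this.db.getJudgmentsForDecision(decisionId); // hypothetical data-access helper
    return {
      strategicPriorities: judgments.map((j) => j.priority).filter(Boolean),
      humanJustification: judgments.map((j) => j.responseText).join('\n'),
    };
  }

  async flagImpactfulQuestion(questionId, userId, impactRating) {
    return this.db.updateJudgmentImpact({ questionId, userId, impactRating }); // hypothetical data-access helper
  }
}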

6.2. API Endpoints

GraphQL Mutations

mutation GenerateSocraticQuestions($input: DecisionContextInput!) {
  generateSocraticQuestions(input: $input) {
    questionId
    questionText
    category # Assumptions, Perspectives, Trade-offs, Failure-Modes, Second-Order-Effects
    contextSnippets {
      source
      text
      metadata
    }
    requiredResponse
  }
}

mutation CaptureHumanJudgment($input: HumanJudgmentInput!) {
  captureHumanJudgment(input: $input) {
    success
    auditTrailId
  }
}

mutation FlagImpactfulQuestion($questionId: ID!, $impactRating: Int!) {
  flagImpactfulQuestion(questionId: $questionId, impactRating: $impactRating) {
    success
  }
}

GraphQL Queries

query GetSocraticQuestions($decisionId: ID!) {
  socraticQuestions(decisionId: $decisionId) {
    questionId
    questionText
    category
    contextSnippets {
      source
      text
    }
    userResponse
    answeredAt
    impactRating
  }
}

query GetProblemStateVector($decisionId: ID!) {
  problemStateVector(decisionId: $decisionId) {
    vectorId
    strategicPriorities
    acknowledgedTradeoffs
    unresolvedUncertainties
    humanJustification
  }
}
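
For illustration, a minimal client-side call to the GenerateSocraticQuestions mutation above, assuming the GraphQL endpoint is exposed at /graphql and authentication is handled elsewhere (the endpoint path and input shape are assumptions):

// Sketch of invoking GenerateSocraticQuestions from a Node.js client via fetch.
const GENERATE_QUESTIONS = `
  mutation GenerateSocraticQuestions($input: DecisionContextInput!) {
    generateSocraticQuestions(input: $input) {
      questionId
      questionText
      category
    }
  }
`;

async function requestSocraticQuestions(decisionContext) {
  const response = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: GENERATE_QUESTIONS, variables: { input: decisionContext } }),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(errors.map((e) => e.message).join('; '));
  return data.generateSocraticQuestions;
}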

6.3. Data Flow Diagram

User Initiates Complex Decision
  ↓
[Complexity Detector] → Triggers SIE
  ↓
[SocraticInquiryService.generateQuestions()]
  ↓
[GraphRAG Context Retrieval] ← Cognee API
  ↓
[AIManager.generateSocraticPrompt()] → Gemini API
  ↓
[Question Portfolio] → Frontend (Screen 1)
  ↓
User Answers Questions
  ↓
[captureHumanJudgment()] → CAB Database + Audit Trail
  ↓
[calculateProblemStateVector()]
  ↓
[Scenario Synthesis] → Multi-Perspective Output (Screen 2)

7. Database Schema

7.1. socratic_questions Table

CREATE TABLE socratic_questions (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
tenant_id UUID NOT NULL REFERENCES tenants(id),
decision_id UUID NOT NULL REFERENCES decisions(id),
question_text TEXT NOT NULL,
category VARCHAR(50) NOT NULL, -- 'assumptions', 'perspectives', 'trade-offs', 'failure-modes', 'second-order-effects'
context_snippets JSONB, -- Array of {source, text, metadata}
generated_by VARCHAR(50) DEFAULT 'dynamic-llm', -- 'dynamic-llm' or 'hardcoded' (for migration)
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
UNIQUE(decision_id, question_text)
);

CREATE INDEX idx_socratic_questions_decision ON socratic_questions(decision_id);
CREATE INDEX idx_socratic_questions_tenant ON socratic_questions(tenant_id);
CREATE INDEX idx_socratic_questions_category ON socratic_questions(category);

7.2. human_judgments Table (CAB)

CREATE TABLE human_judgments (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
tenant_id UUID NOT NULL REFERENCES tenants(id),
question_id UUID NOT NULL REFERENCES socratic_questions(id),
user_id UUID NOT NULL REFERENCES users(id),
decision_id UUID NOT NULL REFERENCES decisions(id),
response_text TEXT NOT NULL,
impact_rating INT CHECK (impact_rating BETWEEN 1 AND 5), -- User-rated impact
answered_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
led_to_decision_change BOOLEAN DEFAULT NULL, -- Tracked post-decision for learning
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_human_judgments_question ON human_judgments(question_id);
CREATE INDEX idx_human_judgments_user ON human_judgments(user_id);
CREATE INDEX idx_human_judgments_decision ON human_judgments(decision_id);

7.3. problem_state_vectors Table

CREATE TABLE problem_state_vectors (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
tenant_id UUID NOT NULL REFERENCES tenants(id),
decision_id UUID NOT NULL REFERENCES decisions(id) UNIQUE,
strategic_priorities JSONB, -- Extracted from human judgments
acknowledged_tradeoffs JSONB, -- Key trade-offs user explicitly acknowledged
unresolved_uncertainties JSONB, -- Questions user flagged as "need more data"
human_justification TEXT, -- Aggregated justification text
vector_embedding VECTOR(768), -- For future semantic similarity analysis
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_problem_state_vectors_decision ON problem_state_vectors(decision_id);
CREATE INDEX idx_problem_state_vectors_tenant ON problem_state_vectors(tenant_id);
CREATE INDEX idx_problem_state_vectors_embedding ON problem_state_vectors USING ivfflat(vector_embedding vector_cosine_ops);
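
The ivfflat index above implies a pgvector-style embedding column. A hedged sketch of how a similarity lookup might be issued from the service layer, assuming node-postgres (pg) and that query embeddings are produced elsewhere:

const { Pool } = require('pg'); // assumes node-postgres is available
const pool = new Pool();

// Find the most similar prior Problem State Vectors for a tenant (cosine distance).
async function findSimilarProblemStates(tenantId, queryEmbedding, limit = 5) {
  const { rows } = await pool.query(
    `SELECT decision_id, strategic_priorities,
            vector_embedding <=> $2::vector AS distance
       FROM problem_state_vectors
      WHERE tenant_id = $1
      ORDER BY vector_embedding <=> $2::vector
      LIMIT $3`,
    [tenantId, JSON.stringify(queryEmbedding), limit] // pgvector accepts the '[x, y, ...]' text form
  );
  return rows;
}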

7.4. audit_trail Enhancement

-- Extend existing audit_trail table with SIE-specific columns
ALTER TABLE audit_trail ADD COLUMN IF NOT EXISTS socratic_session_id UUID REFERENCES decisions(id);
ALTER TABLE audit_trail ADD COLUMN IF NOT EXISTS human_judgment_id UUID REFERENCES human_judgments(id);
ALTER TABLE audit_trail ADD COLUMN IF NOT EXISTS problem_state_vector_id UUID REFERENCES problem_state_vectors(id);

7.5. scenarios Table + Junction (Many-to-Many Relationships)

-- Scenarios table (reusable scenario templates)
CREATE TABLE scenarios (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
tenant_id UUID NOT NULL REFERENCES tenants(id),
title VARCHAR(255) NOT NULL,
description TEXT,
template BOOLEAN DEFAULT FALSE, -- TRUE if reusable across decisions
created_by UUID REFERENCES users(id),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_scenarios_tenant ON scenarios(tenant_id);
CREATE INDEX idx_scenarios_template ON scenarios(template) WHERE template = TRUE;

-- Junction table: decision_scenarios (many-to-many)
CREATE TABLE decision_scenarios (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
decision_id UUID NOT NULL REFERENCES decisions(id) ON DELETE CASCADE,
scenario_id UUID NOT NULL REFERENCES scenarios(id) ON DELETE CASCADE,
is_selected BOOLEAN DEFAULT FALSE, -- Only one scenario per decision can be selected
trade_offs JSONB, -- {"optimizes_for": [...], "sacrifices": [...]}
unresolved_questions JSONB, -- Array of outstanding uncertainties
perspectives JSONB, -- {"finance": {...}, "operations": {...}, "compliance": {...}, "historical": {...}}
financial_impact JSONB, -- NPV, ROI, margin impact
risk_score NUMERIC(3,2), -- 0.00 to 5.00
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
UNIQUE(decision_id, scenario_id)
);

CREATE INDEX idx_decision_scenarios_decision ON decision_scenarios(decision_id);
CREATE INDEX idx_decision_scenarios_scenario ON decision_scenarios(scenario_id);
CREATE INDEX idx_decision_scenarios_selected ON decision_scenarios(is_selected) WHERE is_selected = TRUE;

Usage:

  • A decision can evaluate multiple scenarios (1-to-many via junction)
  • Scenarios can be templates reused across decisions (many-to-many)
  • Only one scenario per decision can be marked as is_selected (see the enforcement sketch below)
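
Because the schema above does not itself guarantee a single selected scenario, one option is to enforce the rule in the service layer inside a transaction (a sketch assuming node-postgres; a partial unique index on (decision_id) WHERE is_selected = TRUE would be a database-level alternative):

// Mark one scenario as selected and clear any previous selection atomically.
async function selectScenario(client, decisionId, scenarioId) {
  await client.query('BEGIN');
  try {
    await client.query(
      'UPDATE decision_scenarios SET is_selected = FALSE WHERE decision_id = $1',
      [decisionId]
    );
    await client.query(
      `UPDATE decision_scenarios SET is_selected = TRUE
        WHERE decision_id = $1 AND scenario_id = $2`,
      [decisionId, scenarioId]
    );
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  }
}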

7.6. tags System (Decision & Question Tagging)

-- Tags table (tenant-scoped)
CREATE TABLE tags (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
tenant_id UUID NOT NULL REFERENCES tenants(id),
name VARCHAR(100) NOT NULL,
category VARCHAR(50), -- 'decision_type', 'priority', 'domain', 'compliance', 'strategic'
color VARCHAR(7), -- Hex color code for UI (e.g., '#3B82F6')
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
UNIQUE(tenant_id, name, category)
);

CREATE INDEX idx_tags_tenant ON tags(tenant_id);
CREATE INDEX idx_tags_category ON tags(category);

-- Junction: decision_tags
CREATE TABLE decision_tags (
decision_id UUID NOT NULL REFERENCES decisions(id) ON DELETE CASCADE,
tag_id UUID NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
tagged_by UUID REFERENCES users(id),
tagged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (decision_id, tag_id)
);

CREATE INDEX idx_decision_tags_decision ON decision_tags(decision_id);
CREATE INDEX idx_decision_tags_tag ON decision_tags(tag_id);

-- Junction: socratic_question_tags (optional, for categorizing questions)
CREATE TABLE socratic_question_tags (
question_id UUID NOT NULL REFERENCES socratic_questions(id) ON DELETE CASCADE,
tag_id UUID NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
PRIMARY KEY (question_id, tag_id)
);

CREATE INDEX idx_socratic_question_tags_question ON socratic_question_tags(question_id);
CREATE INDEX idx_socratic_question_tags_tag ON socratic_question_tags(tag_id);

Usage:

  • Tag decisions with multiple tags (e.g., "S&OP", "High Priority", "Compliance", "Q4 2024")
  • Tag Socratic questions for pattern analysis (e.g., "Always leads to decision change")
  • Tags are color-coded for UI

7.7. decision_collaborators (Multi-User Decisions)

-- Junction: decision_collaborators (many-to-many)
CREATE TABLE decision_collaborators (
decision_id UUID NOT NULL REFERENCES decisions(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
role VARCHAR(50) NOT NULL, -- 'initiator', 'reviewer', 'approver', 'contributor', 'observer'
added_by UUID REFERENCES users(id), -- Who added this collaborator
added_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (decision_id, user_id),
CHECK (role IN ('initiator', 'reviewer', 'approver', 'contributor', 'observer'))
);

CREATE INDEX idx_decision_collaborators_decision ON decision_collaborators(decision_id);
CREATE INDEX idx_decision_collaborators_user ON decision_collaborators(user_id);
CREATE INDEX idx_decision_collaborators_role ON decision_collaborators(role);

-- Notification preferences for collaborators
CREATE TABLE decision_collaborator_preferences (
decision_id UUID NOT NULL REFERENCES decisions(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
notify_on_status_change BOOLEAN DEFAULT TRUE,
notify_on_new_comment BOOLEAN DEFAULT TRUE,
notify_on_scenario_added BOOLEAN DEFAULT TRUE,
PRIMARY KEY (decision_id, user_id),
FOREIGN KEY (decision_id, user_id) REFERENCES decision_collaborators(decision_id, user_id) ON DELETE CASCADE
);

Usage:

  • Multiple users can collaborate on a decision with different roles
  • Initiator: Started the decision
  • Reviewer: Provides input during Socratic dialogue
  • Approver: Final approval authority
  • Contributor: Adds scenarios or context
  • Observer: Read-only, receives notifications

7.8. Junction Tables Summary

Complete Junction Table Set for SIE (M70):

| Junction Table | Connects | Purpose |
|---|---|---|
| decision_scenarios | decisions ↔ scenarios | Multiple scenarios per decision, reusable templates |
| decision_tags | decisions ↔ tags | Multi-tag categorization and filtering |
| socratic_question_tags | socratic_questions ↔ tags | Pattern analysis on question types |
| decision_collaborators | decisions ↔ users | Multi-user collaboration with roles |

Supporting Entity Tables:

| Table | Purpose |
|---|---|
| scenarios | Reusable scenario templates |
| tags | Tenant-scoped tagging system |
| decision_collaborator_preferences | Notification settings per collaborator |

Note: Additional junction tables for GraphRAG (M72) and other milestones are defined in their respective FSDs.

8. LLM Integration Specification

8.1. Question Generation Prompt Template

// In SocraticInquiryService.js
const SOCRATIC_PROMPT_TEMPLATE = `
You are a strategic advisor using the Socratic method to help executives make high-stakes decisions.

CONTEXT:
Decision Type: {{decisionType}}
Organization Context:
{{graphRAGContext}}

INSTRUCTIONS:
Generate exactly 5 strategic questions following these categories:
1. Assumptions Challenge: What assumptions are we making?
2. Perspectives: Whose voice haven't we considered?
3. Trade-offs: What are we optimizing for vs. sacrificing?
4. Failure Modes: What could go wrong?
5. Second-Order Effects: What happens after the first move?

CRITICAL RULES:
- Output ONLY questions, no answers or suggestions
- Questions must be specific to this organization's context
- Each question must reference concrete data from the context
- Avoid generic platitudes

OUTPUT FORMAT (JSON):
{
  "questions": [
    {
      "category": "assumptions",
      "text": "...",
      "contextReference": "..."
    },
    ...
  ]
}
`;
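
A hedged sketch of how the template might be rendered and sent through AIManager; the renderTemplate helper and the aiManager.generateContent signature are assumptions, not an existing API:

// Render the template, request questions from the LLM, and parse the JSON response.
function renderTemplate(template, values) {
  return template.replace(/{{(\w+)}}/g, (_, key) => values[key] ?? '');
}

async function generateSocraticQuestions(aiManager, decisionType, graphRAGContext) {
  const prompt = renderTemplate(SOCRATIC_PROMPT_TEMPLATE, { decisionType, graphRAGContext });
  const raw = await aiManager.generateContent(prompt); // hypothetical AIManager call
  const parsed = JSON.parse(raw);                      // expects the OUTPUT FORMAT above
  if (!Array.isArray(parsed.questions) || parsed.questions.length !== 5) {
    throw new Error('LLM did not return exactly 5 questions');
  }
  return parsed.questions;
}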

8.2. Token Budget Management

| Resource | Budget | Rationale |
|---|---|---|
| Question Generation Input | 2,000 tokens | Context + prompt template |
| Question Generation Output | 500 tokens | 5 questions × ~100 tokens |
| GraphRAG Context Window | 1,500 tokens | Top 3 most relevant snippets |

Fallback Strategy: If the LLM call fails, serve pre-configured default questions for the decision type (see the sketch below).
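
A minimal sketch of that fallback path, assuming a DEFAULT_QUESTIONS_BY_TYPE lookup maintained in configuration; the constant, the example decision-type key, and the logger call are illustrative:

// Attempt dynamic generation; fall back to pre-configured questions on any failure.
const DEFAULT_QUESTIONS_BY_TYPE = {
  'pfas-ban-response': [
    { category: 'assumptions', text: 'What assumptions are we making about supplier readiness?' },
    // ... remaining defaults maintained in configuration
  ],
};

async function getQuestionsWithFallback(aiManager, decisionType, context) {
  try {
    // generateSocraticQuestions: see the sketch in Section 8.1.
    return await generateSocraticQuestions(aiManager, decisionType, context);
  } catch (err) {
    console.warn(`Dynamic question generation failed, serving defaults: ${err.message}`);
    return DEFAULT_QUESTIONS_BY_TYPE[decisionType] ?? [];
  }
}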

8.3. Caching Strategy

  • Cache generated questions by hash(decisionType + graphRAGContext) for 24 hours
  • If a similar decision context is detected, suggest previously successful questions (see the sketch below)
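
A sketch of the 24-hour cache keyed by a hash of decision type plus retrieved context; the in-memory Map is a stand-in for whatever shared cache backend (e.g., Redis) the platform actually uses:

const crypto = require('crypto');

const questionCache = new Map();          // stand-in for a shared cache such as Redis
const TTL_MS = 24 * 60 * 60 * 1000;       // 24 hours

function cacheKey(decisionType, graphRAGContext) {
  return crypto.createHash('sha256').update(decisionType + graphRAGContext).digest('hex');
}

async function getCachedOrGenerate(decisionType, graphRAGContext, generate) {
  const key = cacheKey(decisionType, graphRAGContext);
  const hit = questionCache.get(key);
  if (hit && Date.now() - hit.cachedAt < TTL_MS) {
    return hit.questions; // reuse previously successful questions for a similar context
  }
  const questions = await generate(decisionType, graphRAGContext);
  questionCache.set(key, { questions, cachedAt: Date.now() });
  return questions;
}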

9. GraphRAG Integration

9.1. Context Retrieval Queries

Strategic North (Board Priorities):

const strategicContext = await cogneeService.query({
  query: "What are the current strategic priorities and board mandates?",
  filter: { documentType: "board_memo", recency: "last_6_months" },
  limit: 3
});

Historical Failures:

const historicalContext = await cogneeService.query({
  query: `Past incidents related to ${decisionType}`,
  filter: { documentType: "incident_report", severity: "high" },
  limit: 2
});

Constraints:

const constraintsContext = await cogneeService.query({
  query: `Current capacity utilization and budget constraints for ${decisionScope}`,
  filter: { documentType: ["capacity_report", "budget_snapshot"], recency: "last_30_days" },
  limit: 2
});
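
Since the performance targets in Section 10 assume parallel retrieval of the three context types, the queries above might be issued concurrently along these lines (error handling simplified):

// Retrieve Strategic North, Historical Failures, and Constraints concurrently.
async function retrieveQuestionContext(cogneeService, decisionType, decisionScope) {
  const [strategicContext, historicalContext, constraintsContext] = await Promise.all([
    cogneeService.query({
      query: "What are the current strategic priorities and board mandates?",
      filter: { documentType: "board_memo", recency: "last_6_months" },
      limit: 3,
    }),
    cogneeService.query({
      query: `Past incidents related to ${decisionType}`,
      filter: { documentType: "incident_report", severity: "high" },
      limit: 2,
    }),
    cogneeService.query({
      query: `Current capacity utilization and budget constraints for ${decisionScope}`,
      filter: { documentType: ["capacity_report", "budget_snapshot"], recency: "last_30_days" },
      limit: 2,
    }),
  ]);
  return { strategicContext, historicalContext, constraintsContext };
}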

9.2. Context Snippet Formatting

{
  "source": "Board Memo Q3 2024",
  "text": "Priority #1: Achieve 95% on-time delivery while maintaining 40% gross margin floor.",
  "metadata": {
    "documentId": "uuid",
    "relevanceScore": 0.89,
    "retrievedAt": "2024-11-14T10:30:00Z"
  }
}

10. Performance Requirements

| Metric | Target | Rationale |
|---|---|---|
| Question Generation Latency | < 5 seconds | SIE must feel "instant" to avoid workflow disruption |
| GraphRAG Context Retrieval | < 2 seconds | Parallel retrieval for 3 context types |
| Human Judgment Capture | < 500 ms | Simple DB write, no processing |
| Problem State Vector Calculation | < 3 seconds | Aggregates all judgments, extracts key themes |
| Concurrent Sessions | 50 users | Typical enterprise S&OP planning team size |

11. Security & Compliance

11.1. Audit Trail Immutability

  • All human_judgments records are append-only (no UPDATE/DELETE)
  • Soft deletes only (add deleted_at column, hide from UI)
  • Cryptographic hash of each audit record stored for tamper detection
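
The tamper-detection hash could be implemented as a simple hash chain over the append-only records; a sketch assuming Node's built-in crypto module, where the record shape and previousHash convention are assumptions:

const crypto = require('crypto');

// Compute a tamper-evident hash for an audit record, chained to the previous record's hash.
function computeAuditHash(record, previousHash = '') {
  const payload = JSON.stringify({
    questionId: record.questionId,
    userId: record.userId,
    responseText: record.responseText,
    answeredAt: record.answeredAt,
    previousHash,
  });
  return crypto.createHash('sha256').update(payload).digest('hex');
}

// Verification walks the chain in insertion order and recomputes each hash.
function verifyChain(records) {
  let previousHash = '';
  for (const record of records) {
    if (computeAuditHash(record, previousHash) !== record.auditHash) return false;
    previousHash = record.auditHash;
  }
  return true;
}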

11.2. RBAC for Question Visibility

| Role | Access Level |
|---|---|
| Executive | Can see all questions and aggregated judgments |
| Planner | Can see questions for their domain only (e.g., Supply Planner sees supply questions) |
| Viewer | Read-only access to final scenarios, no access to Socratic questions |
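
One way this visibility matrix might be applied when serving questions; the role names follow the table above, while the domain attribute on questions and users is an assumption:

// Filter Socratic questions according to the RBAC matrix above.
function filterQuestionsForUser(questions, user) {
  switch (user.role) {
    case 'executive':
      return questions; // full visibility; aggregated judgments are exposed elsewhere
    case 'planner':
      return questions.filter((q) => q.domain === user.domain); // e.g., Supply Planner sees supply questions
    case 'viewer':
      return []; // viewers see final scenarios only, never the Socratic questions
    default:
      return [];
  }
}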

11.3. Data Retention

  • Socratic questions and judgments retained for 7 years (regulatory compliance)
  • Problem State Vectors retained indefinitely (proprietary learning asset)

12. Testing Requirements

12.1. Unit Tests (80% coverage target)

  • SocraticInquiryService.generateQuestions() with mocked GraphRAG responses
  • SocraticInquiryService.captureHumanJudgment() validates audit trail creation
  • SocraticInquiryService.calculateProblemStateVector() aggregates judgments correctly
  • Prompt template rendering with various decision contexts
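
For example, the generateQuestions unit test might look like the following Jest-style sketch, with the GraphRAG and LLM clients mocked (the constructor and return shapes are assumptions consistent with the service sketch in Section 6.1):

// SocraticInquiryService.generateQuestions() with mocked GraphRAG and LLM responses.
describe('SocraticInquiryService.generateQuestions', () => {
  it('grounds generated questions in retrieved GraphRAG context', async () => {
    const cogneeService = {
      retrieveContext: jest.fn().mockResolvedValue('Board Memo Q3 2024: 40% gross margin floor'),
    };
    const aiManager = {
      generateSocraticPrompt: jest.fn().mockResolvedValue([
        { category: 'assumptions', questionText: 'What assumptions underpin the margin floor?' },
      ]),
    };
    const service = new SocraticInquiryService({ cogneeService, aiManager, auditTrailService: {}, db: {} });

    const questions = await service.generateQuestions({ decisionType: 'pfas-ban-response' }, 'tenant-1');

    expect(cogneeService.retrieveContext).toHaveBeenCalledWith({ decisionType: 'pfas-ban-response' }, 'tenant-1');
    expect(questions).toHaveLength(1);
    expect(questions[0].category).toBe('assumptions');
  });
});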

12.2. Integration Tests

  • E2E Flow: Decision creation → Question generation → User input → Scenario synthesis
  • GraphRAG Integration: Verify correct context retrieval for each decision type
  • LLM Fallback: Simulate Gemini API failure, verify default questions served

12.3. Performance Tests

  • Load Test: 50 concurrent question generation requests (target: < 5s p95)
  • Stress Test: 100 users submitting human judgments simultaneously

13. Migration Strategy

13.1. Phase 1: Dual-Mode Operation (Weeks 1-2)

  • Deploy new schema alongside existing hardcoded questions
  • Add generated_by column to track source ('hardcoded' vs 'dynamic-llm')
  • Feature flag: ENABLE_DYNAMIC_SOCRATIC_QUESTIONS (default: false)

13.2. Phase 2: Pilot with 10% Traffic (Week 3)

  • Enable dynamic generation for 10% of new decisions
  • Monitor latency, question quality, user feedback
  • A/B test: Do dynamic questions increase decision confidence scores?
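
A sketch of how the ENABLE_DYNAMIC_SOCRATIC_QUESTIONS flag and the 10% pilot could be combined in a single gate; the deterministic hash-based sampling keeps a given decision in the same cohort across requests (the DYNAMIC_QUESTIONS_ROLLOUT_PERCENT variable and the sampling helper are assumptions):

const crypto = require('crypto');

// Decide whether a decision uses dynamic LLM questions or the hardcoded set.
function useDynamicQuestions(decisionId) {
  if (process.env.ENABLE_DYNAMIC_SOCRATIC_QUESTIONS !== 'true') return false;

  const rolloutPercent = Number(process.env.DYNAMIC_QUESTIONS_ROLLOUT_PERCENT ?? '10');
  // Deterministic bucket in [0, 100) so the same decision always lands in the same cohort.
  const bucket = parseInt(
    crypto.createHash('sha256').update(decisionId).digest('hex').slice(0, 8),
    16
  ) % 100;
  return bucket < rolloutPercent;
}

// Callers would then set generated_by = 'dynamic-llm' or 'hardcoded' accordingly.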

13.3. Phase 3: Full Rollout (Week 4)

  • Enable for 100% of new decisions
  • Deprecate hardcoded question logic (keep for backward compatibility)
  • Backfill: Optionally generate dynamic questions for historical decisions

13.4. Rollback Plan

  • If dynamic LLM fails, fallback to hardcoded questions immediately
  • Feature flag allows instant rollback without code deployment

14. Success Criteria

MVP (M70) Complete When:

✅ Dynamic question generation functional with < 5s latency
✅ All 5 Socratic categories covered in generated questions
✅ GraphRAG context correctly retrieved and embedded in questions
✅ Human judgments captured in audit trail with immutability guarantees
✅ Problem State Vector calculated and passed to scenario synthesis
✅ UI enforces mandatory question completion before scenario access
✅ 80% unit test coverage, all integration tests passing

15. Dependencies

| Dependency | Status | Required For |
|---|---|---|
| Cognee GraphRAG Integration (M72) | 50% → needs 100% | Context retrieval for QGM |
| Zep Memory Activation (M73) | 40% → needs 100% | Long-term user reasoning pattern learning |
| AIManager.js (Gemini Integration) | ✅ Operational | LLM-based question generation |
| Audit Trail Schema | ✅ Exists, needs enhancement | Immutable human judgment logging |

Document Status: Enhanced with Technical Specifications
Last Updated: 2025-11-14
Milestone: M70 - Socratic Question Generation
Estimated Implementation: 3 weeks (M70 only), 6 weeks (M70-M73 integrated)