82. AI-First Design Philosophy

Status: Accepted
Date: 2025-07-06

Context

Modern AI, particularly Large Language Models (LLMs), offers powerful new capabilities for data analysis, pattern recognition, and decision support, which are especially valuable in the domain of financial markets. We need to decide on a strategy for incorporating these capabilities into the Mercury trading system: we could treat AI as a peripheral feature, or we could make it a core part of the system's design.

Decision

We will adopt an AI-First Design Philosophy. This means that AI, specifically the use of LLMs via the kaido-ollama library, will be treated as a first-class citizen and deeply integrated into the core logic of the application.

This philosophy manifests in several key ways:

  • Core Features Rely on AI: Key system workflows, such as market comparison in the Minerva tournaments or portfolio analysis, will be driven by LLM-powered services.
  • Data Structured for AI: Our data models and API payloads will be designed from the ground up to be easily consumable and understandable by LLMs.
  • Dedicated AI Gateway: A dedicated module, Morpheus (adr://ai-api-gateway), will serve as the single, standardized entry point for all interactions with AI models, encapsulating prompting, parsing, and error-handling logic (see the sketch after this list).
  • Human-in-the-Loop Design: While AI-driven, the system is designed to provide analysis and recommendations to a human operator, not to perform fully autonomous trading initially. The AI's outputs are decision-support tools.
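
To make the gateway idea concrete, here is a minimal TypeScript sketch of what such an entry point could look like. All names in it (MorpheusGateway, MarketSnapshot, compareMarkets, ComparisonVerdict) are hypothetical illustrations rather than the actual Morpheus interface; the sketch also demonstrates the "data structured for AI" principle, pairing flat, self-describing input types with a Zod schema for the model's answer.

```typescript
// A minimal sketch, not the actual Morpheus API: MorpheusGateway,
// MarketSnapshot, compareMarkets, and ComparisonVerdict are hypothetical.
import { z } from "zod";

// "Data structured for AI": flat, self-describing fields an LLM can read
// directly when embedded in a prompt.
export interface MarketSnapshot {
  symbol: string;
  lastPrice: number;
  dayChangePercent: number;
  volume: number;
  recentHeadlines: string[];
}

// Schema the model's JSON answer must satisfy before the rest of the
// application ever sees it.
export const ComparisonVerdict = z.object({
  moreBullish: z.string(), // symbol of the favored market
  confidence: z.number().min(0).max(1),
  reasoning: z.string(), // short explanation shown to the human operator
});
export type ComparisonVerdict = z.infer<typeof ComparisonVerdict>;

// The single, standardized entry point; prompting, parsing, and error
// handling all live behind this interface.
export interface MorpheusGateway {
  compareMarkets(a: MarketSnapshot, b: MarketSnapshot): Promise<ComparisonVerdict>;
}
```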

Consequences

Positive:

  • Significant Competitive Advantage: Deeply integrating AI allows us to create powerful, unique features (like natural language market summaries or sophisticated pattern detection) that are difficult to replicate with traditional algorithms.
  • Qualitative Analysis: LLMs excel at qualitative and comparative analysis, allowing us to ask questions like "Which of these two markets looks more bullish and why?" in a way that is not possible with purely quantitative tools.
  • Rapid Prototyping: We can prototype complex analysis features quickly by writing new prompts, rather than implementing new algorithms from scratch.

Negative:

  • Dependency on a Fast-Moving Technology: The field of LLMs is evolving rapidly. Models, APIs, and best practices can change quickly, which may require us to adapt our implementation.
  • Non-Determinism and "Hallucinations": LLMs can sometimes produce incorrect, nonsensical, or non-deterministic outputs. Their results are not as predictable as traditional algorithms.
  • Cost and Performance: Calls to LLM APIs (whether local or cloud-based) can be slow and computationally expensive compared to traditional code.

Mitigation:

  • Abstraction Layer: The Morpheus gateway provides a stable internal interface that isolates the rest of the application from changes in the underlying AI models or libraries.
  • Validation and Guardrails: All AI-generated output will be passed through a strict validation layer. This includes parsing the output into strongly-typed data structures (using tools like Zod) and applying business rule checks, as sketched after this list. We will never blindly trust or execute raw AI output, and we will use techniques like "Chain of Thought" prompting and multi-step validation to improve reliability.
  • Asynchronous Execution: All expensive AI operations will be executed asynchronously via background jobs (adr://async-analysis), ensuring they do not block the main application threads or impact user-facing performance (see the second sketch below). We will also favor smaller, locally-hosted models for performance-sensitive tasks where possible.
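
As a sketch of what the validation layer could look like, the hypothetical parseVerdict function below checks model output in three steps: JSON syntax, schema conformance via Zod's safeParse, and a business-rule guardrail against hallucinated symbols. The function name and schema fields are illustrative assumptions, not Mercury's actual types.

```typescript
import { z } from "zod";

// Hypothetical schema; only the Zod and JSON.parse calls are real APIs.
const ComparisonVerdict = z.object({
  moreBullish: z.string(),
  confidence: z.number().min(0).max(1),
  reasoning: z.string().min(1),
});

export function parseVerdict(rawModelOutput: string, knownSymbols: string[]) {
  // Step 1: the model must return syntactically valid JSON.
  let candidate: unknown;
  try {
    candidate = JSON.parse(rawModelOutput);
  } catch {
    throw new Error("LLM output was not valid JSON; discarding response");
  }

  // Step 2: the JSON must match the strongly-typed schema.
  const parsed = ComparisonVerdict.safeParse(candidate);
  if (!parsed.success) {
    throw new Error(`LLM output failed schema validation: ${parsed.error.message}`);
  }

  // Step 3: business-rule guardrail — the verdict must reference a market
  // we actually asked about, never a hallucinated symbol.
  if (!knownSymbols.includes(parsed.data.moreBullish)) {
    throw new Error(`LLM named unknown market "${parsed.data.moreBullish}"`);
  }

  return parsed.data;
}
```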
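And a sketch of the asynchronous execution pattern, using BullMQ with a Redis-backed queue purely as a stand-in; the real job infrastructure is whatever adr://async-analysis specifies, and the queue name, payload shape, and the commented-out gateway call are assumptions.

```typescript
// Illustrative only: BullMQ stands in for whichever job system
// adr://async-analysis actually specifies.
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 }; // assumed Redis instance

// Producer side: user-facing code enqueues the analysis and returns
// immediately, so the slow LLM call never blocks a request thread.
const analysisQueue = new Queue("ai-analysis", { connection });

export async function requestPortfolioAnalysis(portfolioId: string): Promise<string> {
  const job = await analysisQueue.add("portfolio-analysis", { portfolioId });
  return job.id!; // caller polls or subscribes for the finished result
}

// Consumer side: a background worker performs the expensive model call.
new Worker(
  "ai-analysis",
  async (job) => {
    // Hypothetical gateway call; stands in for the real Morpheus method:
    // const verdict = await morpheus.analyzePortfolio(job.data.portfolioId);
    // await saveResult(job.id!, verdict);
  },
  { connection },
);
```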