
99. Dynamic TTL Caching for TA Service

Status: Accepted
Date: 2025-07-06

Context

The mercury-ta service provides technical analysis data that is computationally expensive to generate. To ensure good performance and to avoid hitting exchange API rate limits, the Minerva module caches the responses from this service in Redis. A single fixed Time-To-Live (TTL) for all cached data is suboptimal: data for a 1-minute timeframe becomes stale much faster than data for a 1-day timeframe. Caching 1-day data for only 5 minutes wastes most of the benefit of caching, while caching 1-minute data for 24 hours would mean serving uselessly stale data.

Decision

We will implement a Dynamic TTL Caching Strategy within the Minerva module, which is the consumer of the mercury-ta service.

The TTL for a cached indicator result will not be a single fixed value, but will be determined dynamically based on the timeframe of the request.

  • Lower Timeframes (e.g., 1m, 5m, 15m) will have a shorter TTL. For example, the TTL might be equal to the timeframe itself (e.g., cache 5-minute data for 5 minutes).
  • Higher Timeframes (e.g., 1h, 4h, 1d) will have a longer TTL. For example, 4-hour data could be cached for an hour, and 1-day data could be cached for several hours.

This logic will be implemented in the BaseIndicatorService (adr://base-indicator-pattern), making dynamic TTLs a consistent feature of all indicators. The getHistory method will calculate the appropriate TTL from the timeframe parameter before writing a new result to the Redis cache.
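
A minimal sketch of this flow, assuming a TypeScript implementation and an ioredis-style client; the method name fetchFromTaService, the cache key format, and the specific TTL values are illustrative assumptions rather than part of this decision:

    // Sketch: dynamic TTL resolution inside BaseIndicatorService.getHistory.
    import Redis from "ioredis";

    // Timeframe -> TTL in seconds (example values only; see Mitigation below).
    const TTL_BY_TIMEFRAME: Record<string, number> = {
      "1m": 60,
      "5m": 300,
      "15m": 900,
      "1h": 3600,
      "4h": 3600,       // 4-hour data cached for an hour
      "1d": 4 * 3600,   // 1-day data cached for several hours
    };

    const DEFAULT_TTL_SECONDS = 300; // fallback for unrecognised timeframes

    export abstract class BaseIndicatorService {
      constructor(protected readonly redis: Redis) {}

      // Each concrete indicator fetches its own data from mercury-ta.
      protected abstract fetchFromTaService(
        symbol: string,
        timeframe: string,
      ): Promise<unknown>;

      // The dynamic part of the strategy: TTL derived from the timeframe.
      protected resolveTtl(timeframe: string): number {
        return TTL_BY_TIMEFRAME[timeframe] ?? DEFAULT_TTL_SECONDS;
      }

      async getHistory(symbol: string, timeframe: string): Promise<unknown> {
        const key = `ta:${this.constructor.name}:${symbol}:${timeframe}`;

        const cached = await this.redis.get(key);
        if (cached !== null) {
          return JSON.parse(cached);
        }

        const result = await this.fetchFromTaService(symbol, timeframe);
        const ttl = this.resolveTtl(timeframe);
        // "EX" sets the expiry in seconds.
        await this.redis.set(key, JSON.stringify(result), "EX", ttl);
        return result;
      }
    }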

Consequences

Positive:

  • Optimal Balance of Freshness and Performance: This strategy ensures that we are not serving stale data on short timeframes, while also maximizing the performance benefits of caching for long timeframes. It's a much more efficient use of our cache and the mercury-ta service.
  • Reduced Load on External APIs: By holding onto higher-timeframe data for longer, we significantly reduce the number of redundant calculations and data fetches from the underlying exchange API.
  • Intelligent Resource Management: This is a smarter, more context-aware way to manage our system's resources (both the cache memory and the CPU cycles of the mercury-ta service).

Negative:

  • Slightly More Complex Logic: The caching logic is no longer a simple set(key, value, ttl). It requires a small amount of logic to determine the TTL based on the request parameters.
  • Configuration Management: The mapping between timeframes and TTLs will need to be configured somewhere, adding a small piece of configuration to manage.

Mitigation:

  • Centralized Logic: The dynamic TTL logic will be implemented once, cleanly, inside the BaseIndicatorService. It will not be scattered across the application.
  • Simple Configuration: The TTL mapping will be stored in a simple, well-documented configuration file, making it easy to review and adjust the caching strategy without changing the code. For example (TTL values in seconds): { "1m": 60, "5m": 300, "1h": 3600, ... }.
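
As one possible shape for this configuration, a small loader could validate the mapping at startup. The file name ttl-config.json and the validation rules below are assumptions, not part of the decision:

    // Sketch: loading and validating the timeframe -> TTL mapping from a
    // JSON config file. File name and error handling are illustrative only.
    import { readFileSync } from "node:fs";

    export type TtlConfig = Record<string, number>;

    export function loadTtlConfig(path = "ttl-config.json"): TtlConfig {
      const raw = JSON.parse(readFileSync(path, "utf-8")) as TtlConfig;

      // Every TTL must be a positive number of seconds.
      for (const [timeframe, ttl] of Object.entries(raw)) {
        if (!Number.isFinite(ttl) || ttl <= 0) {
          throw new Error(`Invalid TTL for timeframe "${timeframe}": ${ttl}`);
        }
      }
      return raw;
    }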