Closes Stream D per docs/v2/implementation/phase-6-1-resilience-and-observability.md.

New Configuration.LocalCache types (alongside the existing single-file LiteDbConfigCache):

- GenerationSealedCache — file-per-generation sealed snapshots per decision #148. Each SealAsync writes <cache-root>/<clusterId>/<generationId>.db as a read-only LiteDB file, then atomically publishes the CURRENT pointer via temp-file + File.Replace. Prior-generation files stay on disk for audit. Mixed-generation reads are structurally impossible: ReadCurrentAsync opens the single file named by CURRENT. Corruption of the pointer or of the sealed file raises GenerationCacheUnavailableException — the cache fails closed and never falls back silently to an older generation. TryGetCurrentGenerationId returns the pointer value (or null) for diagnostics.
- StaleConfigFlag — a thread-safe (Volatile.Read/Write) bool. MarkStale is called when a read fell back to the cache; MarkFresh when a central-DB read succeeded. Surfaced in the /healthz body and on Admin /hosts (Stream C wiring is already in place).
- ResilientConfigReader — wraps a central-DB fetch function with the Stream D.2 pipeline: timeout 2 s → retry N× jittered (skipped when retryCount=0) → fallback to the sealed cache. Toggles StaleConfigFlag per outcome. Read path only — the write path is expected to bypass this wrapper and fail hard on DB outage so inconsistent writes never land. Cancellation passes through and is NOT retried.

Configuration.csproj:

- Polly.Core 8.6.6 + Microsoft.Extensions.Logging.Abstractions added.

Tests (17 new, all pass):

- GenerationSealedCacheTests (10): first boot with no snapshot throws GenerationCacheUnavailableException (D.4 scenario C); seal-then-read round trip; sealed file is ReadOnly on disk; pointer advances to latest; prior-generation file preserved; corrupt sealed file fails closed; missing sealed file fails closed; corrupt pointer fails closed (D.4 scenario B); sealing the same generation twice is idempotent; independent clusters don't interfere.
- ResilientConfigReaderTests (4): central-DB success returns the value and marks fresh; central-DB failure exhausts retries, falls back to the cache, and marks stale (D.4 scenario A); central DB and cache both unavailable throws; cancellation is not retried.
- StaleConfigFlagTests (3): default is fresh; toggles; concurrent writes converge.

Full-solution dotnet test: 1033 passing (baseline 906, +127 net across Phase 6.1 Streams A/B/C/D). The pre-existing Client.CLI Subscribe flake is unchanged.

Integration into the Configuration read paths (DriverInstance enumeration, LdapGroupRoleMapping fetches, etc.) plus the sp_PublishGeneration hook that writes sealed files lands in the Phase 6.1 Stream E / Admin-refresh PR, where the DB integration surfaces are already touched. The existing LiteDbConfigCache continues serving its single-file role for the NodeBootstrap path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
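The atomic CURRENT-pointer publish described above can be sketched as follows. This is a minimal illustration rather than the shipped SealAsync: PublishCurrentPointer and the directory layout are hypothetical stand-ins, and it assumes the `<generationId>.db` file has already been written and sealed.

```csharp
using System.IO;

static class CurrentPointerSketch
{
    // Hypothetical stand-in for the pointer publish inside SealAsync: write the
    // new pointer to a temp file, then swap it into place. File.Replace (and
    // File.Move on the very first publish) swap whole files, so a reader of
    // CURRENT never observes a half-written pointer — it sees either the old
    // generation id or the new one.
    public static void PublishCurrentPointer(string clusterDir, string generationId)
    {
        var pointerPath = Path.Combine(clusterDir, "CURRENT");
        var tempPath = pointerPath + ".tmp";

        File.WriteAllText(tempPath, generationId);

        if (File.Exists(pointerPath))
            File.Replace(tempPath, pointerPath, destinationBackupFileName: null);
        else
            File.Move(tempPath, pointerPath);
    }
}
```

Readers then open only `<clusterDir>/<generationId>.db` as named by the pointer, which is what makes mixed-generation reads structurally impossible.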
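StaleConfigFlag's shape, as described above (a thread-safe bool via Volatile.Read/Write), is roughly the following sketch; the IsStale property name and field layout are assumptions, only MarkStale/MarkFresh come from the description.

```csharp
using System.Threading;

// Sketch of the flag described above: a lock-free bool with acquire/release
// semantics via Volatile.Read/Write. MarkFresh is called after a successful
// central-DB read, MarkStale after serving a sealed-cache snapshot.
public sealed class StaleConfigFlag
{
    private bool _isStale; // false = fresh, the default at startup

    public bool IsStale => Volatile.Read(ref _isStale);

    public void MarkStale() => Volatile.Write(ref _isStale, true);

    public void MarkFresh() => Volatile.Write(ref _isStale, false);
}
```

Because writes from any outcome converge on a single volatile field, concurrent MarkStale/MarkFresh calls cannot corrupt the flag — the last write wins, which is all /healthz needs.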
91 lines · 3.8 KiB · C#
using Microsoft.Extensions.Logging;
using Polly;
using Polly.Retry;
using Polly.Timeout;

namespace ZB.MOM.WW.OtOpcUa.Configuration.LocalCache;

/// <summary>
/// Wraps a central-DB fetch function with Phase 6.1 Stream D.2 resilience:
/// <b>timeout 2 s → retry 3× jittered → fallback to sealed cache</b>. Maintains the
/// <see cref="StaleConfigFlag"/> — fresh on central-DB success, stale on cache fallback.
/// </summary>
/// <remarks>
/// <para>Read-path only per plan. The write path (draft save, publish) bypasses this
/// wrapper entirely and fails hard on DB outage so inconsistent writes never land.</para>
///
/// <para>Fallback is triggered by <b>any exception</b> the fetch raises (central-DB
/// unreachable, SqlException, timeout). If the sealed cache also fails (no pointer,
/// corrupt file, etc.), <see cref="GenerationCacheUnavailableException"/> surfaces — the caller
/// must fail the current request (InitializeAsync for a driver, etc.).</para>
/// </remarks>
public sealed class ResilientConfigReader
{
    private readonly GenerationSealedCache _cache;
    private readonly StaleConfigFlag _staleFlag;
    private readonly ResiliencePipeline _pipeline;
    private readonly ILogger<ResilientConfigReader> _logger;

    public ResilientConfigReader(
        GenerationSealedCache cache,
        StaleConfigFlag staleFlag,
        ILogger<ResilientConfigReader> logger,
        TimeSpan? timeout = null,
        int retryCount = 3)
    {
        _cache = cache;
        _staleFlag = staleFlag;
        _logger = logger;

        var builder = new ResiliencePipelineBuilder()
            .AddTimeout(new TimeoutStrategyOptions { Timeout = timeout ?? TimeSpan.FromSeconds(2) });

        if (retryCount > 0)
        {
            builder.AddRetry(new RetryStrategyOptions
            {
                MaxRetryAttempts = retryCount,
                BackoffType = DelayBackoffType.Exponential,
                UseJitter = true,
                Delay = TimeSpan.FromMilliseconds(100),
                MaxDelay = TimeSpan.FromSeconds(1),
                ShouldHandle = new PredicateBuilder().Handle<Exception>(ex => ex is not OperationCanceledException),
            });
        }

        _pipeline = builder.Build();
    }

    /// <summary>
    /// Execute <paramref name="centralFetch"/> through the resilience pipeline. On full failure
    /// (post-retry), reads the sealed cache for <paramref name="clusterId"/> and passes the
    /// snapshot to <paramref name="fromSnapshot"/> to extract the requested shape.
    /// </summary>
    public async ValueTask<T> ReadAsync<T>(
        string clusterId,
        Func<CancellationToken, ValueTask<T>> centralFetch,
        Func<GenerationSnapshot, T> fromSnapshot,
        CancellationToken cancellationToken)
    {
        ArgumentException.ThrowIfNullOrWhiteSpace(clusterId);
        ArgumentNullException.ThrowIfNull(centralFetch);
        ArgumentNullException.ThrowIfNull(fromSnapshot);

        try
        {
            var result = await _pipeline.ExecuteAsync(centralFetch, cancellationToken).ConfigureAwait(false);
            _staleFlag.MarkFresh();
            return result;
        }
        catch (Exception ex) when (ex is not OperationCanceledException)
        {
            _logger.LogWarning(ex, "Central-DB read failed after retries; falling back to sealed cache for cluster {ClusterId}", clusterId);

            // GenerationCacheUnavailableException surfaces intentionally — fails the caller's
            // operation. StaleConfigFlag stays unchanged; the flag only flips when we actually
            // served a cache snapshot.
            var snapshot = await _cache.ReadCurrentAsync(clusterId, cancellationToken).ConfigureAwait(false);
            _staleFlag.MarkStale();
            return fromSnapshot(snapshot);
        }
    }
}
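For review context, the intent of the ShouldHandle predicate (retry any transient exception, never retry cancellation) can be illustrated without Polly. RetryIgnoringCancellation below is a hypothetical, dependency-free sketch, not project code, and it omits the real pipeline's backoff delays.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class RetrySketch
{
    // Hypothetical sketch of the retry predicate's behavior: any exception is
    // retried up to maxRetries additional attempts, except
    // OperationCanceledException, which propagates immediately — matching the
    // ShouldHandle exclusion in ResilientConfigReader's pipeline.
    public static async ValueTask<T> RetryIgnoringCancellation<T>(
        Func<CancellationToken, ValueTask<T>> fetch,
        int maxRetries,
        CancellationToken ct)
    {
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                return await fetch(ct).ConfigureAwait(false);
            }
            catch (Exception ex) when (ex is not OperationCanceledException && attempt < maxRetries)
            {
                // Transient failure: fall through and try again. The real
                // pipeline also waits here (exponential backoff + jitter,
                // 100 ms base, 1 s cap).
            }
        }
    }
}
```

This is why the "cancellation not retried" test can assert exactly one fetch invocation: cancellation bypasses the retry loop entirely.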