Phase 6.1 Stream B.3/B.4/B.5 — MemoryRecycle + ScheduledRecycleScheduler + demand-aware WedgeDetector

Closes out Stream B per docs/v2/implementation/phase-6-1-resilience-and-observability.md.

Core.Abstractions:
- IDriverSupervisor — process-level supervisor contract a Tier C driver's
  out-of-process topology provides (Galaxy Proxy/Supervisor implements this in
  a follow-up Driver.Galaxy wiring PR). Concerns: DriverInstanceId + RecycleAsync.
  Tier A/B drivers don't implement this; Stream B code asserts tier == C before
  ever calling it.

Core.Stability:
- MemoryRecycle — companion to MemoryTracking. On HardBreach, invokes the
  supervisor IFF tier == C AND a supervisor is wired. Tier A/B HardBreach logs
  a promotion-to-Tier-C recommendation and returns false. Soft/None/Warming
  never triggers a recycle at any tier.
- ScheduledRecycleScheduler — Tier C opt-in periodic recycler per decision #67.
  Ctor throws for Tier A/B (structural guard — scheduled recycle on an
  in-process driver would kill every OPC UA session and every co-hosted
  driver). TickAsync(now) advances the schedule by one interval per fire;
  RequestRecycleNowAsync drives an ad-hoc recycle without shifting the cron.
- WedgeDetector — demand-aware per decision #147. Classify(state, demand, now)
  returns:
    * NotApplicable when driver state != Healthy
    * Idle when Healthy + no pending work (bulkhead=0 && monitored=0 && historic=0)
    * Healthy when Healthy + pending work + progress within threshold
    * Faulted when Healthy + pending work + no progress within threshold
  Threshold clamps to min 60 s. DemandSignal.HasPendingWork ORs the three counters.
  The three false-wedge cases the plan calls out never fault: idle
  subscription-only and write-only burst with drained bulkhead classify as Idle;
  slow historian backfill making progress stays Healthy.

Tests (22 new, all pass):
- MemoryRecycleTests (7): Tier C hard-breach requests recycle; Tier A/B
  hard-breach never requests; Tier C without supervisor no-ops; soft-breach
  at every tier never requests; None/Warming never request.
- ScheduledRecycleSchedulerTests (6): ctor throws for A/B; zero/negative
  interval throws; tick before due no-ops; tick at/after due fires once and
  advances; RequestRecycleNow fires immediately without shifting schedule;
  multiple fires across ticks advance one interval each.
- WedgeDetectorTests (9): threshold clamp to 60 s; unhealthy driver always
  NotApplicable; idle subscription stays Idle; pending+fresh progress stays
  Healthy; pending+stale progress is Faulted; MonitoredItems active but no
  publish is Faulted; MonitoredItems active with fresh publish stays Healthy;
  historian backfill with fresh progress stays Healthy; write-only burst with
  empty bulkhead is Idle; HasPendingWork theory for any non-zero counter.

Full solution dotnet test: 989 passing (baseline 906, +83 for Phase 6.1 so far).
Pre-existing Client.CLI Subscribe flake unchanged.

Stream B complete. Next up: Stream C (health endpoints + structured logging).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
commit 1d9008e354 (parent ef6b0bb8fc)
Author: Joseph Doherty
Date:   2026-04-19 08:03:18 -04:00

7 changed files with 562 additions and 0 deletions

@@ -0,0 +1,26 @@
namespace ZB.MOM.WW.OtOpcUa.Core.Abstractions;
/// <summary>
/// Process-level supervisor contract a Tier C driver's out-of-process topology provides
/// (e.g. <c>Driver.Galaxy.Proxy/Supervisor/</c>). Concerns: restart the Host process when a
/// hard fault is detected (memory breach, wedge, scheduled recycle window).
/// </summary>
/// <remarks>
/// Per <c>docs/v2/plan.md</c> decisions #68, #73-74, and #145. Tier A/B drivers do NOT have
/// a supervisor because they run in-process — recycling would kill every OPC UA session and
/// every co-hosted driver. The Core.Stability layer only invokes this interface for Tier C
/// instances after asserting the tier via <see cref="DriverTypeMetadata.Tier"/>.
/// </remarks>
public interface IDriverSupervisor
{
/// <summary>Driver instance this supervisor governs.</summary>
string DriverInstanceId { get; }
/// <summary>
/// Request the supervisor to recycle (terminate + restart) the Host process. Implementations
/// are expected to be idempotent under repeat calls during an in-flight recycle.
/// </summary>
/// <param name="reason">Human-readable reason — flows into the supervisor's logs.</param>
/// <param name="cancellationToken">Cancels the recycle request; an in-flight restart is not interrupted.</param>
Task RecycleAsync(string reason, CancellationToken cancellationToken);
}

@@ -0,0 +1,65 @@
using Microsoft.Extensions.Logging;
using ZB.MOM.WW.OtOpcUa.Core.Abstractions;
namespace ZB.MOM.WW.OtOpcUa.Core.Stability;
/// <summary>
/// Tier C-only process-recycle companion to <see cref="MemoryTracking"/>. On a
/// <see cref="MemoryTrackingAction.HardBreach"/> signal, invokes the supplied
/// <see cref="IDriverSupervisor"/> to restart the out-of-process Host.
/// </summary>
/// <remarks>
/// Per <c>docs/v2/plan.md</c> decisions #74 and #145. Tier A/B hard-breach on an in-process
/// driver would kill every OPC UA session and every co-hosted driver, so for Tier A/B this
/// class logs a <b>promotion-to-Tier-C recommendation</b> and does NOT invoke any supervisor.
/// A future tier-migration workflow acts on the recommendation.
/// </remarks>
public sealed class MemoryRecycle
{
private readonly DriverTier _tier;
private readonly IDriverSupervisor? _supervisor;
private readonly ILogger<MemoryRecycle> _logger;
public MemoryRecycle(DriverTier tier, IDriverSupervisor? supervisor, ILogger<MemoryRecycle> logger)
{
_tier = tier;
_supervisor = supervisor;
_logger = logger;
}
/// <summary>
/// Handle a <see cref="MemoryTracking"/> classification for the driver. For Tier C with a
/// wired supervisor, <c>HardBreach</c> triggers <see cref="IDriverSupervisor.RecycleAsync"/>.
/// All other combinations are no-ops with respect to process state (soft breaches + Tier A/B
/// hard breaches just log).
/// </summary>
/// <returns>True when a recycle was requested; false otherwise.</returns>
public async Task<bool> HandleAsync(MemoryTrackingAction action, long footprintBytes, CancellationToken cancellationToken)
{
switch (action)
{
case MemoryTrackingAction.SoftBreach:
_logger.LogWarning(
"Memory soft-breach on driver {DriverId}: footprint={Footprint:N0} bytes, tier={Tier}. Surfaced to Admin; no action.",
_supervisor?.DriverInstanceId ?? "(unknown)", footprintBytes, _tier);
return false;
case MemoryTrackingAction.HardBreach when _tier == DriverTier.C && _supervisor is not null:
_logger.LogError(
"Memory hard-breach on Tier C driver {DriverId}: footprint={Footprint:N0} bytes. Requesting supervisor recycle.",
_supervisor.DriverInstanceId, footprintBytes);
await _supervisor.RecycleAsync($"Memory hard-breach: {footprintBytes} bytes", cancellationToken).ConfigureAwait(false);
return true;
case MemoryTrackingAction.HardBreach:
_logger.LogError(
"Memory hard-breach on Tier {Tier} in-process driver {DriverId}: footprint={Footprint:N0} bytes. " +
"Recommending promotion to Tier C; NOT auto-killing (decisions #74, #145).",
_tier, _supervisor?.DriverInstanceId ?? "(unknown)", footprintBytes);
return false;
default:
return false;
}
}
}
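
How a host might feed <c>MemoryTracking</c> classifications into <c>MemoryRecycle</c> — a minimal sketch only; the sampling cadence, the `memoryTracking.Classify(footprint)` call shape, and the loop's hosting are assumptions, not this repo's actual API:

```csharp
// Hypothetical sampling loop — cadence and Classify signature are assumptions.
var recycle = new MemoryRecycle(DriverTier.C, supervisor, logger);

while (!stoppingToken.IsCancellationRequested)
{
    long footprint = GC.GetTotalMemory(forceFullCollection: false);
    MemoryTrackingAction action = memoryTracking.Classify(footprint); // assumed shape
    bool recycled = await recycle.HandleAsync(action, footprint, stoppingToken);
    if (recycled)
        break; // supervisor is restarting the Host; stop sampling this process
    await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
}
```

On Tier A/B the same loop is harmless: HandleAsync never returns true there, so the loop just keeps sampling and logging.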

@@ -0,0 +1,86 @@
using Microsoft.Extensions.Logging;
using ZB.MOM.WW.OtOpcUa.Core.Abstractions;
namespace ZB.MOM.WW.OtOpcUa.Core.Stability;
/// <summary>
/// Tier C opt-in periodic-recycle scheduler per <c>docs/v2/plan.md</c> decision #67.
/// A tick method driven by the caller (fed by a background timer in production; by a test
/// clock in unit tests) decides whether the configured interval has elapsed and, if so, drives the
/// supplied <see cref="IDriverSupervisor"/> to recycle the Host.
/// </summary>
/// <remarks>
/// Tier A/B drivers MUST NOT use this class — scheduled recycle for in-process drivers would
/// kill every OPC UA session and every co-hosted driver. The ctor throws when constructed
/// with any tier other than C to make the misuse structurally impossible.
///
/// <para>Keeps no background thread of its own — callers invoke <see cref="TickAsync"/> on
/// their ambient scheduler tick (Phase 6.1 Stream C's health-endpoint host runs one). That
/// decouples the unit under test from wall-clock time and thread-pool scheduling.</para>
/// </remarks>
public sealed class ScheduledRecycleScheduler
{
private readonly TimeSpan _recycleInterval;
private readonly IDriverSupervisor _supervisor;
private readonly ILogger<ScheduledRecycleScheduler> _logger;
private DateTime _nextRecycleUtc;
/// <summary>
/// Construct the scheduler for a Tier C driver. Throws if <paramref name="tier"/> isn't C.
/// </summary>
/// <param name="tier">Driver tier; must be <see cref="DriverTier.C"/>.</param>
/// <param name="recycleInterval">Interval between recycles (e.g. 7 days).</param>
/// <param name="startUtc">Anchor time; next recycle fires at <paramref name="startUtc"/> + <paramref name="recycleInterval"/>.</param>
/// <param name="supervisor">Supervisor that performs the actual recycle.</param>
/// <param name="logger">Diagnostic sink.</param>
public ScheduledRecycleScheduler(
DriverTier tier,
TimeSpan recycleInterval,
DateTime startUtc,
IDriverSupervisor supervisor,
ILogger<ScheduledRecycleScheduler> logger)
{
if (tier != DriverTier.C)
throw new ArgumentException(
$"ScheduledRecycleScheduler is Tier C only (got {tier}). " +
"In-process drivers must not use scheduled recycle; see decisions #74 and #145.",
nameof(tier));
if (recycleInterval <= TimeSpan.Zero)
throw new ArgumentException("RecycleInterval must be positive.", nameof(recycleInterval));
_recycleInterval = recycleInterval;
_supervisor = supervisor;
_logger = logger;
_nextRecycleUtc = startUtc + recycleInterval;
}
/// <summary>Next scheduled recycle UTC. Advances by <see cref="RecycleInterval"/> on each fire.</summary>
public DateTime NextRecycleUtc => _nextRecycleUtc;
/// <summary>Recycle interval this scheduler was constructed with.</summary>
public TimeSpan RecycleInterval => _recycleInterval;
/// <summary>
/// Tick the scheduler forward. If <paramref name="utcNow"/> is past
/// <see cref="NextRecycleUtc"/>, requests a recycle from the supervisor and advances
/// <see cref="NextRecycleUtc"/> by exactly one interval. Returns true when a recycle fired.
/// </summary>
public async Task<bool> TickAsync(DateTime utcNow, CancellationToken cancellationToken)
{
if (utcNow < _nextRecycleUtc)
return false;
_logger.LogInformation(
"Scheduled recycle due for Tier C driver {DriverId} at {Now:o}; advancing next to {Next:o}.",
_supervisor.DriverInstanceId, utcNow, _nextRecycleUtc + _recycleInterval);
await _supervisor.RecycleAsync("Scheduled periodic recycle", cancellationToken).ConfigureAwait(false);
_nextRecycleUtc += _recycleInterval;
return true;
}
/// <summary>Request an immediate recycle outside the schedule (e.g. MemoryRecycle hard-breach escalation).</summary>
public Task RequestRecycleNowAsync(string reason, CancellationToken cancellationToken) =>
_supervisor.RecycleAsync(reason, cancellationToken);
}
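
The remarks leave the tick cadence to the caller, and Stream C's health-endpoint host doesn't exist yet. Until it does, a background loop could drive the scheduler like this — a sketch, with the weekly interval and one-hour tick cadence chosen for illustration:

```csharp
// Hypothetical tick host — cadence and hosting are assumptions; Stream C's
// health-endpoint host is the intended production driver of TickAsync.
var scheduler = new ScheduledRecycleScheduler(
    DriverTier.C, TimeSpan.FromDays(7), DateTime.UtcNow, supervisor, logger);

using var timer = new PeriodicTimer(TimeSpan.FromHours(1));
while (await timer.WaitForNextTickAsync(stoppingToken))
{
    // Fires at most once per tick and advances NextRecycleUtc by exactly
    // one interval when it does; early ticks are cheap no-ops.
    await scheduler.TickAsync(DateTime.UtcNow, stoppingToken);
}
```

A coarse tick is fine here: the fire time only needs to land within one tick period of the scheduled instant, and the schedule itself never drifts because advancement is interval-based, not tick-based.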

@@ -0,0 +1,81 @@
using ZB.MOM.WW.OtOpcUa.Core.Abstractions;
namespace ZB.MOM.WW.OtOpcUa.Core.Stability;
/// <summary>
/// Demand-aware driver-wedge detector per <c>docs/v2/plan.md</c> decision #147.
/// Flips a driver to <see cref="WedgeVerdict.Faulted"/> only when BOTH of the following hold:
/// (a) there is pending work outstanding, AND (b) no progress has been observed for longer
/// than <see cref="Threshold"/>. Idle drivers, write-only burst drivers, and subscription-only
/// drivers whose signals don't arrive regularly are never flagged as wedged (they classify as
/// <see cref="WedgeVerdict.Idle"/> or <see cref="WedgeVerdict.Healthy"/>).
/// </summary>
/// <remarks>
/// <para>Pending work signal is supplied by the caller via <see cref="DemandSignal"/>:
/// non-zero Polly bulkhead depth, ≥1 active MonitoredItem, or ≥1 queued historian read
/// each qualifies. The detector itself is stateless: it keeps no history buffer and no
/// last-verdict memory; each <see cref="Classify"/> call is self-contained, with the
/// caller supplying <c>LastProgressUtc</c>.</para>
///
/// <para>Default threshold per plan: <c>5 × PublishingInterval</c>, with a minimum of 60 s.
/// Concrete values are driver-agnostic and configured per-instance by the caller.</para>
/// </remarks>
public sealed class WedgeDetector
{
/// <summary>Wedge-detection threshold; pass &lt; 60 s and the detector clamps to 60 s.</summary>
public TimeSpan Threshold { get; }
/// <summary>Construct the detector; thresholds below 60 s clamp to 60 s.</summary>
public WedgeDetector(TimeSpan threshold)
{
Threshold = threshold < TimeSpan.FromSeconds(60) ? TimeSpan.FromSeconds(60) : threshold;
}
/// <summary>
/// Classify the current state against the demand signal. Does not retain state across
/// calls — each call is self-contained; the caller owns the <c>LastProgressUtc</c> clock.
/// </summary>
public WedgeVerdict Classify(DriverState state, DemandSignal demand, DateTime utcNow)
{
if (state != DriverState.Healthy)
return WedgeVerdict.NotApplicable;
if (!demand.HasPendingWork)
return WedgeVerdict.Idle;
var sinceProgress = utcNow - demand.LastProgressUtc;
return sinceProgress > Threshold ? WedgeVerdict.Faulted : WedgeVerdict.Healthy;
}
}
/// <summary>
/// Caller-supplied demand snapshot. All three counters are OR'd — any non-zero means work
/// is outstanding, which is the trigger for checking the <see cref="LastProgressUtc"/> clock.
/// </summary>
/// <param name="BulkheadDepth">Polly bulkhead depth (in-flight capability calls).</param>
/// <param name="ActiveMonitoredItems">Number of live OPC UA MonitoredItems bound to this driver.</param>
/// <param name="QueuedHistoryReads">Pending historian-read requests the driver owes the server.</param>
/// <param name="LastProgressUtc">Last time the driver reported a successful unit of work (read, subscribe-ack, publish).</param>
public readonly record struct DemandSignal(
int BulkheadDepth,
int ActiveMonitoredItems,
int QueuedHistoryReads,
DateTime LastProgressUtc)
{
/// <summary>True when any of the three counters is &gt; 0.</summary>
public bool HasPendingWork => BulkheadDepth > 0 || ActiveMonitoredItems > 0 || QueuedHistoryReads > 0;
}
/// <summary>Outcome of a single <see cref="WedgeDetector.Classify"/> call.</summary>
public enum WedgeVerdict
{
/// <summary>Driver wasn't Healthy to begin with — wedge detection doesn't apply.</summary>
NotApplicable,
/// <summary>Driver claims Healthy + no pending work → stays Healthy.</summary>
Idle,
/// <summary>Driver claims Healthy + has pending work + has made progress within the threshold → stays Healthy.</summary>
Healthy,
/// <summary>Driver claims Healthy + has pending work + has NOT made progress within the threshold → wedged.</summary>
Faulted,
}
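
Putting the pieces together, a health-probe call site might look like the following sketch. The 1 s publishing interval and every snapshot source name (`bulkhead.InFlightCount`, `progressClock`, etc.) are illustrative assumptions, not this repo's API:

```csharp
// Threshold per plan: 5 × PublishingInterval, clamped to ≥ 60 s by the ctor.
var publishingInterval = TimeSpan.FromSeconds(1);          // assumed per-instance config
var detector = new WedgeDetector(publishingInterval * 5);  // 5 s requested → clamps to 60 s

// Snapshot demand counters from wherever the host tracks them (assumed names).
var demand = new DemandSignal(
    BulkheadDepth: bulkhead.InFlightCount,
    ActiveMonitoredItems: subscriptions.MonitoredItemCount,
    QueuedHistoryReads: historyQueue.Count,
    LastProgressUtc: progressClock.LastProgressUtc);

WedgeVerdict verdict = detector.Classify(driver.State, demand, DateTime.UtcNow);
if (verdict == WedgeVerdict.Faulted)
    await supervisor.RecycleAsync("Wedge: pending work with stale progress", cancellationToken);
```

Because Classify is stateless, the probe can run at any cadence without warm-up; only the caller's progress clock needs to be updated on every successful unit of work.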

@@ -0,0 +1,91 @@
using Microsoft.Extensions.Logging.Abstractions;
using Shouldly;
using Xunit;
using ZB.MOM.WW.OtOpcUa.Core.Abstractions;
using ZB.MOM.WW.OtOpcUa.Core.Stability;
namespace ZB.MOM.WW.OtOpcUa.Core.Tests.Stability;
[Trait("Category", "Unit")]
public sealed class MemoryRecycleTests
{
[Fact]
public async Task TierC_HardBreach_RequestsSupervisorRecycle()
{
var supervisor = new FakeSupervisor();
var recycle = new MemoryRecycle(DriverTier.C, supervisor, NullLogger<MemoryRecycle>.Instance);
var requested = await recycle.HandleAsync(MemoryTrackingAction.HardBreach, 2_000_000_000, CancellationToken.None);
requested.ShouldBeTrue();
supervisor.RecycleCount.ShouldBe(1);
supervisor.LastReason.ShouldContain("hard-breach");
}
[Theory]
[InlineData(DriverTier.A)]
[InlineData(DriverTier.B)]
public async Task InProcessTier_HardBreach_NeverRequestsRecycle(DriverTier tier)
{
var supervisor = new FakeSupervisor();
var recycle = new MemoryRecycle(tier, supervisor, NullLogger<MemoryRecycle>.Instance);
var requested = await recycle.HandleAsync(MemoryTrackingAction.HardBreach, 2_000_000_000, CancellationToken.None);
requested.ShouldBeFalse("Tier A/B hard-breach logs a promotion recommendation only (decisions #74, #145)");
supervisor.RecycleCount.ShouldBe(0);
}
[Fact]
public async Task TierC_WithoutSupervisor_HardBreach_NoOp()
{
var recycle = new MemoryRecycle(DriverTier.C, supervisor: null, NullLogger<MemoryRecycle>.Instance);
var requested = await recycle.HandleAsync(MemoryTrackingAction.HardBreach, 2_000_000_000, CancellationToken.None);
requested.ShouldBeFalse("no supervisor → no recycle path; action logged only");
}
[Theory]
[InlineData(DriverTier.A)]
[InlineData(DriverTier.B)]
[InlineData(DriverTier.C)]
public async Task SoftBreach_NeverRequestsRecycle(DriverTier tier)
{
var supervisor = new FakeSupervisor();
var recycle = new MemoryRecycle(tier, supervisor, NullLogger<MemoryRecycle>.Instance);
var requested = await recycle.HandleAsync(MemoryTrackingAction.SoftBreach, 1_000_000_000, CancellationToken.None);
requested.ShouldBeFalse("soft-breach is surface-only at every tier");
supervisor.RecycleCount.ShouldBe(0);
}
[Theory]
[InlineData(MemoryTrackingAction.None)]
[InlineData(MemoryTrackingAction.Warming)]
public async Task NonBreachActions_NoOp(MemoryTrackingAction action)
{
var supervisor = new FakeSupervisor();
var recycle = new MemoryRecycle(DriverTier.C, supervisor, NullLogger<MemoryRecycle>.Instance);
var requested = await recycle.HandleAsync(action, 100_000_000, CancellationToken.None);
requested.ShouldBeFalse();
supervisor.RecycleCount.ShouldBe(0);
}
private sealed class FakeSupervisor : IDriverSupervisor
{
public string DriverInstanceId => "fake-tier-c";
public int RecycleCount { get; private set; }
public string? LastReason { get; private set; }
public Task RecycleAsync(string reason, CancellationToken cancellationToken)
{
RecycleCount++;
LastReason = reason;
return Task.CompletedTask;
}
}
}

@@ -0,0 +1,101 @@
using Microsoft.Extensions.Logging.Abstractions;
using Shouldly;
using Xunit;
using ZB.MOM.WW.OtOpcUa.Core.Abstractions;
using ZB.MOM.WW.OtOpcUa.Core.Stability;
namespace ZB.MOM.WW.OtOpcUa.Core.Tests.Stability;
[Trait("Category", "Unit")]
public sealed class ScheduledRecycleSchedulerTests
{
private static readonly DateTime T0 = new(2026, 4, 19, 0, 0, 0, DateTimeKind.Utc);
private static readonly TimeSpan Weekly = TimeSpan.FromDays(7);
[Theory]
[InlineData(DriverTier.A)]
[InlineData(DriverTier.B)]
public void TierAOrB_Ctor_Throws(DriverTier tier)
{
var supervisor = new FakeSupervisor();
Should.Throw<ArgumentException>(() => new ScheduledRecycleScheduler(
tier, Weekly, T0, supervisor, NullLogger<ScheduledRecycleScheduler>.Instance));
}
[Fact]
public void ZeroOrNegativeInterval_Throws()
{
var supervisor = new FakeSupervisor();
Should.Throw<ArgumentException>(() => new ScheduledRecycleScheduler(
DriverTier.C, TimeSpan.Zero, T0, supervisor, NullLogger<ScheduledRecycleScheduler>.Instance));
Should.Throw<ArgumentException>(() => new ScheduledRecycleScheduler(
DriverTier.C, TimeSpan.FromSeconds(-1), T0, supervisor, NullLogger<ScheduledRecycleScheduler>.Instance));
}
[Fact]
public async Task Tick_BeforeNextRecycle_NoOp()
{
var supervisor = new FakeSupervisor();
var sch = new ScheduledRecycleScheduler(DriverTier.C, Weekly, T0, supervisor, NullLogger<ScheduledRecycleScheduler>.Instance);
var fired = await sch.TickAsync(T0 + TimeSpan.FromDays(6), CancellationToken.None);
fired.ShouldBeFalse();
supervisor.RecycleCount.ShouldBe(0);
}
[Fact]
public async Task Tick_AtOrAfterNextRecycle_FiresOnce_AndAdvances()
{
var supervisor = new FakeSupervisor();
var sch = new ScheduledRecycleScheduler(DriverTier.C, Weekly, T0, supervisor, NullLogger<ScheduledRecycleScheduler>.Instance);
var fired = await sch.TickAsync(T0 + Weekly + TimeSpan.FromMinutes(1), CancellationToken.None);
fired.ShouldBeTrue();
supervisor.RecycleCount.ShouldBe(1);
sch.NextRecycleUtc.ShouldBe(T0 + Weekly + Weekly);
}
[Fact]
public async Task RequestRecycleNow_Fires_Immediately_WithoutAdvancingSchedule()
{
var supervisor = new FakeSupervisor();
var sch = new ScheduledRecycleScheduler(DriverTier.C, Weekly, T0, supervisor, NullLogger<ScheduledRecycleScheduler>.Instance);
var nextBefore = sch.NextRecycleUtc;
await sch.RequestRecycleNowAsync("memory hard-breach", CancellationToken.None);
supervisor.RecycleCount.ShouldBe(1);
supervisor.LastReason.ShouldBe("memory hard-breach");
sch.NextRecycleUtc.ShouldBe(nextBefore, "ad-hoc recycle doesn't shift the cron schedule");
}
[Fact]
public async Task MultipleFires_AcrossTicks_AdvanceOneIntervalEach()
{
var supervisor = new FakeSupervisor();
var sch = new ScheduledRecycleScheduler(DriverTier.C, TimeSpan.FromDays(1), T0, supervisor, NullLogger<ScheduledRecycleScheduler>.Instance);
await sch.TickAsync(T0 + TimeSpan.FromDays(1) + TimeSpan.FromHours(1), CancellationToken.None);
await sch.TickAsync(T0 + TimeSpan.FromDays(2) + TimeSpan.FromHours(1), CancellationToken.None);
await sch.TickAsync(T0 + TimeSpan.FromDays(3) + TimeSpan.FromHours(1), CancellationToken.None);
supervisor.RecycleCount.ShouldBe(3);
sch.NextRecycleUtc.ShouldBe(T0 + TimeSpan.FromDays(4));
}
private sealed class FakeSupervisor : IDriverSupervisor
{
public string DriverInstanceId => "tier-c-fake";
public int RecycleCount { get; private set; }
public string? LastReason { get; private set; }
public Task RecycleAsync(string reason, CancellationToken cancellationToken)
{
RecycleCount++;
LastReason = reason;
return Task.CompletedTask;
}
}
}

@@ -0,0 +1,112 @@
using Shouldly;
using Xunit;
using ZB.MOM.WW.OtOpcUa.Core.Abstractions;
using ZB.MOM.WW.OtOpcUa.Core.Stability;
namespace ZB.MOM.WW.OtOpcUa.Core.Tests.Stability;
[Trait("Category", "Unit")]
public sealed class WedgeDetectorTests
{
private static readonly DateTime Now = new(2026, 4, 19, 12, 0, 0, DateTimeKind.Utc);
private static readonly TimeSpan Threshold = TimeSpan.FromSeconds(120);
[Fact]
public void SubSixtySecondThreshold_ClampsToSixty()
{
var detector = new WedgeDetector(TimeSpan.FromSeconds(10));
detector.Threshold.ShouldBe(TimeSpan.FromSeconds(60));
}
[Fact]
public void Unhealthy_Driver_AlwaysNotApplicable()
{
var detector = new WedgeDetector(Threshold);
var demand = new DemandSignal(BulkheadDepth: 5, ActiveMonitoredItems: 10, QueuedHistoryReads: 0, LastProgressUtc: Now.AddMinutes(-10));
detector.Classify(DriverState.Faulted, demand, Now).ShouldBe(WedgeVerdict.NotApplicable);
detector.Classify(DriverState.Degraded, demand, Now).ShouldBe(WedgeVerdict.NotApplicable);
detector.Classify(DriverState.Initializing, demand, Now).ShouldBe(WedgeVerdict.NotApplicable);
}
[Fact]
public void Idle_Subscription_Only_StaysIdle()
{
// Idle driver: bulkhead 0, monitored items 0, no history reads queued.
// Even if LastProgressUtc is ancient, the verdict is Idle, not Faulted.
var detector = new WedgeDetector(Threshold);
var demand = new DemandSignal(0, 0, 0, Now.AddHours(-12));
detector.Classify(DriverState.Healthy, demand, Now).ShouldBe(WedgeVerdict.Idle);
}
[Fact]
public void PendingWork_WithRecentProgress_StaysHealthy()
{
var detector = new WedgeDetector(Threshold);
var demand = new DemandSignal(BulkheadDepth: 2, ActiveMonitoredItems: 0, QueuedHistoryReads: 0, LastProgressUtc: Now.AddSeconds(-30));
detector.Classify(DriverState.Healthy, demand, Now).ShouldBe(WedgeVerdict.Healthy);
}
[Fact]
public void PendingWork_WithStaleProgress_IsFaulted()
{
var detector = new WedgeDetector(Threshold);
var demand = new DemandSignal(BulkheadDepth: 2, ActiveMonitoredItems: 0, QueuedHistoryReads: 0, LastProgressUtc: Now.AddMinutes(-5));
detector.Classify(DriverState.Healthy, demand, Now).ShouldBe(WedgeVerdict.Faulted);
}
[Fact]
public void MonitoredItems_Active_ButNoRecentPublish_IsFaulted()
{
// Subscription-only driver with live MonitoredItems but no publish progress within threshold
// is a real wedge — this is the case the previous "no successful Read" formulation used
// to miss (no reads ever happen).
var detector = new WedgeDetector(Threshold);
var demand = new DemandSignal(BulkheadDepth: 0, ActiveMonitoredItems: 5, QueuedHistoryReads: 0, LastProgressUtc: Now.AddMinutes(-10));
detector.Classify(DriverState.Healthy, demand, Now).ShouldBe(WedgeVerdict.Faulted);
}
[Fact]
public void MonitoredItems_Active_WithFreshPublish_StaysHealthy()
{
var detector = new WedgeDetector(Threshold);
var demand = new DemandSignal(BulkheadDepth: 0, ActiveMonitoredItems: 5, QueuedHistoryReads: 0, LastProgressUtc: Now.AddSeconds(-10));
detector.Classify(DriverState.Healthy, demand, Now).ShouldBe(WedgeVerdict.Healthy);
}
[Fact]
public void HistoryBackfill_SlowButMakingProgress_StaysHealthy()
{
// Slow historian backfill — QueuedHistoryReads > 0 but progress advances within threshold.
var detector = new WedgeDetector(Threshold);
var demand = new DemandSignal(BulkheadDepth: 0, ActiveMonitoredItems: 0, QueuedHistoryReads: 50, LastProgressUtc: Now.AddSeconds(-60));
detector.Classify(DriverState.Healthy, demand, Now).ShouldBe(WedgeVerdict.Healthy);
}
[Fact]
public void WriteOnlyBurst_StaysIdle_WhenBulkheadEmpty()
{
// A write-only driver that just finished a burst: bulkhead drained, no subscriptions, no
// history reads. Idle — the previous formulation would have faulted here because no
// reads were succeeding even though the driver is perfectly healthy.
var detector = new WedgeDetector(Threshold);
var demand = new DemandSignal(0, 0, 0, Now.AddMinutes(-30));
detector.Classify(DriverState.Healthy, demand, Now).ShouldBe(WedgeVerdict.Idle);
}
[Fact]
public void DemandSignal_HasPendingWork_TrueForAnyNonZeroCounter()
{
new DemandSignal(1, 0, 0, Now).HasPendingWork.ShouldBeTrue();
new DemandSignal(0, 1, 0, Now).HasPendingWork.ShouldBeTrue();
new DemandSignal(0, 0, 1, Now).HasPendingWork.ShouldBeTrue();
new DemandSignal(0, 0, 0, Now).HasPendingWork.ShouldBeFalse();
}
}