Compare commits

...

4 Commits

Author SHA1 Message Date
Joseph Doherty
016122841b Phase 6.1 Stream E.2 partial — ResilienceStatusPublisherHostedService persists tracker snapshots to DB
Closes the HostedService half of Phase 6.1 Stream E.2 flagged as a follow-up
when the DriverResilienceStatusTracker shipped in PR #82. The Admin /hosts
column refresh + SignalR push + red-badge visual (Stream E.3) remain
deferred to the visual-compliance pass — this PR owns the persistence
story alone.

Server.Hosting:
- ResilienceStatusPublisherHostedService : BackgroundService. Samples the
  DriverResilienceStatusTracker every TickInterval (default 5 s) and upserts
  each (DriverInstanceId, HostName) counter pair into
  DriverInstanceResilienceStatus via EF. New rows on first sight; in-place
  updates on subsequent ticks.
- PersistOnceAsync extracted as a public method so tests can drive one tick
  directly — matches the ScheduledRecycleHostedService pattern for
  deterministic timing.
- Best-effort persistence: a DB outage logs a warning + continues; the next
  tick retries. Never crashes the app on sample failure. Cancellation
  propagates through cleanly.
- Tracks the bulkhead depth / recycle / footprint columns the entity was
  designed for. CurrentBulkheadDepth currently persisted as 0 — the tracker
  doesn't yet expose live bulkhead depth; a narrower follow-up wires the
  Polly bulkhead-depth observer into the tracker.
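
The DriverInstanceResilienceStatus entity predates this PR and is not in the
diff below; from the columns the upsert writes, its shape is roughly the
following (a sketch inferred from usage; exact types and nullability are
assumptions, not the canonical definition):

  public sealed class DriverInstanceResilienceStatus
  {
      public string DriverInstanceId { get; set; } = "";
      public string HostName { get; set; } = "";
      public DateTime? LastCircuitBreakerOpenUtc { get; set; }
      public int ConsecutiveFailures { get; set; }
      public int CurrentBulkheadDepth { get; set; } // persisted as 0 for now (see above)
      public DateTime? LastRecycleUtc { get; set; }
      public long BaselineFootprintBytes { get; set; }
      public long CurrentFootprintBytes { get; set; }
      public DateTime LastSampledUtc { get; set; }
  }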

Tests (6 new in ResilienceStatusPublisherHostedServiceTests):
- Empty tracker → tick is a no-op, zero rows written.
- Single-host counters → upsert a new row with ConsecutiveFailures + breaker
  timestamp + sampled timestamp.
- Second tick updates the existing row in place (not a second insert).
- Multi-host pairs persist independently.
- Footprint counters (Baseline + Current) round-trip.
- TickCount advances on every PersistOnceAsync call.

Full solution dotnet test: 1225 passing (was 1219, +6). Pre-existing
Client.CLI Subscribe flake unchanged.

Production wiring (Program.cs) example:
  builder.Services.AddSingleton<DriverResilienceStatusTracker>();
  builder.Services.AddHostedService<ResilienceStatusPublisherHostedService>();
  // Tracker gets wired into CapabilityInvoker via OtOpcUaServer resolution
  // + the existing Phase 6.1 layer.
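
If a host needs a non-default cadence, the optional constructor parameters
support a factory registration along these lines (hypothetical variant, not
wired anywhere in this PR):
  builder.Services.AddHostedService(sp => new ResilienceStatusPublisherHostedService(
      sp.GetRequiredService<DriverResilienceStatusTracker>(),
      sp.GetRequiredService<IDbContextFactory<OtOpcUaConfigDbContext>>(),
      sp.GetRequiredService<ILogger<ResilienceStatusPublisherHostedService>>(),
      tickInterval: TimeSpan.FromSeconds(1))); // e.g. a tighter tick for staging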

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 14:36:00 -04:00
244a36e03e Merge pull request (#104) - IPerCallHostResolver + decision #144 wire-in 2026-04-19 12:33:23 -04:00
Joseph Doherty
4de94fab0d Phase 6.1 Stream A remaining — IPerCallHostResolver + DriverNodeManager per-call host dispatch (decision #144)
Closes the per-device isolation gap flagged at the Phase 6.1 Stream A wire-up
(PR #78 used driver.DriverInstanceId as the pipeline host for every call, so
multi-host drivers like Modbus with N PLCs shared one pipeline — one dead PLC
tripped the shared breaker and starved its healthy siblings). Decision #144
requires per-device isolation; this PR wires it in without breaking
single-host drivers.

Core.Abstractions:
- IPerCallHostResolver interface. Optional driver capability. Drivers with
  multi-host topology (Modbus across N PLCs, AB CIP across a rack, etc.)
  implement this; single-host drivers (Galaxy, S7 against one PLC, OpcUaClient
  against one remote server) leave it alone. Must be fast + allocation-free
  — called once per tag on the hot path. Unknown refs return empty so dispatch
  falls back to single-host without throwing.
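
For illustration, a dictionary-backed implementation in the spirit of the
Modbus follow-up could look like this (hypothetical class and field names;
no driver implements the interface yet in this PR):

  // Hypothetical multi-PLC driver, illustrative sketch only.
  public sealed class ModbusDriver : IPerCallHostResolver
  {
      // fullReference -> PLC host string, built once when the tag map loads.
      private readonly Dictionary<string, string> _hostByFullRef = new();

      public string ResolveHost(string fullReference) =>
          _hostByFullRef.TryGetValue(fullReference, out var host)
              ? host
              : string.Empty; // unknown ref: dispatch falls back to DriverInstanceId
  }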

Server/OpcUa/DriverNodeManager:
- Captures `driver as IPerCallHostResolver` at construction alongside the
  existing capability casts.
- New `ResolveHostFor(fullReference)` helper returns either the resolver's
  answer or the driver's DriverInstanceId (single-host fallback). Empty /
  whitespace resolver output also falls back to DriverInstanceId.
- Every dispatch site now passes `ResolveHostFor(fullRef)` to the invoker
  instead of `_driver.DriverInstanceId` — OnReadValue, OnWriteValue, all four
  HistoryRead paths. The HistoryRead Events path tolerates fullRef=null and
  falls back to DriverInstanceId for those cluster-wide event queries.
- Drivers without IPerCallHostResolver observe zero behavioural change:
  every call still keys on DriverInstanceId, same as before.

Tests (4 new PerCallHostResolverDispatchTests, all pass):
- DeadPlc_DoesNotOpenBreaker_For_HealthyPlc_With_Resolver — 2 PLCs behind
  one driver; hammer the dead PLC past its breaker threshold; assert the
  healthy PLC's first call succeeds on its first attempt (decision #144).
- EmptyString / unknown-ref fallback behaviour documented via test.
- WithoutResolver_SameHost_Shares_One_Pipeline — regression guard for the
  single-host pre-existing behaviour.
- WithResolver_TwoHosts_Get_Two_Pipelines — asserts CachedPipelineCount to
  confirm the shared-builder cache keys each host to its own pipeline.

Full solution dotnet test: 1219 passing (was 1215, +4). Pre-existing
Client.CLI Subscribe flake unchanged.

Adoption: the Modbus driver (#120 follow-up) and the AB CIP / AB Legacy /
TwinCAT drivers (also #120) will implement the interface and return the
per-tag PLC host string. Single-host drivers stay silent and pay zero cost.

Remaining sub-items of #160 still deferred:
- IAlarmSource.SubscribeAlarmsAsync + AcknowledgeAsync invoker wrapping.
  Non-trivial because alarm subscription is push-based from driver through
  IAlarmConditionSink — the wrap has to happen at the driver-to-server glue
  rather than a synchronous dispatch site.
- Roslyn analyzer asserting every capability-interface call routes through
  CapabilityInvoker. Substantial (separate analyzer project + test harness);
  the noise-to-value ratio favors shipping it post-v2-GA once coverage is
  known to be stable.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 12:31:24 -04:00
fdd0bf52c3 Merge pull request (#103) - Phase 6.1 Stream A ResilienceConfig 2026-04-19 12:23:47 -04:00
5 changed files with 466 additions and 6 deletions

View File

@@ -0,0 +1,34 @@
namespace ZB.MOM.WW.OtOpcUa.Core.Abstractions;
/// <summary>
/// Optional driver capability that maps a per-tag full reference to the underlying host
/// name responsible for serving it. Drivers with a one-host topology (Galaxy on one
/// MXAccess endpoint, OpcUaClient against one remote server, S7 against one PLC) do NOT
/// need to implement this — the dispatch layer falls back to
/// <see cref="IDriver.DriverInstanceId"/> as a single-host key.
/// </summary>
/// <remarks>
/// <para>Multi-host drivers (Modbus with N PLCs, hypothetical AB CIP across a rack, etc.)
/// implement this so the Phase 6.1 resilience pipeline can be keyed on
/// <c>(DriverInstanceId, ResolvedHostName, DriverCapability)</c> per decision #144. One
/// dead PLC behind a multi-device Modbus driver then trips only its own breaker; healthy
/// siblings keep serving.</para>
///
/// <para>Implementations must be fast + allocation-free on the hot path — <c>ReadAsync</c>
/// / <c>WriteAsync</c> call this once per tag. A simple <c>Dictionary&lt;string, string&gt;</c>
/// lookup is typical.</para>
///
/// <para>When the fullRef doesn't map to a known host (caller passes an unregistered
/// reference, or the tag was removed mid-flight), implementations should return the
/// driver's default-host string rather than throwing — the invoker falls back to a
/// single-host pipeline for that call, which is safer than tearing down the request.</para>
/// </remarks>
public interface IPerCallHostResolver
{
/// <summary>
/// Resolve the host name for the given driver-side full reference. Returned value is
/// used as the <c>hostName</c> argument to the Phase 6.1 <c>CapabilityInvoker</c> so
/// per-host breaker isolation + per-host bulkhead accounting both kick in.
/// </summary>
string ResolveHost(string fullReference);
}

View File

@@ -0,0 +1,138 @@
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using ZB.MOM.WW.OtOpcUa.Configuration;
using ZB.MOM.WW.OtOpcUa.Configuration.Entities;
using ZB.MOM.WW.OtOpcUa.Core.Resilience;
namespace ZB.MOM.WW.OtOpcUa.Server.Hosting;
/// <summary>
/// Samples <see cref="DriverResilienceStatusTracker"/> at a fixed tick + upserts each
/// <c>(DriverInstanceId, HostName)</c> snapshot into <see cref="DriverInstanceResilienceStatus"/>
/// so Admin <c>/hosts</c> can render live resilience counters across restarts.
/// </summary>
/// <remarks>
/// <para>Closes the HostedService piece of Phase 6.1 Stream E.2 flagged as a follow-up
/// when the tracker shipped in PR #82. The Admin UI column-refresh piece (red badge when
/// ConsecutiveFailures &gt; breakerThreshold / 2 + SignalR push) is still deferred to
/// the visual-compliance pass — this service owns the persistence half alone.</para>
///
/// <para>Tick interval defaults to 5 s. Persistence is best-effort: a DB outage during
/// a tick logs + continues; the next tick tries again with the latest snapshots. The
/// hosted service never crashes the app on sample failure.</para>
///
/// <para><see cref="PersistOnceAsync"/> factored as a public method so tests can drive
/// it directly, matching the <see cref="ScheduledRecycleHostedService.TickOnceAsync"/>
/// pattern for deterministic unit-test timing.</para>
/// </remarks>
public sealed class ResilienceStatusPublisherHostedService : BackgroundService
{
private readonly DriverResilienceStatusTracker _tracker;
private readonly IDbContextFactory<OtOpcUaConfigDbContext> _dbContextFactory;
private readonly ILogger<ResilienceStatusPublisherHostedService> _logger;
private readonly TimeProvider _timeProvider;
/// <summary>Tick interval — how often the tracker snapshot is persisted.</summary>
public TimeSpan TickInterval { get; }
/// <summary>Snapshot of the tick count for diagnostics + test assertions.</summary>
public int TickCount { get; private set; }
public ResilienceStatusPublisherHostedService(
DriverResilienceStatusTracker tracker,
IDbContextFactory<OtOpcUaConfigDbContext> dbContextFactory,
ILogger<ResilienceStatusPublisherHostedService> logger,
TimeProvider? timeProvider = null,
TimeSpan? tickInterval = null)
{
ArgumentNullException.ThrowIfNull(tracker);
ArgumentNullException.ThrowIfNull(dbContextFactory);
_tracker = tracker;
_dbContextFactory = dbContextFactory;
_logger = logger;
_timeProvider = timeProvider ?? TimeProvider.System;
TickInterval = tickInterval ?? TimeSpan.FromSeconds(5);
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
_logger.LogInformation(
"ResilienceStatusPublisherHostedService starting — tick interval = {Interval}",
TickInterval);
while (!stoppingToken.IsCancellationRequested)
{
try
{
await Task.Delay(TickInterval, _timeProvider, stoppingToken).ConfigureAwait(false);
}
catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
{
break;
}
await PersistOnceAsync(stoppingToken).ConfigureAwait(false);
}
_logger.LogInformation("ResilienceStatusPublisherHostedService stopping after {TickCount} tick(s).", TickCount);
}
/// <summary>
/// Take one snapshot of the tracker + upsert each pair into the persistence table.
/// Swallows transient exceptions + logs them; never throws from a sample failure.
/// </summary>
public async Task PersistOnceAsync(CancellationToken cancellationToken)
{
TickCount++;
var snapshot = _tracker.Snapshot();
if (snapshot.Count == 0) return;
try
{
await using var db = await _dbContextFactory.CreateDbContextAsync(cancellationToken).ConfigureAwait(false);
var now = _timeProvider.GetUtcNow().UtcDateTime;
foreach (var (driverInstanceId, hostName, counters) in snapshot)
{
var existing = await db.DriverInstanceResilienceStatuses
.FirstOrDefaultAsync(x => x.DriverInstanceId == driverInstanceId && x.HostName == hostName, cancellationToken)
.ConfigureAwait(false);
if (existing is null)
{
db.DriverInstanceResilienceStatuses.Add(new DriverInstanceResilienceStatus
{
DriverInstanceId = driverInstanceId,
HostName = hostName,
LastCircuitBreakerOpenUtc = counters.LastBreakerOpenUtc,
ConsecutiveFailures = counters.ConsecutiveFailures,
CurrentBulkheadDepth = 0, // Phase 6.1 Stream A tracker doesn't emit bulkhead depth yet
LastRecycleUtc = counters.LastRecycleUtc,
BaselineFootprintBytes = counters.BaselineFootprintBytes,
CurrentFootprintBytes = counters.CurrentFootprintBytes,
LastSampledUtc = now,
});
}
else
{
existing.LastCircuitBreakerOpenUtc = counters.LastBreakerOpenUtc;
existing.ConsecutiveFailures = counters.ConsecutiveFailures;
existing.LastRecycleUtc = counters.LastRecycleUtc;
existing.BaselineFootprintBytes = counters.BaselineFootprintBytes;
existing.CurrentFootprintBytes = counters.CurrentFootprintBytes;
existing.LastSampledUtc = now;
}
}
await db.SaveChangesAsync(cancellationToken).ConfigureAwait(false);
}
catch (OperationCanceledException) { throw; }
catch (Exception ex)
{
_logger.LogWarning(ex,
"ResilienceStatusPublisher persistence tick failed; next tick will retry with latest snapshots.");
}
}
}

View File

@@ -35,6 +35,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
private readonly IDriver _driver;
private readonly IReadable? _readable;
private readonly IWritable? _writable;
private readonly IPerCallHostResolver? _hostResolver;
private readonly CapabilityInvoker _invoker;
private readonly ILogger<DriverNodeManager> _logger;
@@ -75,6 +76,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
_driver = driver;
_readable = driver as IReadable;
_writable = driver as IWritable;
_hostResolver = driver as IPerCallHostResolver;
_invoker = invoker;
_authzGate = authzGate;
_scopeResolver = scopeResolver;
@@ -83,6 +85,21 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
protected override NodeStateCollection LoadPredefinedNodes(ISystemContext context) => new();
/// <summary>
/// Resolve the host name fed to the Phase 6.1 CapabilityInvoker for a per-tag call.
/// Multi-host drivers that implement <see cref="IPerCallHostResolver"/> get their
/// per-PLC isolation (decision #144); single-host drivers + drivers that don't
/// implement the resolver fall back to the DriverInstanceId — preserves existing
/// Phase 6.1 pipeline-key semantics for those drivers.
/// </summary>
private string ResolveHostFor(string fullReference)
{
if (_hostResolver is null) return _driver.DriverInstanceId;
var resolved = _hostResolver.ResolveHost(fullReference);
return string.IsNullOrWhiteSpace(resolved) ? _driver.DriverInstanceId : resolved;
}
public override void CreateAddressSpace(IDictionary<NodeId, IList<IReference>> externalReferences)
{
lock (Lock)
@@ -224,7 +241,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
var result = _invoker.ExecuteAsync(
DriverCapability.Read,
_driver.DriverInstanceId,
ResolveHostFor(fullRef),
async ct => (IReadOnlyList<DataValueSnapshot>)await _readable.ReadAsync([fullRef], ct).ConfigureAwait(false),
CancellationToken.None).AsTask().GetAwaiter().GetResult();
if (result.Count == 0)
@@ -439,7 +456,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
var isIdempotent = _writeIdempotentByFullRef.GetValueOrDefault(fullRef!, false);
var capturedValue = value;
var results = _invoker.ExecuteWriteAsync(
_driver.DriverInstanceId,
ResolveHostFor(fullRef!),
isIdempotent,
async ct => (IReadOnlyList<WriteResult>)await _writable.WriteAsync(
[new DriverWriteRequest(fullRef!, capturedValue)],
@@ -538,7 +555,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
{
var driverResult = _invoker.ExecuteAsync(
DriverCapability.HistoryRead,
_driver.DriverInstanceId,
ResolveHostFor(fullRef),
async ct => await History.ReadRawAsync(
fullRef,
details.StartTime,
@@ -612,7 +629,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
{
var driverResult = _invoker.ExecuteAsync(
DriverCapability.HistoryRead,
_driver.DriverInstanceId,
ResolveHostFor(fullRef),
async ct => await History.ReadProcessedAsync(
fullRef,
details.StartTime,
@@ -679,7 +696,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
{
var driverResult = _invoker.ExecuteAsync(
DriverCapability.HistoryRead,
_driver.DriverInstanceId,
ResolveHostFor(fullRef),
async ct => await History.ReadAtTimeAsync(fullRef, requestedTimes, ct).ConfigureAwait(false),
CancellationToken.None).AsTask().GetAwaiter().GetResult();
@@ -749,7 +766,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
{
var driverResult = _invoker.ExecuteAsync(
DriverCapability.HistoryRead,
_driver.DriverInstanceId,
fullRef is null ? _driver.DriverInstanceId : ResolveHostFor(fullRef),
async ct => await History.ReadEventsAsync(
sourceName: fullRef,
startUtc: details.StartTime,

View File

@@ -0,0 +1,110 @@
using Shouldly;
using Xunit;
using ZB.MOM.WW.OtOpcUa.Core.Abstractions;
using ZB.MOM.WW.OtOpcUa.Core.Resilience;
namespace ZB.MOM.WW.OtOpcUa.Core.Tests.Resilience;
/// <summary>
/// Exercises the per-call host resolver contract against the shared
/// <see cref="DriverResiliencePipelineBuilder"/> + <see cref="CapabilityInvoker"/> — one
/// dead PLC behind a multi-device driver must NOT open the breaker for healthy sibling
/// PLCs (decision #144).
/// </summary>
[Trait("Category", "Unit")]
public sealed class PerCallHostResolverDispatchTests
{
private sealed class StaticResolver : IPerCallHostResolver
{
private readonly Dictionary<string, string> _map;
public StaticResolver(Dictionary<string, string> map) => _map = map;
public string ResolveHost(string fullReference) =>
_map.TryGetValue(fullReference, out var host) ? host : string.Empty;
}
[Fact]
public async Task DeadPlc_DoesNotOpenBreaker_For_HealthyPlc_With_Resolver()
{
// Two PLCs behind one driver. Dead PLC keeps failing; healthy PLC must keep serving.
var builder = new DriverResiliencePipelineBuilder();
var options = new DriverResilienceOptions { Tier = DriverTier.B };
var invoker = new CapabilityInvoker(builder, "drv-modbus", () => options);
var resolver = new StaticResolver(new Dictionary<string, string>
{
["tag-on-dead"] = "plc-dead",
["tag-on-alive"] = "plc-alive",
});
var threshold = options.Resolve(DriverCapability.Read).BreakerFailureThreshold;
for (var i = 0; i < threshold + 3; i++)
{
await Should.ThrowAsync<Exception>(async () =>
await invoker.ExecuteAsync(
DriverCapability.Read,
hostName: resolver.ResolveHost("tag-on-dead"),
_ => throw new InvalidOperationException("plc-dead unreachable"),
CancellationToken.None));
}
// Healthy PLC's pipeline is in a different bucket; the first call should succeed
// without hitting the dead-PLC breaker.
var aliveAttempts = 0;
await invoker.ExecuteAsync(
DriverCapability.Read,
hostName: resolver.ResolveHost("tag-on-alive"),
_ => { aliveAttempts++; return ValueTask.FromResult("ok"); },
CancellationToken.None);
aliveAttempts.ShouldBe(1, "decision #144 — per-PLC isolation keeps healthy PLCs serving");
}
[Fact]
public void Resolver_EmptyString_Treated_As_Single_Host_Fallback()
{
var resolver = new StaticResolver(new Dictionary<string, string>
{
["tag-unknown"] = "",
});
resolver.ResolveHost("tag-unknown").ShouldBe("");
resolver.ResolveHost("not-in-map").ShouldBe("", "unknown refs return empty so dispatch falls back to single-host");
}
[Fact]
public async Task WithoutResolver_SameHost_Shares_One_Pipeline()
{
// Without a resolver all calls share the DriverInstanceId pipeline — that's the
// pre-decision-#144 behavior single-host drivers should keep.
var builder = new DriverResiliencePipelineBuilder();
var options = new DriverResilienceOptions { Tier = DriverTier.A };
var invoker = new CapabilityInvoker(builder, "drv-single", () => options);
await invoker.ExecuteAsync(DriverCapability.Read, "drv-single",
_ => ValueTask.FromResult("a"), CancellationToken.None);
await invoker.ExecuteAsync(DriverCapability.Read, "drv-single",
_ => ValueTask.FromResult("b"), CancellationToken.None);
builder.CachedPipelineCount.ShouldBe(1, "single-host drivers share one pipeline");
}
[Fact]
public async Task WithResolver_TwoHosts_Get_Two_Pipelines()
{
var builder = new DriverResiliencePipelineBuilder();
var options = new DriverResilienceOptions { Tier = DriverTier.B };
var invoker = new CapabilityInvoker(builder, "drv-modbus", () => options);
var resolver = new StaticResolver(new Dictionary<string, string>
{
["tag-a"] = "plc-a",
["tag-b"] = "plc-b",
});
await invoker.ExecuteAsync(DriverCapability.Read, resolver.ResolveHost("tag-a"),
_ => ValueTask.FromResult(1), CancellationToken.None);
await invoker.ExecuteAsync(DriverCapability.Read, resolver.ResolveHost("tag-b"),
_ => ValueTask.FromResult(2), CancellationToken.None);
builder.CachedPipelineCount.ShouldBe(2, "each host keyed on its own pipeline");
}
}

View File

@@ -0,0 +1,161 @@
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging.Abstractions;
using Shouldly;
using Xunit;
using ZB.MOM.WW.OtOpcUa.Configuration;
using ZB.MOM.WW.OtOpcUa.Core.Resilience;
using ZB.MOM.WW.OtOpcUa.Server.Hosting;
namespace ZB.MOM.WW.OtOpcUa.Server.Tests;
[Trait("Category", "Unit")]
public sealed class ResilienceStatusPublisherHostedServiceTests : IDisposable
{
private static readonly DateTime T0 = new(2026, 4, 19, 12, 0, 0, DateTimeKind.Utc);
private sealed class FakeClock : TimeProvider
{
public DateTime Utc { get; set; } = T0;
public override DateTimeOffset GetUtcNow() => new(Utc, TimeSpan.Zero);
}
private sealed class InMemoryDbContextFactory : IDbContextFactory<OtOpcUaConfigDbContext>
{
private readonly DbContextOptions<OtOpcUaConfigDbContext> _options;
public InMemoryDbContextFactory(string dbName)
{
_options = new DbContextOptionsBuilder<OtOpcUaConfigDbContext>()
.UseInMemoryDatabase(dbName)
.Options;
}
public OtOpcUaConfigDbContext CreateDbContext() => new(_options);
}
private readonly string _dbName = $"resilience-pub-{Guid.NewGuid():N}";
private readonly InMemoryDbContextFactory _factory;
private readonly OtOpcUaConfigDbContext _readCtx;
public ResilienceStatusPublisherHostedServiceTests()
{
_factory = new InMemoryDbContextFactory(_dbName);
_readCtx = _factory.CreateDbContext();
}
public void Dispose() => _readCtx.Dispose();
[Fact]
public async Task EmptyTracker_Tick_NoOp_NoRowsWritten()
{
var tracker = new DriverResilienceStatusTracker();
var host = new ResilienceStatusPublisherHostedService(
tracker, _factory, NullLogger<ResilienceStatusPublisherHostedService>.Instance);
await host.PersistOnceAsync(CancellationToken.None);
host.TickCount.ShouldBe(1);
(await _readCtx.DriverInstanceResilienceStatuses.CountAsync()).ShouldBe(0);
}
[Fact]
public async Task SingleHost_OnePairWithCounters_UpsertsNewRow()
{
var clock = new FakeClock();
var tracker = new DriverResilienceStatusTracker();
tracker.RecordFailure("drv-1", "plc-a", T0);
tracker.RecordFailure("drv-1", "plc-a", T0);
tracker.RecordBreakerOpen("drv-1", "plc-a", T0.AddSeconds(1));
var host = new ResilienceStatusPublisherHostedService(
tracker, _factory, NullLogger<ResilienceStatusPublisherHostedService>.Instance,
timeProvider: clock);
clock.Utc = T0.AddSeconds(2);
await host.PersistOnceAsync(CancellationToken.None);
var row = await _readCtx.DriverInstanceResilienceStatuses.SingleAsync();
row.DriverInstanceId.ShouldBe("drv-1");
row.HostName.ShouldBe("plc-a");
row.ConsecutiveFailures.ShouldBe(2);
row.LastCircuitBreakerOpenUtc.ShouldBe(T0.AddSeconds(1));
row.LastSampledUtc.ShouldBe(T0.AddSeconds(2));
}
[Fact]
public async Task SecondTick_UpdatesExistingRow_InPlace()
{
var clock = new FakeClock();
var tracker = new DriverResilienceStatusTracker();
tracker.RecordFailure("drv-1", "plc-a", T0);
var host = new ResilienceStatusPublisherHostedService(
tracker, _factory, NullLogger<ResilienceStatusPublisherHostedService>.Instance,
timeProvider: clock);
clock.Utc = T0.AddSeconds(5);
await host.PersistOnceAsync(CancellationToken.None);
// Second tick: success resets the counter.
tracker.RecordSuccess("drv-1", "plc-a", T0.AddSeconds(6));
clock.Utc = T0.AddSeconds(10);
await host.PersistOnceAsync(CancellationToken.None);
(await _readCtx.DriverInstanceResilienceStatuses.CountAsync()).ShouldBe(1, "one row, updated in place");
var row = await _readCtx.DriverInstanceResilienceStatuses.SingleAsync();
row.ConsecutiveFailures.ShouldBe(0);
row.LastSampledUtc.ShouldBe(T0.AddSeconds(10));
}
[Fact]
public async Task MultipleHosts_BothPersist_Independently()
{
var tracker = new DriverResilienceStatusTracker();
tracker.RecordFailure("drv-1", "plc-a", T0);
tracker.RecordFailure("drv-1", "plc-a", T0);
tracker.RecordFailure("drv-1", "plc-b", T0);
var host = new ResilienceStatusPublisherHostedService(
tracker, _factory, NullLogger<ResilienceStatusPublisherHostedService>.Instance);
await host.PersistOnceAsync(CancellationToken.None);
var rows = await _readCtx.DriverInstanceResilienceStatuses
.OrderBy(r => r.HostName)
.ToListAsync();
rows.Count.ShouldBe(2);
rows[0].HostName.ShouldBe("plc-a");
rows[0].ConsecutiveFailures.ShouldBe(2);
rows[1].HostName.ShouldBe("plc-b");
rows[1].ConsecutiveFailures.ShouldBe(1);
}
[Fact]
public async Task FootprintCounters_Persist()
{
var tracker = new DriverResilienceStatusTracker();
tracker.RecordFootprint("drv-1", "plc-a",
baselineBytes: 100_000_000, currentBytes: 150_000_000, T0);
var host = new ResilienceStatusPublisherHostedService(
tracker, _factory, NullLogger<ResilienceStatusPublisherHostedService>.Instance);
await host.PersistOnceAsync(CancellationToken.None);
var row = await _readCtx.DriverInstanceResilienceStatuses.SingleAsync();
row.BaselineFootprintBytes.ShouldBe(100_000_000);
row.CurrentFootprintBytes.ShouldBe(150_000_000);
}
[Fact]
public async Task TickCount_Advances_OnEveryCall()
{
var tracker = new DriverResilienceStatusTracker();
var host = new ResilienceStatusPublisherHostedService(
tracker, _factory, NullLogger<ResilienceStatusPublisherHostedService>.Instance);
await host.PersistOnceAsync(CancellationToken.None);
await host.PersistOnceAsync(CancellationToken.None);
await host.PersistOnceAsync(CancellationToken.None);
host.TickCount.ShouldBe(3);
}
}