Compare commits

...

2 Commits

Author SHA1 Message Date
Joseph Doherty
4de94fab0d Phase 6.1 Stream A remaining — IPerCallHostResolver + DriverNodeManager per-call host dispatch (decision #144)
Closes the per-device isolation gap flagged during the Phase 6.1 Stream A wire-up
(PR #78 used driver.DriverInstanceId as the pipeline host for every call, so
multi-host drivers like Modbus with N PLCs shared one pipeline — one dead PLC
tripped the shared breaker and starved its healthy siblings). Decision #144
requires per-device isolation; this PR wires it in without breaking single-host
drivers.

Core.Abstractions:
- IPerCallHostResolver interface. Optional driver capability. Drivers with
  multi-host topology (Modbus across N PLCs, AB CIP across a rack, etc.)
  implement this; single-host drivers (Galaxy, S7 against one PLC, OpcUaClient
  against one remote server) leave it alone. Must be fast + allocation-free
  — called once per tag on the hot path. Unknown refs return empty so dispatch
  falls back to single-host without throwing.
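A minimal implementation sketch for a hypothetical multi-host driver (the one-method interface shape is from this PR; the resolver class, field, and tag names are assumptions, not the actual #120 code):

```csharp
using System.Collections.Generic;

// Shape of the new capability interface added in Core.Abstractions.
public interface IPerCallHostResolver
{
    string ResolveHost(string fullReference);
}

// Hypothetical multi-PLC resolver: the tag-to-host map is built once at
// connect time, so the hot-path cost is a single allocation-free dictionary
// lookup. Unknown refs return empty, which the dispatch layer treats as
// "fall back to the single-host DriverInstanceId key" instead of throwing.
public sealed class MultiPlcHostResolver : IPerCallHostResolver
{
    private readonly Dictionary<string, string> _hostByRef;

    public MultiPlcHostResolver(Dictionary<string, string> hostByRef) =>
        _hostByRef = hostByRef;

    public string ResolveHost(string fullReference) =>
        _hostByRef.TryGetValue(fullReference, out var host) ? host : string.Empty;
}
```

This mirrors the StaticResolver fixture in the new tests; real adopters would populate the map from their own device configuration.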

Server/OpcUa/DriverNodeManager:
- Captures `driver as IPerCallHostResolver` at construction alongside the
  existing capability casts.
- New `ResolveHostFor(fullReference)` helper returns either the resolver's
  answer or the driver's DriverInstanceId (single-host fallback). Empty /
  whitespace resolver output also falls back to DriverInstanceId.
- Every dispatch site now passes `ResolveHostFor(fullRef)` to the invoker
  instead of `_driver.DriverInstanceId` — OnReadValue, OnWriteValue, all four
  HistoryRead paths. The HistoryRead Events path tolerates fullRef=null and
  falls back to DriverInstanceId for those cluster-wide event queries.
- Drivers without IPerCallHostResolver observe zero behavioural change:
  every call still keys on DriverInstanceId, same as before.

Tests (4 new PerCallHostResolverDispatchTests, all pass):
- DeadPlc_DoesNotOpenBreaker_For_HealthyPlc_With_Resolver — 2 PLCs behind
  one driver; hammer the dead PLC past its breaker threshold; assert the
  healthy PLC's first call succeeds on its first attempt (decision #144).
- EmptyString / unknown-ref fallback behaviour documented via test.
- WithoutResolver_SameHost_Shares_One_Pipeline — regression guard for the
  pre-existing single-host behaviour.
- WithResolver_TwoHosts_Get_Two_Pipelines — asserts CachedPipelineCount to
  confirm the shared-builder cache keys each host to its own pipeline.

Full solution dotnet test: 1219 passing (was 1215, +4). Pre-existing
Client.CLI Subscribe flake unchanged.

Adoption: Modbus driver (#120 follow-up), AB CIP / AB Legacy / TwinCAT
drivers (also #120) implement the interface and return the per-tag PLC host
string. Single-host drivers stay silent and pay zero cost.

Remaining sub-items of #160 still deferred:
- IAlarmSource.SubscribeAlarmsAsync + AcknowledgeAsync invoker wrapping.
  Non-trivial because alarm subscription is push-based from driver through
  IAlarmConditionSink — the wrap has to happen at the driver-to-server glue
  rather than at a synchronous dispatch site.
- Roslyn analyzer asserting every capability-interface call routes through
  CapabilityInvoker. Substantial (separate analyzer project + test harness);
  noise-value ratio favors shipping this post-v2-GA once the coverage is
  known-stable.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 12:31:24 -04:00
fdd0bf52c3 Merge pull request (#103) - Phase 6.1 Stream A ResilienceConfig 2026-04-19 12:23:47 -04:00
3 changed files with 167 additions and 6 deletions

View File

@@ -0,0 +1,34 @@
namespace ZB.MOM.WW.OtOpcUa.Core.Abstractions;
/// <summary>
/// Optional driver capability that maps a per-tag full reference to the underlying host
/// name responsible for serving it. Drivers with a one-host topology (Galaxy on one
/// MXAccess endpoint, OpcUaClient against one remote server, S7 against one PLC) do NOT
/// need to implement this — the dispatch layer falls back to
/// <see cref="IDriver.DriverInstanceId"/> as a single-host key.
/// </summary>
/// <remarks>
/// <para>Multi-host drivers (Modbus with N PLCs, hypothetical AB CIP across a rack, etc.)
/// implement this so the Phase 6.1 resilience pipeline can be keyed on
/// <c>(DriverInstanceId, ResolvedHostName, DriverCapability)</c> per decision #144. One
/// dead PLC behind a multi-device Modbus driver then trips only its own breaker; healthy
/// siblings keep serving.</para>
///
/// <para>Implementations must be fast + allocation-free on the hot path — <c>ReadAsync</c>
/// / <c>WriteAsync</c> call this once per tag. A simple <c>Dictionary&lt;string, string&gt;</c>
/// lookup is typical.</para>
///
/// <para>When the fullRef doesn't map to a known host (caller passes an unregistered
/// reference, or the tag was removed mid-flight), implementations should return an
/// empty string rather than throwing — the dispatch layer falls back to the
/// single-host <see cref="IDriver.DriverInstanceId"/> pipeline for that call, which
/// is safer than tearing down the request.</para>
/// </remarks>
public interface IPerCallHostResolver
{
/// <summary>
/// Resolve the host name for the given driver-side full reference. Returned value is
/// used as the <c>hostName</c> argument to the Phase 6.1 <c>CapabilityInvoker</c> so
/// per-host breaker isolation + per-host bulkhead accounting both kick in.
/// </summary>
string ResolveHost(string fullReference);
}

View File

@@ -35,6 +35,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
private readonly IDriver _driver;
private readonly IReadable? _readable;
private readonly IWritable? _writable;
+ private readonly IPerCallHostResolver? _hostResolver;
private readonly CapabilityInvoker _invoker;
private readonly ILogger<DriverNodeManager> _logger;
@@ -75,6 +76,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
_driver = driver;
_readable = driver as IReadable;
_writable = driver as IWritable;
+ _hostResolver = driver as IPerCallHostResolver;
_invoker = invoker;
_authzGate = authzGate;
_scopeResolver = scopeResolver;
@@ -83,6 +85,21 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
protected override NodeStateCollection LoadPredefinedNodes(ISystemContext context) => new();
+ /// <summary>
+ /// Resolve the host name fed to the Phase 6.1 CapabilityInvoker for a per-tag call.
+ /// Multi-host drivers that implement <see cref="IPerCallHostResolver"/> get their
+ /// per-PLC isolation (decision #144); single-host drivers + drivers that don't
+ /// implement the resolver fall back to the DriverInstanceId — preserves existing
+ /// Phase 6.1 pipeline-key semantics for those drivers.
+ /// </summary>
+ private string ResolveHostFor(string fullReference)
+ {
+ if (_hostResolver is null) return _driver.DriverInstanceId;
+ var resolved = _hostResolver.ResolveHost(fullReference);
+ return string.IsNullOrWhiteSpace(resolved) ? _driver.DriverInstanceId : resolved;
+ }
public override void CreateAddressSpace(IDictionary<NodeId, IList<IReference>> externalReferences)
{
lock (Lock)
@@ -224,7 +241,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
var result = _invoker.ExecuteAsync(
DriverCapability.Read,
- _driver.DriverInstanceId,
+ ResolveHostFor(fullRef),
async ct => (IReadOnlyList<DataValueSnapshot>)await _readable.ReadAsync([fullRef], ct).ConfigureAwait(false),
CancellationToken.None).AsTask().GetAwaiter().GetResult();
if (result.Count == 0)
@@ -439,7 +456,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
var isIdempotent = _writeIdempotentByFullRef.GetValueOrDefault(fullRef!, false);
var capturedValue = value;
var results = _invoker.ExecuteWriteAsync(
- _driver.DriverInstanceId,
+ ResolveHostFor(fullRef!),
isIdempotent,
async ct => (IReadOnlyList<WriteResult>)await _writable.WriteAsync(
[new DriverWriteRequest(fullRef!, capturedValue)],
@@ -538,7 +555,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
{
var driverResult = _invoker.ExecuteAsync(
DriverCapability.HistoryRead,
- _driver.DriverInstanceId,
+ ResolveHostFor(fullRef),
async ct => await History.ReadRawAsync(
fullRef,
details.StartTime,
@@ -612,7 +629,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
{
var driverResult = _invoker.ExecuteAsync(
DriverCapability.HistoryRead,
- _driver.DriverInstanceId,
+ ResolveHostFor(fullRef),
async ct => await History.ReadProcessedAsync(
fullRef,
details.StartTime,
@@ -679,7 +696,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
{
var driverResult = _invoker.ExecuteAsync(
DriverCapability.HistoryRead,
- _driver.DriverInstanceId,
+ ResolveHostFor(fullRef),
async ct => await History.ReadAtTimeAsync(fullRef, requestedTimes, ct).ConfigureAwait(false),
CancellationToken.None).AsTask().GetAwaiter().GetResult();
@@ -749,7 +766,7 @@ public sealed class DriverNodeManager : CustomNodeManager2, IAddressSpaceBuilder
{
var driverResult = _invoker.ExecuteAsync(
DriverCapability.HistoryRead,
- _driver.DriverInstanceId,
+ fullRef is null ? _driver.DriverInstanceId : ResolveHostFor(fullRef),
async ct => await History.ReadEventsAsync(
sourceName: fullRef,
startUtc: details.StartTime,

View File

@@ -0,0 +1,110 @@
using Shouldly;
using Xunit;
using ZB.MOM.WW.OtOpcUa.Core.Abstractions;
using ZB.MOM.WW.OtOpcUa.Core.Resilience;
namespace ZB.MOM.WW.OtOpcUa.Core.Tests.Resilience;
/// <summary>
/// Exercises the per-call host resolver contract against the shared
/// <see cref="DriverResiliencePipelineBuilder"/> + <see cref="CapabilityInvoker"/> — one
/// dead PLC behind a multi-device driver must NOT open the breaker for healthy sibling
/// PLCs (decision #144).
/// </summary>
[Trait("Category", "Unit")]
public sealed class PerCallHostResolverDispatchTests
{
private sealed class StaticResolver : IPerCallHostResolver
{
private readonly Dictionary<string, string> _map;
public StaticResolver(Dictionary<string, string> map) => _map = map;
public string ResolveHost(string fullReference) =>
_map.TryGetValue(fullReference, out var host) ? host : string.Empty;
}
[Fact]
public async Task DeadPlc_DoesNotOpenBreaker_For_HealthyPlc_With_Resolver()
{
// Two PLCs behind one driver. Dead PLC keeps failing; healthy PLC must keep serving.
var builder = new DriverResiliencePipelineBuilder();
var options = new DriverResilienceOptions { Tier = DriverTier.B };
var invoker = new CapabilityInvoker(builder, "drv-modbus", () => options);
var resolver = new StaticResolver(new Dictionary<string, string>
{
["tag-on-dead"] = "plc-dead",
["tag-on-alive"] = "plc-alive",
});
var threshold = options.Resolve(DriverCapability.Read).BreakerFailureThreshold;
for (var i = 0; i < threshold + 3; i++)
{
await Should.ThrowAsync<Exception>(async () =>
await invoker.ExecuteAsync(
DriverCapability.Read,
hostName: resolver.ResolveHost("tag-on-dead"),
_ => throw new InvalidOperationException("plc-dead unreachable"),
CancellationToken.None));
}
// Healthy PLC's pipeline is in a different bucket; the first call should succeed
// without hitting the dead-PLC breaker.
var aliveAttempts = 0;
await invoker.ExecuteAsync(
DriverCapability.Read,
hostName: resolver.ResolveHost("tag-on-alive"),
_ => { aliveAttempts++; return ValueTask.FromResult("ok"); },
CancellationToken.None);
aliveAttempts.ShouldBe(1, "decision #144 — per-PLC isolation keeps healthy PLCs serving");
}
[Fact]
public void Resolver_EmptyString_Treated_As_Single_Host_Fallback()
{
var resolver = new StaticResolver(new Dictionary<string, string>
{
["tag-unknown"] = "",
});
resolver.ResolveHost("tag-unknown").ShouldBe("");
resolver.ResolveHost("not-in-map").ShouldBe("", "unknown refs return empty so dispatch falls back to single-host");
}
[Fact]
public async Task WithoutResolver_SameHost_Shares_One_Pipeline()
{
// Without a resolver all calls share the DriverInstanceId pipeline — that's the
// pre-decision-#144 behavior single-host drivers should keep.
var builder = new DriverResiliencePipelineBuilder();
var options = new DriverResilienceOptions { Tier = DriverTier.A };
var invoker = new CapabilityInvoker(builder, "drv-single", () => options);
await invoker.ExecuteAsync(DriverCapability.Read, "drv-single",
_ => ValueTask.FromResult("a"), CancellationToken.None);
await invoker.ExecuteAsync(DriverCapability.Read, "drv-single",
_ => ValueTask.FromResult("b"), CancellationToken.None);
builder.CachedPipelineCount.ShouldBe(1, "single-host drivers share one pipeline");
}
[Fact]
public async Task WithResolver_TwoHosts_Get_Two_Pipelines()
{
var builder = new DriverResiliencePipelineBuilder();
var options = new DriverResilienceOptions { Tier = DriverTier.B };
var invoker = new CapabilityInvoker(builder, "drv-modbus", () => options);
var resolver = new StaticResolver(new Dictionary<string, string>
{
["tag-a"] = "plc-a",
["tag-b"] = "plc-b",
});
await invoker.ExecuteAsync(DriverCapability.Read, resolver.ResolveHost("tag-a"),
_ => ValueTask.FromResult(1), CancellationToken.None);
await invoker.ExecuteAsync(DriverCapability.Read, resolver.ResolveHost("tag-b"),
_ => ValueTask.FromResult(2), CancellationToken.None);
builder.CachedPipelineCount.ShouldBe(2, "each host keyed on its own pipeline");
}
}