ADR-002 — Driver-vs-virtual dispatch: how DriverNodeManager routes reads, writes, and subscriptions across driver tags and virtual (scripted) tags
Status: Accepted 2026-04-20 — Option B (single NodeManager + NodeSource tag on the resolver output); Options A and C explicitly rejected.
Related phase: Phase 7 — Scripting Runtime + Scripted Alarms Stream G.
Related tasks: #237 Phase 7 Stream G — Address-space integration.
Related ADRs: ADR-001 — Equipment node walker (this ADR extends the walker + resolver it established).
Context
Phase 7 introduces virtual tags — OPC UA variables whose values are computed by user-authored C# scripts against other tags (driver or virtual). Per design decision #2 in the Phase 7 plan, virtual tags live in the Equipment tree alongside driver tags (not a separate /Virtual/... namespace). An operator browsing Enterprise/Site/Area/Line/Equipment/ sees a flat list of children that includes both driver-sourced variables (e.g. SpeedSetpoint coming from a Modbus tag) and virtual variables (e.g. LineRate computed from SpeedSetpoint × 0.95).
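The LineRate example can be sketched as a pure derivation. Names here are illustrative only: a real Phase 7 script would read SpeedSetpoint through the script context and publish via ctx.SetVirtualTag rather than expose a static method.

```csharp
// Illustrative sketch of the LineRate virtual tag from the example above,
// extracted as a pure function so the derivation can be checked without
// the scripting runtime. The class and method names are hypothetical.
public static class LineRateScript
{
    // LineRate = SpeedSetpoint × 0.95 (the derate factor from the example).
    public static double Compute(double speedSetpoint) => speedSetpoint * 0.95;
}
```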
From the operator's perspective there is no difference. From the server's perspective there is a big one: a read / write / subscribe on a driver node must dispatch to a driver's IReadable / IWritable / ISubscribable implementation; the same operation on a virtual node must dispatch to the VirtualTagEngine. The existing DriverNodeManager (shipped in Phase 1, extended by ADR-001) only knows about the driver case today.
The question is how the dispatch should branch. Three options were considered.
Options
Option A — A separate VirtualTagNodeManager sibling to DriverNodeManager
Register a second INodeManager with the OPC UA stack dedicated to virtual-tag nodes. Each tag landed under an Equipment folder would be owned by whichever NodeManager materialized it; mixed folders would have children belonging to two different managers.
Pros:
- Clean separation — virtual-tag code never touches driver code paths.
- Independent lifecycle: restart the virtual-tag engine without touching drivers.
Cons:
- ADR-001's EquipmentNodeWalker was designed as a single walker producing a single tree under one NodeManager. Forking into two walkers (one per source) risks the UNS / Equipment folders existing twice (once per manager) with different child sets, and the OPC UA stack treating them as distinct nodes.
- Mixed equipment folders: when a Line has 3 driver tags + 2 virtual tags, a client browsing the Line folder expects to see 5 children. Two NodeManagers each claiming ownership of the same folder adds a browse-merge problem the stack doesn't handle cleanly.
- ACL binding (Phase 6.2 trie): one scope per Equipment folder, resolved by NodeScopeResolver. Two NodeManagers means two resolution paths or shared resolution logic — cross-manager coupling that defeats the separation.
- Audit pathways (Phase 6.2 IAuditLogger) and resilience wrappers (Phase 6.1 CapabilityInvoker) are wired into the existing DriverNodeManager. Duplicating them into a second manager doubles the surface that the Roslyn analyzer from the Phase 6.1 Stream A follow-up must keep honest.
Rejected because the sharing of folders (Equipment nodes owning both kinds of children) is the common case, not the exception. Two NodeManagers would fight for ownership on every Equipment node.
Option B — Single DriverNodeManager, NodeScopeResolver returns a NodeSource tag, dispatch branches on source
NodeScopeResolver (established in ADR-001) already joins nodes against the config DB to produce a ScopeId for ACL enforcement. Extend it to also return a NodeSource enum (Driver or Virtual). DriverNodeManager dispatch methods check the source and route:
internal sealed class DriverNodeManager : CustomNodeManager2
{
    private readonly IReadOnlyDictionary<string, IDriver> _drivers;
    private readonly IVirtualTagEngine _virtualTagEngine;
    private readonly NodeScopeResolver _resolver;

    protected override async Task ReadValueAsync(NodeId nodeId, ...)
    {
        var scope = _resolver.Resolve(nodeId);
        // ... ACL check via Phase 6.2 trie (unchanged)
        return scope.Source switch
        {
            NodeSource.Driver => await _drivers[scope.DriverInstanceId].ReadAsync(...),
            NodeSource.Virtual => await _virtualTagEngine.ReadAsync(scope.VirtualTagId, ...),
            _ => throw new InvalidOperationException($"Unhandled NodeSource: {scope.Source}"),
        };
    }
}
Pros:
- Single address-space tree. EquipmentNodeWalker emits one folder per Equipment node and hangs both driver and virtual children under it. Browse / subscribe fan-out / ACL resolution all happen in one NodeManager with one mental model.
- ACL binding works identically for both kinds. A user with ReadEquipment on Line1/Pump_7 can read every child, driver-sourced or virtual.
- Phase 6.1 resilience wrapping + Phase 6.2 audit logging apply uniformly. The CapabilityInvoker analyzer stays correct without new exemptions.
- Adding future source kinds (e.g. a "derived tag" that's neither a driver read nor a script evaluation) is a single-enum-case addition — no new NodeManager.
Cons:
- NodeScopeResolver becomes slightly chunkier — it now carries dispatch metadata in addition to ACL scope. We own that complexity; the payoff is one tree, one lifecycle.
- A bug in the dispatch branch could leak a driver call into the virtual path or vice versa. Mitigated by an xUnit theory in Stream G.4 that mixes both kinds in one Equipment folder and asserts each routes correctly.
Accepted.
Option C — Virtual tag engine registers as a synthetic IDriver
Implement a VirtualTagDriverAdapter that wraps VirtualTagEngine and registers it alongside real drivers through the existing DriverTypeRegistry. Then DriverNodeManager dispatches everything through driver plumbing — virtual tags are just "a driver with no wire."
Pros:
- Reuses every existing IDriver pathway without modification.
- Dispatch branch is trivial because there's no branch — everything routes through driver plumbing.
Cons:
- DriverInstance is the wrong shape for virtual-tag config: no DriverType, no HostAddress, no connectivity probe, no lifecycle-initialization parameters, no NSSM wrapper. Forcing it to fit means adding null columns / sentinel values everywhere.
- IDriver.InitializeAsync / IRediscoverable semantics don't match a scripting engine — the engine doesn't "discover" tags against a wire, it compiles scripts against a config snapshot.
- The resilience Polly wrappers are calibrated for network-bound calls (timeout / retry / circuit breaker). Applying them to a script evaluation is either a pointless passthrough or wrong tuning.
- The Admin UI would need special-casing in every driver-config screen to hide fields that don't apply. The shape mismatch leaks everywhere.
Rejected because the fit is worse than Option B's lightweight dispatch branch. The pretense of uniformity would cost more than the branch it avoids.
Decision
Option B is accepted.
NodeScopeResolver.Resolve(nodeId) returns a NodeScope record with:
public sealed record NodeScope(
    string ScopeId,           // ACL scope ID — unchanged from ADR-001
    NodeSource Source,        // NEW: Driver or Virtual
    string? DriverInstanceId, // populated when Source=Driver
    string? VirtualTagId);    // populated when Source=Virtual

public enum NodeSource
{
    Driver,
    Virtual,
}
DriverNodeManager holds a single reference to IVirtualTagEngine alongside its driver dictionary. Read / Write / Subscribe dispatch pattern-matches on scope.Source and routes accordingly. Writes to a virtual node from an OPC UA client return BadUserAccessDenied because per Phase 7 decision #6, virtual tags are writable only from scripts via ctx.SetVirtualTag. That check lives in DriverNodeManager before the dispatch branch — a dedicated ACL rule rather than a capability of the engine.
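The write-denial check can be sketched as a pure guard. The enums below are stand-ins for the OPC UA stack's status codes; the real check runs inside DriverNodeManager before the dispatch branch, so the virtual-tag engine is never invoked for client writes.

```csharp
// Stand-in sketch of the pre-dispatch write guard (hypothetical names; the
// real code uses the OPC UA stack's StatusCodes). Per Phase 7 decision #6,
// virtual tags are writable only from scripts via ctx.SetVirtualTag.
public enum NodeSource { Driver, Virtual }
public enum WriteStatus { Good, BadUserAccessDenied }

public static class WriteGuard
{
    public static WriteStatus CheckClientWrite(NodeSource source) =>
        source == NodeSource.Virtual
            ? WriteStatus.BadUserAccessDenied // denied before dispatch: engine never sees it
            : WriteStatus.Good;               // driver tags fall through to driver dispatch
}
```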
Dispatch tests (Phase 7 Stream G.4) must cover at minimum:
- Mixed Equipment folder (driver + virtual children) browses with all children visible
- Read routes to the correct backend for each source kind
- Subscribe delivers changes from both kinds on the same subscription
- OPC UA client write to a virtual node returns BadUserAccessDenied without invoking the engine
- Script-driven write to a virtual node (via ctx.SetVirtualTag) updates the value + fires subscription notifications
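A reduced, theory-style model of the routing assertions. Stream G.4 exercises the real DriverNodeManager with xUnit; this sketch models the branch as pure functions (illustrative names) so each source/operation pair can be checked in isolation.

```csharp
// Illustrative model of the dispatch branch for test purposes. The returned
// strings name which backend the real DriverNodeManager would invoke.
public enum NodeSource { Driver, Virtual }

public static class DispatchModel
{
    public static string RouteRead(NodeSource source) => source switch
    {
        NodeSource.Driver  => "driver.ReadAsync",
        NodeSource.Virtual => "virtualTagEngine.ReadAsync",
        _ => throw new System.ArgumentOutOfRangeException(nameof(source)),
    };

    public static string RouteClientWrite(NodeSource source) => source switch
    {
        NodeSource.Driver  => "driver.WriteAsync",
        NodeSource.Virtual => "BadUserAccessDenied", // engine never invoked
        _ => throw new System.ArgumentOutOfRangeException(nameof(source)),
    };
}
```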
Consequences
- EquipmentNodeWalker (ADR-001) gains an extra input channel: the config DB's VirtualTag table alongside the existing Tag table. The walker emits both kinds of children under each Equipment folder with the NodeSource tag set per row.
- NodeScopeResolver gains a NodeSource return value. The change is additive (ADR-001's ScopeId field is unchanged), so Phase 6.2's ACL trie keeps working without modification.
- DriverNodeManager gains a dispatch branch, but the shape of every I* call into drivers is unchanged. Phase 6.1's resilience wrapping applies identically to the driver branch; the virtual branch wraps separately (virtual-tag evaluation errors map to BadInternalError per Phase 7 decision #11, not through the Polly pipeline).
- Adding a future source kind (e.g. an alias tag, a cross-cluster federation tag) is one enum case + one dispatch arm + the equivalent walker extension. The architecture is extensible without rewrite.
Not Decided (revisitable)
- Whether IVirtualTagEngine should live alongside IDriver in Core.Abstractions or stay in the Phase 7 project. The plan currently keeps it in Phase 7's Core.VirtualTags project because it's not a driver capability. If Phase 7 Stream G discovers significant shared surface, promote it later — not blocking.
- Whether server-side method calls from OPC UA clients (e.g. a future "force-recompute-this-virtual-tag" admin method) should route through the same dispatch. Out of scope — virtual tags have no method nodes today; scripted alarm method calls (OneShotShelve etc.) route through their own ScriptedAlarmEngine path per Phase 7 Stream C.6.
References
- Phase 7 — Scripting Runtime + Scripted Alarms Stream G
- ADR-001 — Equipment node walker
- docs/v2/plan.md decision #110 (Tag-to-Equipment binding)
- docs/v2/plan.md decision #120 (UNS hierarchy requirements)
- Phase 6.2 NodeScopeResolver ACL join