
ADR-002 — Driver-vs-virtual dispatch: how DriverNodeManager routes reads, writes, and subscriptions across driver tags and virtual (scripted) tags

Status: Accepted 2026-04-20 — Option B (single NodeManager + NodeSource tag on the resolver output); Options A and C explicitly rejected.

Related phase: Phase 7 — Scripting Runtime + Scripted Alarms Stream G.

Related tasks: #237 Phase 7 Stream G — Address-space integration.

Related ADRs: ADR-001 — Equipment node walker (this ADR extends the walker + resolver it established).

Context

Phase 7 introduces virtual tags — OPC UA variables whose values are computed by user-authored C# scripts against other tags (driver or virtual). Per design decision #2 in the Phase 7 plan, virtual tags live in the Equipment tree alongside driver tags (not a separate /Virtual/... namespace). An operator browsing Enterprise/Site/Area/Line/Equipment/ sees a flat list of children that includes both driver-sourced variables (e.g. SpeedSetpoint coming from a Modbus tag) and virtual variables (e.g. LineRate computed from SpeedSetpoint × 0.95).
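A virtual-tag script for the LineRate example above might look like the following fragment. This is a sketch against an assumed scripting surface: only ctx.SetVirtualTag is named by the Phase 7 decisions; the ctx.GetTag accessor and the path form are illustrative assumptions.

```csharp
// Hypothetical virtual-tag script body. ctx.GetTag is an assumed accessor
// name (not confirmed by the Phase 7 plan); ctx.SetVirtualTag is the
// write path named in Phase 7 decision #6.
var speed = ctx.GetTag<double>("Enterprise/Site/Area/Line/Equipment/SpeedSetpoint");
ctx.SetVirtualTag("LineRate", speed * 0.95);
```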

From the operator's perspective there is no difference. From the server's perspective there is a big one: a read / write / subscribe on a driver node must dispatch to a driver's IReadable / IWritable / ISubscribable implementation; the same operation on a virtual node must dispatch to the VirtualTagEngine. The existing DriverNodeManager (shipped in Phase 1, extended by ADR-001) only knows about the driver case today.

The question is how the dispatch should branch. Three options were considered.

Options

Option A — A separate VirtualTagNodeManager sibling to DriverNodeManager

Register a second INodeManager with the OPC UA stack dedicated to virtual-tag nodes. Each tag node placed under an Equipment folder would be owned by whichever NodeManager materialized it; mixed folders would have children belonging to two different managers.

Pros:

  • Clean separation — virtual-tag code never touches driver code paths.
  • Independent lifecycle: restart the virtual-tag engine without touching drivers.

Cons:

  • ADR-001's EquipmentNodeWalker was designed as a single walker producing a single tree under one NodeManager. Forking into two walkers (one per source) risks the UNS / Equipment folders existing twice (once per manager) with different child sets, and the OPC UA stack treating them as distinct nodes.
  • Mixed equipment folders: when a Line has 3 driver tags + 2 virtual tags, a client browsing the Line folder expects to see 5 children. Two NodeManagers each claiming ownership of the same folder introduces a browse-merge problem the OPC UA stack doesn't handle cleanly.
  • ACL binding (Phase 6.2 trie): one scope per Equipment folder, resolved by NodeScopeResolver. Two NodeManagers means two resolution paths or shared resolution logic — cross-manager coupling that defeats the separation.
  • Audit pathways (Phase 6.2 IAuditLogger) and resilience wrappers (Phase 6.1 CapabilityInvoker) are wired into the existing DriverNodeManager. Duplicating them into a second manager doubles the surface that the Roslyn analyzer from Phase 6.1 Stream A follow-up must keep honest.

Rejected because the sharing of folders (Equipment nodes owning both kinds of children) is the common case, not the exception. Two NodeManagers would fight for ownership on every Equipment node.

Option B — Single DriverNodeManager, NodeScopeResolver returns a NodeSource tag, dispatch branches on source

NodeScopeResolver (established in ADR-001) already joins nodes against the config DB to produce a ScopeId for ACL enforcement. Extend it to also return a NodeSource enum (Driver or Virtual). DriverNodeManager dispatch methods check the source and route:

internal sealed class DriverNodeManager : CustomNodeManager2
{
    private readonly IReadOnlyDictionary<string, IDriver> _drivers;
    private readonly IVirtualTagEngine _virtualTagEngine;
    private readonly NodeScopeResolver _resolver;

    protected override async Task ReadValueAsync(NodeId nodeId, ...)
    {
        var scope = _resolver.Resolve(nodeId);
        // ... ACL check via Phase 6.2 trie (unchanged)
        return scope.Source switch
        {
            NodeSource.Driver  => await _drivers[scope.DriverInstanceId!].ReadAsync(...),
            NodeSource.Virtual => await _virtualTagEngine.ReadAsync(scope.VirtualTagId!, ...),
            // Defensive default: a new NodeSource case must ship with a dispatch arm.
            _ => throw new ServiceResultException(StatusCodes.BadInternalError),
        };
    }
}

Pros:

  • Single address-space tree. EquipmentNodeWalker emits one folder per Equipment node and hangs both driver and virtual children under it. Browse / subscribe fan-out / ACL resolution all happen in one NodeManager with one mental model.
  • ACL binding works identically for both kinds. A user with ReadEquipment on Line1/Pump_7 can read every child, driver-sourced or virtual.
  • Phase 6.1 resilience wrapping + Phase 6.2 audit logging apply uniformly. The CapabilityInvoker analyzer stays correct without new exemptions.
  • Adding future source kinds (e.g. a "derived tag" that's neither a driver read nor a script evaluation) is a single-enum-case addition — no new NodeManager.

Cons:

  • NodeScopeResolver becomes slightly chunkier — it now carries dispatch metadata in addition to ACL scope. We own that complexity; the payoff is one tree, one lifecycle.
  • A bug in the dispatch branch could leak a driver call into the virtual path or vice versa. Mitigated by an xUnit theory in Stream G.4 that mixes both kinds in one Equipment folder and asserts each routes correctly.

Accepted.

Option C — Virtual tag engine registers as a synthetic IDriver

Implement a VirtualTagDriverAdapter that wraps VirtualTagEngine and registers it alongside real drivers through the existing DriverTypeRegistry. Then DriverNodeManager dispatches everything through driver plumbing — virtual tags are just "a driver with no wire."

Pros:

  • Reuses every existing IDriver pathway without modification.
  • Dispatch branch is trivial because there's no branch — everything routes through driver plumbing.

Cons:

  • DriverInstance is the wrong shape for virtual-tag config: no DriverType, no HostAddress, no connectivity probe, no lifecycle-initialization parameters, no NSSM wrapper. Forcing it to fit means adding null columns / sentinel values everywhere.
  • IDriver.InitializeAsync / IRediscoverable semantics don't match a scripting engine — the engine doesn't "discover" tags against a wire, it compiles scripts against a config snapshot.
  • The resilience Polly wrappers are calibrated for network-bound calls (timeout / retry / circuit breaker). Applying them to a script evaluation is either a pointless passthrough or wrong tuning.
  • The Admin UI would need special-casing in every driver-config screen to hide fields that don't apply. The shape mismatch leaks everywhere.

Rejected because the fit is worse than Option B's lightweight dispatch branch. The pretense of uniformity would cost more than the branch it avoids.

Decision

Option B is accepted.

NodeScopeResolver.Resolve(nodeId) returns a NodeScope record with:

public sealed record NodeScope(
    string ScopeId,            // ACL scope ID — unchanged from ADR-001
    NodeSource Source,         // NEW: Driver or Virtual
    string? DriverInstanceId,  // populated when Source=Driver
    string? VirtualTagId);     // populated when Source=Virtual

public enum NodeSource
{
    Driver,
    Virtual,
}
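A minimal, self-contained sketch of a Resolve that populates these fields. It repeats the record and enum for self-containment; the in-memory dictionaries stand in for the config-DB joins ADR-001 established, and the node-ID strings and column shapes are illustrative, not the real schema.

```csharp
using System;
using System.Collections.Generic;

public enum NodeSource { Driver, Virtual }

public sealed record NodeScope(
    string ScopeId,
    NodeSource Source,
    string? DriverInstanceId,
    string? VirtualTagId);

public sealed class NodeScopeResolver
{
    // Stand-ins for the config DB's Tag and VirtualTag tables (node ID -> owner).
    private readonly IReadOnlyDictionary<string, (string ScopeId, string DriverInstanceId)> _driverTags;
    private readonly IReadOnlyDictionary<string, (string ScopeId, string VirtualTagId)> _virtualTags;

    public NodeScopeResolver(
        IReadOnlyDictionary<string, (string ScopeId, string DriverInstanceId)> driverTags,
        IReadOnlyDictionary<string, (string ScopeId, string VirtualTagId)> virtualTags)
    {
        _driverTags = driverTags;
        _virtualTags = virtualTags;
    }

    public NodeScope Resolve(string nodeId)
    {
        // Driver and virtual tags come from different config tables, but the
        // caller sees one NodeScope shape either way; only Source differs.
        if (_driverTags.TryGetValue(nodeId, out var d))
            return new NodeScope(d.ScopeId, NodeSource.Driver, d.DriverInstanceId, null);
        if (_virtualTags.TryGetValue(nodeId, out var v))
            return new NodeScope(v.ScopeId, NodeSource.Virtual, null, v.VirtualTagId);
        throw new KeyNotFoundException($"No Tag or VirtualTag row for node '{nodeId}'");
    }
}
```

Either lookup hit yields a complete NodeScope, so DriverNodeManager's dispatch never has to consult a second service to learn which backend owns a node.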

DriverNodeManager holds a single reference to IVirtualTagEngine alongside its driver dictionary. Read / Write / Subscribe dispatch pattern-matches on scope.Source and routes accordingly. Writes to a virtual node from an OPC UA client return BadUserAccessDenied because per Phase 7 decision #6, virtual tags are writable only from scripts via ctx.SetVirtualTag. That check lives in DriverNodeManager before the dispatch branch — a dedicated ACL rule rather than a capability of the engine.
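A sketch of the write path with that guard in place, assuming the OPC UA .NET stack's ServiceResultException and StatusCodes shapes; the method signature and elisions follow the read sketch in Option B and are illustrative, not the real override.

```csharp
protected override async Task WriteValueAsync(NodeId nodeId, DataValue value, ...)
{
    var scope = _resolver.Resolve(nodeId);

    // Phase 7 decision #6: virtual tags are writable only from scripts via
    // ctx.SetVirtualTag. Reject client writes before the dispatch branch so
    // the engine is never invoked for an externally originated write.
    if (scope.Source == NodeSource.Virtual)
        throw new ServiceResultException(StatusCodes.BadUserAccessDenied);

    // ... ACL check via the Phase 6.2 trie, then driver dispatch (unchanged)
    await _drivers[scope.DriverInstanceId!].WriteAsync(...);
}
```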

Dispatch tests (Phase 7 Stream G.4) must cover at minimum:

  • Mixed Equipment folder (driver + virtual children) browses with all children visible
  • Read routes to the correct backend for each source kind
  • Subscribe delivers changes from both kinds on the same subscription
  • OPC UA client write to a virtual node returns BadUserAccessDenied without invoking the engine
  • Script-driven write to a virtual node (via ctx.SetVirtualTag) updates the value + fires subscription notifications
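A sketch of what part of that Stream G.4 coverage could look like in xUnit, which Option B's mitigation already names. The fixture, fake engine, and member names here are assumptions, not the real test harness.

```csharp
public sealed class DispatchRoutingTests
{
    [Theory]
    [InlineData("ns=2;s=Line1/SpeedSetpoint", NodeSource.Driver)]
    [InlineData("ns=2;s=Line1/LineRate",      NodeSource.Virtual)]
    public void Read_routes_to_backend_matching_node_source(string nodeId, NodeSource expected)
    {
        var scope = _fixture.Resolver.Resolve(new NodeId(nodeId));
        Assert.Equal(expected, scope.Source);
        // ... drive a read through DriverNodeManager and assert exactly one of
        // the fake driver / fake IVirtualTagEngine recorded the call.
    }

    [Fact]
    public void Client_write_to_virtual_node_is_denied_without_invoking_engine()
    {
        var sre = Assert.Throws<ServiceResultException>(
            () => _manager.Write("ns=2;s=Line1/LineRate", 42.0));
        Assert.Equal(StatusCodes.BadUserAccessDenied, sre.StatusCode);
        Assert.Empty(_fakeEngine.RecordedWrites);
    }
}
```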

Consequences

  • EquipmentNodeWalker (ADR-001) gains an extra input channel: the config DB's VirtualTag table alongside the existing Tag table. Walker emits both kinds of children under each Equipment folder with the NodeSource tag set per row.
  • NodeScopeResolver gains a NodeSource return value. The change is additive (ADR-001's ScopeId field is unchanged), so Phase 6.2's ACL trie keeps working without modification.
  • DriverNodeManager gains a dispatch branch but the shape of every I* call into drivers is unchanged. Phase 6.1's resilience wrapping applies identically to the driver branch; the virtual branch wraps separately (virtual tag evaluation errors map to BadInternalError per Phase 7 decision #11, not through the Polly pipeline).
  • Adding a future source kind (e.g. an alias tag, a cross-cluster federation tag) is one enum case + one dispatch arm + the equivalent walker extension. The architecture is extensible without rewrite.
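To make the extension cost concrete, a hypothetical Derived source kind (the name and its evaluator are illustrative, not part of any plan) touches exactly the three places listed above:

```csharp
public enum NodeSource
{
    Driver,
    Virtual,
    Derived,   // (1) the new enum case
}

// (2) one new dispatch arm in DriverNodeManager:
//     NodeSource.Derived => await _derivedTagEvaluator.ReadAsync(scope.DerivedTagId, ...),
//
// (3) the walker gains the backing config table (e.g. a DerivedTag table) as
//     one more input channel, emitting children with Source = NodeSource.Derived.
```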

Not Decided (revisitable)

  • Whether IVirtualTagEngine should live alongside IDriver in Core.Abstractions or stay in the Phase 7 project. Plan currently keeps it in Phase 7's Core.VirtualTags project because it's not a driver capability. If Phase 7 Stream G discovers significant shared surface, promote later — not blocking.
  • Whether server-side method calls from OPC UA clients (e.g. a future "force-recompute-this-virtual-tag" admin method) should route through the same dispatch. Out of scope — virtual tags have no method nodes today; scripted alarm method calls (OneShotShelve etc.) route through their own ScriptedAlarmEngine path per Phase 7 Stream C.6.

References