lmxopcua/docs/v2/plan.md


Next Phase Plan — OtOpcUa v2: Multi-Driver Architecture

Status: DRAFT — brainstorming in progress, do NOT execute until explicitly approved.

Branch: v2
Created: 2026-04-16

Vision

Rename from LmxOpcUa to OtOpcUa and evolve from a single-protocol OPC UA server (Galaxy/MXAccess only) into a multi-driver OPC UA server where:

  • The common core owns the OPC UA server, address space management, session/security/subscription machinery, and client-facing concerns.
  • Driver modules are pluggable backends that each know how to connect to a specific data source, discover its tags/hierarchy, and shuttle live data back through the core to OPC UA clients.
  • Drivers implement composable capability interfaces — a driver only implements what it supports (e.g. subscriptions, alarms, history).
  • The existing Galaxy/MXAccess integration becomes the first driver module, proving the abstraction works against real production use.

Target Drivers

| Driver | Protocol | Capability Profile | Notes |
|---|---|---|---|
| Galaxy | MXAccess COM + Galaxy DB | Read, Write, Subscribe, Alarms, HDA | Existing v1 logic, out-of-process (.NET 4.8 x86) |
| Modbus TCP | MB-TCP | Read, Write, Subscribe (polled) | Flat register model, config-driven tag map. Also covers DL205 via AddressFormat=DL205 (octal translation) |
| AB CIP | EtherNet/IP CIP | Read, Write, Subscribe (polled) | ControlLogix/CompactLogix, symbolic tag addressing |
| AB Legacy | EtherNet/IP PCCC | Read, Write, Subscribe (polled) | SLC 500/MicroLogix, file-based addressing |
| Siemens S7 | S7comm (ISO-on-TCP) | Read, Write, Subscribe (polled) | S7-300/400/1200/1500, DB/M/I/Q addressing |
| TwinCAT | ADS (Beckhoff) | Read, Write, Subscribe (native) | Symbol-based, native ADS notifications |
| FOCAS | FOCAS2 (FANUC CNC) | Read, Write, Subscribe (polled) | CNC data model (axes, spindle, PMC, macros) |
| OPC UA Client | OPC UA | Read, Write, Subscribe, Alarms, HDA | Gateway/aggregation — proxy a remote server |

Driver Characteristics That Shape the Interface

| Concern | Galaxy | Modbus TCP | AB CIP | AB Legacy | S7 | TwinCAT | FOCAS | OPC UA Client |
|---|---|---|---|---|---|---|---|---|
| Tag discovery | DB query | Config DB | Config DB | Config DB | Config DB | Symbol upload | CNC query + Config DB | Browse remote |
| Hierarchy | Rich tree | Flat (user groups) | Flat or program-scoped | Flat (file-based) | Flat (DB/area) | Symbol tree | Functional (axes/spindle/PMC) | Mirror remote |
| Data types | mx_data_type | Raw registers (user-typed) | CIP typed | File-typed (N=INT16, F=FLOAT) | S7 typed | IEC 61131-3 | Scaled integers + structs | Full OPC UA |
| Native subscriptions | Yes (MXAccess) | No (polled) | No (polled) | No (polled) | No (polled) | Yes (ADS notifications) | No (polled) | Yes (OPC UA) |
| Alarms | Yes | No | No | No | No | Possible (ADS state) | Yes (CNC alarms) | Yes (A&C) |
| History | Yes (Historian) | No | No | No | No | No | No | Yes (HistoryRead) |

Note: AutomationDirect DL205 PLCs are supported by the Modbus TCP driver via AddressFormat=DL205 (octal V/X/Y/C/T/CT address translation over H2-ECOM100 module, port 502). No separate driver needed.


Architecture — Key Decisions & Open Questions

1. Common Core Boundary

Core owns:

  • OPC UA server lifecycle (startup, shutdown, session management)
  • Security (transport profiles, authentication, authorization)
  • Address space tree management (add/remove/update nodes)
  • Subscription engine (create, publish, transfer)
  • Status dashboard / health reporting
  • Redundancy
  • Configuration framework
  • Namespace allocation per driver

Driver owns:

  • Data source connection management
  • Tag/hierarchy discovery
  • Data type mapping (driver types → OPC UA types)
  • Read/write translation
  • Alarm sourcing (if supported)
  • Historical data access (if supported)

Decided:

  • Each driver instance manages its own polling internally — the core does not provide a shared poll scheduler.
  • Multiple instances of the same driver type are supported (e.g. two Modbus TCP drivers for different device groups).
  • One namespace index per driver instance (each instance gets its own NamespaceUri).

Decided:

  • Drivers register nodes via a builder/context API (IAddressSpaceBuilder) provided by the core. Core owns the tree; driver streams AddFolder / AddVariable calls as it discovers nodes. Supports incremental/large address spaces without forcing the driver to buffer the whole tree.
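As a sketch, the builder contract could look like the following. Only IAddressSpaceBuilder, AddFolder, and AddVariable appear in this plan; NodeHandle, the parameter lists, and RemoveSubtree are illustrative assumptions.

```csharp
// Sketch of the builder/context API the core hands to each driver.
// NodeHandle and the exact signatures are assumptions, not decided API.
public interface IAddressSpaceBuilder
{
    // Create a folder under the given parent (null = the driver's root folder).
    NodeHandle AddFolder(NodeHandle? parent, string browseName);

    // Create a variable node; the core materializes the OPC UA node and routes
    // client reads/writes back to the owning driver instance.
    NodeHandle AddVariable(NodeHandle parent, string browseName,
                           Type dataType, bool writable);

    // Drop a subtree, e.g. when a driver reports its backend hierarchy changed.
    void RemoveSubtree(NodeHandle root);
}
```

Because the driver streams AddFolder/AddVariable calls as discovery proceeds, even a large Galaxy hierarchy never has to be buffered whole in driver memory.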

2. Driver Capability Interfaces

Composable — a driver implements only what it supports:

IDriver                    — required: lifecycle, metadata, health
├── ITagDiscovery          — discover tags/hierarchy from the backend
├── IReadable              — on-demand read
├── IWritable              — on-demand write
├── ISubscribable          — data change subscriptions (native or driver-managed polling)
├── IAlarmSource           — alarm events and acknowledgment
└── IHistoryProvider       — historical data reads

Note: ISubscribable covers both native subscriptions (Galaxy MXAccess advisory, OPC UA monitored items) and driver-internal polled subscriptions (Modbus, AB CIP). The driver owns its polling loop — the core just sees OnDataChange callbacks regardless of mechanism.
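The composable surface could be declared roughly as follows. The interface names come from the tree above; all member signatures, plus the DriverHealth, StatusCode, and TagRef types, are illustrative assumptions.

```csharp
// Required base: lifecycle, metadata, health.
public interface IDriver
{
    string Name { get; }
    DriverHealth Health { get; }
    Task StartAsync(CancellationToken ct);
    Task StopAsync(CancellationToken ct);
}

// Shared data model: value + universal OPC UA StatusCode quality + timestamp.
public readonly record struct DataValue(object? Value, StatusCode Quality,
                                        DateTime SourceTimestamp);

public interface IReadable
{
    Task<DataValue> ReadAsync(TagRef tag, CancellationToken ct);
}

public interface IWritable
{
    Task<StatusCode> WriteAsync(TagRef tag, object? value, CancellationToken ct);
}

// Covers native subscriptions and driver-internal polling alike; the core
// only ever observes the OnDataChange callbacks.
public interface ISubscribable
{
    IDisposable Subscribe(IReadOnlyList<TagRef> tags,
                          Action<TagRef, DataValue> onDataChange);
}
```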

Capability matrix:

| Interface | Galaxy | Modbus TCP | AB CIP | AB Legacy | S7 | TwinCAT | FOCAS | OPC UA Client |
|---|---|---|---|---|---|---|---|---|
| IDriver | Y | Y | Y | Y | Y | Y | Y | Y |
| ITagDiscovery | Y | Y (config DB) | Y (config DB) | Y (config DB) | Y (config DB) | Y (symbol upload) | Y (built-in + config DB) | Y (browse) |
| IReadable | Y | Y | Y | Y | Y | Y | Y | Y |
| IWritable | Y | Y | Y | Y | Y | Y | Y (limited) | Y |
| ISubscribable | Y (native) | Y (polled) | Y (polled) | Y (polled) | Y (polled) | Y (native ADS) | Y (polled) | Y (native) |
| IAlarmSource | Y |  |  |  |  |  | Y (CNC alarms) | Y |
| IHistoryProvider | Y |  |  |  |  |  |  | Y |

Decided:

  • Data change callback uses shared data models (DataValue with value, StatusCode quality, timestamp). Every driver maps to the same OPC UA StatusCode space — drivers define which quality codes they can produce but the model is universal.
  • Driver isolation: each driver instance runs independently. A crash or disconnect in one driver sets Bad quality on its own nodes only — no impact on other driver instances. The core must catch and contain driver failures.

Resilience — Polly

Decided: Use Polly v8+ (Microsoft.Extensions.Resilience) as the resilience layer across all drivers and the configuration subsystem.

Polly provides composable resilience pipelines rather than hand-rolled retry/circuit-breaker logic. Each driver instance (and each device within a driver) gets its own pipeline so failures are isolated at the finest practical level.

Where Polly applies:

| Component | Pipeline | Strategies | Purpose |
|---|---|---|---|
| Driver device connection | Per device | Retry (exp. backoff) + CircuitBreaker + Timeout | Reconnect to offline PLC/device, stop hammering after N failures, bound connection attempts |
| Driver read ops | Per device | Timeout + Retry | Reads are idempotent — retry transient failures freely |
| Driver write ops | Per device | Timeout only by default | Writes are NOT auto-retried — a timeout may fire after the device already accepted the command; replaying non-idempotent field actions (pulses, acks, recipe steps, counter increments) can cause duplicate operations |
| Driver poll loop | Per device | CircuitBreaker | When a device is consistently unreachable, open circuit and probe periodically instead of polling at full rate |
| Galaxy IPC (Proxy → Host) | Per proxy | Retry (backoff) + CircuitBreaker | Reconnect when Galaxy Host service restarts, stop retrying if Host is down for an extended period |
| Config DB polling | Singleton | Retry (backoff) + Fallback (use cache) | Central DB unreachable → fall back to LiteDB cache, keep retrying in background |
| Config DB startup | Singleton | Retry (backoff) + Fallback (use cache) | If DB is briefly unavailable at startup, retry before falling back to cache |

How it integrates:

IHostedService (per driver instance)
  ├── Per-device ReadPipeline
  │     ├── Timeout          — bound how long a read can take
  │     ├── Retry            — transient failure recovery with jitter (SAFE: reads are idempotent)
  │     └── CircuitBreaker   — stop polling dead devices, probe periodically
  │                            on break: set device tags to Bad quality
  │                            on reset: resume normal polling, restore quality
  │
  └── Per-device WritePipeline
        ├── Timeout          — bound how long a write can take
        └── (NO retry by default)  — opt-in per tag via TagConfig.WriteIdempotent = true
                                     OR via a CAS (compare-and-set) wrapper that verifies
                                     the device state before each retry attempt

ConfigurationService
  └── ResiliencePipeline
        ├── Retry            — transient DB connectivity issues
        └── Fallback         — serve from LiteDB cache on sustained outage
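The per-device pipelines above can be sketched with the Polly v8 builder API. This is a sketch only: the timeout values, retry counts, and break duration are illustrative assumptions (the plan tunes them per device from the central config DB), and the quality hooks are placeholders.

```csharp
using System;
using Polly;
using Polly.CircuitBreaker;
using Polly.Retry;

static class DevicePipelines
{
    // Per-device read pipeline, mirroring the tree above.
    public static ResiliencePipeline BuildReadPipeline() =>
        new ResiliencePipelineBuilder()
            .AddTimeout(TimeSpan.FromSeconds(2))             // bound each read
            .AddRetry(new RetryStrategyOptions
            {
                MaxRetryAttempts = 3,
                BackoffType = DelayBackoffType.Exponential,
                UseJitter = true,                            // safe: reads are idempotent
                Delay = TimeSpan.FromMilliseconds(250),
            })
            .AddCircuitBreaker(new CircuitBreakerStrategyOptions
            {
                BreakDuration = TimeSpan.FromSeconds(30),    // probe cadence once open
                OnOpened = _ => default,                     // hook: set device tags to Bad quality
                OnClosed = _ => default,                     // hook: resume polling, restore quality
            })
            .Build();

    // Per-device write pipeline: timeout only, no automatic retry by default.
    public static ResiliencePipeline BuildWritePipeline() =>
        new ResiliencePipelineBuilder()
            .AddTimeout(TimeSpan.FromSeconds(2))
            .Build();
}
```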

Write-retry policy (per the adversarial review, finding #1):

  • Default: no automatic retry on writes. A timeout bubbles up as a write failure; the OPC UA client decides whether to re-issue.
  • Opt-in per tag via TagConfig.WriteIdempotent = true — explicit assertion by the configurer that replaying the same write has no side effect (e.g. setpoint overwrite, steady-state mode selection).
  • Opt-in via CAS (compare-and-set): before retrying, read the current value; retry only if the device still holds the pre-write value. Drivers whose protocol supports atomic read-modify-write (e.g. Modbus mask-write, OPC UA writes with expected-value) can plug this in.
  • Documented never-retry cases: edge-triggered acks, pulse outputs, monotonic counters, recipe-step advances, alarm acknowledgments, any "fire-and-forget" command register.

Polly integration points:

  • Microsoft.Extensions.Resilience for DI-friendly pipeline registration
  • TelemetryListener feeds circuit-breaker state changes into the status dashboard (operators see which devices are in open/half-open/closed state)
  • Per-driver/per-device pipeline configuration from the central config DB (retry counts, backoff intervals, circuit breaker thresholds can be tuned per device)

Decided:

  • Capability discovery uses interface checks via is (e.g. if (driver is IAlarmSource a) ...). The interface is the capability — no redundant flag enum to keep in sync.
  • ITagDiscovery is discovery-only. Drivers with a change signal (Galaxy deploy time, OPC UA server change notifications) additionally implement an optional IRediscoverable sub-interface; the core subscribes and rebuilds the affected subtree. Static drivers (Modbus, S7, etc. whose tags only change via a published config generation) don't implement it.
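In core wiring, the decision above reduces to plain type tests. A sketch; the attach targets (alarmSubsystem, historySubsystem, addressSpace) and the TagsChanged event name are assumptions.

```csharp
// The interface IS the capability: no flag enum to keep in sync.
foreach (IDriver driver in drivers)
{
    if (driver is IAlarmSource alarms)
        alarmSubsystem.Attach(alarms);             // only drivers that source alarms

    if (driver is IHistoryProvider history)
        historySubsystem.Attach(history);

    if (driver is IRediscoverable rediscoverable)
        rediscoverable.TagsChanged += subtree =>   // e.g. Galaxy deploy detected
            addressSpace.RebuildSubtree(subtree);
}
```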

3. Runtime & Target Framework

Decided: .NET 10, C#, x64 for everything — except where explicitly required.

| Component | Target | Reason |
|---|---|---|
| Core, Core.Abstractions | .NET 10 x64 | Default |
| Server | .NET 10 x64 | Default |
| Configuration | .NET 10 x64 | Default |
| Admin | .NET 10 x64 | Blazor Server |
| Driver.ModbusTcp | .NET 10 x64 | Default |
| Driver.AbCip | .NET 10 x64 | Default |
| Driver.OpcUaClient | .NET 10 x64 | Default |
| Client.CLI | .NET 10 x64 | Default |
| Client.UI | .NET 10 x64 | Avalonia |
| Driver.Galaxy | .NET Framework 4.8 x86 | MXAccess COM interop requires 32-bit |

Critical implication: The Galaxy driver cannot load in-process with a .NET 10 x64 server. It must run as an out-of-process driver — a separate .NET 4.8 x86 process that the core communicates with over IPC.

Decided: Named pipes with MessagePack serialization for IPC.

  • Galaxy Host always runs on the same machine (MXAccess needs local ArchestrA Platform)
  • Named pipes are fast, no port allocation, built into both .NET 4.8 (System.IO.Pipes) and .NET 10
  • Galaxy.Shared defines request/response message types serialized with MessagePack over length-prefixed frames
  • MessagePack-CSharp (MessagePack NuGet) supports .NET Framework 4.6.1+ and .NET Standard 2.0+ — works on both sides
  • Compact binary format, faster than JSON, good fit for high-frequency data change callbacks
  • Simpler than gRPC on .NET 4.8 (which needs legacy Grpc.Core native library)

Decided: Galaxy Host is a separate Windows service.

  • Independent lifecycle from the OtOpcUa Server
  • Can be restarted without affecting the main server or other drivers
  • Galaxy.Proxy detects connection loss, sets Bad quality on Galaxy nodes, reconnects when Host comes back
  • Installed/managed via standard Windows service tooling

┌──────────────────────────────────┐  named pipe  ┌───────────────────────────┐
│  OtOpcUa Server (.NET 10 x64)    │◄────────────►│  Galaxy Host Service      │
│  Windows Service                 │              │  Windows Service          │
│  (Microsoft.Extensions.Hosting)  │              │  (.NET 4.8 x86)           │
│                                  │              │                           │
│  Core                            │              │  MxAccessBridge           │
│    ├── Driver.ModbusTcp (in-proc)│              │  GalaxyRepository         │
│    ├── Driver.AbCip    (in-proc) │              │  GalaxyDriverService      │
│    └── GalaxyProxy    (in-proc)──┼──────────────┼──AlarmTracking            │
│                                  │              │  HDA Plugin               │
└──────────────────────────────────┘              └───────────────────────────┘

Notes for future work:

  • The Proxy/Host/Shared split is a general pattern — any future driver with process-isolation requirements (bitness mismatch, unstable native dependency, license boundary) can reuse the same three-project layout.
  • Reusability of LmxNodeManager as a "generic driver node manager" will be assessed during Phase 2 interface extraction.

4. Galaxy/MXAccess as Out-of-Process Driver

Current tightly-coupled pieces to refactor:

  • LmxNodeManager — mixes OPC UA node management with MXAccess-specific logic
  • MxAccessBridge — COM thread, subscriptions, reconnect
  • GalaxyRepository — SQL queries for hierarchy/attributes
  • Alarm tracking tied to MXAccess subscription model
  • HDA via Wonderware Historian plugin

All of these stay in the Galaxy Host process (.NET 4.8 x86). The GalaxyProxy in the main server implements the standard driver interfaces and forwards over IPC.

Decided:

  • Refactor is incremental: extract IDriver / ISubscribable / ITagDiscovery etc. against the existing LmxNodeManager first (still in-process on v2 branch), validate the system still runs, then move the implementation behind the IPC boundary into Galaxy.Host. Keeps the system runnable at each step and de-risks the out-of-process move.
  • Parity test: run the existing v1 IntegrationTests suite against the v2 Galaxy driver (same Galaxy, same expectations) plus a scripted Client.CLI walkthrough (connect / browse / read / write / subscribe / history / alarms) on a dev Galaxy. Automated regression + human-observable behavior.

5. Configuration Model — Centralized MSSQL + Local Cache

Deployment topology — server clusters:

Sites deploy OtOpcUa as 2-node clusters to provide non-transparent OPC UA redundancy (per v1 — RedundancySupport.Warm / Hot, no VIP/load-balancer involvement; clients see both endpoints in ServerUriArray and pick by ServiceLevel). Single-node deployments are the same model with NodeCount = 1. The config schema treats this uniformly: every server is a member of a ServerCluster with 1 or 2 ClusterNode members.

Within a cluster, both nodes serve identical address spaces — defining tags twice would invite drift — so driver definitions, device configs, tag definitions, and poll groups attach to ClusterId, not to individual nodes. Per-node overrides exist only for physical-machine settings that legitimately differ (host, port, ApplicationUri, redundancy role, machine cert) and for the rare driver setting that must differ per node (e.g. MxAccess.ClientName so Galaxy distinguishes them). Overrides are minimal by intent.

Architecture:

┌──────────────────────────────────────┐
│  Central Config DB (MSSQL)           │
│                                      │
│  - Server clusters (1 or 2 nodes)    │
│  - Cluster nodes (physical servers)  │
│  - Driver assignments (per cluster)  │
│  - Tag definitions (per cluster)     │
│  - Device configs (per cluster)      │
│  - Per-node overrides (minimal)      │
│  - Schemaless driver config          │
│    (JSON; cluster-level + node       │
│     override JSON)                   │
└──────────┬───────────────────────────┘
           │  poll / change detection
           ▼
       ┌─── Cluster LINE3-OPCUA ─────────────────────────┐
       │                                                 │
┌──────┴──────────────────┐    ┌─────────────────────────┴──┐
│  Node LINE3-OPCUA-A     │    │  Node LINE3-OPCUA-B        │
│  RedundancyRole=Primary │    │  RedundancyRole=Secondary  │
│                         │    │                            │
│  appsettings.json:      │    │  appsettings.json:         │
│    - MSSQL conn string  │    │    - MSSQL conn string     │
│    - ClusterId          │    │    - ClusterId             │
│    - NodeId             │    │    - NodeId                │
│    - Local cache path   │    │    - Local cache path      │
│                         │    │                            │
│  Local cache (LiteDB)   │    │  Local cache (LiteDB)      │
└─────────────────────────┘    └────────────────────────────┘

How it works:

  1. Each OtOpcUa node has a minimal appsettings.json with just: MSSQL connection string, its ClusterId and NodeId, a local machine-bound client certificate (or gMSA credential), and local cache file path. OPC UA port and ApplicationUri come from the central DB (ClusterNode.OpcUaPort / ClusterNode.ApplicationUri), not from local config — they're cluster topology, not local concerns.
  2. On startup, the node authenticates to the central DB using a credential bound to its NodeId — a client cert or SQL login per node, NOT a shared DB login. The DB-side authorization layer enforces that the authenticated principal may only read config for its NodeId's ClusterId. A self-asserted NodeId with the wrong credential is rejected. A node may not read another cluster's config, even if both clusters belong to the same admin team.
  3. The node requests its current config generation from the central DB: "give me the latest published generation for cluster X." Generations are cluster-scoped — one generation = one cluster's full configuration snapshot.
  4. The node receives the cluster-level config (drivers, devices, tags, poll groups) plus its own ClusterNode row (physical attributes + override JSON). It merges node overrides onto cluster-level driver configs at apply time.
  5. Config is cached locally in a LiteDB file keyed by generation number — if the central DB is unreachable at startup, the node boots from the latest cached generation.
  6. The node polls the central DB for a new published generation. When a new generation is published, the node downloads it, diffs it against its current one, and applies only the affected drivers/devices/tags (surgical application against an atomic snapshot).
  7. Both nodes of a cluster apply the same generation, but apply timing can differ slightly (network jitter, polling phase). During the apply window, one node may be on generation N and the other on N+1; this is acceptable because OPC UA non-transparent redundancy already accommodates per-endpoint state divergence and ServiceLevel will dip on the node that's mid-apply.
  8. If generation application fails mid-flight, the node rolls back to the previous generation and surfaces the failure in the status dashboard; admins can publish a corrective generation or explicitly roll back the cluster.
  9. The central DB is the single source of truth for fleet management — all tag definitions, device configs, driver assignments, and cluster topology live there, versioned by generation.
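Steps 3 through 8 can be sketched as a node-side loop. Every service and type name below (configDb, localCache, applier, ClusterConfig, ConfigDiff) is an illustrative assumption; rollback and failure surfacing (step 8) are elided.

```csharp
using System.Threading;
using System.Threading.Tasks;

public async Task PollGenerationsAsync(CancellationToken ct)
{
    while (!ct.IsCancellationRequested)
    {
        // "Give me the latest published generation for cluster X."
        var latest = await configDb.GetLatestPublishedAsync(clusterId, ct);

        if (latest.GenerationId > currentGenerationId)
        {
            ClusterConfig snapshot = await configDb.DownloadAsync(latest.GenerationId, ct);
            localCache.Store(latest.GenerationId, snapshot);  // LiteDB, keyed by generation

            // Surgical apply: diff against the running config and touch only
            // affected drivers/devices/tags. Safe because the snapshot is atomic.
            ConfigDiff diff = ConfigDiff.Between(currentSnapshot, snapshot);
            await applier.ApplyAsync(diff, ct);

            currentGenerationId = latest.GenerationId;
            currentSnapshot = snapshot;

            // Recorded in ClusterNodeGenerationState so Admin can spot stragglers.
            await configDb.ReportAppliedAsync(nodeId, latest.GenerationId, ct);
        }

        await Task.Delay(pollInterval, ct);  // GenerationPollIntervalSeconds
    }
}
```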

Central DB schema (conceptual):

ServerCluster                   ← top-level deployment unit (1 or 2 nodes)
  - ClusterId (PK)
  - Name                         ← human-readable e.g. "LINE3-OPCUA"
  - Site                         ← grouping for fleet management e.g. "PlantA"
  - NodeCount (1 | 2)
  - RedundancyMode (None | Warm | Hot)   ← None when NodeCount=1
  - NamespaceUri                 ← shared by both nodes (per v1 redundancy model)
  - Enabled
  - Notes

ClusterNode                     ← physical OPC UA server within a cluster
  - NodeId (PK)                  ← stable per physical machine, e.g. "LINE3-OPCUA-A"
  - ClusterId (FK)
  - RedundancyRole (Primary | Secondary | Standalone)
  - Host                         ← machine hostname / IP
  - OpcUaPort                    ← typically 4840 on each machine
  - DashboardPort                ← typically 8081
  - ApplicationUri               ← MUST be unique per node per OPC UA spec.
                                   Convention: urn:{Host}:OtOpcUa (hostname-embedded).
                                   Unique index enforced fleet-wide, not just per-cluster
                                   — two clusters sharing an ApplicationUri would confuse
                                   any client that browses both.
                                   Stored explicitly, NOT derived from Host at runtime —
                                   OPC UA clients pin trust to ApplicationUri (part of
                                   the cert validation chain), so silent rewrites would
                                   break client trust.
  - ServiceLevelBase             ← Primary 200, Secondary 150 by default
  - DriverConfigOverridesJson    ← per-node overrides keyed by DriverInstanceId,
                                   merged onto cluster-level DriverConfig at apply.
                                   Minimal by intent — only settings that genuinely
                                   differ per node (e.g. MxAccess.ClientName).
  - Enabled
  - LastSeenAt

ClusterNodeCredential           ← 1:1 or 1:N with ClusterNode
  - CredentialId (PK)
  - NodeId (FK)                  ← bound to the physical node, NOT the cluster
  - Kind (SqlLogin | ClientCertThumbprint | ADPrincipal | gMSA)
  - Value                        ← login name, thumbprint, SID, etc.
  - Enabled
  - RotatedAt

ConfigGeneration                ← atomic, immutable snapshot of one cluster's config
  - GenerationId (PK)            ← monotonically increasing
  - ClusterId (FK)               ← cluster-scoped — every generation belongs to one cluster
  - PublishedAt
  - PublishedBy
  - Status (Draft | Published | Superseded | RolledBack)
  - ParentGenerationId (FK)      ← rollback target
  - Notes

DriverInstance                  ← rows reference GenerationId; new generations = new rows
  - DriverInstanceRowId (PK)
  - GenerationId (FK)
  - DriverInstanceId             ← stable logical ID across generations
  - ClusterId (FK)               ← driver lives at the cluster level — both nodes
                                   instantiate it identically (modulo node overrides)
  - Name
  - DriverType (Galaxy | ModbusTcp | AbCip | OpcUaClient | …)
  - NamespaceUri                 ← per-driver namespace within the cluster's URI scope
  - Enabled
  - DriverConfig (JSON)          ← schemaless, driver-type-specific settings.
                                   Per-node overrides applied via
                                   ClusterNode.DriverConfigOverridesJson at apply time.

Device (for multi-device drivers like Modbus, CIP)
  - DeviceRowId (PK)
  - GenerationId (FK)
  - DeviceId                     ← stable logical ID
  - DriverInstanceId (FK)
  - Name
  - DeviceConfig (JSON)          ← host, port, unit ID, slot, etc.

Tag
  - TagRowId (PK)
  - GenerationId (FK)
  - TagId                        ← stable logical ID
  - DeviceId (FK) or DriverInstanceId (FK)
  - Name
  - FolderPath                   ← address space hierarchy
  - DataType
  - AccessLevel (Read | ReadWrite)
  - WriteIdempotent (bool)       ← opt-in for write retry eligibility (see Polly section)
  - TagConfig (JSON)             ← register address, poll group, scaling, etc.

PollGroup
  - PollGroupRowId (PK)
  - GenerationId (FK)
  - PollGroupId                  ← stable logical ID
  - DriverInstanceId (FK)
  - Name
  - IntervalMs

ClusterNodeGenerationState      ← tracks which generation each NODE has applied
  - NodeId (PK, FK)              ← per-node, not per-cluster — both nodes of a
                                   2-node cluster track independently
  - CurrentGenerationId (FK)
  - LastAppliedAt
  - LastAppliedStatus (Applied | RolledBack | Failed)
  - LastAppliedError

Authorization model (server-side, enforced in DB):

  • All config reads go through stored procedures that take the authenticated principal from SESSION_CONTEXT / SUSER_SNAME() / CURRENT_USER and cross-check it against ClusterNodeCredential.Value for the requesting NodeId. A principal asking for config of a ClusterId that does not contain its NodeId gets rejected, not just filtered.
  • Cross-cluster reads are forbidden even within the same site or admin scope — every config read carries the requesting NodeId and is checked.
  • Admin UI connects with a separate elevated principal that has read/write on all clusters and generations.
  • Publishing a generation is a stored procedure that validates the draft, computes the diff vs. the previous generation, and flips Status to Published atomically within a transaction. The publish is cluster-scoped — publishing a new generation for one cluster does not affect any other cluster.

appsettings.json stays minimal:

{
  "Cluster": {
    "ClusterId": "LINE3-OPCUA",
    "NodeId": "LINE3-OPCUA-A"
    // OPC UA port, ApplicationUri, redundancy role all come from central DB
  },
  "ConfigDatabase": {
    // The connection string MUST authenticate as a principal bound to this NodeId.
    // Options (pick one per deployment):
    //   - Integrated Security + gMSA (preferred on AD-joined hosts)
    //   - Client certificate (Authentication=ActiveDirectoryMsi or cert-auth)
    //   - SQL login scoped via ClusterNodeCredential table (rotate regularly)
    // A shared DB login across nodes is NOT supported — the server-side
    // authorization layer will reject cross-cluster config reads.
    "ConnectionString": "Server=configsrv;Database=OtOpcUaConfig;Authentication=...;...",
    "GenerationPollIntervalSeconds": 30,
    "LocalCachePath": "config_cache.db"
  },
  "Security": { /* transport/auth settings (still local) */ }
}

Decided:

  • Central MSSQL database is the single source of truth for all configuration.
  • Top-level deployment unit is ServerCluster with 1 or 2 ClusterNode members. Single-node and 2-node deployments use the same schema; single-node is a cluster of one.
  • Driver, device, tag, and poll-group config attaches to ClusterId, not to individual nodes. Both nodes of a cluster serve identical address spaces.
  • Per-node overrides are minimal by intent: ClusterNode.DriverConfigOverridesJson is the only override mechanism, scoped to driver-config settings that genuinely must differ per node (e.g. MxAccess.ClientName). Tags and devices have no per-node override path.
  • ApplicationUri is auto-suggested but never auto-rewritten. When an operator creates a new ClusterNode in Admin, the UI prefills urn:{Host}:OtOpcUa. If the operator later changes Host, the UI surfaces a warning that ApplicationUri is not updated automatically — OPC UA clients pin trust to it, and a silent rewrite would force every client to re-pair. Operator must explicitly opt in to changing it.
  • Each node identifies itself by NodeId and ClusterId and authenticates with a credential bound to its NodeId; the DB enforces the mapping server-side. A self-asserted NodeId is not accepted, and a node may not read another cluster's config.
  • Local LiteDB cache for offline startup resilience, keyed by generation.
  • JSON columns for driver-type-specific config (schemaless per driver type, structured at the fleet level).
  • Multiple instances of the same driver type supported within one cluster.
  • Each device in a driver instance appears as a folder node in the address space.

Decided (rollout model):

  • Config is versioned as immutable, cluster-scoped generations. Admin authors a draft for a cluster, then publishes it in a single transaction. Nodes only ever observe a fully-published generation — never a half-edited mix of rows.
  • One generation = one cluster's full configuration snapshot. Publishing a generation for one cluster does not affect any other cluster.
  • Each node polls for the latest generation for its cluster, diffs it against its current applied generation, and surgically applies only the affected drivers/devices/tags. Surgical application is safe because the source snapshot is atomic.
  • Both nodes of a cluster apply the same generation independently — the apply timing can differ slightly. During the apply window, one node may be on generation N while the other is on N+1; this is acceptable because non-transparent redundancy already accommodates per-endpoint state divergence and ServiceLevel will dip on the node that's mid-apply.
  • Rollback: publishing a new generation never deletes old ones. Admins can roll back a cluster to any previous generation; nodes apply the target generation the same way as a forward publish.
  • Applied-state per node is tracked in ClusterNodeGenerationState so Admin can see which nodes have picked up a new publish and detect stragglers or a 2-node cluster that's diverged.
  • If neither the central DB nor a local cache is available, the node fails to start. This is acceptable — there's no meaningful "run with zero config" mode.

Decided:

  • Transport security config (certs, LDAP settings, transport profiles) stays local in appsettings.json per instance. Avoids a bootstrap chicken-and-egg where DB connection credentials would depend on config retrieved from the DB. Matches current v1 deployment model.
  • Generation retention: keep all generations forever. Rollback target is always available; audit trail is complete. Config rows are small and publish cadence is low (days/weeks), so storage cost is negligible versus the utility of a complete history.

Deferred:

  • Event-driven generation notification (SignalR / Service Broker) as an optimisation over poll interval — deferred until polling proves insufficient.

6. Project Structure

All projects target .NET 10 x64 unless noted.

src/
  # ── Configuration layer ──
  ZB.MOM.WW.OtOpcUa.Configuration/       # Central DB schema (EF), change detection,
                                          #   local LiteDB cache, config models (.NET 10)
  ZB.MOM.WW.OtOpcUa.Admin/               # Blazor Server admin UI + API for managing the
                                          #   central config DB (.NET 10)

  # ── Core + Server ──
  ZB.MOM.WW.OtOpcUa.Core/                # OPC UA server, address space, subscriptions,
                                          #   driver hosting (.NET 10)
  ZB.MOM.WW.OtOpcUa.Core.Abstractions/   # IDriver, IReadable, ISubscribable, etc.
                                          #   thin contract (.NET 10)
  ZB.MOM.WW.OtOpcUa.Server/              # Host (Microsoft.Extensions.Hosting),
                                          #   Windows Service, config bootstrap (.NET 10)

  # ── In-process drivers (.NET 10 x64) ──
  ZB.MOM.WW.OtOpcUa.Driver.ModbusTcp/    # Modbus TCP driver (NModbus)
  ZB.MOM.WW.OtOpcUa.Driver.AbCip/        # Allen-Bradley CIP driver (libplctag)
  ZB.MOM.WW.OtOpcUa.Driver.AbLegacy/     # Allen-Bradley SLC/MicroLogix driver (libplctag)
  ZB.MOM.WW.OtOpcUa.Driver.S7/           # Siemens S7 driver (S7netplus)
  ZB.MOM.WW.OtOpcUa.Driver.TwinCat/      # Beckhoff TwinCAT ADS driver (Beckhoff.TwinCAT.Ads)
  ZB.MOM.WW.OtOpcUa.Driver.Focas/        # FANUC FOCAS CNC driver (Fwlib64.dll P/Invoke)
  ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient/  # OPC UA client gateway driver

  # ── Out-of-process Galaxy driver ──
  ZB.MOM.WW.OtOpcUa.Driver.Galaxy.Proxy/ # In-process proxy that implements IDriver interfaces
                                          #   and forwards over IPC (.NET 10)
  ZB.MOM.WW.OtOpcUa.Driver.Galaxy.Host/  # Separate process: MXAccess COM, Galaxy DB,
                                          #   alarms, HDA. Hosts IPC server (.NET 4.8 x86)
  ZB.MOM.WW.OtOpcUa.Driver.Galaxy.Shared/ # Shared IPC message contracts between Proxy
                                          #   and Host (.NET Standard 2.0)

  # ── Client tooling (.NET 10 x64) ──
  ZB.MOM.WW.OtOpcUa.Client.CLI/          # client CLI
  ZB.MOM.WW.OtOpcUa.Client.UI/           # Avalonia client

tests/
  ZB.MOM.WW.OtOpcUa.Configuration.Tests/
  ZB.MOM.WW.OtOpcUa.Core.Tests/
  ZB.MOM.WW.OtOpcUa.Driver.Galaxy.Tests/
  ZB.MOM.WW.OtOpcUa.Driver.ModbusTcp.Tests/
  ZB.MOM.WW.OtOpcUa.Driver.AbCip.Tests/
  ZB.MOM.WW.OtOpcUa.Driver.AbLegacy.Tests/
  ZB.MOM.WW.OtOpcUa.Driver.S7.Tests/
  ZB.MOM.WW.OtOpcUa.Driver.TwinCat.Tests/
  ZB.MOM.WW.OtOpcUa.Driver.Focas.Tests/
  ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient.Tests/
  ZB.MOM.WW.OtOpcUa.IntegrationTests/

Deployment units:

| Unit | Description | Target | Deploys to |
|------|-------------|--------|------------|
| OtOpcUa Server | Windows Service (M.E.Hosting) — OPC UA server + in-process drivers | .NET 10 x64 | Each site node |
| Galaxy Host | Windows Service — out-of-process MXAccess driver | .NET 4.8 x86 | Same machine as Server (when Galaxy driver is used) |
| OtOpcUa Admin | Blazor Server config management UI | .NET 10 x64 | Same server or central management host |
| OtOpcUa Client CLI | Operator CLI tool | .NET 10 x64 | Any workstation |
| OtOpcUa Client UI | Avalonia desktop client | .NET 10 x64 | Any workstation |

Dependency graph:

Admin ──→ Configuration
Server ──→ Core ──→ Core.Abstractions
              │          ↑
              │     Driver.ModbusTcp, Driver.AbCip, Driver.AbLegacy,
              │     Driver.S7, Driver.TwinCat, Driver.Focas,
              │     Driver.OpcUaClient (in-process)
              │     Driver.Galaxy.Proxy (in-process, forwards over IPC)
              ↓
         Configuration

Galaxy.Proxy ──→ Galaxy.Shared ←── Galaxy.Host
                                   (.NET 4.8 x86, separate process)
  • Core.Abstractions — no dependencies, referenced by Core and all drivers (including Galaxy.Proxy)
  • Configuration — owns central DB access + local cache, referenced by Server and Admin
  • Admin — Blazor Server app, depends on Configuration, can deploy on same server
  • In-process drivers depend on Core.Abstractions only
  • Galaxy.Shared — .NET Standard 2.0 IPC contracts, referenced by both Proxy (.NET 10) and Host (.NET 4.8)
  • Galaxy.Host — standalone .NET 4.8 x86 process, does NOT reference Core or Core.Abstractions
  • Galaxy.Proxy — implements IDriver etc., depends on Core.Abstractions + Galaxy.Shared

Decided:

  • Mono-repo (Decision #31 above).
  • Core.Abstractions is internal-only for now — no standalone NuGet. Keep the contract mutable while the first 8 drivers are being built; revisit publishing after Phase 5 when the shape has stabilized. Design the contract as if it will eventually be public (no leaky types, stable names) to minimize churn later.
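The composable-capability idea behind the Core.Abstractions contract can be illustrated in a language-neutral way. The real contract is C# interfaces; this Python sketch uses ABCs to mirror the same shape. Interface names follow the plan; method signatures are hypothetical.

```python
from abc import ABC, abstractmethod

# Hypothetical Python mirror of the C# capability interfaces in
# Core.Abstractions -- names follow the plan, shapes are illustrative.
class IDriver(ABC):
    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

class IReadable(ABC):
    @abstractmethod
    def read(self, tag: str): ...

class IAlarmSource(ABC):
    @abstractmethod
    def subscribe_alarms(self, callback) -> None: ...

# A polled driver implements only the capabilities it has -- no alarm stubs.
class ModbusTcpDriver(IDriver, IReadable):
    def start(self) -> None: pass
    def stop(self) -> None: pass
    def read(self, tag: str):
        return 0  # placeholder for a register read

# Core discovers capabilities by type check -- the C# `driver is IAlarmSource`
# pattern from decision #53, with no flag enum to keep in sync.
def supports_alarms(driver: IDriver) -> bool:
    return isinstance(driver, IAlarmSource)

print(supports_alarms(ModbusTcpDriver()))  # False
```

The point of the sketch: a driver that never implements `IAlarmSource` needs no stub methods, and the core's capability check can never disagree with the implementation.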

5a. LmxNodeManager Reusability Analysis

Investigated 2026-04-17. The existing LmxNodeManager (2923 lines) is the foundation for the new generic node manager — not a rewrite candidate. Categorized inventory:

| Bucket | Lines | % | What's here |
|--------|-------|---|-------------|
| Already generic | ~1310 | 45% | OPC UA plumbing: CreateAddressSpace + topological sort + _nodeMap, Read/Write dispatch, HistoryRead + continuation points, subscription delivery + _pendingDataChanges queue, dispatch thread lifecycle, runtime-status node mechanism, status-code mapping |
| Generic pattern, Galaxy-coded today | ~1170 | 40% | Bad-quality fan-out when a host drops, alarm auto-subscribe (InAlarm+Priority+Description pattern), background-subscribe tracking with shutdown-safe WaitAll, value normalization for arrays, connection-health probe machinery — each is a pattern every driver will need, currently wired to Galaxy types |
| Truly MXAccess-specific | ~290 | 10% | IMxAccessClient calls, MxDataTypeMapper, SecurityClassificationMapper, GalaxyRuntimeProbeManager construction/lifecycle, Historian literal, alarm auto-subscribe trigger |
| Metadata / comments | ~153 | 5% | |

Interleaving assessment: concerns are cleanly separated at method boundaries. Read/Write handlers do generic resolution → generic host-status check → isolated _mxAccessClient call. The dispatch loop is fully generic. The only meaningful interleaving is in BuildAddressSpace() where GalaxyAttributeInfo leaks into node creation — fixable by introducing a driver-agnostic DriverAttributeInfo DTO.

Refactor plan:

  1. Rename LmxNodeManager → GenericDriverNodeManager : CustomNodeManager2 and lift the generic blocks unchanged. Swap IMxAccessClient for IDriver (composing IReadable / IWritable / ISubscribable). Swap GalaxyAttributeInfo for a driver-agnostic DriverAttributeInfo { FullName, DriverDataType, IsArray, ArrayDim, SecurityClass, IsHistorized }. Promote GalaxyRuntimeProbeManager to an IHostConnectivityProbe capability interface.
  2. Derive GalaxyNodeManager : GenericDriverNodeManager — driver-specific builder that maps GalaxyAttributeInfo → DriverAttributeInfo, registers MxDataTypeMapper / SecurityClassificationMapper, injects the probe manager.
  3. New drivers (Modbus, S7, etc.) extend GenericDriverNodeManager and implement the capability interfaces. No forking of the OPC UA machinery.
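The DTO swap in step 1 can be sketched language-neutrally. The real types are C#; this Python sketch shows the mapping boundary that keeps Galaxy types out of the generic node manager. The GalaxyAttributeInfo field names used here are assumptions for illustration; the DriverAttributeInfo shape follows the plan.

```python
from dataclasses import dataclass
from typing import Optional

# Driver-agnostic DTO from refactor step 1 (shape taken from the plan).
@dataclass(frozen=True)
class DriverAttributeInfo:
    full_name: str
    driver_data_type: str
    is_array: bool = False
    array_dim: int = 0
    security_class: Optional[str] = None
    is_historized: bool = False

# Hypothetical stand-in for the Galaxy-side GalaxyAttributeInfo type.
@dataclass
class GalaxyAttributeInfo:
    tag_name: str
    mx_data_type: str
    historized: bool

def to_driver_attribute(g: GalaxyAttributeInfo) -> DriverAttributeInfo:
    # GalaxyNodeManager's builder maps the Galaxy type onto the generic DTO,
    # so BuildAddressSpace() never sees Galaxy types directly.
    return DriverAttributeInfo(
        full_name=g.tag_name,
        driver_data_type=g.mx_data_type,
        is_historized=g.historized,
    )

info = to_driver_attribute(GalaxyAttributeInfo("Pump01.PV", "MxDouble", True))
print(info.full_name, info.is_historized)  # Pump01.PV True
```

Each new driver supplies only this thin mapping; the OPC UA machinery consumes DriverAttributeInfo everywhere.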

Ordering within Phase 2 (fits the "incremental extraction" approach in Decision #55):

  • (a) Introduce capability interfaces + DriverAttributeInfo in Core.Abstractions.
  • (b) Rename to GenericDriverNodeManager with Galaxy still in-process as the only driver; validate parity against v1 integration tests + CLI walkthrough.
  • (c) Only then move Galaxy behind the IPC boundary into Galaxy.Host.

Each step leaves the system runnable. The generic extraction is effectively free — the class is already mostly generic, just named and typed for Galaxy.


6. Migration Strategy

Decided approach:

Phase 0 — Rename + .NET 10 migration

  1. Rename to OtOpcUa — mechanical rename of namespaces, assemblies, config, and docs
  2. Migrate to .NET 10 x64 — retarget all projects except Galaxy Host

Phase 1 — Core extraction + Configuration layer + Admin scaffold

  3. Build Configuration project — central MSSQL schema with ServerCluster, ClusterNode, ClusterNodeCredential, ConfigGeneration, ClusterNodeGenerationState plus the cluster-scoped DriverInstance / Device / Tag / PollGroup tables (EF Core + migrations); server-side authorization stored procs that enforce per-node-bound-to-cluster access from authenticated principals; atomic cluster-scoped publish/rollback stored procs; LiteDB local cache keyed by generation; generation-diff application logic; per-node override merge at apply time.
  4. Extract Core.Abstractions — define IDriver, ITagDiscovery, IReadable, IWritable, ISubscribable, IAlarmSource, IHistoryProvider. IWritable contract separates idempotent vs. non-idempotent writes at the interface level.
  5. Build Core — generic driver-hosting node manager that delegates to capability interfaces, driver isolation (catch/contain), address space registration, separate Polly pipelines for reads vs. writes per the write-retry policy above.
  6. Wire Server — bootstrap from Configuration using an instance-bound credential (cert/gMSA/SQL login), fail fast if the credential is rejected, register drivers, start Core.
  7. Scaffold Admin — Blazor Server app with: instance + credential management, draft/publish/rollback generation workflow (diff viewer, "publish to fleet", per-instance override), and core CRUD for drivers/devices/tags. Driver-specific config screens deferred to later phases.
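The per-node override merge in step 3 can be sketched concretely. Per decision #92, override keys use dotted-path syntax (e.g. MxAccess.ClientName). This Python sketch shows the merge semantics only; escaping (\., \\) and Items[0] array indexing are omitted, and the config shapes are illustrative.

```python
import json

def apply_overrides(base: dict, overrides: dict) -> dict:
    """Merge per-node DriverConfigOverridesJson onto cluster-level DriverConfig.

    Keys use the dotted-path syntax of decision #92; this sketch skips
    reserved-char escaping and array indexing for brevity.
    """
    merged = json.loads(json.dumps(base))  # deep copy, leave base untouched
    for path, value in overrides.items():
        parts = path.split(".")
        node = merged
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # create intermediate objects
        node[parts[-1]] = value               # overwrite the leaf
    return merged

cluster_cfg = {"MxAccess": {"ClientName": "OtOpcUa", "Timeout": 5000}}
node_override = {"MxAccess.ClientName": "OtOpcUa-NodeB"}
print(apply_overrides(cluster_cfg, node_override))
# {'MxAccess': {'ClientName': 'OtOpcUa-NodeB', 'Timeout': 5000}}
```

Because the merge happens at apply time, the cluster-level generation stays the single versioned artifact and the node override remains a thin, unversioned patch (decision #90).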

Phase 2 — Galaxy driver (prove the refactor)

  8. Build Galaxy.Shared — .NET Standard 2.0 IPC message contracts
  9. Build Galaxy.Host — .NET 4.8 x86 process hosting MxAccessBridge, GalaxyRepository, alarms, HDA with IPC server
  10. Build Galaxy.Proxy — .NET 10 in-process proxy implementing IDriver interfaces, forwarding over IPC
  11. Validate parity — v2 Galaxy driver must pass the same integration tests as v1
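The Proxy ↔ Host channel is named pipes carrying MessagePack messages (decisions #28, #32). Because a pipe is a byte stream, messages need framing; one plausible scheme is a 4-byte little-endian length prefix. This Python sketch illustrates the framing idea only — the actual Galaxy.Shared wire format is an implementation detail and may differ.

```python
import struct

def frame(payload: bytes) -> bytes:
    # Prefix each serialized message (MessagePack bytes in the real channel)
    # with its length so the reader can find message boundaries.
    return struct.pack("<I", len(payload)) + payload

def unframe(stream: bytes):
    # Yields complete messages from a byte buffer; a real pipe reader would
    # additionally buffer partial frames across reads.
    offset = 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from("<I", stream, offset)
        offset += 4
        yield stream[offset:offset + length]
        offset += length

wire = frame(b"read:Pump01.PV") + frame(b"write:Valve02.SP")
print(list(unframe(wire)))  # [b'read:Pump01.PV', b'write:Valve02.SP']
```

Without a length prefix (or equivalent delimiter), a slow Host write can split a message across reads and corrupt the Proxy's deserialization; framing makes the boundary explicit.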

Phase 3 — Modbus TCP driver (prove the abstraction)

  12. Build Driver.ModbusTcp — NModbus, config-driven tags from central DB, internal poll loop, device-as-folder hierarchy
  13. Add Modbus config screens to Admin (first driver-specific config UI)
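The AddressFormat=DL205 octal translation (decision #43) is small enough to sketch. DL205 V-memory addresses are octal; assuming the conventional H2-ECOM100 mapping where V-memory lands in the 4xxxx holding-register range at (octal value interpreted as decimal offset) + 1, the translation is one base conversion. The mapping constant is an assumption to verify against the ECOM documentation.

```python
def dl205_v_to_modbus(v_address: str) -> int:
    # DL205 V-memory addresses are octal (decision #43). Assumed mapping:
    # holding register 40001 + the octal address converted to decimal.
    octal_digits = v_address.upper().lstrip("V")
    offset = int(octal_digits, 8)   # e.g. "2000" octal -> 1024 decimal
    return 40001 + offset

print(dl205_v_to_modbus("V2000"))  # 41025
```

The driver can keep the rest of its Modbus path untouched and apply this translation only when a device's AddressFormat is DL205.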

Phase 4 — PLC drivers

  14. Build Driver.AbCip — libplctag, ControlLogix/CompactLogix symbolic tags + Admin config screens
  15. Build Driver.AbLegacy — libplctag, SLC 500/MicroLogix file-based addressing + Admin config screens
  16. Build Driver.S7 — S7netplus, Siemens S7-300/400/1200/1500 + Admin config screens
  17. Build Driver.TwinCat — Beckhoff.TwinCAT.Ads v6, native ADS notifications, symbol upload + Admin config screens

Phase 5 — Specialty drivers

  18. Build Driver.Focas — FANUC FOCAS2 P/Invoke, pre-defined CNC tag set, PMC/macro config + Admin config screens
  19. Build Driver.OpcUaClient — OPC UA client gateway/aggregation, namespace remapping, subscription proxying + Admin config screens

Decided:

  • Parity test for Galaxy: existing v1 IntegrationTests suite + scripted Client.CLI walkthrough (see Section 4 above).
  • Timeline: no hard deadline. Each phase ships when it's right — tests passing, Galaxy parity bar met. Quality cadence over calendar cadence.
  • FOCAS SDK: license already secured. Phase 5 can proceed as scheduled; Fwlib64.dll available for P/Invoke.

Decision Log

| # | Decision | Rationale | Date |
|---|----------|-----------|------|
| 1 | Work on v2 branch | Keep master stable for production | 2026-04-16 |
| 2 | OPC UA core + pluggable driver modules | Enable multi-protocol support without forking the server | 2026-04-16 |
| 3 | Rename to OtOpcUa | Product is no longer LMX-specific | 2026-04-16 |
| 4 | Composable capability interfaces | Drivers vary widely in what they support; flat IDriver would force stubs | 2026-04-16 |
| 5 | Target drivers: Galaxy, Modbus TCP, AB CIP, AB Legacy, S7, TwinCAT, FOCAS, OPC UA Client | Full PLC/CNC/SCADA/aggregation coverage | 2026-04-16 |
| 6 | Polling is driver-internal, not core-managed | Each driver owns its poll loop; core just sees data change callbacks | 2026-04-16 |
| 7 | Multiple instances of same driver type supported | Need e.g. separate Modbus drivers for different device groups | 2026-04-16 |
| 8 | Namespace index per driver instance | Each instance gets its own NamespaceUri for clean isolation | 2026-04-16 |
| 9 | Rename to OtOpcUa as step 1 | Clean mechanical change before any refactoring | 2026-04-16 |
| 10 | Modbus TCP as second driver | Simplest protocol, validates abstraction with flat/polled/config-driven model | 2026-04-16 |
| 11 | Library selections per driver | NModbus (Modbus), libplctag (AB CIP + AB Legacy), S7netplus (S7), Beckhoff.TwinCAT.Ads v6 (TwinCAT), Fwlib64.dll P/Invoke (FOCAS), OPC Foundation SDK (OPC UA Client) | 2026-04-16 |
| 12 | Driver isolation — failure contained per instance | One driver crash/disconnect must not affect other drivers' nodes or quality | 2026-04-16 |
| 13 | Shared OPC UA StatusCode model for quality | Drivers map to the same StatusCode space; each defines which codes it produces | 2026-04-16 |
| 14 | Central MSSQL config database | Single source of truth for fleet-wide config — instances, drivers, tags, devices | 2026-04-16 |
| 15 | LiteDB local cache per instance | Offline startup resilience — instance boots from cache if central DB is unreachable | 2026-04-16 |
| 16 | JSON columns for driver-specific config | Schemaless per driver type, avoids table-per-driver-type explosion | 2026-04-16 |
| 17 | Device-as-folder in address space | Multi-device drivers expose Device/Tag hierarchy for intuitive browsing | 2026-04-16 |
| 18 | Minimal appsettings.json (ClusterId + NodeId + DB conn) | All real config lives in central DB, not local files. OPC UA port and ApplicationUri come from ClusterNode row, not local config | 2026-04-16 / 2026-04-17 |
| 19 | Blazor Server admin app for config management | Separate deployable, manages central MSSQL config DB | 2026-04-16 |
| 20 | Surgical config change detection | Instance detects which drivers/devices/tags changed, applies incremental updates | 2026-04-16 |
| 21 | Fail-to-start without DB or cache | No meaningful zero-config mode — requires at least cached config | 2026-04-16 |
| 22 | Configuration project owns DB + cache layer | Clean separation: Server and Admin both depend on it | 2026-04-16 |
| 23 | .NET 10 x64 default, .NET 4.8 x86 only for Galaxy Host | Modern runtime for everything; COM constraint isolated to Galaxy | 2026-04-16 |
| 24 | Galaxy driver is out-of-process | .NET 4.8 x86 process can't load into .NET 10 x64; IPC bridge required | 2026-04-16 |
| 25 | Galaxy.Shared (.NET Standard 2.0) for IPC contracts | Must be consumable by both .NET 10 Proxy and .NET 4.8 Host | 2026-04-16 |
| 26 | Admin deploys on same server (co-hosted) | Simplifies deployment; can also run on separate management host | 2026-04-16 |
| 27 | Admin scaffold early, driver-specific screens deferred | Core CRUD for instances/drivers first; per-driver config UI added with each driver | 2026-04-16 |
| 28 | Named pipes for Galaxy IPC | Fast, no port conflicts, native to both .NET 4.8 and .NET 10 | 2026-04-16 |
| 29 | Galaxy Host is a separate Windows service | Independent lifecycle, can restart without affecting main server or other drivers | 2026-04-16 |
| 30 | Drop TopShelf, use Microsoft.Extensions.Hosting | Built-in Windows Service support in .NET 10, no third-party dependency | 2026-04-16 |
| 31 | Mono-repo for all drivers | Simpler dependency management, single CI pipeline, shared abstractions | 2026-04-16 |
| 32 | MessagePack serialization for Galaxy IPC | Binary, fast, works on .NET 4.8+ and .NET 10 via MessagePack-CSharp NuGet | 2026-04-16 |
| 33 | EF Core for Configuration DB | Migrations, LINQ queries, standard .NET 10 ORM | 2026-04-16 |
| 34 | Polly v8+ for resilience | Retry, circuit breaker, timeout per device/driver — replaces hand-rolled supervision | 2026-04-16 |
| 35 | Per-device resilience pipelines | Circuit breaker on Drive1 doesn't affect Drive2, even in same driver instance | 2026-04-16 |
| 36 | Polly for config DB access | Retry + fallback to LiteDB cache on sustained DB outage | 2026-04-16 |
| 37 | FOCAS driver uses pre-defined tag set | CNC data is functional (axes, spindle, PMC), not user-defined tags — driver exposes fixed node hierarchy populated by specific FOCAS2 API calls | 2026-04-16 |
| 38 | FOCAS PMC + macro variables are user-configured | PMC addresses (R, D, G, F, etc.) and macro variable ranges configured in central DB; not auto-discovered | 2026-04-16 |
| 39 | TwinCAT uses native ADS notifications | One of 3 drivers with native subscriptions (Galaxy, TwinCAT, OPC UA Client); no polling needed for subscribed tags | 2026-04-16 |
| 40 | TwinCAT no runtime required on server | Beckhoff.TwinCAT.Ads v6 supports in-process ADS router; only needs AMS route on target device | 2026-04-16 |
| 41 | AB Legacy (SLC/MicroLogix) as separate driver from AB CIP | Different protocol (PCCC vs CIP), different addressing (file-based vs symbolic), severe connection limits (4-8) | 2026-04-16 |
| 42 | S7 driver notes: PUT/GET must be enabled on S7-1200/1500 | Disabled by default in TIA Portal; document as prerequisite | 2026-04-16 |
| 43 | DL205 (AutomationDirect) handled by Modbus TCP driver | DL205 supports Modbus TCP via H2-ECOM100; no separate driver needed — AddressFormat=DL205 adds octal address translation | 2026-04-16 |
| 44 | No automatic retry on writes by default | Write retries are unsafe for non-idempotent field actions — a timeout can fire after the device already accepted the command, and replay duplicates pulses/acks/counters/recipe steps (adversarial review finding #1) | 2026-04-16 |
| 45 | Opt-in write retry via TagConfig.WriteIdempotent or CAS wrapper | Retries must be explicit per tag; CAS (compare-and-set) verifies device state before retry where the protocol supports it | 2026-04-16 |
| 46 | Instance identity is credential-bound, not self-asserted | Each instance authenticates to the central DB with a credential (cert/gMSA/SQL login) bound to its InstanceId; the DB rejects cross-instance config reads server-side (adversarial review finding #2) | 2026-04-16 |
| 47 | InstanceCredential table + authorization stored procs | Credentials and the InstanceId they are authorized for live in the DB; all config reads go through procs that enforce the mapping rather than trusting the client | 2026-04-16 |
| 48 | Config is versioned as immutable generations with atomic publish | Admin publishes a whole generation in one transaction; instances only ever observe fully-published generations, never partial multi-row edits (adversarial review finding #3) | 2026-04-16 |
| 49 | Surgical reload applies a generation diff, not raw row deltas | The source snapshot is atomic (generation), but applying it to a running instance is still incremental — only affected drivers/devices/tags reload | 2026-04-16 |
| 50 | Explicit rollback via re-publishing a prior generation | Generations are never deleted; rollback is just publishing an older generation as the new current, so instances apply it the same way as a forward publish | 2026-04-16 |
| 51 | InstanceGenerationState tracks applied generation per instance | Admin can see which instances have picked up a new publish and detect stragglers or failed applies | 2026-04-16 |
| 52 | Address space registration via builder/context API | Core owns the tree; driver streams AddFolder/AddVariable on an IAddressSpaceBuilder, avoids buffering the whole tree and supports incremental discovery | 2026-04-17 |
| 53 | Capability discovery via interface checks (is IAlarmSource) | The interface is the capability — no redundant flag enum to keep in sync with the implementation | 2026-04-17 |
| 54 | Optional IRediscoverable sub-interface for change-detection | Drivers with a native change signal (Galaxy deploy time, OPC UA change notifications) opt in; static drivers skip it | 2026-04-17 |
| 55 | Galaxy refactor is incremental — extract interfaces in place first | Refactor LmxNodeManager against new abstractions while still in-process, validate, then move behind IPC. Keeps system runnable at each step | 2026-04-17 |
| 56 | Galaxy parity test = v1 integration suite + scripted CLI walkthrough | Automated regression plus human-observable behavior on a dev Galaxy | 2026-04-17 |
| 57 | Transport security config stays local in appsettings.json | Avoids bootstrap chicken-and-egg (DB-connection credentials can't depend on config fetched from the DB); matches v1 deployment | 2026-04-17 |
| 58 | Generation retention: keep all generations forever | Rollback target always available; audit trail complete; storage cost negligible at publish cadence of days/weeks | 2026-04-17 |
| 59 | Core.Abstractions internal-only for now, no NuGet | Keep the contract mutable through the first 8 drivers; design as if public, revisit after Phase 5 | 2026-04-17 |
| 60 | No hard deadline — phases deliver when they're right | Quality cadence over calendar cadence; Galaxy parity bar must be met before moving on | 2026-04-17 |
| 61 | FOCAS SDK license already secured | Phase 5 can proceed; Fwlib64.dll available for P/Invoke with no procurement blocker | 2026-04-17 |
| 62 | LmxNodeManager is the foundation for GenericDriverNodeManager, not a rewrite | ~85% of the 2923 lines are generic or generic-in-spirit; only ~10% (~290 lines) are truly MXAccess-specific. Concerns are cleanly separated at method boundaries — refactor is rename + DTO swap, not restructuring | 2026-04-17 |
| 63 | Driver stability tier model (A/B/C) | Drivers vary in failure profile (pure managed vs wrapped native vs black-box DLL); tier dictates hosting and protection level. See driver-stability.md | 2026-04-17 |
| 64 | FOCAS is Tier C — out-of-process Windows service from day one | Fwlib64.dll is a black-box vendor DLL; an AccessViolationException is uncatchable in modern .NET and would tear down the OPC UA server. Same Proxy/Host/Shared pattern as Galaxy | 2026-04-17 |
| 65 | Cross-cutting stability protections mandatory in all tiers | SafeHandle for every native resource, memory watchdog, bounded operation queues, scheduled recycle, crash-loop circuit breaker, post-mortem log — apply to every driver process whether in-proc or isolated | 2026-04-17 |
| 66 | Out-of-process driver pattern is reusable across Tier C drivers | Galaxy.Proxy/Host/Shared template generalizes; FOCAS is the second user; future Tier B → Tier C escalations reuse the same three-project template | 2026-04-17 |
| 67 | Tier B drivers may escalate to Tier C on production evidence | libplctag (AB CIP/Legacy), S7netplus, TwinCAT.Ads start in-process; promote to isolated host if leaks or crashes appear in field | 2026-04-17 |
| 68 | Crash-loop circuit breaker — 3 crashes/5 min stops respawn | Prevents host respawn thrashing when the underlying device or DLL is in a state respawning won't fix; surfaces operator-actionable alert; manual reset via Admin UI | 2026-04-17 |
| 69 | Post-mortem log via memory-mapped file | Ring buffer of last-N operations + driver-specific state; survives hard process death including native AV; supervisor reads MMF after corpse is gone — only viable post-mortem path for native crashes | 2026-04-17 |
| 70 | Watchdog thresholds = hybrid multiplier + absolute floor + hard ceiling | Pure multipliers misfire on tiny baselines; pure absolute MB doesn't scale across deployment sizes. max(N× baseline, baseline + floor MB) for warn/recycle plus an absolute hard ceiling. Slope detection stays orthogonal | 2026-04-17 |
| 71 | Crash-loop reset = escalating cooldown (1 h → 4 h → 24 h manual) with sticky alerts | Manual-only is too rigid for unattended plants; pure auto-reset silently retries forever. Escalating cooldown auto-recovers transient problems but forces human attention on persistent ones; sticky alerts preserve the trail regardless of reset path | 2026-04-17 |
| 72 | Heartbeat cadence = 2 s with 3-miss tolerance (6 s detection) | 5 s × 3 = 15 s is too slow against 1 s OPC UA publish intervals; 1 s × 3 = 3 s false-positives on GC pauses and pipe jitter. 2 s × 3 = 6 s is the sweet spot | 2026-04-17 |
| 73 | Process-level protections (RSS watchdog, scheduled recycle) apply ONLY to Tier C isolated host processes | Process recycle in the shared server would kill every other in-proc driver, every session, and the OPC UA endpoint — directly contradicts the per-driver isolation invariant. Tier A/B drivers get per-instance allocation tracking + cache flush + no-process-kill instead (adversarial review finding #1) | 2026-04-17 |
| 74 | A Tier A/B driver that needs process-level recycle MUST be promoted to Tier C | The only safe way to apply process recycle to a single driver is to give it its own process. If allocation tracking + cache flush can't bound a leak, the answer is isolation, not killing the server | 2026-04-17 |
| 75 | Wedged native calls in Tier C drivers escalate to hard process exit, never handle-free-during-call | Calling release functions on a handle with an active native call is undefined behavior — exactly the AV path Tier C is designed to prevent. After grace window, leave the handle Abandoned and Environment.Exit(2). The OS reclaims fds/sockets on exit; the device's connection-timeout reclaims its end (adversarial review finding #2) | 2026-04-17 |
| 76 | Tier C IPC has mandatory pipe ACL + caller SID verification + per-process shared secret | Default named-pipe ACL allows any local user to bypass OPC UA auth and issue reads/writes/acks directly against the host. Pipe ACL restricts to server service SID, host verifies caller token on connect, supervisor-generated per-process secret as defense-in-depth (adversarial review finding #3) | 2026-04-17 |
| 77 | FOCAS stability test coverage = TCP stub (functional) + FaultShim native DLL (host-side faults) | A TCP stub cannot make Fwlib leak handles or AV — those live inside the P/Invoke boundary. Two artifacts cover the two layers honestly: TCP stub for ~80% of failures (network/protocol), FaultShim for the remaining ~20% (native crashes/leaks). Real-CNC validation remains the only path for vendor-specific Fwlib quirks (adversarial review finding #5) | 2026-04-17 |
| 78 | Per-driver stability treatment is proportional to driver risk | Galaxy and FOCAS get full Tier C deep dives in driver-stability.md (different concerns: COM/STA pump vs Fwlib handle pool); TwinCAT, AB CIP, AB Legacy get short Operational Stability Notes in driver-specs.md for their tier-promotion triggers and protocol-specific failure modes; pure-managed Tier A drivers get one paragraph each. Avoids duplicating the cross-cutting protections doc seven times | 2026-04-17 |
| 79 | Top-level deployment unit is ServerCluster with 1 or 2 ClusterNode members | Sites deploy 2-node clusters for OPC UA non-transparent redundancy (per v1 — Warm/Hot, no VIP). Single-node deployments are clusters of one. Uniform schema avoids forking the config model | 2026-04-17 |
| 80 | Driver / device / tag / poll-group config attaches to ClusterId, not to individual nodes | Both nodes of a cluster serve identical address spaces; defining tags twice would invite drift. One generation = one cluster's complete config | 2026-04-17 |
| 81 | Per-node overrides minimal — ClusterNode.DriverConfigOverridesJson only | Some driver settings legitimately differ per node (e.g. MxAccess.ClientName so Galaxy distinguishes them) but the surface is small. Single JSON column merged onto cluster-level DriverConfig at apply time. Tags and devices have no per-node override path | 2026-04-17 |
| 82 | ConfigGeneration is cluster-scoped, not fleet-scoped | Publishing a generation for one cluster does not affect any other cluster. Simpler rollout (one cluster at a time), simpler rollback, simpler auth boundary. Fleet-wide synchronized rollouts (if ever needed) become a separate concern — orchestrate per-cluster publishes from Admin | 2026-04-17 |
| 83 | Each node authenticates with its own ClusterNodeCredential bound to NodeId | Cluster-scoped auth would be too coarse — both nodes sharing a credential makes credential rotation harder and obscures which node read what. Per-node binding also enforces that Node A cannot impersonate Node B in audit logs | 2026-04-17 |
| 84 | Both nodes apply the same generation independently; brief divergence acceptable | OPC UA non-transparent redundancy already handles per-endpoint state divergence; ServiceLevel dips on the node mid-apply and clients fail over. Forcing two-phase commit across nodes would be a complex distributed-system problem with no real upside | 2026-04-17 |
| 85 | OPC UA RedundancySupport.Transparent not adopted in v2 | True transparent redundancy needs a VIP/load-balancer in front of the cluster. v1 ships non-transparent (Warm/Hot) with ServerUriArray and client-driven failover; v2 inherits the same model. Revisit only if a customer requirement demands LB-fronted transparency | 2026-04-17 |
| 86 | ApplicationUri auto-suggested as urn:{Host}:OtOpcUa but never auto-rewritten | OPC UA clients pin trust to ApplicationUri — it's part of the cert validation chain. Auto-rewriting it when an operator changes Host would silently invalidate every client trust relationship. Admin UI prefills on node creation, warns on Host change, requires explicit opt-in to change. Fleet-wide unique index enforces no two nodes share an ApplicationUri | 2026-04-17 |
| 87 | Concrete schema and stored-proc design lives in config-db-schema.md | The plan §4 sketches the conceptual model; the schema doc carries the actual DDL, indexes, stored procs, JSON conventions, and authorization model implementations. Keeps the plan readable while making the schema concrete enough to start implementing | 2026-04-17 |
| 88 | Admin UI is Blazor Server with LDAP-mapped admin roles (FleetAdmin / ConfigEditor / ReadOnly) | Blazor Server gives real-time SignalR for live cluster status without a separate SPA build pipeline. LDAP reuses the OPC UA auth provider (no parallel user table). Three roles cover the common ops split; cluster-scoped editor grants deferred to v2.1 | 2026-04-17 |
| 89 | Edit path is draft → diff → publish; no in-place edits, no auto-publish | Generations are atomic snapshots — every change goes through an explicit publish boundary so operators see what they're committing. The diff viewer is required reading before the publish dialog enables. Bulk operations always preview before commit | 2026-04-17 |
| 90 | Per-node overrides are NOT generation-versioned | Overrides are operationally bound to a specific physical machine, not to the cluster's logical config evolution. Editing a node override doesn't create a new generation — it updates ClusterNode.DriverConfigOverridesJson directly and takes effect on next apply. Replacement-node scenarios copy the override via deployment tooling, not by replaying generation history | 2026-04-17 |
| 91 | JSON content validation runs in the Admin app, not via SQL CLR | CLR is disabled by default on hardened SQL Server instances; many DBAs refuse to enable it. Admin validates against per-driver JSON schemas before invoking sp_PublishGeneration; the proc enforces structural integrity (FKs, uniqueness, ISJSON) only. Direct proc invocation is already prevented by the GRANT model | 2026-04-17 |
| 92 | Dotted-path syntax for DriverConfigOverridesJson keys (e.g. MxAccess.ClientName) | More readable than JSON Pointer in operator UI and CSV exports. Reserved-char escaping documented (\., \\); array indexing uses Items[0].Name | 2026-04-17 |
| 93 | sp_PurgeGenerationsBefore deferred to v2.1; signature pre-specified | Initial release keeps all generations forever (decision #58). Purge proc shape locked in now: requires @ConfirmToken UI-shown random hex to prevent script-based mass deletion, CASCADE-deletes via WHERE GenerationId IN (...), audit-log entry with row counts. Surface only when a customer compliance ask demands it | 2026-04-17 |
| 94 | Admin UI component library = MudBlazor (SUPERSEDED by #102) | See #102 — switched to Bootstrap 5 for ScadaLink parity | 2026-04-17 |
| 95 | CSV import dialect = strict CSV (RFC 4180) UTF-8, BOM accepted | Excel "Save as CSV (UTF-8)" produces RFC 4180 output and is the documented primary input format. TSV not initially supported | 2026-04-17 |
| 96 | Push-from-DB notification deferred to v2.1; polling is the v2.0 model | Tightening apply latency from ~30 s → ~1 s would need SignalR backplane or SQL Service Broker — infrastructure not earning its keep at v2.0 scale. Publish dialog reserves a disabled "Push now" button labeled "Available in v2.1" so the future UX is anchored | 2026-04-17 |
| 97 | Draft auto-save (debounced 500 ms) with explicit Discard; Publish is the only commit | Eliminates "lost work" complaints; matches Google Docs / Notion mental model. Auto-save writes to draft rows only — never to Published. Discard requires confirmation dialog | 2026-04-17 |
| 98 | Admin UI ships both light and dark themes (SUPERSEDED by #103) | See #103 — light-only to match ScadaLink | 2026-04-17 |
| 99 | CI tiering: PR-CI uses only in-process simulators; nightly/integration CI runs on dedicated Docker + Hyper-V host | Keeps PR builds fast and runnable on minimal build agents; the dedicated integration host runs the heavy simulators (oitc/modbus-server, TwinCAT XAR VM, Snap7 Server, libplctag ab_server). Operational dependency: stand up the dedicated host before Phase 3 | 2026-04-17 |
| 100 | Studio 5000 Logix Emulate: pre-release validation tier only, no phase-gate | If an org license can be earmarked, designate a golden box for quarterly UDT/Program-scope passes. If not, AB CIP ships validated against ab_server only with documented UAT-time fidelity gap. Don't block Phase 4 on procurement | 2026-04-17 |
| 101 | FOCAS Wireshark capture is a Phase 5 prerequisite identified during Phase 4 | Target capture (production CNC, CNC Guide seat, or customer site visit) identified by Phase 4 mid-point; if no target by then, escalate to procurement (CNC Guide license or dev-rig CNC) as a Phase 5 dependency | 2026-04-17 |
| 102 | Admin UI styling = Bootstrap 5 vendored (parity with ScadaLink CentralUI) | Operators using both ScadaLink and OtOpcUa Admin see the same login screen, same sidebar, same component vocabulary. ScadaLink ships Bootstrap 5 with a custom dark-sidebar + light-main aesthetic; mirroring it directly outweighs MudBlazor's Blazor-component conveniences. Supersedes #94 | 2026-04-17 |
| 103 | Admin UI ships single light theme matching ScadaLink (no dark mode in v2.0) | ScadaLink is light-only; cross-app aesthetic consistency outweighs the ergonomic argument for dark mode. Revisit only if ScadaLink adds dark mode. Supersedes #98 | 2026-04-17 |
| 104 | Admin auth pattern lifted directly from ScadaLink: LdapAuthService + RoleMapper + JwtTokenService + cookie auth + CookieAuthenticationStateProvider | Same login form, same cookie scheme (30-min sliding), same claim shape (Name, DisplayName, Username, Role[], optional ClusterId[] scope), parallel /auth/token endpoint for API clients. Code lives in ZB.MOM.WW.OtOpcUa.Admin.Security (sibling of ScadaLink.Security); consolidate to a shared NuGet only if it later makes operational sense | 2026-04-17 |
| 105 | Cluster-scoped admin grants ship in v2.0 (lifted from v2.1 deferred list) | ScadaLink already ships the equivalent site-scoped pattern (PermittedSiteIds claim, IsSystemWideDeployment flag), so we get cluster-scoped grants free by mirroring it. LdapGroupRoleMapping table maps groups → role + cluster scope; users without explicit cluster claims are system-wide | 2026-04-17 |
| 106 | Shared component set copied verbatim from ScadaLink CentralUI | DataTable, ConfirmDialog, LoadingSpinner, ToastNotification, TimestampDisplay, RedirectToLogin, NotAuthorizedView. New Admin-specific shared components added to our folder rather than diverging from ScadaLink's set, so the shared vocabulary stays aligned | 2026-04-17 |
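Decision #70's hybrid watchdog rule is compact enough to sketch. This Python sketch shows the max(N× baseline, baseline + floor) warn/recycle computation with an absolute hard ceiling; the multiplier, floor, and ceiling values here are illustrative placeholders, not the shipped defaults.

```python
def watchdog_thresholds(baseline_mb: float,
                        warn_mult: float = 1.5, warn_floor_mb: float = 150,
                        recycle_mult: float = 2.0, recycle_floor_mb: float = 300,
                        hard_ceiling_mb: float = 2048):
    # Hybrid rule from decision #70: max(multiplier x baseline, baseline + floor)
    # so tiny baselines don't false-trip on noise and large ones still scale,
    # capped by an absolute hard ceiling. Slope detection stays orthogonal.
    warn = max(warn_mult * baseline_mb, baseline_mb + warn_floor_mb)
    recycle = max(recycle_mult * baseline_mb, baseline_mb + recycle_floor_mb)
    return min(warn, hard_ceiling_mb), min(recycle, hard_ceiling_mb)

# Small host: the absolute floors dominate; large host: the multipliers do.
print(watchdog_thresholds(80))    # (230, 380)
print(watchdog_thresholds(600))   # (900.0, 1200.0)
```

A pure-multiplier rule at baseline 80 MB would warn at 120 MB, inside normal GC jitter; a pure-floor rule at baseline 600 MB would warn too late relative to growth. The hybrid sidesteps both failure modes.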

Reference Documents

  • Driver Implementation Specifications — per-driver details: connection settings, addressing, data types, libraries, API mappings, error handling, implementation notes
  • Test Data Sources — per-driver simulator/emulator/stub for development and integration testing
  • Driver Stability & Isolation — stability tier model (A/B/C), per-driver hosting decisions, cross-cutting protections, FOCAS and Galaxy deep dives
  • Central Config DB Schema — concrete table definitions, indexes, stored procedures, authorization model, JSON conventions, EF Core migrations approach
  • Admin Web UI — Blazor Server admin app: information architecture, page-by-page workflows, per-driver config screen extensibility, real-time updates, UX rules

Out of Scope / Deferred