Phase 3 PR 68 -- OPC UA Client ITagDiscovery (Full browse) #67

Merged
dohertj2 merged 1 commit from phase-3-pr68-opcua-client-discovery into v2 2026-04-19 01:19:29 -04:00
Owner

Summary

Adds ITagDiscovery to OpcUaClientDriver via recursive Session.BrowseAsync from the configured root (default ObjectsFolder i=85). Objects → sub-folders, Variables → builder.Variable entries with FullName = NodeId.ToString() so reads/writes round-trip without re-resolving.
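The Objects → sub-folder / Variables → variable projection described above can be sketched as follows. This is an illustrative Python sketch, not the driver's C#: `builder.folder`/`builder.variable` stand in for the `IAddressSpaceBuilder` calls named in the PR, and the tuple shape of a browse reference is a hypothetical simplification.

```python
def project(builder, references):
    """Project one level of browse references into the local address space.

    references: iterable of (node_id, node_class, display_name) tuples,
    a simplified stand-in for OPC UA ReferenceDescription results.
    Returns the sub-folders so recursion can continue under Objects.
    """
    folders = []
    for node_id, node_class, display_name in references:
        if node_class == "Object":
            # Objects become sub-folders; browse recursion continues under them.
            folders.append(builder.folder(display_name))
        elif node_class == "Variable":
            # FullName is the serialized NodeId so later reads/writes can
            # round-trip without re-resolving the browse path.
            builder.variable(display_name, full_name=str(node_id))
    return folders
```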

Safety caps (new options):

  • BrowseRoot — scope restriction for 100k+-node servers
  • MaxDiscoveredNodes = 10_000 — memory bound; graceful degradation on overflow
  • MaxBrowseDepth = 10 — cycle guard

A visited set prevents infinite loops on back-referenced graphs. Transient browse failures on a subtree don't kill the whole discovery.
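How the caps and the cycle guard compose can be sketched as below. This is an illustrative Python sketch of the algorithm, not the driver's C#: `browse_children` is a hypothetical stand-in for the `Session.BrowseAsync` call, and the defaults mirror the option values above.

```python
def discover(root, browse_children, max_depth=10, max_nodes=10_000):
    """Capped depth-first browse: visited set (cycle guard), depth cap,
    node cap, and per-subtree error isolation (graceful degradation)."""
    visited, discovered = set(), []

    def walk(node, depth):
        if depth > max_depth or len(discovered) >= max_nodes:
            return  # short-circuit; the partial tree is still returned
        if node in visited:
            return  # cycle guard for back-referenced graphs
        visited.add(node)
        discovered.append(node)
        try:
            children = browse_children(node)
        except Exception:
            return  # a transient failure stops only this subtree
        for child in children:
            walk(child, depth + 1)

    walk(root, 0)
    return discovered
```

Note that hitting `max_nodes` returns whatever was discovered so far rather than failing the whole call, matching the "graceful degradation rather than all-or-nothing" behavior described above.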

Deferred to a follow-up PR: DataType resolution via batch ReadAsync(Attributes.DataType) so DriverAttributeInfo.DriverDataType is accurate instead of the current Int32 placeholder; AccessLevel → SecurityClass; array detection via ValueRank.
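The deferred NodeId → DriverDataType mapping table could look like the following sketch. The numeric ids are the real OPC UA built-in DataType NodeIds (ns=0, per OPC UA Part 6); the driver-side type names and the `to_driver_type` helper are hypothetical stand-ins for whatever the follow-up PR ships.

```python
# OPC UA built-in DataType NodeIds (ns=0, OPC UA Part 6) mapped to
# hypothetical driver-side type names.
BUILTIN_TO_DRIVER_TYPE = {
    1: "Boolean",    # i=1  Boolean
    2: "SByte",      # i=2  SByte
    3: "Byte",       # i=3  Byte
    4: "Int16",      # i=4  Int16
    5: "UInt16",     # i=5  UInt16
    6: "Int32",      # i=6  Int32
    7: "UInt32",     # i=7  UInt32
    8: "Int64",      # i=8  Int64
    9: "UInt64",     # i=9  UInt64
    10: "Float",     # i=10 Float
    11: "Double",    # i=11 Double
    12: "String",    # i=12 String
    13: "DateTime",  # i=13 DateTime
}

def to_driver_type(data_type_id, fallback="Int32"):
    """Resolve a DataType id read via Attributes.DataType; unknown or
    complex types fall back to the conservative Int32 placeholder that
    PR 68 ships with."""
    return BUILTIN_TO_DRIVER_TYPE.get(data_type_id, fallback)
```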

Validation

  • 10/10 OpcUaClient.Tests pass
  • dotnet build: 0 errors

Scope

Live-browse coverage against a real remote server is deferred to the in-process-fixture PR (Server project is a candidate host). ISubscribable + IHostConnectivityProbe land in PR 69.

Test plan

  • DiscoverAsync pre-init throws uniformly
  • Null-builder → ArgumentNullException
  • Default caps documented in asserts (10000 / 10 / null)
dohertj2 added 1 commit 2026-04-19 01:19:25 -04:00
Phase 3 PR 68 -- OPC UA Client ITagDiscovery via recursive browse (Full strategy).

Adds ITagDiscovery to OpcUaClientDriver. DiscoverAsync opens a single Remote folder on the IAddressSpaceBuilder and recursively browses from the configured root (default: ObjectsFolder i=85; override via OpcUaClientDriverOptions.BrowseRoot for scoped discovery). Browse uses the non-obsolete Session.BrowseAsync(RequestHeader, ViewDescription, uint maxReferences, BrowseDescriptionCollection, ct) with HierarchicalReferences forward, subtypes included, NodeClassMask Object+Variable, and a ResultMask pulling BrowseName + DisplayName + NodeClass + TypeDefinition. Objects become sub-folders via builder.Folder; Variables become builder.Variable entries with FullName set to the NodeId.ToString() serialization so IReadable/IWritable can round-trip without re-resolving.

Three safety caps added to OpcUaClientDriverOptions to bound runaway discovery:
(1) MaxBrowseDepth, default 10 -- deep enough for realistic OPC UA information models, shallow enough that cyclic graphs can't spin the browse forever.
(2) MaxDiscoveredNodes, default 10_000 -- caps memory on pathological remote servers. Once the cap is hit, recursion short-circuits and the partially-discovered tree is still projected into the local address space (graceful degradation rather than all-or-nothing).
(3) BrowseRoot, an opt-in scope-restriction string per driver-specs.md §8 -- defaults to ObjectsFolder, but operators with 100k-node servers can point it at a single subtree.

A visited set tracks NodeIds already seen to prevent infinite cycles on graphs with non-strict hierarchy (OPC UA models can have back-references). Transient browse failures on a subtree are swallowed -- the sub-branch stops but the rest of discovery continues, matching the Modbus driver's 'transient poll errors don't kill the loop' pattern. The driver's health surface reflects the network-level cascade via the probe loop (PR 69).
Deferred to a follow-up PR: DataType resolution via a batch Session.ReadAsync(Attributes.DataType) after the browse so DriverAttributeInfo.DriverDataType is accurate instead of the current conservative DriverDataType.Int32 default; AccessLevel-derived SecurityClass instead of the current ViewOnly default; array-type detection via Attributes.ValueRank + ArrayDimensions. These need an extra wire round-trip per batch of variables plus a NodeId -> DriverDataType mapping table; out of scope for PR 68 to keep the browse path landable.

Unit tests (OpcUaClientDiscoveryTests, 3 facts):
- DiscoverAsync_without_initialize_throws_InvalidOperationException (pre-init hits RequireSession)
- DiscoverAsync_rejects_null_builder (ArgumentNullException)
- Discovery_caps_are_sensible_defaults (asserts the 10000 / 10 / null defaults documented above)

The NullAddressSpaceBuilder stub implements the full IAddressSpaceBuilder shape, including IVariableHandle.MarkAsAlarmCondition (throws NotSupportedException since this PR doesn't wire alarms). Live-browse coverage against a real remote server is deferred to the in-process-server-fixture PR. 10/10 OpcUaClient.Tests pass; dotnet build is clean. db56a95819
dohertj2 merged commit 141673fc80 into v2 2026-04-19 01:19:29 -04:00
dohertj2 referenced this issue from a commit 2026-04-19 03:16:40 -04:00
Phase 6 — Draft 4 implementation plans covering v2 unimplemented features + adversarial review + adjustments.

After drivers were paused per user direction, audited the v2 plan for features documented-but-unshipped and identified four coherent tracks that had no implementation plan at all. Each plan follows the docs/v2/implementation/phase-*.md template (DRAFT status, branch name, Stream A-E task breakdown, Compliance Checks, Risks, Completion Checklist).

- docs/v2/implementation/phase-6-1-resilience-and-observability.md (243 lines) covers Polly resilience pipelines wired to every capability interface, Tier A/B/C runtime enforcement (memory watchdog generalized beyond Galaxy, scheduled recycle per decision #67, wedge detection), health endpoints on :4841, structured Serilog with correlation IDs, and LiteDB local-cache fallback per decision #36.
- phase-6-2-authorization-runtime.md (145 lines) wires ACL enforcement on every OPC UA Read/Write/Subscribe/Call path + LDAP-group-to-admin-role grants per decisions #105 and #129 -- a runtime permission-trie evaluator over the 6-level Cluster/Namespace/UnsArea/UnsLine/Equipment/Tag hierarchy, with a per-session cache invalidated on generation-apply + LDAP-cache expiry.
- phase-6-3-redundancy-runtime.md (165 lines) lands the non-transparent warm/hot redundancy runtime per decisions #79-85: dynamic ServiceLevel node, ServerUriArray peer broadcast, mid-apply dip via the sp_PublishGeneration hook, and operator-driven role transition (no auto-election -- the plan remains explicit about what's out of scope).
- phase-6-4-admin-ui-completion.md (178 lines) closes Phase 1 Stream E completion-checklist items that never landed: UNS drag-reorder + impact preview, Equipment CSV import, 5-identifier search, draft-diff viewer enhancements, and OPC 40010 _base Identification field exposure per decisions #138-139.

Each plan then got a Codex adversarial-review pass (codex mcp tool, read-only sandbox, synchronous).
Reviews explicitly targeted decision-log conflicts, API-shape assumptions, unbounded blast radius, under-specified state transitions, and testing holes. An 'Adversarial Review — 2026-04-19' section was appended to each plan with numbered findings (severity / finding / why-it-matters / adjustment accepted). The reviews surfaced real substantive issues that the initial drafts glossed over:

- Phase 6.1: auto-retry conflicting with the decisions #44-45 no-auto-write-retry rule; per-driver-instance pipeline breaking decision #35's per-device isolation; recycle/watchdog at Tier A/B breaching the decisions #73-74 Tier-C-only constraint.
- Phase 6.2: conflating control-plane LdapGroupRoleMapping with data-plane ACL grants; missing Browse enforcement entirely; subscription re-authorization policy unresolved between create-time-only and per-publish.
- Phase 6.3: ServiceLevel=0 colliding with OPC UA Part 5 Maintenance semantics; ServerUriArray excluding self (spec bug); apply-window counter race on cancellation; client cutover for Kepware/Aveva OI Gateway is unverified hearsay.
- Phase 6.4: stale UNS impact preview overwriting concurrent draft edits; identifier contract drifting from the admin-ui.md canonical set (ZTag/MachineCode/SAPID/EquipmentId/EquipmentUuid, not ZTag/SAPID/UniqueId/Alias1/Alias2); CSV import atomicity internally contradictory (single txn vs chunked inserts); OPC 40010 field list not matching decision #139.

Every finding has an adjustment in the plan doc -- the plans are meant to be executable from the next session with the critique already baked in, rather than clean drafts that would run into the same issues at implementation time. Codex thread IDs are cited in each plan's review section for reproducibility. Pure documentation PR -- no code changes. Plans are DRAFT status; each becomes its own implementation phase with its own entry-gate + exit-gate when business prioritizes.

Reference: dohertj2/lmxopcua#67