lmxopcua/docs/drivers/OpcUaClient.md
2026-04-26 06:08:30 -04:00

OPC UA Client driver

Tier-A in-process driver that opens a Session against a remote OPC UA server and re-exposes its address space through the local OtOpcUa server. This is the "gateway / aggregation" direction, the opposite of the usual "server exposes PLC data" flow.

For the test fixture (opc-plc) see OpcUaClient-Test-Fixture.md. For the configuration surface see OpcUaClientDriverOptions in src/ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient/OpcUaClientDriverOptions.cs.

Auto re-import on ModelChangeEvent

The driver subscribes to BaseModelChangeEventType (and its subtype GeneralModelChangeEventType) on the upstream Server node (i=2253) at the end of InitializeAsync. When the upstream server advertises a topology change, the driver coalesces events over a debounce window and runs a single re-import (equivalent to calling ReinitializeAsync — internally ShutdownAsync + InitializeAsync).
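
The coalesced re-import described above can be sketched as follows. This is illustrative only: the method and field names (`ReinitializeAsync`, `ShutdownAsync`, `InitializeAsync`, `_gate`) are the ones this document mentions, but the body is an assumption about how they compose, not the driver's actual code.

```csharp
// Sketch: one re-import = shutdown + re-init, serialized behind the same
// gate the I/O paths use (see "Behaviour" below). Names are illustrative.
public async Task ReinitializeAsync(CancellationToken ct)
{
    await _gate.WaitAsync(ct);      // same gate as Read/Write/Browse/Subscribe
    try
    {
        await ShutdownAsync(ct);    // drop session, subscriptions, imported nodes
        await InitializeAsync(ct);  // reconnect and re-import the upstream root
    }
    finally
    {
        _gate.Release();
    }
}
```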

Configuration

| Option | Default | Notes |
| --- | --- | --- |
| `WatchModelChanges` | `true` | Disable to skip the watch entirely (no extra subscription, no re-import on topology change). |
| `ModelChangeDebounce` | 5s | Coalescing window. The first event starts the timer; further events extend it; when it elapses with no new events, the driver fires one re-import. |
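
A minimal configuration fragment might look like the following. This assumes the options bind from JSON configuration; the property names come from the table above, but the section name (`OpcUaClient`) and the TimeSpan string format are assumptions that depend on how OpcUaClientDriverOptions is declared and bound.

```json
{
  "OpcUaClient": {
    "WatchModelChanges": true,
    "ModelChangeDebounce": "00:00:05"
  }
}
```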

Behaviour

  • One model-change subscription per driver instance, separate from the data + alarm subscriptions. Created best-effort: a server that doesn't advertise the event types or rejects the EventFilter falls through to no-watch — InitializeAsync still succeeds.
  • The EventFilter selects only the EventType field (a WhereClause constrains by OfType BaseModelChangeEventType). Payload fields like Changes[] are intentionally ignored: the driver always re-imports the full upstream root, so per-event delta tracking would just add wire overhead.
  • Debounce is implemented via a single-shot Timer; every event calls Timer.Change(window, Infinite) so a burst of N events triggers exactly one re-import after the window elapses with no further events.
  • The re-import path acquires the same _gate semaphore that ReadAsync / WriteAsync / BrowseAsync / SubscribeAsync use. Downstream callers see a brief browse-gap (≈ the upstream DiscoverAsync duration) while the gate is held — but no torn reads or split-batch writes.
  • Failure during the re-import is best-effort: the next ModelChangeEvent triggers another attempt, and the keep-alive watchdog covers permanent upstream loss. Operators see failures through DriverHealth.LastError and the diagnostics counters.
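
The single-shot debounce described above can be sketched with System.Threading.Timer. The class and method names here are illustrative, not the driver's actual types; only the `Timer.Change(window, Infinite)` rearming pattern is taken from this document.

```csharp
using System;
using System.Threading;

// Sketch of the single-shot debounce: each event restarts the window, so a
// burst of N events fires the callback exactly once, one window after the
// last event. Names are illustrative.
sealed class ModelChangeDebouncer : IDisposable
{
    private readonly Timer _timer;
    private readonly TimeSpan _window;

    public ModelChangeDebouncer(TimeSpan window, Action reimport)
    {
        _window = window;
        // Created disarmed; armed on the first event.
        _timer = new Timer(_ => reimport(), null,
                           Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);
    }

    // Called once per incoming ModelChangeEvent.
    public void OnModelChangeEvent() =>
        _timer.Change(_window, Timeout.InfiniteTimeSpan);

    public void Dispose() => _timer.Dispose();
}
```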

When to disable

Flip WatchModelChanges to false when:

  • The upstream topology is known-static (e.g. firmware-pinned PLC) and the driver should never run a re-import unprompted.
  • The brief browse-gap during re-import is unacceptable and a manual ReinitializeAsync call from the operator is preferred.
  • The upstream server fires spurious ModelChangeEvents that don't reflect real topology changes, causing wasted re-imports. Tighten or disable rather than chasing the noise downstream.

Reverse Connect (server-initiated)

OPC UA's reverse-connect mode flips the transport direction: instead of the client dialling the server, the server dials the client's listener. The upstream sends a ReverseHello and the client continues the OPC UA handshake on the inbound socket. Required for OT-DMZ deployments where the plant firewall only permits outbound traffic from the upstream — the gateway opens a listener, the upstream reaches out.

Configuration

| Option | Default | Notes |
| --- | --- | --- |
| `ReverseConnect.Enabled` | `false` | Opt-in. When true, replaces the failover dial-sweep with a WaitForConnection call. |
| `ReverseConnect.ListenerUrl` | `null` | Local listener URL the SDK binds. Typically `opc.tcp://0.0.0.0:4844` (any interface) or a specific NIC for multi-homed gateways. Required when Enabled is true. |
| `ReverseConnect.ExpectedServerUri` | `null` | Upstream's ApplicationUri to filter inbound dials. `null` accepts the first connection (only safe with one upstream targeting the listener). |
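
A configuration fragment for reverse connect might look like the following. As above, the section name and JSON binding are assumptions, and the `ExpectedServerUri` value is a made-up example ApplicationUri; substitute your upstream's real one.

```json
{
  "OpcUaClient": {
    "ReverseConnect": {
      "Enabled": true,
      "ListenerUrl": "opc.tcp://0.0.0.0:4844",
      "ExpectedServerUri": "urn:example-plant:upstream-server"
    }
  }
}
```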

Shared listener (singleton)

A single underlying Opc.Ua.Client.ReverseConnectManager exists per process, keyed on ListenerUrl. Two driver instances that share a listener URL multiplex onto one TCP socket; the SDK demuxes inbound dials by the upstream's reported ServerUri. The wrapper (ReverseConnectListener) is reference-counted: the first Acquire binds the port and the last Release tears it down, letting drivers come and go independently without races on port bind / unbind.

When two drivers share a listener:

  • They MUST set ExpectedServerUri to disambiguate; otherwise the first upstream to dial in wins regardless of which driver is waiting.
  • They CAN come and go independently; the listener stays alive while at least one driver references it.
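
The reference-counting pattern can be sketched like this. The class is a stand-in for ReverseConnectListener, with the actual port bind/unbind elided; the single static lock guarding both the map and the counts is the simplest way to avoid the bind/unbind races the document mentions.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the reference-counted shared-listener pattern. The real wrapper
// (ReverseConnectListener) holds an Opc.Ua.Client.ReverseConnectManager;
// here the bind/unbind work is reduced to comments.
sealed class SharedListener
{
    private static readonly object Gate = new();
    private static readonly Dictionary<string, SharedListener> Listeners = new();
    private int _refCount;

    public static SharedListener Acquire(string listenerUrl)
    {
        lock (Gate)
        {
            if (!Listeners.TryGetValue(listenerUrl, out var listener))
            {
                listener = new SharedListener();
                Listeners[listenerUrl] = listener;
                // First acquirer: bind the port / start the manager here.
            }
            listener._refCount++;
            return listener;
        }
    }

    public void Release(string listenerUrl)
    {
        lock (Gate)
        {
            if (--_refCount == 0)
            {
                // Last release: stop the listener so the port can be re-bound.
                Listeners.Remove(listenerUrl);
            }
        }
    }
}
```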

Behaviour

  • The dial path is bypassed entirely when Enabled is true. Failover across multiple EndpointUrls doesn't apply — there's no client-side dial to fail over.
  • ExpectedServerUri is the SDK's filter parameter to WaitForConnectionAsync. Inbound ReverseHellos from a different upstream are ignored and the caller keeps waiting.
  • The same EndpointDescription derivation runs as the dial path — the first EndpointUrl in the candidate list seeds SecurityPolicy / SecurityMode / EndpointUrl for the session-create call. The actual endpoint lives on the upstream and the SDK reconciles after the ReverseHello.
  • Cancellation: Timeout bounds the wait. A stuck listener with no inbound dial throws after Timeout rather than hanging init forever.
  • Shutdown releases the listener reference. The last release stops the listener so the port can be re-bound by a future driver lifecycle.
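
The timeout-bounded wait can be sketched with a linked CancellationTokenSource. `waitForUpstreamAsync` is a stand-in for the SDK's wait-for-connection call, not its real signature; only the "throw after Timeout rather than hang init" behaviour is taken from the document.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch: bound the reverse-connect wait by the driver's Timeout option.
static async Task<T> WaitBoundedAsync<T>(
    Func<CancellationToken, Task<T>> waitForUpstreamAsync,
    TimeSpan timeout,
    CancellationToken ct)
{
    using var linked = CancellationTokenSource.CreateLinkedTokenSource(ct);
    linked.CancelAfter(timeout);   // no inbound dial within Timeout => cancel
    try
    {
        return await waitForUpstreamAsync(linked.Token);
    }
    catch (OperationCanceledException) when (!ct.IsCancellationRequested)
    {
        // Only the timeout fired, not the caller's token.
        throw new TimeoutException($"No inbound ReverseHello within {timeout}.");
    }
}
```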

Wiring it up on the upstream

The upstream OPC UA server has to be configured to dial out. The opc-plc simulator does this with --rc=opc.tcp://<gateway-host>:4844; for a real upstream see your server's reverse-connect docs (most major implementations expose a "ReverseConnect.Endpoint" config knob).

When NOT to use

  • Standard plant networks where the gateway can dial the upstream — the conventional dial path is simpler and supports failover natively.
  • Public-internet OPC UA: reverse-connect is a network-policy workaround, not a security primitive. Always pair it with Sign or SignAndEncrypt and a vetted user-token policy.