# OPC UA Client driver

Tier-A in-process driver that opens a `Session` against a remote OPC UA server
and re-exposes its address space through the local OtOpcUa server. This is the
"gateway / aggregation" direction — opposite to the usual "server exposes PLC
data" flow.
For the test fixture (opc-plc) see `OpcUaClient-Test-Fixture.md`.
For the configuration surface see `OpcUaClientDriverOptions` in
`src/ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient/OpcUaClientDriverOptions.cs`.
## Auto re-import on ModelChangeEvent

The driver subscribes to `BaseModelChangeEventType` (and its subtype
`GeneralModelChangeEventType`) on the upstream Server node (`i=2253`) at the
end of `InitializeAsync`. When the upstream server advertises a topology
change, the driver coalesces events over a debounce window and runs a single
re-import (equivalent to calling `ReinitializeAsync` — internally
`ShutdownAsync` + `InitializeAsync`).
### Configuration

| Option | Default | Notes |
|---|---|---|
| `WatchModelChanges` | `true` | Disable to skip the watch entirely (no extra subscription, no re-import on topology change). |
| `ModelChangeDebounce` | `5s` | Coalescing window. The first event starts the timer; further events extend it; when it elapses with no new events, the driver fires one re-import. |
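Assuming these options bind from a standard .NET configuration section (the `OpcUaClientDriver` section name below is illustrative, not confirmed by this page), widening the debounce for a chatty upstream might look like:

```json
{
  "OpcUaClientDriver": {
    "WatchModelChanges": true,
    "ModelChangeDebounce": "00:00:15"
  }
}
```

The `TimeSpan`-string form (`"00:00:15"`) is an assumption; use whatever format `OpcUaClientDriverOptions` actually binds.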
Behaviour
- One model-change subscription per driver instance, separate from the
data + alarm subscriptions. Created best-effort: a server that doesn't
advertise the event types or rejects the
EventFilterfalls through to no-watch —InitializeAsyncstill succeeds. - The
EventFilterselects only theEventTypefield (aWhereClauseconstrains byOfType BaseModelChangeEventType). Payload fields likeChanges[]are intentionally ignored: the driver always re-imports the full upstream root, so per-event delta tracking would just add wire overhead. - Debounce is implemented via a single-shot
Timer; every event callsTimer.Change(window, Infinite)so a burst of N events triggers exactly one re-import after the window elapses with no further events. - The re-import path acquires the same
_gatesemaphore thatReadAsync/WriteAsync/BrowseAsync/SubscribeAsyncuse. Downstream callers see a brief browse-gap (≈ the upstreamDiscoverAsyncduration) while the gate is held — but no torn reads or split-batch writes. - Failure during the re-import is best-effort: the next
ModelChangeEventtriggers another attempt, and the keep-alive watchdog covers permanent upstream loss. Operators see failures throughDriverHealth.LastError- the diagnostics counters.
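The coalescing behaviour above can be sketched language-agnostically (Python here; the class and callback names are illustrative, not the driver's actual types): each event re-arms a single-shot timer, and the re-import fires once, only after the window passes with no further events.

```python
import threading


class ModelChangeDebouncer:
    """Coalesce a burst of model-change events into one re-import call."""

    def __init__(self, window: float, reimport) -> None:
        self._window = window      # debounce window in seconds
        self._reimport = reimport  # callable invoked once per burst
        self._timer = None
        self._lock = threading.Lock()

    def on_model_change_event(self) -> None:
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # a new event extends the window
            self._timer = threading.Timer(self._window, self._fire)
            self._timer.start()

    def _fire(self) -> None:
        with self._lock:
            self._timer = None
        self._reimport()  # exactly one re-import per event burst
```

A burst of five events inside one window results in a single `reimport` call, matching the "N events, one re-import" guarantee described above.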
### When to disable

Flip `WatchModelChanges` to `false` when:

- The upstream topology is known-static (e.g. a firmware-pinned PLC) and the driver should never run a re-import unprompted.
- The brief browse-gap during re-import is unacceptable and a manual `ReinitializeAsync` call from the operator is preferred.
- The upstream server fires spurious `ModelChangeEvent`s that don't reflect real topology changes, causing wasted re-imports. Tighten the debounce or disable the watch rather than chasing the noise downstream.
## Reverse Connect (server-initiated)

OPC UA's reverse-connect mode flips the transport direction: instead of the
client dialling the server, the server dials the client's listener. The
upstream sends a `ReverseHello` and the client continues the OPC UA handshake
on the inbound socket. This is required for OT-DMZ deployments where the plant
firewall only permits outbound traffic from the upstream — the gateway opens a
listener, and the upstream reaches out.
### Configuration

| Option | Default | Notes |
|---|---|---|
| `ReverseConnect.Enabled` | `false` | Opt-in. When `true`, replaces the failover dial-sweep with a `WaitForConnection` call. |
| `ReverseConnect.ListenerUrl` | `null` | Local listener URL the SDK binds. Typically `opc.tcp://0.0.0.0:4844` (any interface) or a specific NIC for multi-homed gateways. Required when `Enabled` is `true`. |
| `ReverseConnect.ExpectedServerUri` | `null` | Upstream's `ApplicationUri` to filter inbound dials. `null` accepts the first connection (only safe with one upstream targeting the listener). |
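A minimal opt-in fragment might look like the following (the `OpcUaClientDriver` section name and the example `ApplicationUri` are assumptions for illustration):

```json
{
  "OpcUaClientDriver": {
    "ReverseConnect": {
      "Enabled": true,
      "ListenerUrl": "opc.tcp://0.0.0.0:4844",
      "ExpectedServerUri": "urn:plant:plc1"
    }
  }
}
```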
### Shared listener (singleton)

A single underlying `Opc.Ua.Client.ReverseConnectManager` per process, keyed
on `ListenerUrl`. Two driver instances that share a listener URL multiplex
onto one TCP socket; the SDK demuxes inbound dials by the upstream's reported
`ServerUri`. The wrapper (`ReverseConnectListener`) is reference-counted —
the first `Acquire` binds the port, the last `Release` tears it down. This
lets drivers come and go independently without races on port-bind /
port-unbind.

When two drivers share a listener:

- They MUST set `ExpectedServerUri` to disambiguate; otherwise the first upstream to dial in wins, regardless of which driver is waiting.
- They CAN come and go independently; the listener stays alive while at least one driver references it.
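The reference-counting scheme can be sketched as follows (Python, conceptual only — the registry, `bind`, and `unbind` names are illustrative, not the driver's actual types): the first `acquire` of a URL binds the port, the last `release` unbinds it, and everything in between only touches the refcount.

```python
import threading


class ListenerRegistry:
    """Per-process, reference-counted listeners keyed on listener URL."""

    def __init__(self, bind, unbind) -> None:
        self._bind = bind      # called on the first acquire of a URL
        self._unbind = unbind  # called on the last release of a URL
        self._refs: dict[str, int] = {}
        self._lock = threading.Lock()

    def acquire(self, url: str) -> None:
        with self._lock:
            if self._refs.get(url, 0) == 0:
                self._bind(url)  # first driver binds the port
            self._refs[url] = self._refs.get(url, 0) + 1

    def release(self, url: str) -> None:
        with self._lock:
            self._refs[url] -= 1
            if self._refs[url] == 0:
                del self._refs[url]
                self._unbind(url)  # last driver tears the port down
```

Holding the lock across bind/unbind is what removes the race: a driver shutting down and another starting up on the same URL serialize on the refcount transition.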
### Behaviour

- The dial path is bypassed entirely when `Enabled` is `true`. Failover across multiple `EndpointUrls` doesn't apply — there's no client-side dial to fail over.
- `ExpectedServerUri` is the SDK's filter parameter to `WaitForConnectionAsync`. Inbound `ReverseHello`s from a different upstream are ignored and the caller keeps waiting.
- The same `EndpointDescription` derivation runs as in the dial path — the first `EndpointUrl` in the candidate list seeds `SecurityPolicy`/`SecurityMode`/`EndpointUrl` for the session-create call. The actual endpoint lives on the upstream and the SDK reconciles after the `ReverseHello`.
- Cancellation: `Timeout` bounds the wait. A stuck listener with no inbound dial throws after `Timeout` rather than hanging init forever.
- Shutdown releases the listener reference. The last release stops the listener so the port can be re-bound by a future driver lifecycle.
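The filtered, timeout-bounded wait described in the bullets above can be sketched like this (Python; `inbound` stands in for the SDK's stream of inbound `ReverseHello` dials, each item being the dialling server's `ApplicationUri` — all names are illustrative, not the SDK's API):

```python
import queue
import time


def wait_for_connection(inbound: "queue.Queue[str]",
                        expected_server_uri,
                        timeout: float) -> str:
    """Return the first matching inbound dial, or raise after `timeout`."""
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("no matching inbound dial before Timeout")
        try:
            uri = inbound.get(timeout=remaining)
        except queue.Empty:
            raise TimeoutError("no matching inbound dial before Timeout")
        # None accepts the first dial; otherwise non-matching dials are
        # ignored and the caller keeps waiting.
        if expected_server_uri is None or uri == expected_server_uri:
            return uri
```

Note the two exits mirror the documented behaviour: a match returns, a mismatch is silently skipped, and a quiet listener fails bounded rather than hanging init.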
### Wiring it up on the upstream

The upstream OPC UA server has to be configured to dial out. The opc-plc
simulator does this with `--rc=opc.tcp://<gateway-host>:4844`; for a real
upstream see your server's reverse-connect docs (most major implementations
expose a "ReverseConnect.Endpoint" config knob).
### When NOT to use

- Standard plant networks where the gateway can dial the upstream — the conventional dial path is simpler and supports failover natively.
- Public-internet OPC UA: reverse-connect is a network-policy workaround, not a security primitive. Always pair it with `Sign` or `SignAndEncrypt` and a vetted user-token policy.