# Subscriptions

`LmxNodeManager` bridges OPC UA monitored items to MXAccess runtime subscriptions using reference counting and a decoupled dispatch architecture. This design ensures that MXAccess COM callbacks (which run on the STA thread) never contend with the OPC UA framework lock.

## Ref-Counted MXAccess Subscriptions

Multiple OPC UA clients can subscribe to the same Galaxy tag simultaneously. Rather than opening duplicate MXAccess subscriptions, `LmxNodeManager` maintains a reference count per tag in `_subscriptionRefCounts`.

### SubscribeTag

`SubscribeTag` increments the reference count for a tag reference. On the first subscription (count goes from 0 to 1), it calls `_mxAccessClient.SubscribeAsync` to open the MXAccess runtime subscription:

```csharp
internal void SubscribeTag(string fullTagReference)
{
    lock (_lock)
    {
        if (_subscriptionRefCounts.TryGetValue(fullTagReference, out var count))
        {
            _subscriptionRefCounts[fullTagReference] = count + 1;
        }
        else
        {
            _subscriptionRefCounts[fullTagReference] = 1;
            _ = _mxAccessClient.SubscribeAsync(fullTagReference, (_, _) => { });
        }
    }
}
```

### UnsubscribeTag

`UnsubscribeTag` decrements the reference count. When the count reaches zero, the MXAccess subscription is closed via `UnsubscribeAsync` and the tag is removed from the dictionary:

```csharp
if (count <= 1)
{
    _subscriptionRefCounts.Remove(fullTagReference);
    _ = _mxAccessClient.UnsubscribeAsync(fullTagReference);
}
else
{
    _subscriptionRefCounts[fullTagReference] = count - 1;
}
```

Both methods use `lock (_lock)` (a private object, distinct from the OPC UA framework `Lock`) to serialize ref-count updates without blocking node value dispatches.

## OnMonitoredItemCreated

The OPC UA framework calls `OnMonitoredItemCreated` when a client creates a monitored item.
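The ref-count lifecycle described in the Ref-Counted MXAccess Subscriptions section above can be exercised in isolation. The following is a minimal sketch, not the actual class: `IRuntimeClient` and `CountingClient` are hypothetical stand-ins for `_mxAccessClient`, and the dictionary uses the default (case-sensitive) comparer as an assumption.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the subscribe/unsubscribe surface of _mxAccessClient.
public interface IRuntimeClient
{
    void Subscribe(string tagReference);
    void Unsubscribe(string tagReference);
}

// Test double that counts backend open/close calls.
public sealed class CountingClient : IRuntimeClient
{
    public int Opens;
    public int Closes;
    public void Subscribe(string tagReference) => Opens++;
    public void Unsubscribe(string tagReference) => Closes++;
}

// Sketch of the ref-counting pattern used by SubscribeTag/UnsubscribeTag.
public sealed class RefCountedSubscriptions
{
    private readonly object _lock = new object();
    private readonly Dictionary<string, int> _refCounts = new Dictionary<string, int>();
    private readonly IRuntimeClient _client;

    public RefCountedSubscriptions(IRuntimeClient client) => _client = client;

    public void SubscribeTag(string tagReference)
    {
        lock (_lock)
        {
            if (_refCounts.TryGetValue(tagReference, out var count))
            {
                _refCounts[tagReference] = count + 1; // already open: just bump the count
            }
            else
            {
                _refCounts[tagReference] = 1;         // first subscriber: open the backend subscription
                _client.Subscribe(tagReference);
            }
        }
    }

    public void UnsubscribeTag(string tagReference)
    {
        lock (_lock)
        {
            if (!_refCounts.TryGetValue(tagReference, out var count))
                return;                               // unknown tag: nothing to release

            if (count <= 1)
            {
                _refCounts.Remove(tagReference);      // last subscriber: close the backend subscription
                _client.Unsubscribe(tagReference);
            }
            else
            {
                _refCounts[tagReference] = count - 1;
            }
        }
    }
}
```

Two subscribes for the same tag produce exactly one backend `Subscribe`, and only the second unsubscribe triggers the backend `Unsubscribe` -- the invariant the real node manager relies on.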
The override resolves the node handle to a tag reference and calls `SubscribeTag`, which opens the MXAccess subscription early so runtime values start arriving before the first publish cycle:

```csharp
protected override void OnMonitoredItemCreated(ServerSystemContext context, NodeHandle handle, MonitoredItem monitoredItem)
{
    base.OnMonitoredItemCreated(context, handle, monitoredItem);

    var nodeIdStr = handle?.NodeId?.Identifier as string;
    if (nodeIdStr != null && _nodeIdToTagReference.TryGetValue(nodeIdStr, out var tagRef))
        SubscribeTag(tagRef);
}
```

`OnDeleteMonitoredItemsComplete` performs the inverse, calling `UnsubscribeTag` for each deleted monitored item.

## Data Change Dispatch Queue

MXAccess delivers data change callbacks on the STA thread via the `OnTagValueChanged` event. These callbacks must not acquire the OPC UA framework `Lock` directly, because the lock is also held during `Read`/`Write` operations that call into MXAccess (creating a potential deadlock with the STA thread). The solution is a `ConcurrentDictionary` named `_pendingDataChanges` that decouples the two threads.

### Callback handler

`OnMxAccessDataChange` runs on the STA thread. It stores the latest value in the concurrent dictionary (coalescing rapid updates for the same tag) and signals the dispatch thread:

```csharp
private void OnMxAccessDataChange(string address, Vtq vtq)
{
    Interlocked.Increment(ref _totalMxChangeEvents);
    _pendingDataChanges[address] = vtq;
    _dataChangeSignal.Set();
}
```

### Dispatch thread architecture

A dedicated background thread (`OpcUaDataChangeDispatch`) runs `DispatchLoop`, which waits on an `AutoResetEvent` with a 100 ms timeout. The decoupled design exists for two reasons:

1. **Deadlock avoidance** -- The STA thread must not acquire the OPC UA `Lock`. The dispatch thread is a normal background thread that can safely acquire `Lock`.
2. **Batch coalescing** -- Multiple MXAccess callbacks for the same tag between dispatch cycles are collapsed to the latest value via dictionary key overwrite. Under high load, this reduces the number of `ClearChangeMasks` calls.

The dispatch loop processes changes in two phases:

**Phase 1 (outside `Lock`):** Drain keys from `_pendingDataChanges`, convert each `Vtq` to a `DataValue` via `CreatePublishedDataValue`, and collect alarm transition events. MXAccess reads for alarm Priority and DescAttrName values also happen in this phase, since they call back into the STA thread.

**Phase 2 (inside `Lock`):** Apply all prepared updates to variable nodes and call `ClearChangeMasks` on each to trigger OPC UA data change notifications. Alarm events are reported in this same lock scope.

```csharp
lock (Lock)
{
    foreach (var (variable, dataValue) in updates)
    {
        variable.Value = dataValue.Value;
        variable.StatusCode = dataValue.StatusCode;
        variable.Timestamp = dataValue.SourceTimestamp;
        variable.ClearChangeMasks(SystemContext, false);
    }
}
```

### ClearChangeMasks

`ClearChangeMasks(SystemContext, false)` is the mechanism that notifies the OPC UA framework that a node's value has changed. The framework uses change masks internally to track which nodes have pending notifications for active monitored items. Calling this method causes the server to enqueue data change notifications for all clients monitoring that node. The `false` parameter indicates that child nodes should not be recursively cleared.

## Transferred Subscription Restoration

When OPC UA sessions are transferred (e.g., a client reconnects and resumes a previous session), the framework calls `OnMonitoredItemsTransferred`. The override collects the tag references for all transferred items and calls `RestoreTransferredSubscriptions`.
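The coalescing hand-off at the heart of the dispatch queue described above can be sketched independently of MXAccess and the OPC UA stack. Everything here is illustrative (the names, and an `int` payload standing in for `Vtq`); it demonstrates only the latest-value-wins drain, not the real `DispatchLoop`:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// Sketch of the producer/dispatcher hand-off: the producer side models the STA
// callback, the drain models phase 1 of the dispatch loop.
public static class CoalescingDispatch
{
    public static readonly ConcurrentDictionary<string, int> Pending = new();
    public static readonly AutoResetEvent Signal = new(false);

    // Producer side: store the newest value and wake the dispatcher.
    // Overwriting the same key coalesces rapid updates for one tag.
    public static void OnDataChange(string address, int value)
    {
        Pending[address] = value;
        Signal.Set();
    }

    // Dispatcher side: wait (with a timeout, as the real loop does), then drain
    // a snapshot of pending changes. Applying them under a lock would be phase 2.
    public static Dictionary<string, int> DrainOnce(TimeSpan timeout)
    {
        Signal.WaitOne(timeout);
        var batch = new Dictionary<string, int>();
        foreach (var key in Pending.Keys)
        {
            if (Pending.TryRemove(key, out var value))
                batch[key] = value;
        }
        return batch;
    }
}
```

Two rapid updates for the same tag yield a single batch entry carrying the latest value, which is exactly why the design reduces `ClearChangeMasks` calls under load.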
`RestoreTransferredSubscriptions` groups the tag references to count how many transferred items reference each tag. For each tag that does not already have an active ref-count entry, it opens a new MXAccess subscription and sets the initial reference count:

```csharp
internal void RestoreTransferredSubscriptions(IEnumerable<string> fullTagReferences)
{
    var transferredCounts = fullTagReferences
        .GroupBy(tagRef => tagRef, StringComparer.OrdinalIgnoreCase)
        .ToDictionary(g => g.Key, g => g.Count(), StringComparer.OrdinalIgnoreCase);

    foreach (var kvp in transferredCounts)
    {
        lock (_lock)
        {
            if (_subscriptionRefCounts.ContainsKey(kvp.Key))
                continue;
            _subscriptionRefCounts[kvp.Key] = kvp.Value;
        }

        _ = _mxAccessClient.SubscribeAsync(kvp.Key, (_, _) => { });
    }
}
```

Tags that already have in-memory bookkeeping are skipped to avoid double-counting when the transfer happens within the same server process (normal in-process session migration).
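The grouping step can be verified in isolation. The `CountTransferred` helper below is a hypothetical extraction of the LINQ expression above, included to show that differently cased references to the same Galaxy tag collapse into one count:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper mirroring the GroupBy/ToDictionary step of
// RestoreTransferredSubscriptions: transferred tag references collapse to a
// per-tag count, compared case-insensitively.
public static class TransferCounting
{
    public static Dictionary<string, int> CountTransferred(IEnumerable<string> fullTagReferences) =>
        fullTagReferences
            .GroupBy(tagRef => tagRef, StringComparer.OrdinalIgnoreCase)
            .ToDictionary(g => g.Key, g => g.Count(), StringComparer.OrdinalIgnoreCase);
}
```

Case-insensitive comparison matters here because OPC UA clients may write the same tag reference with different casing; without it, one Galaxy tag could end up with two ref-count entries and two MXAccess subscriptions.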