Add multi-client subscription sync and concurrency integration tests
9 tests verifying the server handles multiple simultaneous OPC UA clients.

Subscription sync:
- 3 clients subscribe to same tag, all receive data changes
- Client disconnect doesn't affect other clients' subscriptions
- Client unsubscribe doesn't affect other clients' subscriptions
- Clients subscribing to different tags receive only their own data

Concurrency:
- 5 clients browse simultaneously, all get identical results
- 5 clients browse different nodes concurrently, all succeed
- 4 clients browse+subscribe simultaneously, no interference
- 3 clients subscribe+browse concurrently, no deadlock (timeout-guarded)
- Rapid connect/disconnect cycles (10x), server stays stable

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
41
codereviews/review.md
Normal file
@@ -0,0 +1,41 @@
# Full Review: `src`

Overall verdict: **patch is incorrect**

I reviewed the current implementation under `src/` as a code review, not just the Git diff. The main issues are correctness bugs in the OPC UA subscription path and MXAccess recovery/write handling.

## Findings

### [P1] OPC UA monitored items never trigger MXAccess subscriptions

File: `src/ZB.MOM.WW.LmxOpcUa.Host/OpcUa/LmxNodeManager.cs:327-345`

`LmxNodeManager` has a `SubscribeTag` helper, but nothing in the node manager calls it when OPC UA clients create monitored items. There is also no monitored-item lifecycle override in this class. The result is that browsing or subscribing from an OPC UA client does not call `_mxAccessClient.SubscribeAsync(...)`, so live MXAccess data changes are never started for those tags. Clients can still do synchronous reads, but any client expecting pushed value updates will see stale `BadWaitingForInitialData` or last-cached values instead of runtime updates.
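A minimal sketch of the missing hook, assuming `LmxNodeManager` derives from the SDK's `CustomNodeManager2` (the override names come from the OPC Foundation .NET server stack) and that `SubscribeTag`, plus a hypothetical `UnsubscribeTag` counterpart, can be driven from the tag's variable node:

```csharp
// Sketch only. SubscribeTag exists per the review; UnsubscribeTag and the
// BaseDataVariableState cast are assumptions about this codebase.
protected override void OnMonitoredItemCreated(
    ServerSystemContext context, NodeHandle handle, MonitoredItem monitoredItem)
{
    base.OnMonitoredItemCreated(context, handle, monitoredItem);

    // Start the MXAccess subscription so data changes are pushed to this item.
    if (handle.Node is BaseDataVariableState variable)
        SubscribeTag(variable); // should end up in _mxAccessClient.SubscribeAsync(...)
}

protected override void OnMonitoredItemDeleted(
    ServerSystemContext context, NodeHandle handle, MonitoredItem monitoredItem)
{
    base.OnMonitoredItemDeleted(context, handle, monitoredItem);

    // Stop the MXAccess subscription once no monitored item needs the tag.
    if (handle.Node is BaseDataVariableState variable)
        UnsubscribeTag(variable);
}
```

Per-tag reference counting would still be needed so that one client's unsubscribe does not tear down live data for another client monitoring the same tag.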
### [P1] Write timeouts are reported as successful writes

File: `src/ZB.MOM.WW.LmxOpcUa.Host/MxAccess/MxAccessClient.ReadWrite.cs:101-105`

On write timeout, the cancellation callback completes the pending task with `true`. That means a missing `OnWriteComplete` callback is treated as a successful write, and `LmxNodeManager.Write(...)` will return `ServiceResult.Good` upstream. In any timeout scenario where MXAccess never acknowledged the write, OPC UA clients will be told the write succeeded even though the runtime value may never have changed.
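A sketch of the corrected timeout path, assuming the pending write is tracked as a `TaskCompletionSource<bool>` (field and method names here are illustrative, not from the codebase):

```csharp
// Sketch only: names are illustrative. The point is that a timeout must
// complete the pending write as a failure, never as success.
private void ArmWriteTimeout(TaskCompletionSource<bool> pendingWrite,
                             CancellationToken timeoutToken)
{
    timeoutToken.Register(() =>
        // TrySetException avoids racing an OnWriteComplete callback that
        // arrives just as the timeout fires; whichever wins decides the outcome.
        pendingWrite.TrySetException(new TimeoutException(
            "MXAccess did not acknowledge the write before the timeout.")));
}
```

`LmxNodeManager.Write(...)` can then map the `TimeoutException` to `StatusCodes.BadTimeout` instead of returning `ServiceResult.Good`.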
### [P1] Auto-reconnect stops permanently after one failed reconnect attempt

Files:

- `src/ZB.MOM.WW.LmxOpcUa.Host/MxAccess/MxAccessClient.Connection.cs:101-110`
- `src/ZB.MOM.WW.LmxOpcUa.Host/MxAccess/MxAccessClient.Monitor.cs:38-45`

The monitor only retries when `_state == ConnectionState.Disconnected`, but both `ConnectAsync` and `ReconnectAsync` move the client to `ConnectionState.Error` on a failed attempt. After the first reconnect failure, the monitor loop no longer matches its reconnect condition, so temporary outages become permanent until the whole service is restarted.
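One way to fix it is to treat `Error` as retryable in the monitor loop. A sketch, with `_state` and `ConnectionState` taken from the review; the polling delay and `ReconnectAsync` signature are assumptions:

```csharp
// Sketch only: the delay and ReconnectAsync signature are assumptions.
private async Task MonitorLoopAsync(CancellationToken ct)
{
    while (!ct.IsCancellationRequested)
    {
        // Error must be retryable, otherwise one failed reconnect is permanent.
        if (_state is ConnectionState.Disconnected or ConnectionState.Error)
        {
            try { await ReconnectAsync(ct); }
            catch { /* remain in Error; the next iteration retries */ }
        }
        await Task.Delay(TimeSpan.FromSeconds(5), ct);
    }
}
```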
### [P2] Address-space construction depends on parent-before-child ordering

File: `src/ZB.MOM.WW.LmxOpcUa.Host/OpcUa/LmxNodeManager.cs:98-104`

`BuildAddressSpace` attaches each object to `rootFolder` whenever its parent is not already in `nodeMap`. That only works if the input hierarchy is topologically ordered with every parent appearing before every descendant. The method does not enforce that ordering itself, so any unsorted hierarchy list will silently build the wrong OPC UA tree by promoting children to the root level.
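A multi-pass attach loop removes the ordering requirement. A sketch, with `GalaxyObject`, `AttachTo`, and the `nodeMap` shape assumed (and `AttachTo` assumed to register the new node in `nodeMap`):

```csharp
// Sketch only: GalaxyObject, AttachTo and nodeMap are assumed shapes.
// Repeated passes attach a child only after its parent exists; objects are
// promoted to the root only when their parent genuinely never appears.
var remaining = new List<GalaxyObject>(hierarchy);
bool progress = true;
while (remaining.Count > 0 && progress)
{
    progress = false;
    for (int i = remaining.Count - 1; i >= 0; i--)
    {
        var obj = remaining[i];
        if (obj.ParentId is null)
            AttachTo(rootFolder, obj);                    // true root object
        else if (nodeMap.TryGetValue(obj.ParentId, out var parent))
            AttachTo(parent, obj);                        // parent already built
        else
            continue;                                     // parent not built yet
        remaining.RemoveAt(i);
        progress = true;
    }
}
foreach (var orphan in remaining)
    AttachTo(rootFolder, orphan); // worth logging: the parent never appeared
```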
### [P3] Startup always performs an unnecessary second rebuild

Files:

- `src/ZB.MOM.WW.LmxOpcUa.Host/OpcUaService.cs:142-182`
- `src/ZB.MOM.WW.LmxOpcUa.Host/GalaxyRepository/ChangeDetectionService.cs:46-60`

`OpcUaService.Start()` already reads the Galaxy hierarchy/attributes and builds the initial address space before change detection starts. `ChangeDetectionService` then unconditionally fires `OnGalaxyChanged` on its first poll, even when the deploy timestamp has not changed, causing an immediate second full DB fetch and address-space rebuild on every startup. That doubles startup load and rebuild latency for no functional gain.
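A sketch of first-poll suppression in `ChangeDetectionService`: seed the baseline timestamp on the first poll instead of firing the event (field and helper names here are illustrative):

```csharp
// Sketch only: field and helper names are illustrative.
private DateTime? _lastDeployTimestamp;

private void Poll()
{
    var current = ReadDeployTimestamp(); // however the service reads it today

    if (_lastDeployTimestamp is null)
    {
        _lastDeployTimestamp = current;  // first poll: record baseline, no event
        return;
    }

    if (current != _lastDeployTimestamp)
    {
        _lastDeployTimestamp = current;
        OnGalaxyChanged?.Invoke(this, EventArgs.Empty);
    }
}
```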
## Notes

- This was a static review of the `src` tree.
- I attempted `dotnet test ZB.MOM.WW.LmxOpcUa.slnx`, but the run timed out after about 124 seconds, so I did not rely on a full green test pass for validation.
387
tests/ZB.MOM.WW.LmxOpcUa.Tests/Integration/MultiClientTests.cs
Normal file
@@ -0,0 +1,387 @@
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Opc.Ua;
using Opc.Ua.Client;
using Shouldly;
using Xunit;
using ZB.MOM.WW.LmxOpcUa.Host.Domain;
using ZB.MOM.WW.LmxOpcUa.Tests.Helpers;

namespace ZB.MOM.WW.LmxOpcUa.Tests.Integration
{
    /// <summary>
    /// Integration tests verifying multi-client subscription sync and concurrent operations.
    /// </summary>
    public class MultiClientTests
    {
        // ── Subscription Sync ─────────────────────────────────────────────

        [Fact]
        public async Task MultipleClients_SubscribeToSameTag_AllReceiveDataChanges()
        {
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                var clients = new List<OpcUaTestClient>();
                var notifications = new ConcurrentDictionary<int, List<MonitoredItemNotification>>();
                var subscriptions = new List<Subscription>();

                for (int i = 0; i < 3; i++)
                {
                    var client = new OpcUaTestClient();
                    await client.ConnectAsync(fixture.EndpointUrl);
                    clients.Add(client);

                    var nodeId = client.MakeNodeId("TestMachine_001.MachineID");
                    var (sub, item) = await client.SubscribeAsync(nodeId, intervalMs: 100);
                    subscriptions.Add(sub);

                    var clientIndex = i;
                    notifications[clientIndex] = new List<MonitoredItemNotification>();
                    item.Notification += (_, e) =>
                    {
                        if (e.NotificationValue is MonitoredItemNotification n)
                            notifications[clientIndex].Add(n);
                    };
                }

                await Task.Delay(500); // let subscriptions settle

                // Simulate data change
                fixture.MxProxy!.SimulateDataChangeByAddress("TestMachine_001.MachineID", "MACHINE_42", 192);
                await Task.Delay(1000); // let publish cycle deliver

                // All 3 clients should have received the notification
                for (int i = 0; i < 3; i++)
                {
                    notifications[i].Count.ShouldBeGreaterThan(0, $"Client {i} did not receive notification");
                }

                foreach (var sub in subscriptions) await sub.DeleteAsync(true);
                foreach (var c in clients) c.Dispose();
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }

        [Fact]
        public async Task Client_Disconnects_OtherClientsStillReceive()
        {
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                var client1 = new OpcUaTestClient();
                var client2 = new OpcUaTestClient();
                var client3 = new OpcUaTestClient();
                await client1.ConnectAsync(fixture.EndpointUrl);
                await client2.ConnectAsync(fixture.EndpointUrl);
                await client3.ConnectAsync(fixture.EndpointUrl);

                var notifications1 = new ConcurrentBag<MonitoredItemNotification>();
                var notifications3 = new ConcurrentBag<MonitoredItemNotification>();

                var (sub1, item1) = await client1.SubscribeAsync(client1.MakeNodeId("TestMachine_001.MachineID"), 100);
                var (sub2, _) = await client2.SubscribeAsync(client2.MakeNodeId("TestMachine_001.MachineID"), 100);
                var (sub3, item3) = await client3.SubscribeAsync(client3.MakeNodeId("TestMachine_001.MachineID"), 100);

                item1.Notification += (_, e) => { if (e.NotificationValue is MonitoredItemNotification n) notifications1.Add(n); };
                item3.Notification += (_, e) => { if (e.NotificationValue is MonitoredItemNotification n) notifications3.Add(n); };

                await Task.Delay(500);

                // Disconnect client 2
                client2.Dispose();

                await Task.Delay(500); // let server process disconnect

                // Simulate data change — should not crash, clients 1+3 should still receive
                fixture.MxProxy!.SimulateDataChangeByAddress("TestMachine_001.MachineID", "AFTER_DISCONNECT", 192);
                await Task.Delay(1000);

                notifications1.Count.ShouldBeGreaterThan(0, "Client 1 should still receive after client 2 disconnected");
                notifications3.Count.ShouldBeGreaterThan(0, "Client 3 should still receive after client 2 disconnected");

                await sub1.DeleteAsync(true);
                await sub3.DeleteAsync(true);
                client1.Dispose();
                client3.Dispose();
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }

        [Fact]
        public async Task Client_Unsubscribes_OtherClientsStillReceive()
        {
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                var client1 = new OpcUaTestClient();
                var client2 = new OpcUaTestClient();
                await client1.ConnectAsync(fixture.EndpointUrl);
                await client2.ConnectAsync(fixture.EndpointUrl);

                var notifications2 = new ConcurrentBag<MonitoredItemNotification>();

                var (sub1, _) = await client1.SubscribeAsync(client1.MakeNodeId("TestMachine_001.MachineID"), 100);
                var (sub2, item2) = await client2.SubscribeAsync(client2.MakeNodeId("TestMachine_001.MachineID"), 100);
                item2.Notification += (_, e) => { if (e.NotificationValue is MonitoredItemNotification n) notifications2.Add(n); };

                await Task.Delay(500);

                // Client 1 unsubscribes
                await sub1.DeleteAsync(true);
                await Task.Delay(500);

                // Simulate data change — client 2 should still receive
                fixture.MxProxy!.SimulateDataChangeByAddress("TestMachine_001.MachineID", "AFTER_UNSUB", 192);
                await Task.Delay(1000);

                notifications2.Count.ShouldBeGreaterThan(0, "Client 2 should still receive after client 1 unsubscribed");

                await sub2.DeleteAsync(true);
                client1.Dispose();
                client2.Dispose();
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }

        [Fact]
        public async Task MultipleClients_SubscribeToDifferentTags_EachGetsOwnData()
        {
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                var client1 = new OpcUaTestClient();
                var client2 = new OpcUaTestClient();
                await client1.ConnectAsync(fixture.EndpointUrl);
                await client2.ConnectAsync(fixture.EndpointUrl);

                var notifications1 = new ConcurrentBag<MonitoredItemNotification>();
                var notifications2 = new ConcurrentBag<MonitoredItemNotification>();

                var (sub1, item1) = await client1.SubscribeAsync(client1.MakeNodeId("TestMachine_001.MachineID"), 100);
                var (sub2, item2) = await client2.SubscribeAsync(client2.MakeNodeId("DelmiaReceiver_001.DownloadPath"), 100);

                item1.Notification += (_, e) => { if (e.NotificationValue is MonitoredItemNotification n) notifications1.Add(n); };
                item2.Notification += (_, e) => { if (e.NotificationValue is MonitoredItemNotification n) notifications2.Add(n); };

                await Task.Delay(500);

                // Only change MachineID
                fixture.MxProxy!.SimulateDataChangeByAddress("TestMachine_001.MachineID", "CHANGED", 192);
                await Task.Delay(1000);

                notifications1.Count.ShouldBeGreaterThan(0, "Client 1 should receive MachineID change");
                // Client 2 subscribed to DownloadPath, should NOT receive MachineID change
                // (it may have received initial BadWaitingForInitialData, but not the "CHANGED" value)
                var client2HasMachineIdValue = notifications2.Any(n =>
                    n.Value.Value is string s && s == "CHANGED");
                client2HasMachineIdValue.ShouldBe(false, "Client 2 should not receive MachineID data");

                await sub1.DeleteAsync(true);
                await sub2.DeleteAsync(true);
                client1.Dispose();
                client2.Dispose();
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }

        // ── Concurrent Operation Tests ────────────────────────────────────

        [Fact]
        public async Task ConcurrentBrowseFromMultipleClients_AllSucceed()
        {
            // Tests concurrent browse operations from 5 clients — browses don't go through MxAccess
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                var clients = new List<OpcUaTestClient>();
                for (int i = 0; i < 5; i++)
                {
                    var c = new OpcUaTestClient();
                    await c.ConnectAsync(fixture.EndpointUrl);
                    clients.Add(c);
                }

                var nodes = new[]
                {
                    "ZB", "TestMachine_001", "DelmiaReceiver_001",
                    "MESReceiver_001", "TestMachine_001"
                };

                // All 5 clients browse simultaneously
                var browseTasks = clients.Select((c, i) =>
                    c.BrowseAsync(c.MakeNodeId(nodes[i]))).ToArray();

                var results = await Task.WhenAll(browseTasks);

                results.Length.ShouldBe(5);
                foreach (var r in results)
                    r.ShouldNotBeEmpty();

                foreach (var c in clients) c.Dispose();
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }

        [Fact]
        public async Task ConcurrentBrowse_AllReturnSameResults()
        {
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                var clients = new List<OpcUaTestClient>();
                for (int i = 0; i < 5; i++)
                {
                    var c = new OpcUaTestClient();
                    await c.ConnectAsync(fixture.EndpointUrl);
                    clients.Add(c);
                }

                // All browse TestMachine_001 simultaneously
                var browseTasks = clients.Select(c =>
                    c.BrowseAsync(c.MakeNodeId("TestMachine_001"))).ToArray();

                var results = await Task.WhenAll(browseTasks);

                // All should get identical child lists
                var firstResult = results[0].Select(r => r.Name).OrderBy(n => n).ToList();
                for (int i = 1; i < results.Length; i++)
                {
                    var thisResult = results[i].Select(r => r.Name).OrderBy(n => n).ToList();
                    thisResult.ShouldBe(firstResult, $"Client {i} got different browse results");
                }

                foreach (var c in clients) c.Dispose();
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }

        [Fact]
        public async Task ConcurrentBrowseAndSubscribe_NoInterference()
        {
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                var clients = new List<OpcUaTestClient>();
                for (int i = 0; i < 4; i++)
                {
                    var c = new OpcUaTestClient();
                    await c.ConnectAsync(fixture.EndpointUrl);
                    clients.Add(c);
                }

                // 2 browse + 2 subscribe simultaneously
                var tasks = new Task[]
                {
                    clients[0].BrowseAsync(clients[0].MakeNodeId("TestMachine_001")),
                    clients[1].BrowseAsync(clients[1].MakeNodeId("ZB")),
                    clients[2].SubscribeAsync(clients[2].MakeNodeId("TestMachine_001.MachineID"), 200),
                    clients[3].SubscribeAsync(clients[3].MakeNodeId("DelmiaReceiver_001.DownloadPath"), 200)
                };

                await Task.WhenAll(tasks);
                // All should complete without errors

                foreach (var c in clients) c.Dispose();
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }

        [Fact]
        public async Task ConcurrentSubscribeAndRead_NoDeadlock()
        {
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                var client1 = new OpcUaTestClient();
                var client2 = new OpcUaTestClient();
                var client3 = new OpcUaTestClient();
                await client1.ConnectAsync(fixture.EndpointUrl);
                await client2.ConnectAsync(fixture.EndpointUrl);
                await client3.ConnectAsync(fixture.EndpointUrl);

                // All three operate simultaneously — should not deadlock
                var timeout = Task.Delay(TimeSpan.FromSeconds(15));
                var operations = Task.WhenAll(
                    client1.SubscribeAsync(client1.MakeNodeId("TestMachine_001.MachineID"), 200)
                        .ContinueWith(t => (object)t.Result),
                    Task.Run(() => (object)client2.Read(client2.MakeNodeId("DelmiaReceiver_001.DownloadPath"))),
                    client3.BrowseAsync(client3.MakeNodeId("TestMachine_001"))
                        .ContinueWith(t => (object)t.Result)
                );

                var completed = await Task.WhenAny(operations, timeout);
                completed.ShouldBe(operations, "Operations should complete before timeout (possible deadlock)");

                client1.Dispose();
                client2.Dispose();
                client3.Dispose();
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }

        [Fact]
        public async Task RapidConnectDisconnect_ServerStaysStable()
        {
            var fixture = OpcUaServerFixture.WithFakes();
            await fixture.InitializeAsync();
            try
            {
                // Rapidly connect, browse, disconnect — 10 iterations
                for (int i = 0; i < 10; i++)
                {
                    using var client = new OpcUaTestClient();
                    await client.ConnectAsync(fixture.EndpointUrl);
                    var children = await client.BrowseAsync(client.MakeNodeId("ZB"));
                    children.ShouldNotBeEmpty();
                }

                // After all that churn, server should still be responsive
                using var finalClient = new OpcUaTestClient();
                await finalClient.ConnectAsync(fixture.EndpointUrl);
                var finalChildren = await finalClient.BrowseAsync(finalClient.MakeNodeId("TestMachine_001"));
                finalChildren.ShouldContain(c => c.Name == "MachineID");
                finalChildren.ShouldContain(c => c.Name == "DelmiaReceiver");
            }
            finally
            {
                await fixture.DisposeAsync();
            }
        }
    }
}