Files
lmxopcua/tests/ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient.IntegrationTests/OpcPlcFixture.cs
Joseph Doherty c985c50a96 OpcUaClient integration fixture — opc-plc in Docker closes the wire-level gap (#215). Closes task #215.

The OpcUaClient driver had the richest capability matrix in the fleet (reads/writes/subscribe/alarms/history across 11 unit-test classes) + zero wire-level coverage; every test mocked the Session surface. opc-plc is Microsoft Industrial IoT's OPC UA PLC simulator — already containerized, already on MCR, pinned to 2.14.10 here.

Wins vs the loopback-against-our-own-server option we'd originally scoped: (a) independent cert chain + user-token handling catches interop bugs loopback can't, because both endpoints would share our own cert store; (b) a pinned image tag fixes the test surface in a way our evolving server wouldn't; (c) the --alm flag opens the door to real IAlarmSource coverage later without building a custom FakeAlarmDriver. Loss vs loopback: both use the OPCFoundation.NetStandard stack internally, so bugs common to that stack don't surface — addressed by a follow-up to add open62541/open62541 as a second independent-stack image (tracked).

Docker is the fixture launcher — no PowerShell/Python wrapper like Modbus/pymodbus or S7/python-snap7, because opc-plc ships containerized. Docker/docker-compose.yml pins 2.14.10 + maps port 50000 + passes command flags --pn=50000 --ut --aa --alm; the healthcheck TCP-probes 50000 so docker ps surfaces ready state.

Fixture OpcPlcFixture follows the same shape as Snap7ServerFixture + ModbusSimulatorFixture: collection-scoped, parses OPCUA_SIM_ENDPOINT (default opc.tcp://localhost:50000) into host + port, 2-second TCP probe at init, SkipReason records the failure for Assert.Skip. Forced IPv4 on the probe socket for the same reason those two fixtures do — .NET's dual-stack "localhost" resolves IPv6 ::1 first + hangs the full connect timeout when the target binds 0.0.0.0 (IPv4).
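The dual-stack hazard behind the forced-IPv4 probe can be sketched standalone (a minimal illustration of the idea, not the fixture's exact code):

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

// "localhost" may resolve to both ::1 and 127.0.0.1. A naive connect can try
// ::1 first and burn the whole timeout when the target binds only 0.0.0.0
// (IPv4), so the probe filters to InterNetwork before dialing.
var v4 = Dns.GetHostAddresses("localhost")
    .FirstOrDefault(a => a.AddressFamily == AddressFamily.InterNetwork)
    ?? IPAddress.Loopback; // fall back to 127.0.0.1 if only ::1 resolved

// An IPv4-only socket guarantees the OS can't silently switch back to IPv6.
using var probe = new TcpClient(AddressFamily.InterNetwork);

Console.WriteLine(v4); // 127.0.0.1 either way: resolved or loopback fallback
```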
OpcPlcProfile holds well-known node identifiers opc-plc exposes (ns=3;s=StepUp, FastUInt1, RandomSignedInt32, AlternatingBoolean) + builds OpcUaClientDriverOptions with SecurityPolicy.None + AutoAcceptCertificates=true, since opc-plc regenerates its server cert on every container spin-up + there's no meaningful chain to validate against in CI.

Three smoke tests covering what the unit suite couldn't reach:
(1) Client_connects_and_reads_StepUp_node_through_real_OPC_UA_stack — full Secure Channel + Session + Read on ns=3;s=StepUp (counter that ticks every 1 s);
(2) Client_reads_batch_of_varied_types_from_live_simulator — batch Read of UInt32 / Int32 / Boolean to prove typed Variant decoding, with an explicit ShouldBeOfType<bool> assertion on AlternatingBoolean to catch the common "variant gets stringified" regression;
(3) Client_subscribe_receives_StepUp_data_changes_from_live_server — real MonitoredItem subscription on FastUInt1 (100 ms cadence) with a SemaphoreSlim gate + 3 s deadline on the first OnDataChange fire, tolerating container warm-up.

Driver ran end-to-end against a live 2.14.10 container: all 3 pass; unit suite 78/78 unchanged. Container lifecycle verified clean (compose up → tests → compose down), no leaked state.

Docker/README.md documents install (Docker Desktop already on the dev box per Phase 1 decision #134), run (compose up / compose up -d / compose down), endpoint override (OPCUA_SIM_ENDPOINT), what opc-plc advertises with the current command flags, what's tunable via compose-file tweaks (--daa for username auth tests; --fn/--fr/--ft for subscription-stress nodes), and the known limitation that opc-plc shares the OPCFoundation stack with our driver. OpcUaClient-Test-Fixture.md updated — TL;DR flipped from "there is no integration fixture" to the new reality; "What it actually covers" gains an Integration section listing the three smoke tests.
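The subscription test's gate-plus-deadline shape boils down to the following sketch (names and the simulated callback are illustrative, not the suite's actual code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// SemaphoreSlim(0, 1) acts as a one-shot gate: the first OnDataChange release
// opens it; any later fire would overflow the count, so it's swallowed.
using var firstFire = new SemaphoreSlim(0, 1);

void OnDataChange(object? value)
{
    try { firstFire.Release(); }
    catch (SemaphoreFullException) { /* not the first fire; ignore */ }
}

// Stand-in for the MonitoredItem callback arriving from the server's thread.
_ = Task.Run(async () =>
{
    await Task.Delay(100);
    OnDataChange(7);
    OnDataChange(8); // second fire exercises the overflow guard
});

// The 3 s deadline tolerates container warm-up; false means the test fails.
bool fired = await firstFire.WaitAsync(TimeSpan.FromSeconds(3));
Console.WriteLine(fired); // True
```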
Follow-up the doc flags: add open62541/open62541 as a second image for fully-independent-stack interop coverage; once #219 (server-side IAlarmSource/IHistoryProvider integration tests) lands, re-run the client-side suite against opc-plc's --alm nodes to close the alarm gap from the client side too.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 11:43:20 -04:00


using System.Net.Sockets;
namespace ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient.IntegrationTests;
/// <summary>
/// Reachability probe for an <c>opc-plc</c> simulator (Microsoft Industrial IoT's
/// OPC UA PLC from <c>mcr.microsoft.com/iotedge/opc-plc</c>) or any real OPC UA
/// server the <c>OPCUA_SIM_ENDPOINT</c> env var points at. Parses
/// <c>OPCUA_SIM_ENDPOINT</c> (default <c>opc.tcp://localhost:50000</c>),
/// TCP-connects to the resolved host:port at collection init, and records a
/// <see cref="SkipReason"/> on failure. Tests call <c>Assert.Skip</c> on that, so
/// <c>dotnet test</c> stays green when Docker isn't running the simulator — mirrors the
/// <see cref="ModbusSimulatorFixture"/> / <c>Snap7ServerFixture</c> pattern.
/// </summary>
/// <remarks>
/// <para>
/// <b>Why opc-plc over loopback against our own server</b> — (1) independent
/// cert chain + user-token handling catches interop bugs loopback can't;
/// (2) built-in alarm ConditionType + history simulation gives
/// <see cref="Core.Abstractions.IAlarmSource"/> +
/// <see cref="Core.Abstractions.IHistoryProvider"/> coverage without a custom
/// driver fake; (3) pinned image tag fixes the test surface in a way our own
/// evolving server wouldn't. Follow-up: add <c>open62541/open62541</c> as a
/// second image once this lands, for fully-independent-stack interop.
/// </para>
/// <para>
/// Endpoint URL contract: parser strips the <c>opc.tcp://</c> scheme + resolves
/// host + port for the liveness probe only. The real test session always
/// dials the full endpoint URL via <see cref="OpcUaClientDriverOptions.EndpointUrl"/>
/// so cert negotiation + security-policy selection run end-to-end.
/// </para>
/// </remarks>
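/// <example>
/// Consumer sketch (test-class name and body are illustrative, not the suite's
/// actual tests); the skip contract is: probe failed, so call <c>Assert.Skip</c>:
/// <code>
/// [Xunit.Collection(OpcPlcCollection.Name)]
/// public sealed class OpcPlcSmokeTests
/// {
///     private readonly OpcPlcFixture _fx;
///     public OpcPlcSmokeTests(OpcPlcFixture fx) => _fx = fx;
///
///     [Xunit.Fact]
///     public void Reads_a_node_when_the_simulator_is_up()
///     {
///         if (_fx.SkipReason is not null) Assert.Skip(_fx.SkipReason);
///         // ...dial _fx.EndpointUrl with the driver under test...
///     }
/// }
/// </code>
/// </example>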
public sealed class OpcPlcFixture : IAsyncDisposable
{
    private const string DefaultEndpoint = "opc.tcp://localhost:50000";
    private const string EndpointEnvVar = "OPCUA_SIM_ENDPOINT";

    /// <summary>Full <c>opc.tcp://host:port</c> URL the driver session should connect to.</summary>
    public string EndpointUrl { get; }

    /// <summary>Host parsed from <see cref="EndpointUrl"/>; used only by the liveness probe.</summary>
    public string Host { get; }

    /// <summary>Port parsed from <see cref="EndpointUrl"/>; used only by the liveness probe.</summary>
    public int Port { get; }

    /// <summary>Non-null when the probe failed; tests pass this to <c>Assert.Skip</c>.</summary>
    public string? SkipReason { get; }

    public OpcPlcFixture()
    {
        EndpointUrl = Environment.GetEnvironmentVariable(EndpointEnvVar) ?? DefaultEndpoint;
        (Host, Port) = ParseHostPort(EndpointUrl);

        try
        {
            // IPv4-only socket: dual-stack "localhost" resolves ::1 first, which hangs
            // the full connect timeout when the container binds 0.0.0.0 (IPv4 only).
            using var client = new TcpClient(AddressFamily.InterNetwork);
            var task = client.ConnectAsync(
                System.Net.Dns.GetHostAddresses(Host)
                    .FirstOrDefault(a => a.AddressFamily == AddressFamily.InterNetwork)
                    ?? System.Net.IPAddress.Loopback,
                Port);
            if (!task.Wait(TimeSpan.FromSeconds(2)) || !client.Connected)
            {
                SkipReason = $"opc-plc simulator at {Host}:{Port} did not accept a TCP connection within 2s. " +
                             $"Start it (`docker compose -f Docker/docker-compose.yml up`) or override {EndpointEnvVar}.";
            }
        }
        catch (Exception ex)
        {
            SkipReason = $"opc-plc simulator at {Host}:{Port} unreachable: {ex.GetType().Name}: {ex.Message}. " +
                         $"Start it (`docker compose -f Docker/docker-compose.yml up`) or override {EndpointEnvVar}.";
        }
    }

    /// <summary>
    /// Parse "opc.tcp://host:port[/path]" → (host, port). Defaults to port 4840
    /// (the OPC UA standard port) when the URL omits one, but opc-plc's default is
    /// 50000, so <see cref="DefaultEndpoint"/> carries it explicitly.
    /// </summary>
    private static (string Host, int Port) ParseHostPort(string endpointUrl)
    {
        const string scheme = "opc.tcp://";
        var body = endpointUrl.StartsWith(scheme, StringComparison.OrdinalIgnoreCase)
            ? endpointUrl[scheme.Length..]
            : endpointUrl;

        // Drop any trailing path segment, then split host:port on the first colon.
        // (Bracketed IPv6 literals are not handled; the probe is IPv4-only anyway.)
        var slash = body.IndexOf('/');
        if (slash >= 0) body = body[..slash];
        var colon = body.IndexOf(':');
        if (colon < 0) return (body, 4840);
        var host = body[..colon];
        return int.TryParse(body[(colon + 1)..], out var p) ? (host, p) : (host, 4840);
    }

    public ValueTask DisposeAsync() => ValueTask.CompletedTask;
}
// Attribute arguments bind in the enclosing (namespace) scope, not inside the
// decorated type, so the const must be qualified here.
[Xunit.CollectionDefinition(OpcPlcCollection.Name)]
public sealed class OpcPlcCollection : Xunit.ICollectionFixture<OpcPlcFixture>
{
    public const string Name = "OpcPlc";
}