Each driver's integration-test simulator now comes up with docker compose up.

Modbus — new tests/ZB.MOM.WW.OtOpcUa.Driver.Modbus.IntegrationTests/Docker/ with a Dockerfile (python:3.12-slim-bookworm + pymodbus[simulator]==3.13.0) and a docker-compose.yml carrying four compose profiles (standard / dl205 / mitsubishi / s7_1500), backed by the existing profile JSONs copied under Docker/profiles/ as canonical. The native fallback in Pymodbus/ is retained with the same JSON set (symlink-equivalent — manual re-sync when profiles change, noted in both READMEs). Port 5020 is unchanged, so MODBUS_SIM_ENDPOINT and ModbusSimulatorFixture work without code change. Dropped the --no_http CLI arg that the old serve.ps1 and the compose draft passed — pymodbus 3.13 doesn't recognize it, and the simulator's HTTP UI just binds inside the container where nothing maps it out, so it costs nothing.

S7 — new tests/ZB.MOM.WW.OtOpcUa.Driver.S7.IntegrationTests/Docker/ with a Dockerfile (python:3.12-slim-bookworm + python-snap7>=2.0) and a docker-compose.yml with one s7_1500 compose profile; it copies the existing server.py shim and s7_1500.json seed profile and runs python -u server.py ... --port 1102. The native fallback in PythonSnap7/ is retained. Port 1102 is unchanged.

AB CIP — the hardest, because ab_server is a source-only C tool in libplctag's src/tools/ab_server/. The new tests/ZB.MOM.WW.OtOpcUa.Driver.AbCip.IntegrationTests/Docker/ Dockerfile is multi-stage: the build stage (debian:bookworm-slim + build-essential + cmake) clones libplctag at a pinned tag and runs cmake --build build --target ab_server; the runtime stage (debian:bookworm-slim) copies just the binary from /src/build/bin_dist/ab_server. docker-compose.yml ships four compose profiles (controllogix / compactlogix / micro800 / guardlogix) with per-family ab_server CLI args matching AbServerProfile.cs. AbServerFixture is updated: it tries a TCP probe on 127.0.0.1:44818 first (the Docker path) and spawns the native binary only as a fallback when no listener is there. The AB_SERVER_ENDPOINT env var is supported for pointing at a real PLC. The AbServerFact/Theory attributes now defer to IsServerAvailable(), which accepts any of: a live listener on 44818, AB_SERVER_ENDPOINT set, or the binary on PATH. Three CLI-compat fixes to ab_server's argument expectations were required that the existing native profile never caught, because it was never actually run in CI: --plc is case-sensitive (ControlLogix, not controllogix); CIP tags need [size] bracket notation (DINT[1], not bare DINT); and ControlLogix additionally requires --path=1,0. The compose files carry the corrected flags; the existing native-path AbServerProfile.cs was never invoked in practice, so we don't rewrite it here. Micro800 now uses the --plc=Micro800 mode rather than falling back to ControlLogix emulation — ab_server does have the dedicated mode; the old Notes saying otherwise were wrong.

Updated docs: the three fixture coverage docs (Modbus-Test-Fixture.md, S7-Test-Fixture.md, AbServer-Test-Fixture.md) flip their "What the fixture is" section from native-only to Docker-primary-with-native-fallback. dev-environment.md §Resource Inventory replaces the old ambiguous "Docker Desktop + ab_server native" mix with four per-driver rows (each listing the image, compose file, compose profiles, port, and credentials) plus a new "Docker fixtures — quick reference" subsection giving the one-line docker compose -f <…> --profile <…> up for each driver, the env-var override names, and the native-fallback install recipes. The drivers/README.md coverage map table is updated — the Modbus / AB CIP / S7 entries now read "Dockerized …", consistent with OpcUaClient's line.
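For reference, the compose-profile mechanism these files lean on looks roughly like this — a hypothetical excerpt of the Modbus docker-compose.yml, where the service name and volume layout are illustrative rather than the committed values:

```yaml
# Hypothetical excerpt; service name and mounts are illustrative.
services:
  modbus-dl205:
    build: .                    # Dockerfile: python:3.12-slim-bookworm + pymodbus[simulator]==3.13.0
    profiles: ["dl205"]         # started only when --profile dl205 is passed
    volumes:
      - ./profiles:/profiles:ro # canonical profile JSONs under Docker/profiles/
    ports:
      - "5020:5020"             # unchanged, so MODBUS_SIM_ENDPOINT defaults keep working
```

Selecting a profile is then `docker compose -f Docker/docker-compose.yml --profile dl205 up`; services whose `profiles:` list doesn't match stay down.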
Verified end-to-end against live containers: Modbus DL205 smoke 1/1, S7 3/3, AB CIP ControlLogix 4/4 (all family theory rows). Container lifecycle clean (up/test/down, no leaked state). Every fixture keeps its skip-when-absent probe + env-var endpoint override so dotnet test on a fresh clone without Docker running still gets a green run.
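The lifecycle that was exercised reduces to the usual three-step loop, sketched here with the Modbus paths from above; the compose file location and profile name are the ones this section describes, not verified invocations:

```sh
# Sketch of the up -> test -> down loop (Modbus shown; S7 and AB CIP are analogous).
COMPOSE=tests/ZB.MOM.WW.OtOpcUa.Driver.Modbus.IntegrationTests/Docker/docker-compose.yml
docker compose -f "$COMPOSE" --profile dl205 up -d
dotnet test tests/ZB.MOM.WW.OtOpcUa.Driver.Modbus.IntegrationTests
docker compose -f "$COMPOSE" --profile dl205 down
```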
docker ps surfaces ready state.

OpcPlcFixture follows the same shape as Snap7ServerFixture and ModbusSimulatorFixture: collection-scoped, it parses OPCUA_SIM_ENDPOINT (default opc.tcp://localhost:50000) into host + port, runs a 2-second TCP probe at init, and records any failure in SkipReason for Assert.Skip. The probe socket is forced to IPv4 for the same reason those two fixtures do it — .NET's dual-stack "localhost" resolves to IPv6 ::1 first and hangs for the full connect timeout when the target binds 0.0.0.0 (IPv4 only). OpcPlcProfile holds the well-known node identifiers opc-plc exposes (ns=3;s=StepUp, FastUInt1, RandomSignedInt32, AlternatingBoolean) and builds OpcUaClientDriverOptions with SecurityPolicy.None + AutoAcceptCertificates=true, since opc-plc regenerates its server cert on every container spin-up and there's no meaningful chain to validate against in CI.

Three smoke tests cover what the unit suite couldn't reach:

1. Client_connects_and_reads_StepUp_node_through_real_OPC_UA_stack — full Secure Channel + Session + Read on ns=3;s=StepUp (a counter that ticks every 1 s).
2. Client_reads_batch_of_varied_types_from_live_simulator — batch Read of UInt32 / Int32 / Boolean to prove typed Variant decoding, with an explicit ShouldBeOfType<bool> assertion on AlternatingBoolean to catch the common "variant gets stringified" regression.
3. Client_subscribe_receives_StepUp_data_changes_from_live_server — a real MonitoredItem subscription on FastUInt1 (100 ms cadence) with a SemaphoreSlim gate and a 3 s deadline on the first OnDataChange fire, tolerating container warm-up.

The driver ran end-to-end against a live 2.14.10 container: all 3 pass, and the unit suite is 78/78 unchanged. Container lifecycle verified clean (compose up → tests → compose down), no leaked state.

Docker/README.md documents install (Docker Desktop is already on the dev box per Phase 1 decision #134), run (compose up / compose up -d / compose down), the endpoint override (OPCUA_SIM_ENDPOINT), what opc-plc advertises with the current command flags, what's tunable via compose-file tweaks (--daa for username-auth tests; --fn/--fr/--ft for subscription-stress nodes), and the known limitation that opc-plc shares the OPCFoundation stack with our driver. OpcUaClient-Test-Fixture.md is updated — its TL;DR flips from "there is no integration fixture" to the new reality, and "What it actually covers" gains an Integration section listing the three smoke tests. Follow-ups the doc flags: add open62541/open62541 as a second image for fully-independent-stack interop coverage, and once #219 (server-side IAlarmSource/IHistoryProvider integration tests) lands, re-run the client-side suite against opc-plc's --alm nodes to close the alarm gap from the client side too.
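The IPv4-forced probe these fixtures share is worth seeing concretely. A minimal sketch, assuming the fixture shape described above — the helper name and return convention (null means reachable, a message becomes SkipReason) are illustrative, not the actual OpcPlcFixture code:

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

// Illustrative sketch of the skip-when-absent probe described above; not the real fixture code.
static class SimulatorProbe
{
    // Returns null when a listener answers, else a message the fixture would store as SkipReason.
    // AddressFamily.InterNetwork pins the socket to IPv4: dual-stack "localhost" otherwise
    // resolves to ::1 first and burns the whole timeout when the server binds 0.0.0.0.
    public static async Task<string?> ProbeAsync(string host, int port, TimeSpan timeout)
    {
        try
        {
            using var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            Task connect = socket.ConnectAsync(host, port);
            if (await Task.WhenAny(connect, Task.Delay(timeout)) != connect)
                return $"No listener on {host}:{port} within {timeout.TotalSeconds:0.#} s.";
            await connect; // rethrows if the connect itself faulted
            return null;
        }
        catch (SocketException ex)
        {
            return $"Probe to {host}:{port} failed: {ex.SocketErrorCode}.";
        }
    }
}
```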
# LmxOpcUa
OPC UA server and cross-platform client tools for AVEVA System Platform (Wonderware) Galaxy. The server exposes Galaxy tags via MXAccess as an OPC UA address space. The client stack provides a shared library, CLI tool, and Avalonia desktop application for browsing, reading/writing, subscriptions, alarms, and historical data.
## Architecture

```
                         OPC UA Clients
                  (CLI, Desktop UI, 3rd-party)
                               |
                               v
+-----------------+   +------------------+   +-----------------+
| Galaxy Repo DB  |-->|  OPC UA Server   |<->| MXAccess Client |
|  (SQL Server)   |   | (address space)  |   |   (STA + COM)   |
+-----------------+   +------------------+   +-----------------+
                           |         |
            +--------------+---+  +--+----------------+
            | Status Dashboard |  | Historian Runtime |
            |   (HTTP/JSON)    |  |   (SQL Server)    |
            +------------------+  +-------------------+
```
### Contained Name vs Tag Name

| Browse Path (contained names)               | Runtime Reference (tag name)    |
|---------------------------------------------|---------------------------------|
| TestMachine_001/DelmiaReceiver/DownloadPath | DelmiaReceiver_001.DownloadPath |
| TestMachine_001/MESReceiver/MoveInBatchID   | MESReceiver_001.MoveInBatchID   |
## Server
The OPC UA server runs on .NET Framework 4.8 (x86) and bridges the Galaxy runtime to OPC UA clients.
### Server Prerequisites
- .NET Framework 4.8 SDK
- AVEVA System Platform with ArchestrA Framework installed
- Galaxy repository database (SQL Server, Windows Auth)
- MXAccess COM registered (LMXProxy.LMXProxyServer)
- Wonderware Historian (optional, for historical data access)
- Windows (required for COM interop and MXAccess)
### Build and Run Server

```sh
dotnet restore ZB.MOM.WW.LmxOpcUa.slnx
dotnet build src/ZB.MOM.WW.LmxOpcUa.Host
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Host
```
The server starts on opc.tcp://localhost:4840/LmxOpcUa with the None security profile by default. Configure Security.Profiles in appsettings.json to enable Basic256Sha256-Sign or Basic256Sha256-SignAndEncrypt for transport security. See Security Guide.
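As a sketch of what that configuration might look like — the Security.Profiles path comes from the text above, but the exact key names and accepted values are assumptions; the Security Guide is authoritative:

```json
{
  "Security": {
    "Profiles": [
      "Basic256Sha256-Sign",
      "Basic256Sha256-SignAndEncrypt"
    ]
  }
}
```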
### Install as Windows Service

```sh
cd src/ZB.MOM.WW.LmxOpcUa.Host/bin/Debug/net48
ZB.MOM.WW.LmxOpcUa.Host.exe install
ZB.MOM.WW.LmxOpcUa.Host.exe start
```
Service logon requirement: The service must run under a Windows account that has access to the AVEVA Galaxy and Historian. The default LocalSystem account can connect to MXAccess and SQL Server but cannot authenticate with the Historian SDK (HCAP). Configure the service to "Log on as" a domain or local user that is a recognized ArchestrA platform user. This can be set in services.msc or during install with ZB.MOM.WW.LmxOpcUa.Host.exe install -username DOMAIN\user -password ***.
### Run Server Tests

```sh
dotnet test tests/ZB.MOM.WW.LmxOpcUa.Tests
dotnet test tests/ZB.MOM.WW.LmxOpcUa.IntegrationTests
```
## Client Stack
The client stack is cross-platform (.NET 10) and consists of three projects sharing a common IOpcUaClientService abstraction. No AVEVA software or COM is required — the clients connect to any OPC UA server.
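The interface itself isn't reproduced here, but from the CLI's command surface one can sketch its rough shape — member names, parameter types, and return shapes below are assumptions for illustration, not the actual IOpcUaClientService definition:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical shape inferred from the CLI commands below; the real interface will differ.
public interface IOpcUaClientService : IAsyncDisposable
{
    Task ConnectAsync(string endpointUrl, CancellationToken ct = default);
    Task<IReadOnlyList<string>> BrowseAsync(string nodeId, int depth = 1, CancellationToken ct = default);
    Task<object?> ReadAsync(string nodeId, CancellationToken ct = default);
    Task WriteAsync(string nodeId, object value, CancellationToken ct = default);

    // Dispose the returned handle to unsubscribe; the callback receives each data change.
    Task<IAsyncDisposable> SubscribeAsync(string nodeId, TimeSpan samplingInterval,
        Action<object?> onDataChange, CancellationToken ct = default);

    Task<IReadOnlyList<(DateTime Timestamp, object? Value)>> HistoryReadAsync(
        string nodeId, DateTime start, DateTime end, CancellationToken ct = default);
}
```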
### Client Prerequisites
- .NET 10 SDK
- No platform-specific dependencies (runs on Windows, macOS, Linux)
### Build All Clients

```sh
dotnet build src/ZB.MOM.WW.LmxOpcUa.Client.Shared
dotnet build src/ZB.MOM.WW.LmxOpcUa.Client.CLI
dotnet build src/ZB.MOM.WW.LmxOpcUa.Client.UI
```
### Run Client Tests

```sh
dotnet test tests/ZB.MOM.WW.LmxOpcUa.Client.Shared.Tests
dotnet test tests/ZB.MOM.WW.LmxOpcUa.Client.CLI.Tests
dotnet test tests/ZB.MOM.WW.LmxOpcUa.Client.UI.Tests
```
### Client CLI

```sh
# Connect
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI -- connect -u opc.tcp://localhost:4840/LmxOpcUa

# Browse Galaxy hierarchy
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI -- browse -u opc.tcp://localhost:4840/LmxOpcUa -n "ns=3;s=ZB" -r -d 5

# Read a tag
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI -- read -u opc.tcp://localhost:4840/LmxOpcUa -n "ns=3;s=TestMachine_001.MachineID"

# Write a tag
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI -- write -u opc.tcp://localhost:4840/LmxOpcUa -n "ns=3;s=TestChildObject.TestString" -v "Hello"

# Subscribe to changes
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI -- subscribe -u opc.tcp://localhost:4840/LmxOpcUa -n "ns=3;s=TestChildObject.TestInt" -i 500

# Read historical data
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI -- historyread -u opc.tcp://localhost:4840/LmxOpcUa -n "ns=3;s=TestMachine_001.TestHistoryValue" --start "2026-03-25" --end "2026-03-30"

# Subscribe to alarm events
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI -- alarms -u opc.tcp://localhost:4840/LmxOpcUa -n "ns=3;s=TestMachine_001" --refresh

# Query redundancy state
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI -- redundancy -u opc.tcp://localhost:4840/LmxOpcUa
```
### Client UI

```sh
dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.UI
```
The desktop application provides browse tree, subscriptions, alarm monitoring, history reads, and write dialogs. See Client UI Documentation for details.
## Project Structure

```
src/
  ZB.MOM.WW.LmxOpcUa.Host/            OPC UA server (.NET Framework 4.8, x86)
    Configuration/                    Config binding and validation
    Domain/                           Interfaces, DTOs, enums, mappers
    Historian/                        Wonderware Historian data source
    Metrics/                          Performance tracking (rolling P95)
    MxAccess/                         STA thread, COM interop, subscriptions
    GalaxyRepository/                 SQL queries, change detection
    OpcUa/                            Server, node manager, address space, alarms, diff
    Status/                           HTTP dashboard, health checks
  ZB.MOM.WW.LmxOpcUa.Client.Shared/   Shared OPC UA client library (.NET 10)
  ZB.MOM.WW.LmxOpcUa.Client.CLI/      Command-line client (.NET 10)
  ZB.MOM.WW.LmxOpcUa.Client.UI/       Avalonia desktop client (.NET 10)
tests/
  ZB.MOM.WW.LmxOpcUa.Tests/                 Server unit + integration tests
  ZB.MOM.WW.LmxOpcUa.IntegrationTests/      Server integration tests (live DB)
  ZB.MOM.WW.LmxOpcUa.Client.Shared.Tests/   Shared library tests
  ZB.MOM.WW.LmxOpcUa.Client.CLI.Tests/      CLI command tests
  ZB.MOM.WW.LmxOpcUa.Client.UI.Tests/       UI ViewModel + headless tests
gr/                                   Galaxy repository docs, SQL queries, schema
```
## Documentation
### Server
| Component | Description |
|---|---|
| OPC UA Server | Endpoint, sessions, security policy, server lifecycle |
| Address Space | Hierarchy nodes, variable nodes, primitive grouping, NodeId scheme |
| Galaxy Repository | SQL queries, deployed package chain, change detection |
| MXAccess Bridge | STA thread, COM interop, subscriptions, reconnection |
| Data Type Mapping | Galaxy to OPC UA types, arrays, security classification |
| Read/Write Operations | Value reads, writes, access level enforcement, array element writes |
| Subscriptions | Ref-counted MXAccess subscriptions, data change dispatch |
| Alarm Tracking | AlarmConditionState nodes, InAlarm monitoring, event reporting |
| Historical Data Access | Historian data source, HistoryReadRaw, HistoryReadProcessed |
| Incremental Sync | Diff computation, subtree teardown/rebuild, subscription preservation |
| Configuration | appsettings.json binding, feature flags, validation |
| Status Dashboard | HTTP server, health checks, metrics reporting |
| Service Hosting | TopShelf, startup/shutdown sequence, error handling |
| Security | Transport security profiles, certificate trust, production hardening |
| Redundancy | Non-transparent warm/hot redundancy, ServiceLevel, paired deployment |
### Client
| Component | Description |
|---|---|
| Client CLI | Connect, browse, read, write, subscribe, historyread, alarms, redundancy commands |
| Client UI | Avalonia desktop client: browse, subscribe, alarms, history, write values |
### Reference
- Galaxy Repository Queries — SQL queries for hierarchy, attributes, and change detection
- Data Type Mapping — Galaxy to OPC UA type mapping with security classification
## License
Internal use only.