S7 integration — AbCip/Modbus already have real-simulator integration suites; S7 had zero wire-level coverage despite being a Tier-A driver (all unit tests mocked IS7Client). Picked python-snap7's `snap7.server.Server` over the raw Snap7 C library because `pip install` beats per-OS binary-pin maintenance, the package ships a Python __main__ shim that structurally mirrors our existing pymodbus serve.ps1 + *.json pattern, and the python-snap7 project is actively maintained.

New project `tests/ZB.MOM.WW.OtOpcUa.Driver.S7.IntegrationTests/` with four moving parts:

- (a) `Snap7ServerFixture` — collection-scoped TCP probe on `localhost:1102` that sets `SkipReason` when the simulator's not running, matching the `ModbusSimulatorFixture` shape one directory over (same S7_SIM_ENDPOINT env var override convention for pointing at a real S7 CPU on port 102);
- (b) `PythonSnap7/` — `serve.ps1` wrapper + `server.py` shim + `s7_1500.json` seed profile + `README.md` documenting install / run / known limitations;
- (c) `S7_1500/S7_1500Profile.cs` — driver-side `S7DriverOptions` whose tag addresses map 1:1 to the JSON profile's seed offsets (DB1.DBW0 u16, DB1.DBW10 i16, DB1.DBD20 i32, DB1.DBD30 f32, DB1.DBX50.3 bool, DB1.DBW100 scratch);
- (d) `S7_1500SmokeTests` — three tests proving typed reads + write-then-read round-trips work through real S7netplus + real ISO-on-TCP + a real snap7 server.

Picked port 1102 as the default instead of the S7-standard 102 because 102 is privileged on Linux + triggers a Windows Firewall prompt. S7netplus 0.20 has a 5-arg `Plc(CpuType, host, port, rack, slot)` ctor that lets the driver honour `S7DriverOptions.Port`, but the existing driver code called the 4-arg overload + silently hardcoded 102. One-line driver fix (S7Driver.cs:87) threads `_options.Port` through — the S7 unit suite (58/58) still passes unchanged, because every unit test uses a fake IS7Client that never sees the real ctor.
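The 1:1 tag-address mapping leans on standard S7 absolute-address syntax (DBW = 2-byte word, DBD = 4-byte double word, DBX = single bit). A minimal Python parser sketch of that syntax — purely illustrative, not repo code; the real driver-side parsing is S7netplus's:

```python
import re

# Parse S7 absolute DB addresses like DB1.DBW0, DB1.DBD20, DB1.DBX50.3.
# Illustrative only: the production mapping lives in S7_1500Profile.cs.
_ADDR = re.compile(
    r"^DB(?P<db>\d+)\.DB(?P<width>[XBWD])(?P<offset>\d+)(?:\.(?P<bit>[0-7]))?$"
)

def parse_s7_address(addr: str):
    """Return (db_number, width_code, byte_offset, bit) for an S7 address."""
    m = _ADDR.match(addr)
    if not m:
        raise ValueError(f"not an S7 DB address: {addr!r}")
    bit = int(m.group("bit")) if m.group("bit") is not None else None
    # A bit index is mandatory for DBX and forbidden for B/W/D widths.
    if (m.group("width") == "X") != (bit is not None):
        raise ValueError("bit index is required for DBX and only for DBX")
    return int(m.group("db")), m.group("width"), int(m.group("offset")), bit
```

With the profile's seeds, `parse_s7_address("DB1.DBX50.3")` decomposes to DB 1, width `X`, byte 50, bit 3.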
The server seed-type matrix in `server.py` covers u8 / i8 / u16 / i16 / u32 / i32 / f32 / bool-with-bit / ascii (S7 STRING with max_len header). `register_area` takes the SrvArea enum value, not the string name — a 15-minute debug after the first test run caught that; documented inline.

Per-driver test-fixture coverage docs — eight new files in `docs/drivers/` laying out what each driver's harness actually covers vs. what's trusted from field deployments. The pattern mirrors the AbServer-Test-Fixture.md doc that shipped earlier in this arc: TL;DR → What the fixture is → What it actually covers → What it does NOT cover → When-to-trust table → Follow-up candidates → Key files.

Ugly truth the survey made visible: Galaxy + Modbus + (now) S7 + AB CIP have real wire-level coverage; AB Legacy / TwinCAT / FOCAS / OpcUaClient are still contract-only, because their libraries ship no fake and no simulator is available — no open-source simulator exists for AB Legacy PCCC, no public simulator exists for FOCAS, the vendor SDK has no in-process fake (TwinCAT/ADS.NET) — or the test wiring just hasn't happened yet (OpcUaClient could trivially loop back against this repo's own server — flagged as #215). Each doc names the specific follow-up route: Snap7 server for S7 (done), TwinCAT 3 developer-runtime auto-restart for TwinCAT, Tier-C out-of-process Host for FOCAS, lab rigs for AB Legacy + hardware-gated bits of the others. `docs/drivers/README.md` gains a coverage-map section linking all eight. Tracking tasks #215-#222 filed for each PR-able follow-up.

Build clean (driver + integration project + docs); S7.Tests 58/58 (unchanged); S7.IntegrationTests 3/3 (new, verified end-to-end against a live python-snap7 server: `driver_reads_seeded_u16_through_real_S7comm`, `driver_reads_seeded_typed_batch`, `driver_write_then_read_round_trip_on_scratch_word`). Next fixture follow-up is #215 (OpcUaClient loopback against our own server) — highest ROI of the remaining set, zero external deps.
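Under the hood, the seed matrix boils down to big-endian packing into a DB byte image (S7 is big-endian throughout). A stdlib-only sketch — illustrative, not an excerpt from `server.py`; offsets and values follow the profile seeds named above:

```python
import struct

def seed_word(buf: bytearray, offset: int, value: int, signed: bool = False) -> None:
    """Pack a 16-bit value big-endian at DBW `offset`."""
    buf[offset:offset + 2] = struct.pack(">h" if signed else ">H", value)

def seed_dword(buf: bytearray, offset: int, value, as_float: bool = False) -> None:
    """Pack a 32-bit int or IEEE-754 float big-endian at DBD `offset`."""
    buf[offset:offset + 4] = struct.pack(">f" if as_float else ">i", value)

def seed_bit(buf: bytearray, offset: int, bit: int, flag: bool) -> None:
    """Set or clear bit `bit` of the byte at DBX `offset`.`bit`."""
    buf[offset] = (buf[offset] | (1 << bit)) if flag else (buf[offset] & ~(1 << bit))

def seed_string(buf: bytearray, offset: int, text: str, max_len: int = 254) -> None:
    """S7 STRING layout: [max_len][current_len][chars...]."""
    data = text.encode("ascii")
    buf[offset] = max_len
    buf[offset + 1] = len(data)
    buf[offset + 2:offset + 2 + len(data)] = data

# Mirror the profile seeds into a DB1 image (seed values here are examples):
db1 = bytearray(256)
seed_word(db1, 0, 1234)                    # DB1.DBW0   u16
seed_word(db1, 10, -42, signed=True)       # DB1.DBW10  i16
seed_dword(db1, 20, -100000)               # DB1.DBD20  i32
seed_dword(db1, 30, 3.14, as_float=True)   # DB1.DBD30  f32
seed_bit(db1, 50, 3, True)                 # DB1.DBX50.3 bool
seed_string(db1, 60, "OK")                 # ascii seed (S7 STRING layout)
```

The resulting buffer is what the server registers as the DB area; the driver's typed reads then unpack the same offsets with the same endianness.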
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Galaxy test fixture

Coverage map + gap inventory for the Galaxy driver — out-of-process Host (net48 x86 MXAccess COM) + Proxy (net10) + Shared protocol.

**TL;DR: Galaxy has the richest test harness in the fleet** — real Host subprocess spawn, real ZB SQL queries, IPC parity checks against the v1 LmxProxy reference, + live-smoke tests when the MXAccess runtime is actually installed. Gaps are live-plant + failover-shaped: the E2E suite covers the representative ~50-tag deployment but not large-site discovery stress, real Rockwell/Siemens PLC enumeration through MXAccess, or ZB SQL Always-On replica failover.

## What the fixture is

Multi-project test topology:

- **E2E parity** — `tests/ZB.MOM.WW.OtOpcUa.Driver.Galaxy.E2E/ParityFixture.cs` spawns the production `OtOpcUa.Driver.Galaxy.Host.exe` as a subprocess, opens the named-pipe IPC, connects `GalaxyProxyDriver` + runs hierarchy / stability parity tests against both.
- **Host.Tests** — `tests/ZB.MOM.WW.OtOpcUa.Driver.Galaxy.Host.Tests/` — direct Host process testing (18+ test classes covering alarm discovery, AVEVA prerequisite checks, IPC dispatcher, alarm tracker, probe manager, historian cluster/quality/wiring, history read, OPC UA attribute mapping, subscription lifecycle, reconnect, multi-host proxy, ADS address routing, expression evaluation) + `GalaxyRepositoryLiveSmokeTests` that hit real ZB SQL.
- **Proxy.Tests** — `GalaxyProxyDriver` client contract tests.
- **Shared.Tests** — shared protocol + address model.
- **TestSupport** — test helpers reused across the above.

## How tests skip

- **E2E parity**: `ParityFixture.SkipIfUnavailable()` runs at class init and checks Windows-only, non-admin user, ZB SQL reachable on `localhost:1433`, Host EXE built in the expected `bin/` folder. Any miss → tests skip.
- **Live-smoke** (`GalaxyRepositoryLiveSmokeTests`): `Assert.Skip` when ZB unreachable. A `per project_galaxy_host_installed` memory on this repo's dev box notes the MXAccess runtime is installed + pipe ACL denies Admins, so live tests must run from a non-elevated shell.
- **Unit** tests (Shared, Proxy contract, most Host.Tests) have no skip — they run anywhere.
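
The availability gates above all reduce to a cheap reachability probe before the suite runs. A stdlib Python sketch of the idea — hypothetical helper, analogous to the C# fixtures' probe, not repo code:

```python
import socket
from typing import Optional

def probe_endpoint(host: str, port: int, timeout: float = 0.5) -> Optional[str]:
    """Return None when a TCP connect succeeds, else a skip reason.

    Mirrors the fixture pattern: collection init probes the endpoint once
    and records a skip reason instead of failing the whole suite.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return None
    except OSError as exc:
        return f"endpoint {host}:{port} not reachable ({exc}); skipping suite"
```

An environment-variable override can feed `host`/`port`, matching the endpoint-override convention the fixtures already use; everything else about this sketch is an assumption.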

## What it actually covers

### E2E parity suite

- `HierarchyParityTests` — Host address-space hierarchy vs v1 LmxProxy reference (same ZB, same Galaxy, same shape)
- `StabilityFindingsRegressionTests` — probe subscription failure handling + host-status mutation guard from the v1 stability findings backlog

### Host.Tests (representative)

- Alarm discovery → subsystem setup
- AVEVA prerequisite checks (runtime installed, platform deployed, etc.)
- IPC dispatcher — request/response routing over the named pipe
- Alarm tracker state machine
- Probe manager — per-runtime probe subscription + reconnect
- Historian cluster / quality / wiring — AVEVA Historian integration
- OPC UA attribute mapping
- Subscription lifecycle + reconnect
- Multi-host proxy routing
- ADS address routing + expression evaluation (Galaxy's legacy expression language)

### Live-smoke

- `GalaxyRepositoryLiveSmokeTests` — real SQL against the ZB database; verifies the ZB schema + `LocalPlatform` scope filter + change-detection query shape match production.

### Capability surfaces hit

All of them: `IDriver`, `IReadable`, `IWritable`, `ITagDiscovery`, `ISubscribable`, `IHostConnectivityProbe`, `IPerCallHostResolver`, `IAlarmSource`, `IHistoryProvider`. Galaxy is the only driver where every interface sees both contract + real-integration coverage.

## What it does NOT cover

### 1. MXAccess COM by default

The E2E parity suite backs subscriptions via the DB-only path; MXAccess COM integration opts in via a separate live-smoke. So "does the MXAccess STA pump correctly handle real Wonderware runtime events" is exercised only when the operator runs live smoke on a machine with MXAccess installed.

### 2. Real Rockwell / Siemens PLC enumeration

Galaxy runtime talks to PLCs through MXAccess (Device Integration Objects). The CI parity suite uses a representative ~50-tag deployment; large sites (1000+ tag hierarchies, multi-Galaxy replication, deeply-nested templates) are not stressed.

### 3. ZB SQL Always-On failover

Live-smoke hits a single SQL instance. Real production ZB often runs on Always-On availability groups; replica failover behavior is not tested.

### 4. Galaxy replication / backup-restore

Galaxy supports backup + partial replication across platforms — these rewrite the ZB schema in ways that change the contained_name vs tag_name mapping. Not exercised.

### 5. Historian failover

AVEVA Historian can be clustered. The historian cluster / quality tests verify the cluster-config query; they don't exercise actual failover (primary dies → secondary takes over mid-HistoryRead).

### 6. AVEVA runtime version matrix

The MXAccess COM contract varies subtly across System Platform 2017 / 2020 / 2023. The live-smoke runs against whatever version is installed on the dev box; CI has no AVEVA installed at all (licensing + footprint).

## When to trust the Galaxy suite, when to reach for a live plant

| Question | E2E parity | Live-smoke | Real plant |
| --- | --- | --- | --- |
| "Does Host spawn + IPC round-trip work?" | yes | yes | yes |
| "Does the ZB schema query match production shape?" | partial | yes | yes |
| "Does MXAccess COM handle runtime reconnect correctly?" | no | yes | yes |
| "Does the driver scale to 1000+ tags on one Galaxy?" | no | partial | yes (required) |
| "Does historian failover mid-read return a clean error?" | no | no | yes (required) |
| "Does System Platform 2023's MXAccess differ from 2020?" | no | partial | yes (required) |
| "Does ZB Always-On replica failover preserve generation?" | no | no | yes (required) |

## Follow-up candidates

1. **System Platform 2023 live-smoke matrix** — set up a second dev box running SP2023; run the same live-smoke against both to catch COM-contract drift early.
2. **Synthetic large-site fixture** — script a ZB populator that creates a 1000-Equipment / 20000-tag hierarchy, then run the parity suite against it. Catches O(N) → O(N²) discovery regressions.
3. **Historian failover scripted test** — with a two-node AVEVA Historian cluster, tear down the primary mid-HistoryRead + verify the driver's failover behavior + error surface.
4. **ZB Always-On CI** — SQL Server 2022 on Linux supports Always-On; could stand up a two-replica group for replica-failover coverage.

This is already the best-tested driver; the remaining work is site-scale + production-topology coverage, not capability coverage.
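
Candidate 2's populator could start from nothing more than a deterministic name generator; a minimal sketch, with a hypothetical naming scheme (a real populator would write these into the ZB schema, not return a flat list):

```python
def synthetic_hierarchy(equipment: int = 1000, tags_per_equipment: int = 20):
    """Generate Equipment_NNNN.Tag_NN names for a synthetic large site.

    Illustrative only: names and counts are placeholders for whatever
    the real ZB populator script would create.
    """
    return [
        f"Equipment_{e:04d}.Tag_{t:02d}"
        for e in range(1, equipment + 1)
        for t in range(1, tags_per_equipment + 1)
    ]
```

Deterministic names make the O(N) vs O(N²) discovery regression measurable: rerun the parity suite at 1x / 10x / 20x scale and compare wall-clock growth.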
## Key fixture / config files

- `tests/ZB.MOM.WW.OtOpcUa.Driver.Galaxy.E2E/ParityFixture.cs` — E2E fixture that spawns Host + connects Proxy
- `tests/ZB.MOM.WW.OtOpcUa.Driver.Galaxy.Host.Tests/GalaxyRepositoryLiveSmokeTests.cs` — live ZB smoke with `Assert.Skip` gate
- `tests/ZB.MOM.WW.OtOpcUa.Driver.Galaxy.TestSupport/` — shared helpers
- `docs/drivers/Galaxy.md` — COM bridge + STA pump + IPC architecture
- `docs/drivers/Galaxy-Repository.md` — ZB SQL reader + `LocalPlatform` scope filter + change detection
- `docs/v2/aveva-system-platform-io-research.md` — MXAccess + Wonderware background