Long-running soak harness exercising the in-process GalaxyDriver
against a live mxaccessgw. Subscribes a configurable tag count
(default 50_000), holds the subscription for a configurable duration
(default 24h), polls the EventPump's three counters every minute, and
asserts:
- events.received continues to grow (gw stream isn't stuck)
- events.dropped stays under a configurable percent ceiling
(default 0.5%)
- process working-set doesn't grow >1 GB above baseline (leak guard)
Always skipped unless the operator opts in via OTOPCUA_SOAK_RUN=1.
Tag count, duration, and drop ceiling are env-overridable
(OTOPCUA_SOAK_TAGS / OTOPCUA_SOAK_MINUTES / OTOPCUA_SOAK_DROP_PCT) so
a smoke run can compress the scenario for CI gating.
Per-minute progress is logged as a CSV-style line to stdout so an
operator can grep the test runner output mid-run. PR 6.5 consumes the
data this scenario emits to tune MxGatewayClientOptions defaults.
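The three per-minute assertions above can be sketched language-neutrally as follows (a minimal Python sketch, not the harness itself; the env-var names and defaults come from this description, while `check_minute`, its parameters, and the CSV column order are illustrative):

```python
import os

# Env-overridable knobs, defaults as documented above.
TAGS = int(os.environ.get("OTOPCUA_SOAK_TAGS", "50000"))
MINUTES = int(os.environ.get("OTOPCUA_SOAK_MINUTES", str(24 * 60)))
DROP_PCT = float(os.environ.get("OTOPCUA_SOAK_DROP_PCT", "0.5"))

GB = 1024 ** 3

def check_minute(received, prev_received, dropped, working_set, baseline_ws):
    """One per-minute pass over the EventPump counters.

    Returns a CSV-style progress line on success, raises AssertionError
    on any violated invariant.
    """
    # 1. The gw stream must not be stuck: events.received keeps growing.
    assert received > prev_received, "events.received stopped growing"
    # 2. events.dropped stays under the configured percent ceiling.
    drop_pct = 100.0 * dropped / max(received, 1)
    assert drop_pct < DROP_PCT, f"drop rate {drop_pct:.3f}% over ceiling"
    # 3. Leak guard: working set must not grow >1 GB above baseline.
    assert working_set - baseline_ws <= GB, "working set grew >1 GB"
    return f"{received},{dropped},{drop_pct:.3f},{working_set}"
```

A smoke run for CI gating would just export small values for the three env vars before invoking the runner.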
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes Phase 5 scenario coverage. Both
GalaxyRuntimeProbeManager (legacy) and PerPlatformProbeWatcher (PR 4.7)
must surface the same per-host status stream:
- GetHostStatuses_emits_same_host_set_after_Discover — drives Discover
on both backends, waits 1.5s for the probe watcher's first push, then
asserts the platform-host set agrees (transport-entry names differ
by design — legacy uses the Galaxy.Host process identity, mxgw uses
MxAccess.ClientName, so we strip those before comparing).
- GetHostStatuses_state_per_platform_matches_across_backends — for
every overlapping platform host, the HostState must be identical.
- Reinitialize_returns_both_backends_to_Healthy — drives
ReinitializeAsync on each backend, asserts DriverState.Healthy
afterwards, then re-reads a 3-tag sample to confirm the runtime
surface is back. Recovery latency isn't pinned tightly (legacy = pipe
+ MxAccess COM client, mxgw = re-Register gw session — different
cadences are expected).
- Health_state_diverges_only_when_one_backend_is_in_recovery — soft
pin that both backends sit in Healthy or Degraded after init.
A tighter fault-injection scenario (toxiproxy-style) is the 5.7
follow-up, to land when the parity rig grows that capability.
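The strip-then-compare step the first two scenarios perform can be sketched as below (a hedged Python sketch; `comparable_host_set`, `assert_host_parity`, and the dict shapes are illustrative, not the rig's API):

```python
def comparable_host_set(statuses, transport_names):
    """Strip the backend-specific transport entries, keep platform hosts.

    `statuses` maps host name -> state as a backend's GetHostStatuses
    surfaces it; `transport_names` are the entries that differ by design
    (Galaxy.Host process identity vs MxAccess.ClientName) and are removed
    before the sets are compared.
    """
    return {h for h in statuses if h not in transport_names}

def assert_host_parity(legacy, mxgw, legacy_transport, mxgw_transport):
    lhs = comparable_host_set(legacy, legacy_transport)
    rhs = comparable_host_set(mxgw, mxgw_transport)
    assert lhs == rhs, f"host sets diverge: {lhs ^ rhs}"
    # For every overlapping platform host, HostState must be identical.
    for host in lhs:
        assert legacy[host] == mxgw[host], f"state mismatch on {host}"
```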
Galaxy history reads route through the server-owned HistoryRouter
(Phase 1, PR 1.3) — neither Galaxy backend implements IHistoryProvider
directly. Parity surface here is the routing decision:
- Discover_emits_same_historized_attribute_set_for_both_backends — the
symmetric difference of the two backends' IsHistorized attribute sets
must be empty; that set is what HistoryRouter consumes when deciding
whether to route a HistoryRead to the Wonderware historian sidecar.
- Neither_Galaxy_backend_implements_IHistoryProvider_directly — pins
the architectural decision so a regression that re-introduces a
per-driver history path fires.
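The routing decision under test reduces to a set check, sketched here in Python (the function names and the "reject" label are illustrative; only the empty-symmetric-difference requirement and the sidecar routing come from this description):

```python
def historized_parity(legacy_attrs, mxgw_attrs):
    """Symmetric difference of the two IsHistorized attribute sets.

    HistoryRouter keys its routing decision on this set, so any
    asymmetry would make routing depend on which backend discovered
    the attribute. An empty return value means parity holds.
    """
    return set(legacy_attrs) ^ set(mxgw_attrs)

def route_history_read(attr, historized):
    # Server-owned routing: historized attributes go to the Wonderware
    # historian sidecar; neither Galaxy backend serves history itself.
    return "historian-sidecar" if attr in historized else "reject"
```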
- Discover_emits_same_AlarmConditionInfo_per_alarm_attribute — both
backends produce the same alarm-condition source-node-id set, with
matching SourceName / InitialSeverity / InAlarmRef / DescAttrNameRef
per condition. Skips when the rig's Galaxy carries no alarm-marked
attributes.
- Discover_marks_at_least_one_alarm_attribute_when_dev_Galaxy_has_alarms
— IsAlarm-marked variable count parity, soft-pinned (count must
match across backends but doesn't have to be non-zero).
Alarm-event persistence (the SQLite store-and-forward → Wonderware
historian event store path) is exercised in PR 5.6 against the
historian sidecar.
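The per-condition comparison can be sketched as follows (a Python sketch; the field names mirror the ones listed above, but the record type and `alarm_parity` helper are illustrative, not the suite's types):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AlarmConditionInfo:
    # Fields follow the description above; types are illustrative.
    source_name: str
    initial_severity: int
    in_alarm_ref: str
    desc_attr_name_ref: str

def alarm_parity(legacy, mxgw):
    """Both backends must produce the same source-node-id set, with a
    matching AlarmConditionInfo per condition. Returns offending ids."""
    if set(legacy) != set(mxgw):
        return set(legacy) ^ set(mxgw)
    return {nid for nid in legacy if legacy[nid] != mxgw[nid]}
```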
Both backends route a write through the same path keyed off the attribute's
SecurityClassification, so a single write request must produce the same
StatusCode on each:
- FreeAccess_or_Operate_write_returns_same_StatusCode_on_both_backends
picks the first numeric FreeAccess/Operate attribute and writes 0.0.
- Configure_class_write_routes_through_secured_path_on_both_backends
picks a Configure/Tune attribute, writes through the secured path,
asserts StatusCode parity (the test doesn't care whether the write
succeeds — only that both backends produce the same outcome).
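The parity-not-success contract can be sketched like this (a Python sketch; `write_status_parity` and the callable shape are illustrative, and the StatusCode values are stand-ins):

```python
def write_status_parity(write_legacy, write_mxgw, attr, value=0.0):
    """Pins StatusCode parity only: both backends must route the write
    the same way for the attribute's SecurityClassification and surface
    the same code, whether or not the write succeeds."""
    legacy_code = write_legacy(attr, value)
    mxgw_code = write_mxgw(attr, value)
    assert legacy_code == mxgw_code, (
        f"{attr}: legacy={legacy_code} mxgw={mxgw_code}")
    return legacy_code
```

A rejected write on both backends still passes: a matching `BadUserAccessDenied` pair is parity, a `Good`/`BadUserAccessDenied` split is not.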
- Subscribe_returns_a_handle_for_each_backend — both backends accept
the same full-reference list and return a non-null handle, with
symmetric Unsubscribe cleanup.
- Subscribe_event_rate_within_tolerance_for_a_3s_window — counts
OnDataChange invocations on each backend across a 3s window and
asserts the mxgw/legacy ratio sits in [0.5, 1.5]. Skips when the
sampled tags don't change in the window (configuration-only Galaxy).
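The window-ratio check reduces to a few lines, sketched here (a Python sketch; `rate_ratio_ok` is illustrative, while the [0.5, 1.5] bounds and the skip condition come from the description above):

```python
def rate_ratio_ok(mxgw_count, legacy_count, lo=0.5, hi=1.5):
    """OnDataChange counts over the same 3s window.

    Returns None to signal a skip (no changes in the window, i.e. a
    configuration-only Galaxy), otherwise True/False on whether the
    mxgw/legacy ratio sits in [lo, hi].
    """
    if mxgw_count == 0 and legacy_count == 0:
        return None          # nothing changed: skip the scenario
    if legacy_count == 0:
        return False         # one side silent, the other not
    ratio = mxgw_count / legacy_count
    return lo <= ratio <= hi
```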
Three scenarios using ParityHarness.RequireBoth:
- Discover_emits_same_variable_set_for_both_backends — symmetric set diff
on the full-reference set must be empty.
- Discover_emits_same_DataType_and_SecurityClass_per_attribute — meta
triple (DriverDataType, SecurityClass, IsHistorized) must match per
attribute.
- Read_returns_same_value_and_status_for_a_sampled_attribute — samples
the first 5 discovered variables, reads through both backends, asserts
StatusCode equality and value-CLR-type equality (raw values may drift
between the two reads on a live Galaxy).
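The three checks can be sketched together (a Python sketch; the helper names and the `(status, value)` read shape are illustrative, while the meta triple and the status-plus-CLR-type pinning follow the description above):

```python
def discovery_parity(legacy_meta, mxgw_meta):
    """Each arg maps full reference -> (DriverDataType, SecurityClass,
    IsHistorized). Returns (set_diff, meta_mismatches); both empty
    means parity holds."""
    set_diff = set(legacy_meta) ^ set(mxgw_meta)
    meta_mismatches = {
        ref for ref in set(legacy_meta) & set(mxgw_meta)
        if legacy_meta[ref] != mxgw_meta[ref]
    }
    return set_diff, meta_mismatches

def sampled_read_parity(legacy_read, mxgw_read, refs, sample=5):
    """Reads the first `sample` variables through both backends; pins
    StatusCode and the value's type, not the raw value, since it may
    drift between the two reads on a live Galaxy."""
    for ref in refs[:sample]:
        l_status, l_value = legacy_read(ref)
        m_status, m_value = mxgw_read(ref)
        assert l_status == m_status, f"status mismatch on {ref}"
        assert type(l_value) is type(m_value), f"type mismatch on {ref}"
```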
Side-by-side fixture that boots both backends against the same dev Galaxy:
- Legacy GalaxyProxyDriver against an out-of-process Galaxy.Host EXE
(skipped when ZB SQL on localhost:1433 isn't reachable or when the EXE
hasn't been built).
- New in-process GalaxyDriver against an mxaccessgw gateway at
http://localhost:5120 by default (skipped when the gateway isn't
reachable). Endpoint, API key, and client name are env-var overridable
for the central parity host.
Per-backend availability is independent — each scenario decides whether
to RequireBoth, GetDriver(specific), or use RunOnAvailableAsync to drive
both with the same closure and diff snapshots. PR 5.2–5.8 land scenarios
on top of this shell.
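The RunOnAvailableAsync shape described above can be sketched synchronously (a hedged Python sketch; `run_on_available` and its dict shapes are illustrative, not the fixture's API):

```python
def run_on_available(backends, scenario):
    """Runs the same closure on every reachable backend and diffs the
    snapshots pairwise. `backends` maps name -> driver, with None for
    a backend that was skipped as unavailable."""
    snapshots = {name: scenario(drv)
                 for name, drv in backends.items() if drv is not None}
    names = sorted(snapshots)
    diffs = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if snapshots[a] != snapshots[b]:
                diffs[(a, b)] = (snapshots[a], snapshots[b])
    return snapshots, diffs
```

With one backend down the closure still runs on the other, and the diff set is trivially empty, which is what lets scenarios stay green on a partially provisioned rig.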