Auto: twincat-2.1 — ADS Sum-read / Sum-write

Closes #310
This commit is contained in:
Joseph Doherty
2026-04-25 21:43:32 -04:00
parent fa2fbb404d
commit 931049b5a7
11 changed files with 875 additions and 26 deletions

@@ -125,6 +125,35 @@ back an `IAlarmSource`, but shipping that is a separate feature.
| "Do notifications coalesce under load?" | no | yes (required) |
| "Does a TC2 PLC work the same as TC3?" | no | yes (required) |
## Performance
PR 2.1 (Sum-read / Sum-write, IndexGroup `0xF080..0xF084`) replaced the per-tag
`ReadValueAsync` loop in `TwinCATDriver.ReadAsync` / `WriteAsync` with a
bucketed bulk dispatch — N tags addressed against the same device flow through a
single ADS sum-command round-trip via `SumInstancePathAnyTypeRead` (read) and
`SumWriteBySymbolPath` (write). Whole-array tags and bit-extracted BOOL tags
remain on the per-tag fallback path: the sum surface only marshals scalars,
and bit-RMW writes need the per-parent serialisation lock.
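For readers unfamiliar with ADS sum commands: the driver's `SumInstancePathAnyTypeRead` wraps them via the Beckhoff .NET library, but the underlying wire shape is simple. A sum-read request packs one 12-byte sub-command header per tag, and the response returns the per-tag ADS error codes followed by the data blocks back-to-back. The Python below is an illustrative sketch of that payload layout only (function names are hypothetical; the real driver never hand-packs bytes):

```python
import struct

# Sum-command index groups per the Beckhoff ADS documentation:
# 0xF080 = sum-read, 0xF081 = sum-write, 0xF082 = sum-read/write.
ADSIGRP_SUMUP_READ = 0xF080

def pack_sum_read_request(entries):
    """Pack the write payload of an ADS sum-read: one 12-byte header per
    sub-read (index group, index offset, byte length), little-endian uint32."""
    return b"".join(struct.pack("<III", ig, io, ln) for ig, io, ln in entries)

def unpack_sum_read_response(payload, entries):
    """Split the response: N uint32 ADS error codes first, then the data
    blocks concatenated in request order."""
    n = len(entries)
    errors = struct.unpack_from("<%dI" % n, payload, 0)
    offset = 4 * n
    blocks = []
    for (_ig, _io, ln) in entries:
        blocks.append(payload[offset:offset + ln])
        offset += ln
    return list(zip(errors, blocks))
```

One round-trip therefore carries N reads regardless of N, which is what collapses the 1000-round-trip loop into a single dispatch.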
**Baseline → Sum-command delta** (dev box, 1000 × DINT, XAR VM over LAN):
| Path | Round-trips | Wall-clock |
| --- | --- | --- |
| Per-tag loop (pre-PR 2.1) | 1000 | ~58 s |
| Sum-command bulk (PR 2.1) | 1 | ~250–600 ms |
| Ratio | — | ≥ 10× typical, ≥ 5× CI floor |
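A quick back-of-envelope check shows why the asserted bounds are conservative: even the slow end of the observed sum-command range clears the CI floor by a wide margin (numbers taken from the table above).

```python
baseline_s = 58.0               # per-tag loop, 1000 x DINT
bulk_range_s = (0.250, 0.600)   # sum-command bulk, ~250-600 ms

worst = baseline_s / bulk_range_s[1]   # slow end of the bulk range
best = baseline_s / bulk_range_s[0]    # fast end
print(f"{worst:.0f}x .. {best:.0f}x")  # prints "97x .. 232x"
```

So the 5× CI floor leaves roughly 20× of headroom even on a bad run.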
The perf-tier test
`TwinCATSumCommandPerfTests.Driver_sum_read_1000_tags_beats_loop_baseline_by_5x`
asserts the ratio with a conservative 5× lower bound that survives noisy CI /
VM scheduling. It is gated behind both `TWINCAT_TARGET_NETID` (XAR reachable)
and `TWINCAT_PERF=1` (operator opt-in) — perf runs aren't part of the default
integration pass because they hit the wire heavily.
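The double gate described above is a simple environment-variable AND. A minimal sketch of the predicate (Python for illustration; the actual test uses the C# test framework's skip mechanism, and `perf_gates_open` is a hypothetical name):

```python
import os

def perf_gates_open(env=os.environ):
    """A perf run needs both gates: a reachable XAR target
    (TWINCAT_TARGET_NETID set) and explicit operator opt-in
    (TWINCAT_PERF=1). Either one missing skips the test."""
    return bool(env.get("TWINCAT_TARGET_NETID")) and env.get("TWINCAT_PERF") == "1"
```

Requiring both means a CI box with the NetId configured still won't run the wire-heavy perf tier unless an operator opts in.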
The required fixture state (1000-DINT GVL + churn POU) is documented in
`TwinCatProject/README.md §Performance scenarios`; XAE-form sources land at
`TwinCatProject/PLC/GVLs/GVL_Perf.TcGVL` + `TwinCatProject/PLC/POUs/FB_PerfChurn.TcPOU`.
## Follow-up candidates
1. **XAR VM live-population** — scaffolding is in place (this PR); the