Auto: s7-c3 — per-tag scan group / publish rate

Closes #296
This commit is contained in:
Joseph Doherty
2026-04-26 01:03:00 -04:00
parent ca3d4bf581
commit 162c82b8d9
6 changed files with 736 additions and 26 deletions

@@ -573,6 +573,101 @@ S7 driver health without reaching for a Wireshark capture:
The values render alongside Modbus / OPC UA Client metrics in the Admin
UI driver-diagnostics panel — same RPC, same dashboard row layout.
### Per-tag scan groups
Before PR-S7-C3, `ISubscribable.SubscribeAsync` took **one** publishing
interval and applied it to every tag in the input list. A site that wanted
mixed cadences — say a 100 ms HMI pulse, a 1 s dashboard tile, and a 10 s
slow-poll for trend data — had to issue **three separate subscribe calls**,
each with its own list of tags. That works, but it pushes the partitioning
problem up to the caller (the OPC UA address space layer) and means an
operator can't express "this tag is slow-poll" purely in driver config.
PR-S7-C3 adds **per-tag scan groups** so a single `SubscribeAsync` call
naturally splits into N independent poll loops:
- `S7TagDefinition.ScanGroup` (string, optional) — the group identifier the
tag belongs to. Tags with no group (or with a group not declared in the
rate map below) keep the legacy behaviour and inherit the
subscription-default publishing interval.
- `S7DriverOptions.ScanGroupIntervals` (`IReadOnlyDictionary<string, TimeSpan>`,
optional) — the rate map. Group names are matched case-insensitively. Any
group with a non-positive interval (≤ 0 ms) is silently dropped at config
load; tags that reference a dropped group fall back to the default partition.
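The resolution chain those two options describe (per-tag group → case-insensitive rate-map lookup → subscription default, with non-positive entries dropped) can be sketched in a few lines of Python. `resolve_interval` and its parameter names are illustrative, not the driver's actual API:

```python
def resolve_interval(scan_group, rate_map_ms, default_ms):
    """Resolve one tag's publishing interval in milliseconds.

    Hypothetical model of the documented rules: group names match
    case-insensitively, and rate-map entries with a non-positive
    interval behave as if they were never declared.
    """
    valid = {name.lower(): ms for name, ms in rate_map_ms.items() if ms > 0}
    if scan_group is None:
        return default_ms  # ungrouped tag keeps the legacy behaviour
    return valid.get(scan_group.lower(), default_ms)
```

For example, `resolve_interval("fast", {"Fast": 100}, 1000)` resolves to 100 via the case-insensitive match, while a tag pointing at a dropped `{"Bad": 0}` entry falls back to 1000.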
At subscribe time the driver buckets the input tag list by **resolved
publishing interval** (per-tag group → map lookup → fallback to the
subscription default), then spins up one background poll loop per distinct
interval. Each loop owns its own `CancellationTokenSource` and its own
`LastValues` cache; `UnsubscribeAsync` cancels and disposes every per-group
loop together so a multi-rate subscription can't leak background tasks.
#### JSON config example
```json
{
"Host": "10.0.0.50",
"ScanGroupIntervalsMs": {
"Fast": 100,
"Medium": 1000,
"Slow": 10000
},
"Tags": [
{ "Name": "PressureSetpoint", "Address": "DB1.DBW0", "DataType": "Int16", "ScanGroup": "Fast" },
{ "Name": "BatchTotal", "Address": "DB1.DBD10", "DataType": "Int32", "ScanGroup": "Medium" },
{ "Name": "TrendBucket", "Address": "DB1.DBD20", "DataType": "Float32", "ScanGroup": "Slow" }
]
}
```
A single `SubscribeAsync(["PressureSetpoint","BatchTotal","TrendBucket"], 1s)`
call against this driver produces **three independent poll loops** —
the fast HMI tag ticks at 100 ms, the dashboard tile at 1 s, the trend
bucket at 10 s. The caller-supplied 1 s default is unused because every
tag carries an explicit group.
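The bucketing the driver performs at subscribe time can be modeled as a short sketch (function and parameter names are hypothetical, not the driver's API); applied to the config above, it reproduces the three-partition split:

```python
from collections import defaultdict

def partition(tags, rate_map_ms, default_ms):
    """Bucket (name, scan_group) tags by resolved publishing interval.

    Each distinct interval in the result corresponds to one
    independent background poll loop.
    """
    valid = {k.lower(): v for k, v in rate_map_ms.items() if v > 0}
    buckets = defaultdict(list)
    for name, group in tags:
        interval = valid.get(group.lower(), default_ms) if group else default_ms
        buckets[interval].append(name)
    return dict(buckets)

tags = [("PressureSetpoint", "Fast"),
        ("BatchTotal", "Medium"),
        ("TrendBucket", "Slow")]
rates = {"Fast": 100, "Medium": 1000, "Slow": 10000}

# Three distinct intervals -> three independent poll loops
print(partition(tags, rates, default_ms=1000))
# → {100: ['PressureSetpoint'], 1000: ['BatchTotal'], 10000: ['TrendBucket']}
```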
#### 100 ms floor applies per partition
The `100 ms` floor that protects the S7 mailbox from sub-scan polling
applies to **both** the subscription default **and** every per-group rate.
A typo'd entry like `{"TooFast": 25}` is silently floored to 100 ms at
partition-build time — the driver never schedules a sub-100 ms `Task.Delay`
even if the operator tries.
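The floor amounts to a clamp applied as each partition's interval is resolved — a one-line sketch (constant and function names illustrative):

```python
FLOOR_MS = 100  # protects the S7 mailbox from sub-100 ms polling

def clamp_interval(interval_ms):
    """Floor any resolved per-partition interval at 100 ms."""
    return max(interval_ms, FLOOR_MS)

# The mistyped {"TooFast": 25} entry ends up scheduling at 100 ms
assert clamp_interval(25) == 100
assert clamp_interval(1000) == 1000
```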
#### `_gate` contention caveat — "1 connection / 1 mailbox"
Partitioning into N poll loops does **not** parallelise wire-level reads.
S7netplus's documented pattern is one `Plc` instance per CPU, and the
driver enforces that with a per-instance `SemaphoreSlim` (`_gate`) that
every read takes before touching the socket. All N partitions share the
same gate, so the **mailbox is still strictly serial** — what the multi-rate
split actually buys you is **cadence decoupling**:
- Before PR-S7-C3: every tag in a subscription ticked at the single shared
  interval (or the caller issued three separate subscribe calls and tracked
  three separate logical subscription handles, complicating the
  address-space layer).
- After PR-S7-C3: a 100 ms HMI tag isn't blocked behind a 10 s slow-poll
batch's `Task.Delay`. While Slow is sleeping, the gate is free and Fast
acquires it, polls, releases. The CPU sees more frequent small requests
rather than infrequent large ones — which is what you want for a
responsive HMI surface.
The caveat to be aware of: if Fast's per-tick read takes longer than its
tick interval (e.g. 100 ms tick but 200 ms gate-held read because Medium
or Slow happens to be mid-read on the gate), Fast's effective cadence
slows to "as fast as the gate lets me." That's a property of S7netplus's
single-connection design, not of partitioning — three separate driver
instances against the same CPU would just waste the CPU's
8-64-connection-resource budget without speeding anything up.
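A back-of-the-envelope model of that caveat, under the simplifying assumption that one loop iteration is tick delay + gate wait + serialized read (the names and the additive accounting are illustrative, not the driver's internals):

```python
def effective_period_ms(tick_ms, gate_wait_ms, read_ms):
    """Approximate one iteration of a poll loop sharing the gate:
    sleep the tick, block until the gate is free, then read."""
    return tick_ms + gate_wait_ms + read_ms

# Gate free while Slow sleeps: Fast's 100 ms tick stays near 100 ms
assert effective_period_ms(100, 0, 10) == 110

# Slow mid-read, holding the gate for 190 ms: Fast degrades to ~300 ms,
# i.e. "as fast as the gate lets me"
assert effective_period_ms(100, 190, 10) == 300
```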
#### Diagnostics
Partition counts aren't yet surfaced under
`DriverHealth.Diagnostics` (planned for a follow-up alongside per-partition
tick rate). Tests can call the internal helpers `S7Driver.GetPartitionCount`
and `S7Driver.GetPartitionSummary` to inspect the resolved partitioning of
a live subscription handle.
## TSAP / Connection Type
S7comm runs on top of ISO-on-TCP (RFC 1006), and the COTP connection-request