Scope alarm tracking to selected templates and surface endpoint/security state on the dashboard so operators can deploy in large galaxies without drowning clients in irrelevant alarms or guessing what the server is advertising
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
No configuration changes required. All security gaps (1-10) resolved.

## Historian Plugin Runtime Load + Dashboard Health

Updated: `2026-04-12 18:47-18:49 America/New_York`

Both instances updated to the latest build. This build brings in the runtime-loaded Historian plugin (a `Historian/` subfolder next to the Host) and the status-dashboard health surfaces for the Historian plugin and for alarm-tracking misconfiguration.

Backups created before deploy:

- `C:\publish\lmxopcua\backups\20260412-184713-instance1`
- `C:\publish\lmxopcua\backups\20260412-184713-instance2`

Configuration preserved:

- `C:\publish\lmxopcua\instance1\appsettings.json` was not overwritten.
- `C:\publish\lmxopcua\instance2\appsettings.json` was not overwritten.

Layout change:

- Flat historian interop DLLs removed from each instance root (`aahClient*.dll`, `ArchestrA.CloudHistorian.Contract.dll`, `Historian.CBE.dll`, `Historian.DPAPI.dll`).
- Historian plugin + interop DLLs now live under `<instance>\Historian\` (including `ZB.MOM.WW.LmxOpcUa.Historian.Aveva.dll`), loaded by `HistorianPluginLoader`.

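
For illustration, the runtime-load pattern looks roughly like this Python sketch. The real `HistorianPluginLoader` is .NET and loads `ZB.MOM.WW.LmxOpcUa.Historian.Aveva.dll` from the instance's `Historian\` subfolder; the file-naming convention and `create_plugin` entry point below are assumptions for the sketch only.

```python
import importlib.util
from pathlib import Path

def load_plugin(instance_dir, subfolder="Historian", entry="create_plugin"):
    """Sketch of a runtime plugin load: look for a plugin module in a
    subfolder next to the host and call its entry point. Returns None
    when the plugin is absent so the host can run without it."""
    folder = Path(instance_dir) / subfolder
    if not folder.is_dir():
        return None
    candidates = sorted(folder.glob("*_plugin.py"))  # naming is an assumption
    if not candidates:
        return None
    spec = importlib.util.spec_from_file_location("historian_plugin",
                                                  str(candidates[0]))
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, entry)()
```

The key property mirrored here is the one the deployment relies on: the plugin lives beside the host rather than being linked into it, so removing the subfolder disables the historian path without touching the host binary.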
Deployed binary (both instances):

- `ZB.MOM.WW.LmxOpcUa.Host.exe`
- Last write time: `2026-04-12 18:46:22 -04:00`
- Size: `7938048` bytes

Windows services:

- `LmxOpcUa` — Running, PID `40176`
- `LmxOpcUa2` — Running, PID `34400`

Restart evidence (instance1 `logs/lmxopcua-20260412.log`):

```
2026-04-12 18:48:02.968 -04:00 [INF] Historian.Enabled=true, ServerName=localhost, IntegratedSecurity=true, Port=32568
2026-04-12 18:48:02.971 -04:00 [INF] === Configuration Valid ===
2026-04-12 18:48:09.658 -04:00 [INF] Historian plugin loaded from C:\publish\lmxopcua\instance1\Historian\ZB.MOM.WW.LmxOpcUa.Historian.Aveva.dll
2026-04-12 18:48:13.691 -04:00 [INF] LmxOpcUa service started successfully
```

Restart evidence (instance2 `logs/lmxopcua-20260412.log`):

```
2026-04-12 18:49:08.152 -04:00 [INF] Historian.Enabled=true, ServerName=localhost, IntegratedSecurity=true, Port=32568
2026-04-12 18:49:08.155 -04:00 [INF] === Configuration Valid ===
2026-04-12 18:49:14.744 -04:00 [INF] Historian plugin loaded from C:\publish\lmxopcua\instance2\Historian\ZB.MOM.WW.LmxOpcUa.Historian.Aveva.dll
2026-04-12 18:49:18.777 -04:00 [INF] LmxOpcUa service started successfully
```

CLI verification (via `dotnet run --project src/ZB.MOM.WW.LmxOpcUa.Client.CLI`):

```
connect opc.tcp://localhost:4840/LmxOpcUa → Server: LmxOpcUa
connect opc.tcp://localhost:4841/LmxOpcUa → Server: LmxOpcUa2
redundancy opc.tcp://localhost:4840/LmxOpcUa → Warm, ServiceLevel=200, urn:localhost:LmxOpcUa:instance1
redundancy opc.tcp://localhost:4841/LmxOpcUa → Warm, ServiceLevel=150, urn:localhost:LmxOpcUa:instance2
```
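
The redundancy invariant being checked here (the primary advertises the higher ServiceLevel) can be asserted mechanically from the CLI output. The parsing below is a sketch against the exact line format shown; it is not part of the CLI itself.

```python
import re

# Redundancy lines exactly as the CLI printed them above.
CLI_OUTPUT = """\
redundancy opc.tcp://localhost:4840/LmxOpcUa → Warm, ServiceLevel=200, urn:localhost:LmxOpcUa:instance1
redundancy opc.tcp://localhost:4841/LmxOpcUa → Warm, ServiceLevel=150, urn:localhost:LmxOpcUa:instance2
"""

# Map each server URN to its advertised ServiceLevel.
levels = {
    m.group(2): int(m.group(1))
    for m in re.finditer(r"ServiceLevel=(\d+), (\S+)", CLI_OUTPUT)
}
primary = max(levels, key=levels.get)
print(primary, levels[primary])  # urn:localhost:LmxOpcUa:instance1 200
```

A check like this makes the baseline comparison repeatable instead of eyeballed: it fails loudly if both instances ever advertise the same ServiceLevel.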

Both instances report the same `ServerUriArray`, and the primary advertises the higher ServiceLevel, matching the prior redundancy baseline.

## Endpoints Panel on Dashboard

Updated: `2026-04-13 08:46-08:50 America/New_York`

Both instances updated with a new `Endpoints` panel on the status dashboard, surfacing the opc.tcp base addresses, the active OPC UA security profiles (mode + policy name + full URI), and the user token policies.

Code changes:

- `StatusData.cs` — added `EndpointsInfo` / `SecurityProfileInfo` DTOs on `StatusData`.
- `OpcUaServerHost.cs` — added `BaseAddresses`, `SecurityPolicies`, `UserTokenPolicies` runtime accessors reading `ApplicationConfiguration.ServerConfiguration` live state.
- `StatusReportService.cs` — builds `EndpointsInfo` from the host and renders a new panel with a graceful empty state when the server is not started.

No configuration changes required.

Verification (instance1 @ `http://localhost:8085/`):

```
Base Addresses: opc.tcp://localhost:4840/LmxOpcUa
Security Profiles: None / None / http://opcfoundation.org/UA/SecurityPolicy#None
User Token Policies: Anonymous, UserName
```

Verification (instance2 @ `http://localhost:8086/`):

```
Base Addresses: opc.tcp://localhost:4841/LmxOpcUa
Security Profiles: None / None / http://opcfoundation.org/UA/SecurityPolicy#None
User Token Policies: Anonymous, UserName
```

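
A scripted check of the panel can parse those lines back into a structure before asserting on them. This is a sketch against the rendered text format shown above; the function name and dict shape are illustrative, not the service's actual JSON schema.

```python
def parse_endpoints_panel(text):
    """Parse the dashboard's rendered Endpoints panel text into a dict."""
    panel = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(": ")
        if key == "Security Profiles":
            # Rendered as "mode / policy name / full policy URI".
            mode, policy, uri = (part.strip() for part in value.split(" / "))
            panel[key] = {"mode": mode, "policy": policy, "uri": uri}
        elif key == "User Token Policies":
            panel[key] = [token.strip() for token in value.split(",")]
        else:
            panel[key] = value
    return panel

sample = """\
Base Addresses: opc.tcp://localhost:4840/LmxOpcUa
Security Profiles: None / None / http://opcfoundation.org/UA/SecurityPolicy#None
User Token Policies: Anonymous, UserName
"""
print(parse_endpoints_panel(sample)["User Token Policies"])  # ['Anonymous', 'UserName']
```

Splitting on `" / "` (with spaces) keeps the policy URI intact even though it contains plain slashes.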
## Template-Based Alarm Object Filter

Updated: `2026-04-13 09:39-09:43 America/New_York`

Both instances updated with a new configurable alarm object filter. When `OpcUa.AlarmFilter.ObjectFilters` is non-empty, only Galaxy objects whose template derivation chain matches a pattern (and their containment-tree descendants) contribute `AlarmConditionState` nodes. When the list is empty, the current unfiltered behavior is preserved (the backward-compatible default).

Backups created before deploy:

- `C:\publish\lmxopcua\backups\20260413-093900-instance1`
- `C:\publish\lmxopcua\backups\20260413-093900-instance2`

Deployed binary (both instances):

- `ZB.MOM.WW.LmxOpcUa.Host.exe`
- Last write time: `2026-04-13 09:38:46 -04:00`
- Size: `7951360` bytes

Windows services:

- `LmxOpcUa` — Running, PID `40900`
- `LmxOpcUa2` — Running, PID `29936`

Code changes:

- `gr/queries/hierarchy.sql` — added a recursive CTE on `gobject.derived_from_gobject_id` and a new `template_chain` column (pipe-delimited, innermost template first).
- `Domain/GalaxyObjectInfo.cs` — added `TemplateChain: List<string>`, populated from the new SQL column.
- `GalaxyRepositoryService.cs` — reads the new column and splits it into `TemplateChain`.
- `Configuration/AlarmFilterConfiguration.cs` (new) — `List<string> ObjectFilters`; entries may themselves be comma-separated. Attached to `OpcUaConfiguration.AlarmFilter`.
- `Configuration/ConfigurationValidator.cs` — logs the effective filter and warns if patterns are configured while `AlarmTrackingEnabled == false`.
- `Domain/AlarmObjectFilter.cs` (new) — compiles wildcard patterns (`*` only) to case-insensitive regexes with the Galaxy `$` prefix normalized on both sides; walks the hierarchy top-down with cycle defense; returns a `HashSet<int>` of included gobject IDs plus `UnmatchedPatterns` for startup warnings.
- `OpcUa/LmxNodeManager.cs` — the constructor accepts the filter; the two alarm-creation loops (the full `BuildAddressSpace` build and the subtree rebuild path) both call `ResolveAlarmFilterIncludedIds(sorted)` and skip any object not in the resolved set. New public properties expose filter state to the dashboard: `AlarmFilterEnabled`, `AlarmFilterPatternCount`, `AlarmFilterIncludedObjectCount`.
- `OpcUa/OpcUaServerHost.cs`, `OpcUa/LmxOpcUaServer.cs`, `OpcUaService.cs`, `OpcUaServiceBuilder.cs` — plumbing to construct the filter from `appsettings.json` and thread it down to the node manager.
- `Status/StatusData.cs` + `Status/StatusReportService.cs` — `AlarmStatusInfo` gains `FilterEnabled`, `FilterPatternCount`, `FilterIncludedObjectCount`; a filter summary line renders in the Alarms panel when the filter is active.

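
The `AlarmObjectFilter` walk — match on the template chain, propagate inclusion to containment descendants, defend against cycles — can be sketched in Python. The data shapes and names here are illustrative assumptions; the real implementation is C# and returns a `HashSet<int>`.

```python
import re

def resolve_included_ids(objects, patterns):
    """Sketch of the filter walk. `objects` maps gobject id ->
    (parent_id or None, template_chain). An object is included when any
    template in its chain matches any pattern, and inclusion propagates
    to all containment descendants. The visited set means each object is
    evaluated once and guards against cycles in bad hierarchy data."""
    regexes = [
        re.compile("^" + re.escape(p.lstrip("$")).replace(r"\*", ".*") + "$",
                   re.IGNORECASE)
        for p in patterns
    ]

    def self_match(chain):
        return any(r.match(t.lstrip("$")) for t in chain for r in regexes)

    children = {}
    for obj_id, (parent, _) in objects.items():
        children.setdefault(parent, []).append(obj_id)

    included, visited = set(), set()

    def walk(obj_id, ancestor_hit):
        if obj_id in visited:                 # cycle / revisit defense
            return
        visited.add(obj_id)
        hit = ancestor_hit or self_match(objects[obj_id][1])
        if hit:
            included.add(obj_id)
        for child in children.get(obj_id, []):
            walk(child, hit)

    for obj_id, (parent, _) in objects.items():
        if parent is None or parent not in objects:   # roots and orphans
            walk(obj_id, False)
    return included

galaxy = {
    1: (None, ["$TestMachine"]),   # matches -> included
    2: (1,    ["$UserDefined"]),   # descendant of 1 -> included
    3: (None, ["$Valve"]),         # no match -> excluded
}
print(sorted(resolve_included_ids(galaxy, ["TestMachine*"])))  # [1, 2]
```

Starting the walk from roots *and* orphans (objects whose parent id is missing) is what keeps disconnected subtrees from being silently dropped.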
Tests:

- 36 new unit tests in `tests/.../Domain/AlarmObjectFilterTests.cs` covering pattern parsing, wildcard semantics, regex escaping, Galaxy `$` normalization, template-chain matching, subtree propagation, set semantics, orphan/cycle defense, and `UnmatchedPatterns` tracking.
- 5 new integration tests in `tests/.../Integration/AlarmObjectFilterIntegrationTests.cs` spinning up a real `LmxNodeManager` via `OpcUaServerFixture` and asserting `AlarmConditionCount` / `AlarmFilterIncludedObjectCount` under various filters.
- 1 new Status test verifying that the JSON exposes the filter counters.
- Full suite: **446/446 tests passing** (no regressions).

Configuration change: both instances have `OpcUa.AlarmFilter.ObjectFilters: []` (filter disabled, unfiltered alarm tracking preserved).

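
For reference, the deployed stanza in `appsettings.json` would look like this. The nesting is inferred from the `OpcUa.AlarmFilter.ObjectFilters` key path; the exact layout of the surrounding file is an assumption.

```json
{
  "OpcUa": {
    "AlarmFilter": {
      "ObjectFilters": []
    }
  }
}
```

Replacing the empty list with, e.g., `["TestMachine*, Pump_*"]` enables the filter; deleting the entries restores the unfiltered default without any code change.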
Live verification against the instance1 Galaxy (filter temporarily set to `"TestMachine"`):

```
2026-04-13 09:41:31 [INF] OpcUa.AlarmTrackingEnabled=true, AlarmFilter.ObjectFilters=[TestMachine]
2026-04-13 09:41:42 [INF] Alarm filter: 42 of 49 objects included (1 pattern(s))
Dashboard Alarms panel: Tracking: True | Conditions: 60 | Active: 4
Filter: 1 pattern(s), 42 object(s) included
```

Final configuration restored to the empty filter. Dashboard confirms unfiltered behavior on both endpoints:

```
instance1 @ http://localhost:8085/ → Conditions: 60 | Active: 4 (no filter line)
instance2 @ http://localhost:8086/ → Conditions: 60 | Active: 4 (no filter line)
```

Filter syntax quick reference (documented in the `AlarmFilterConfiguration.cs` XML-doc):

- `*` is the only wildcard (glob-style; zero or more characters).
- Matching is case-insensitive and ignores the leading Galaxy `$` template prefix on both the pattern and the stored chain entry, so operators write `TestMachine*`, not `$TestMachine*`.
- Each entry may contain comma-separated patterns for convenience (e.g., `"TestMachine*, Pump_*"`).
- Empty list → filter disabled → current unfiltered behavior.
- Match semantics: an object is included when any template in its derivation chain matches any pattern, and inclusion propagates to all descendants in the containment hierarchy. Each object is evaluated once, regardless of how many patterns or ancestors match.

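
Those rules can be modeled in a few lines of Python (a sketch only — the real implementation is the C# `AlarmObjectFilter`, and the function names here are illustrative):

```python
import re

def compile_patterns(entries):
    """Expand comma-separated entries, strip the Galaxy '$' prefix,
    and compile '*'-only globs to case-insensitive anchored regexes."""
    patterns = []
    for entry in entries:
        for raw in entry.split(","):
            raw = raw.strip().lstrip("$")
            if not raw:
                continue
            # Escape everything, then turn the escaped '*' back into '.*'.
            regex = re.escape(raw).replace(r"\*", ".*")
            patterns.append(re.compile(f"^{regex}$", re.IGNORECASE))
    return patterns

def chain_matches(template_chain, patterns):
    """True when any template in the derivation chain matches any pattern."""
    return any(
        p.match(t.lstrip("$")) for t in template_chain for p in patterns
    )

patterns = compile_patterns(["TestMachine*, Pump_*"])
print(chain_matches(["$TestMachine_001", "$UserDefined"], patterns))  # True
print(chain_matches(["$Valve", "$UserDefined"], patterns))            # False
```

Escaping first and then restoring only the `*` is what makes `*` the sole wildcard: any other regex metacharacter an operator types is matched literally.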
## Notes

The service deployment and restart succeeded. The live CLI checks confirm the endpoint is reachable and that the array node identifier has changed to the bracketless form. The array value on the live service still prints as blank even though the status is good, so if this environment should have populated `MoveInPartNumbers`, the runtime data path still needs follow-up investigation.