# Admin Web UI — OtOpcUa v2

> **Status**: DRAFT — companion to `plan.md` §4 and `config-db-schema.md`. Defines the Blazor Server admin app for managing the central config DB.
>
> **Branch**: `v2`
>
> **Created**: 2026-04-17

## Scope

This document covers the **OtOpcUa Admin** web app — the operator-facing UI for managing fleet configuration. It owns every write to the central config DB; OtOpcUa nodes are read-only consumers.

Out of scope here:

- Per-node operator dashboards (status, alarm acks for runtime concerns) — that's the existing Status Dashboard, deployed alongside each node, not the Admin app
- Driver-specific config screens — these are deferred to each driver's implementation phase per decision #27, and each driver doc is responsible for sketching its config UI surface
- Authentication of the OPC UA endpoint itself — covered by `Security.md` (LDAP)
## Tech Stack

**Aligned with ScadaLink CentralUI** (`scadalink-design/src/ScadaLink.CentralUI`) — operators using both apps see the same login screen, same sidebar, same component vocabulary. Same patterns, same aesthetic.

| Component | Choice | Reason |
|-----------|--------|--------|
| Framework | **Blazor Server** (.NET 10 Razor Components, `AddInteractiveServerComponents`) | Same as ScadaLink; real-time UI without separate SPA build; SignalR built-in for live cluster status |
| Hosting | Co-deploy with central DB by default; standalone option | Most deployments run Admin on the same machine as MSSQL; large fleets can split |
| Auth | **LDAP bind via `LdapAuthService` (sibling of `ScadaLink.Security`) + cookie auth + `JwtTokenService` for API tokens** | Direct parity with ScadaLink — same login form, same cookie scheme, same claim shape, same `RoleMapper` pattern. Operators authenticated to one app feel at home in the other |
| DB access | EF Core (same `Configuration` project that nodes use) | Schema versioning lives in one place |
| Real-time | SignalR (Blazor Server's underlying transport) | Live updates on `ClusterNodeGenerationState` and crash-loop alerts |
| Styling | **Bootstrap 5** vendored under `wwwroot/lib/bootstrap/` | Direct parity with ScadaLink; standard component vocabulary (card, table, alert, btn, form-control, modal); no third-party Blazor-component-library dependency |
| Shared components | `DataTable`, `ConfirmDialog`, `LoadingSpinner`, `ToastNotification`, `TimestampDisplay`, `RedirectToLogin`, `NotAuthorizedView` | Same set as ScadaLink CentralUI; copy structurally so cross-app feel is identical |
| Reconnect overlay | Custom Bootstrap modal triggered on Blazor SignalR disconnect | Same pattern as ScadaLink — modal appears on connection loss, dismisses on reconnect |
### Code organization

Mirror ScadaLink's layout exactly:

```
src/
  ZB.MOM.WW.OtOpcUa.Admin/                   # Razor Components project (.NET 10)
    Auth/
      AuthEndpoints.cs                       # /auth/login, /auth/logout, /auth/token
      CookieAuthenticationStateProvider.cs   # bridges cookie auth to Blazor <AuthorizeView>
    Components/
      Layout/
        MainLayout.razor                     # dark sidebar + light main flex layout
        NavMenu.razor                        # role-gated nav sections
      Pages/
        Login.razor                          # server-rendered HTML form POSTing to /auth/login
        Dashboard.razor                      # default landing
        Clusters/
        Generations/
        Credentials/
        Audit/
      Shared/
        DataTable.razor                      # paged/sortable/filterable table (verbatim from ScadaLink)
        ConfirmDialog.razor
        LoadingSpinner.razor
        ToastNotification.razor
        TimestampDisplay.razor
        RedirectToLogin.razor
        NotAuthorizedView.razor
    EndpointExtensions.cs                    # MapAuthEndpoints + role policies
    ServiceCollectionExtensions.cs           # AddCentralAdmin
  ZB.MOM.WW.OtOpcUa.Admin.Security/          # LDAP + role mapping + JWT (sibling of ScadaLink.Security)
```

The `Admin.Security` project carries `LdapAuthService`, `RoleMapper`, `JwtTokenService`, `AuthorizationPolicies`. If it ever makes sense to consolidate with ScadaLink's identical project, lift to a shared internal NuGet — out of scope for v2.0 to keep OtOpcUa decoupled from ScadaLink's release cycle.
## Authentication & Authorization

### Operator authentication

**Identical pattern to ScadaLink CentralUI.** Operators log in via LDAP bind against the GLAuth server. The login flow is a server-rendered HTML form POSTing to `/auth/login` (NOT a Blazor interactive form — `data-enhance="false"` disables Blazor enhanced navigation), handled by a minimal-API endpoint that:

1. Reads `username` / `password` from the form
2. Calls `LdapAuthService.AuthenticateAsync(username, password)` — performs the LDAP bind, returns `Username`, `DisplayName`, `Groups`
3. Calls `RoleMapper.MapGroupsToRolesAsync(groups)` — translates LDAP groups → application roles + cluster-scope set
4. Builds a `ClaimsIdentity` with `Name`, `DisplayName`, `Username`, `Role` (multiple), `ClusterId` scope claims (multiple, when not system-wide)
5. Calls `HttpContext.SignInAsync(CookieAuthenticationDefaults.AuthenticationScheme, principal, ...)` with `IsPersistent = true`, `ExpiresUtc = +30 min` (sliding)
6. Redirects to `/`
7. On failure, redirects to `/login?error={URL-encoded message}`
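
A minimal-API sketch of those steps — `LdapAuthService` and `RoleMapper` are this design's types, but their exact return shapes and the claim-type strings here are illustrative, not final:

```csharp
// Sketch only — service signatures and claim names are assumptions.
app.MapPost("/auth/login", async (HttpContext ctx,
    LdapAuthService ldap, RoleMapper roles) =>
{
    var form = await ctx.Request.ReadFormAsync();                       // step 1
    var user = await ldap.AuthenticateAsync(form["username"]!, form["password"]!);
    if (user is null)                                                   // step 7
        return Results.Redirect("/login?error=" + Uri.EscapeDataString("Invalid credentials"));

    var mapped = await roles.MapGroupsToRolesAsync(user.Groups);        // step 3

    var claims = new List<Claim>                                        // step 4
    {
        new(ClaimTypes.Name, user.Username),
        new("DisplayName", user.DisplayName),
    };
    claims.AddRange(mapped.Roles.Select(r => new Claim(ClaimTypes.Role, r)));
    claims.AddRange(mapped.ClusterIds.Select(c => new Claim("ClusterId", c)));

    var principal = new ClaimsPrincipal(
        new ClaimsIdentity(claims, CookieAuthenticationDefaults.AuthenticationScheme));
    await ctx.SignInAsync(CookieAuthenticationDefaults.AuthenticationScheme, principal,
        new AuthenticationProperties { IsPersistent = true });          // step 5
    return Results.Redirect("/");                                       // step 6
});
```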

A parallel `/auth/token` endpoint returns a JWT for API clients (CLI tooling, scripts) — same auth, different transport. Symmetric with ScadaLink's pattern.

`CookieAuthenticationStateProvider` bridges the cookie principal to Blazor's `AuthenticationStateProvider` so `<AuthorizeView>` and `[Authorize]` work in components.
### LDAP group → role mapping

| LDAP group | Admin role | Capabilities |
|------------|------------|--------------|
| `OtOpcUaAdmins` | `FleetAdmin` | Everything: cluster CRUD, node CRUD, credential management, publish/rollback any cluster |
| `OtOpcUaConfigEditors` | `ConfigEditor` | Edit drafts and publish for assigned clusters; cannot create/delete clusters or manage credentials |
| `OtOpcUaViewers` | `ReadOnly` | View-only access to all clusters and generations; cannot edit drafts or publish |

`AuthorizationPolicies` constants (mirrors ScadaLink): `RequireFleetAdmin`, `RequireConfigEditor`, `RequireReadOnly`. `<AuthorizeView Policy="@AuthorizationPolicies.RequireFleetAdmin">` gates nav menu sections and page-level access.
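
A sketch of how those constants might be registered — the role-hierarchy choice (each policy satisfied by its role or any stronger one) is an assumption, not settled by this doc:

```csharp
public static class AuthorizationPolicies
{
    public const string RequireFleetAdmin   = nameof(RequireFleetAdmin);
    public const string RequireConfigEditor = nameof(RequireConfigEditor);
    public const string RequireReadOnly     = nameof(RequireReadOnly);
}

// Inside AddCentralAdmin (sketch; hierarchy is an assumption):
services.AddAuthorizationBuilder()
    .AddPolicy(AuthorizationPolicies.RequireFleetAdmin,
        p => p.RequireRole("FleetAdmin"))
    .AddPolicy(AuthorizationPolicies.RequireConfigEditor,
        p => p.RequireRole("FleetAdmin", "ConfigEditor"))
    .AddPolicy(AuthorizationPolicies.RequireReadOnly,
        p => p.RequireRole("FleetAdmin", "ConfigEditor", "ReadOnly"));
```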

### Cluster-scoped grants (lifted from v2.1 to v2.0)

Because ScadaLink already has the site-scoped grant pattern (`PermittedSiteIds` claim, `IsSystemWideDeployment` flag), we get cluster-scoped grants essentially for free in v2.0 by mirroring it:

- A `ConfigEditor` user mapped to LDAP group `OtOpcUaConfigEditors-LINE3` is granted the `ConfigEditor` role plus a `ClusterId=LINE3-OPCUA` scope claim only
- The `RoleMapper` reads a small `LdapGroupRoleMapping` table (Group → Role, Group → ClusterId scope) configured by `FleetAdmin` via the Admin UI
- All cluster-scoped pages check both the role AND the `ClusterId` scope claim before showing edit affordances

System-wide users (no `ClusterId` scope claims, `IsSystemWideDeployment = true`) see every cluster.
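
The role-plus-scope check from the last bullet can be sketched as a small helper (the helper itself is hypothetical, not an existing ScadaLink API):

```csharp
// Hypothetical helper illustrating the check described above.
public static bool CanEditCluster(ClaimsPrincipal user, string clusterId)
{
    bool hasEditRole = user.IsInRole("FleetAdmin") || user.IsInRole("ConfigEditor");
    var scopes = user.FindAll("ClusterId").Select(c => c.Value).ToList();
    bool systemWide = scopes.Count == 0;   // no scope claims ⇒ system-wide user
    return hasEditRole && (systemWide || scopes.Contains(clusterId));
}
```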

### Bootstrap (first-run)

Same as ScadaLink: a local-admin login configured in `appsettings.json` (or a local certificate-authenticated user) bootstraps the first `OtOpcUaAdmins` LDAP group binding before LDAP-only access takes over. Documented as a one-time setup step.
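
A possible shape for that `appsettings.json` fragment — every key name here is hypothetical, since the doc does not fix the config schema:

```json
{
  "Admin": {
    "BootstrapUser": {
      "Username": "local-admin",
      "PasswordHash": "<hash set at install time>",
      "Enabled": true
    }
  }
}
```

The intent is only that the local-admin credential lives in config, is hashed rather than stored in plain text, and can be disabled once LDAP access is established.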

### Audit

Every write operation goes through `sp_*` procs that log to `ConfigAuditLog` with the operator's principal. The Admin UI also logs view-only actions (page navigation, generation diff views) to a separate UI access log for compliance.
## Visual Design — Direct Parity with ScadaLink

Every visual element is lifted from ScadaLink CentralUI's design system to ensure cross-app consistency. Concrete specs:

### Layout

- **Flex layout**: `<div class="d-flex">` containing `<NavMenu />` (sidebar) and `<main class="flex-grow-1 p-3">` (content)
- **Sidebar**: 220px fixed width (`min-width: 220px; max-width: 220px`), full viewport height (`min-height: 100vh`), background `#212529` (Bootstrap dark)
- **Main background**: `#f8f9fa` (Bootstrap light)
- **Brand**: "OtOpcUa" in white bold (font-size: 1.1rem, padding 1rem, border-bottom `1px solid #343a40`) at top of sidebar
- **Nav links**: color `#adb5bd`, padding `0.4rem 1rem`, font-size `0.9rem`. Hover: white text, background `#343a40`. Active: white text, background `#0d6efd` (Bootstrap primary)
- **Section headers** ("Admin", "Configuration", "Monitoring"): color `#6c757d`, uppercase, font-size `0.75rem`, font-weight `600`, letter-spacing `0.05em`, padding `0.75rem 1rem 0.25rem`
- **User strip** at bottom of sidebar: display name (text-light small) + Sign Out button (`btn-outline-light btn-sm`), separated from nav by `border-top border-secondary`
### Login page

Verbatim structure from ScadaLink's `Login.razor`:

```razor
<div class="container" style="max-width: 400px; margin-top: 10vh;">
    <div class="card shadow-sm">
        <div class="card-body p-4">
            <h4 class="card-title mb-4 text-center">OtOpcUa</h4>

            @if (!string.IsNullOrEmpty(ErrorMessage))
            {
                <div class="alert alert-danger py-2" role="alert">@ErrorMessage</div>
            }

            <form method="post" action="/auth/login" data-enhance="false">
                <div class="mb-3">
                    <label for="username" class="form-label">Username</label>
                    <input type="text" class="form-control" id="username" name="username"
                           required autocomplete="username" autofocus />
                </div>
                <div class="mb-3">
                    <label for="password" class="form-label">Password</label>
                    <input type="password" class="form-control" id="password" name="password"
                           required autocomplete="current-password" />
                </div>
                <button type="submit" class="btn btn-primary w-100">Sign In</button>
            </form>
        </div>
    </div>
    <p class="text-center text-muted mt-3 small">Authenticate with your organization's LDAP credentials.</p>
</div>
```

Exact same dimensions, exact same copy pattern; only the brand name differs.
### Reconnection overlay

Same SignalR-disconnect modal as ScadaLink — `#reconnect-modal` overlay (`rgba(0,0,0,0.5)` backdrop, centered white card with `spinner-border text-primary`, "Connection Lost" heading, "Attempting to reconnect to the server. Please wait..." body). Listens for `Blazor.addEventListener('enhancedload')` to dismiss on reconnect. Lifted from ScadaLink's `App.razor` inline styles.

### Shared components — direct copies

All seven shared components from ScadaLink CentralUI are copied verbatim into our `Components/Shared/`:

| Component | Use |
|-----------|-----|
| `DataTable.razor` | Sortable, filterable, paged table — used for tags, generations, audit log, cluster list |
| `ConfirmDialog.razor` | Modal confirmation for destructive actions (publish, rollback, discard draft, disable credential) |
| `LoadingSpinner.razor` | Standard spinner for in-flight DB operations |
| `ToastNotification.razor` | Transient success/error toasts for non-modal feedback |
| `TimestampDisplay.razor` | Consistent UTC + relative-time rendering ("3 minutes ago") |
| `RedirectToLogin.razor` | Component used by pages requiring auth — server-side redirect to `/login?returnUrl=...` |
| `NotAuthorizedView.razor` | Standard "you don't have permission for this action" view, shown by `<AuthorizeView>` NotAuthorized branch |

If an Admin-specific component need arises, add a new component to our Shared folder rather than modifying the seven copied from ScadaLink.
## Information Architecture

```
/                                              Fleet Overview (default landing)
/clusters                                      Cluster list
/clusters/{ClusterId}                          Cluster detail (tabs: Overview / Namespaces / UNS Structure / Drivers / Devices / Equipment / Tags / ACLs / Generations / Audit)
/clusters/{ClusterId}/nodes/{NodeId}           Node detail
/clusters/{ClusterId}/namespaces               Namespace management (generation-versioned via draft → publish; same boundary as drivers/tags)
/clusters/{ClusterId}/uns                      UNS structure management (areas, lines, drag-drop reorganize)
/clusters/{ClusterId}/equipment                Equipment list (default sorted by ZTag)
/clusters/{ClusterId}/equipment/{EquipmentId}  Equipment detail (5 identifiers, UNS placement, signals, audit)
/clusters/{ClusterId}/draft                    Draft editor (drivers/devices/equipment/tags)
/clusters/{ClusterId}/draft/diff               Draft vs current diff viewer
/clusters/{ClusterId}/generations              Generation history
/clusters/{ClusterId}/generations/{Id}         Generation detail (read-only view of any generation)
/clusters/{ClusterId}/audit                    Audit log filtered to this cluster
/credentials                                   Credential management (FleetAdmin only)
/audit                                         Fleet-wide audit log
/admin/users                                   Admin role assignments (FleetAdmin only)
```
## Core Pages

### Fleet Overview (`/`)

Single-page summary intended as the operator landing page.

- **Cluster cards**, one per `ServerCluster`, showing:
  - Cluster name, site, redundancy mode, node count
  - Per-node status: online/offline (from `ClusterNodeGenerationState.LastSeenAt`), current generation, RedundancyRole, ServiceLevel (last reported)
  - Drift indicator: red if a 2-node cluster's nodes are on different generations, amber if mid-apply, green if converged
- **Active alerts** strip (top of page):
  - Sticky crash-loop circuit alerts (per `driver-stability.md`)
  - Stragglers: nodes that haven't applied the latest published generation within 5 min
  - Failed applies (`LastAppliedStatus = 'Failed'`)
- **Recent activity**: last 20 events from `ConfigAuditLog` across the fleet
- **Search bar** at top: jump to any cluster, node, tag, or driver instance by name

Refresh: SignalR push for status changes; full reload every 30 s as a safety net.
### Cluster Detail (`/clusters/{ClusterId}`)

Tabbed view for one cluster.

**Tabs:**

1. **Overview** — cluster metadata (name, Enterprise, Site, redundancy mode), namespace summary (which kinds are configured + their URIs), node table with online/offline/role/generation/last-applied-status, current published generation summary, draft status (none / in progress / ready to publish)
2. **Namespaces** — list of `Namespace` rows for this cluster *in the current published generation* (Kind, NamespaceUri, Enabled). **Namespaces are generation-versioned** (revised after adversarial review finding #2): add / disable / re-enable a namespace by opening a draft, making the change, and publishing. The tab is read-only when no draft is open; an "Edit in draft" button opens the cluster's draft scoped to the namespace section. The Equipment kind is auto-included in the cluster's first generation; the SystemPlatform kind is added when a Galaxy driver is configured. The Simulated kind is reserved (an operator can add a row with `Kind = 'Simulated'` in a draft, but no driver populates it in v2.0; the UI shows an "Awaiting replay driver — see roadmap" placeholder).
3. **UNS Structure** — tree view of `UnsArea` → `UnsLine` → `Equipment` for this cluster's current published generation. Operators can:
   - Add/rename/delete areas and lines (changes go into the active draft)
   - Bulk-move lines between areas (drag-and-drop in the tree; a single edit propagates UNS path changes to all equipment under the moved line)
   - Bulk-move equipment between lines
   - View a live UNS path preview per node (`Enterprise/Site/Area/Line/Equipment`)
   - See validation errors inline (segment regex, length cap, `_default` placeholder rules)
   - See counts per tree node: # lines per area, # equipment per line, # signals per equipment
   - See path-rename impact: when renaming an area, the UI shows "X lines, Y equipment, Z signals will pick up the new path" before commit
4. **Drivers** — table of `DriverInstance` rows in the *current published* generation, with per-row namespace assignment shown. Per-row navigation to driver-specific config screens. An "Edit in draft" button creates or opens the cluster's draft.
5. **Devices** — table of `Device` rows (where applicable), grouped by `DriverInstance`
6. **Equipment** — table of `Equipment` rows in the current published generation, scoped to drivers in Equipment-kind namespaces. **Default sort: ZTag ascending** (the primary browse identifier per decision #117). Default columns:
   - `ZTag` (primary, bold, copyable)
   - `MachineCode` (secondary, e.g. `machine_001`)
   - Full UNS path (rendered live from cluster + UnsLine → UnsArea + Equipment.Name)
   - `SAPID` (when set)
   - `EquipmentUuid` (collapsed badge, copyable on click — "show UUID" toggle to expand)
   - `EquipmentClassRef` (placeholder until the schemas repo lands)
   - DriverInstance, DeviceId, Enabled

   The search bar supports any of the five identifiers (ZTag, MachineCode, SAPID, EquipmentId, EquipmentUuid) — the operator types and the search dispatches across all five, with a typeahead that disambiguates ("Found in ZTag" / "Found in MachineCode" labels on each suggestion). A per-row click opens the Equipment Detail page.
7. **Tags** — paged, filterable table of all tags. Filters: namespace kind, equipment (by ZTag/MachineCode/SAPID), driver, device, folder path, name pattern, data type. For Equipment-ns tags the path is shown as the full UNS path; for SystemPlatform-ns tags the v1-style `FolderPath/Name` is shown. Bulk operations toolbar: export to CSV, import from CSV (validated against the active draft).
8. **ACLs** — OPC UA client data-path authorization grants. Two views (toggle at top): "By LDAP group" (rows) and "By scope" (UNS tree with permission badges per node). Bulk-grant flow: pick group + permission bundle (`ReadOnly` / `Operator` / `Engineer` / `Admin`) or per-flag selection + scope (multi-select from tree or pattern), preview, confirm via draft. Permission simulator panel: enter username + LDAP groups → effective permission map across the cluster's UNS tree. A default seed on cluster creation maps v1 LmxOpcUa LDAP roles. A banner shows when this cluster's ACL set diverges from the seed. See `acl-design.md` for the full design.
9. **Generations** — generation history list (see Generation History page)
10. **Audit** — filtered audit log

The Drivers/Devices/Equipment/Tags tabs are **read-only views** of the published generation; editing happens in the dedicated draft editor to make the publish boundary explicit. The Namespaces and UNS Structure tabs follow the same hybrid pattern: navigation is read-only over the published generation, and click-to-edit on any node opens the draft editor scoped to that node. **No table in v2.0 is edited outside the publish boundary** (revised after adversarial review finding #2).
### Equipment Detail (`/clusters/{ClusterId}/equipment/{EquipmentId}`)

Per-equipment view. Form sections:

- **OPC 40010 Identification panel** (per the `_base` equipment-class template): operator-set static metadata exposed as OPC UA properties on the equipment node's `Identification` sub-folder — Manufacturer (required), Model (required), SerialNumber, HardwareRevision, SoftwareRevision, YearOfConstruction, AssetLocation (free text supplementary to the UNS path), ManufacturerUri (URL), DeviceManualUri (URL). Manufacturer + Model are required because the `_base` template declares them as `isRequired: true`; the rest are optional and can be filled in over time. Drivers that can read fields dynamically (e.g. FANUC `cnc_sysinfo()` returning `SoftwareRevision`) override the static value at runtime; otherwise the operator-set value flows through.
- **Identifiers panel**: all five identifiers, with explicit purpose labels and copy-to-clipboard buttons:
  - `ZTag` — editable; live fleet-wide uniqueness check via `ExternalIdReservation` (warns if the value is currently held by another EquipmentUuid; cannot save unless the reservation is released first)
  - `MachineCode` — editable; live within-cluster uniqueness check
  - `SAPID` — editable; same reservation-backed check as ZTag
  - `EquipmentId` — **read-only forever** (revised after adversarial review finding #4). System-generated as `'EQ-' + first 12 hex chars of EquipmentUuid`. Never operator-editable, never present in any input form, never accepted from CSV imports
  - `EquipmentUuid` — read-only forever (auto-generated UUIDv4 on creation, never editable; copyable badge with a "downstream consumers join on this" tooltip)
- **UNS placement panel**: UnsArea/UnsLine pickers (typeahead from the existing structure); `Equipment.Name` field with live segment validation; live full-path preview with a character counter
- **Class template panel**: `EquipmentClassRef` — free text in v2.0; becomes a typeahead picker when the schemas repo lands
- **Driver source panel**: DriverInstance + DeviceId pickers (filtered to drivers in Equipment-kind namespaces of this cluster)
- **Signals panel**: list of `Tag` rows belonging to this equipment; inline edit is not supported here (use the Draft Editor's Tags panel for editing); read-only with an "Edit in draft" deep link
- **Audit panel**: filtered audit log scoped to this equipment row across generations
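
The system-generated `EquipmentId` rule above (`'EQ-' + first 12 hex chars of EquipmentUuid`) can be sketched in one line; the method name is hypothetical:

```csharp
// EquipmentId is derived once from the UUID and never changes afterwards.
public static string DeriveEquipmentId(Guid equipmentUuid) =>
    "EQ-" + equipmentUuid.ToString("N")[..12];   // "N" = 32 hex chars, no dashes

// e.g. DeriveEquipmentId(Guid.Parse("3f2a9c1e-8b4d-4e6f-9a01-234567890abc"))
//      → "EQ-3f2a9c1e8b4d"
```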
### Node Detail (`/clusters/{ClusterId}/nodes/{NodeId}`)

Per-node view for `ClusterNode` management.

- **Physical attributes** form: Host, OpcUaPort, DashboardPort, ApplicationUri, ServiceLevelBase, RedundancyRole
- **ApplicationUri auto-suggest** behavior (per decision #86):
  - When creating a new node: prefilled with `urn:{Host}:OtOpcUa`
  - When editing an existing node: changing `Host` shows a warning banner — "ApplicationUri is not updated automatically. Changing it will require all OPC UA clients to re-establish trust." The operator must explicitly click an "Update ApplicationUri" button to apply the suggestion.
- **Credentials** sub-tab: list of `ClusterNodeCredential` rows (kind, value, enabled, rotated-at). FleetAdmin can add/disable/rotate. The credential rotation flow is documented inline ("create new credential → wait for node to use it → disable old credential").
- **Per-node overrides** sub-tab: structured editor for `DriverConfigOverridesJson`. Surfaces the cluster's `DriverInstance` rows with their current `DriverConfig`, and lets the operator add path → value override entries per driver. Validation: the override path must exist in the current draft's `DriverConfig`; loud failure if it doesn't (per the merge semantics in the schema doc).
- **Generation state**: current applied generation, last-applied timestamp, last-applied status, last error if any
- **Recent node activity**: filtered audit log
### Draft Editor (`/clusters/{ClusterId}/draft`)

The primary edit surface. Three-panel layout: tree on the left (Drivers → Devices → Equipment → Tags, with Equipment shown only for drivers in Equipment-kind namespaces), edit form on the right, validation panel at the bottom.

- **Drivers panel**: add/edit/remove `DriverInstance` rows in the draft. Each driver type opens a driver-specific config screen (deferred per #27). Generic fields (Name, NamespaceId, Enabled) are always editable. The NamespaceId picker is filtered to namespace kinds that are valid for the chosen driver type (e.g. selecting `DriverType=Galaxy` restricts the picker to SystemPlatform-kind namespaces only).
- **Devices panel**: scoped to the selected driver instance (where applicable)
- **UNS Structure panel** (Equipment-ns drivers only): tree of UnsArea → UnsLine; CRUD on areas and lines; rename and move operations with live impact preview ("renaming bldg-3 → bldg-3a will update 12 lines, 47 equipment, 1,103 signal paths"); the validator rejects identity reuse with a different parent
- **Equipment panel** (Equipment-ns drivers only):
  - Add/edit/remove `Equipment` rows scoped to the selected driver
  - Inline form sections:
    - **Identifiers**: `MachineCode` (required, e.g. `machine_001`, validates within-cluster uniqueness live); `ZTag` (optional, ERP id, validates fleet-wide uniqueness via a live `ExternalIdReservation` lookup — surfaces "currently reserved by EquipmentUuid X in cluster Y" on collision); `SAPID` (optional, SAP PM id, same reservation-backed check)
    - **UNS placement**: `UnsLineId` picker (typeahead from the existing structure, or "Create new line" inline); `Name` (UNS level 5, live segment validation `^[a-z0-9-]{1,32}$`)
    - **Class template**: `EquipmentClassRef` (free text in v2.0; becomes a typeahead picker when the schemas repo lands)
    - **Source**: `DeviceId` (when the driver has multiple devices); `Enabled`
  - **`EquipmentUuid` is auto-generated UUIDv4 on creation, displayed read-only as a copyable badge**, never editable. **`EquipmentId` is also auto-generated** (`'EQ-' + first 12 hex chars of EquipmentUuid`) and never editable in any form. Both stay constant across renames, MachineCode/ZTag/SAPID edits, and area/line moves. The validator rejects any draft that tries to change either value on published equipment.
  - **Live UNS path preview** above the form: `{Cluster.Enterprise}/{Cluster.Site}/{UnsArea.Name}/{UnsLine.Name}/{Name}` with a character count and ≤200-limit indicator
  - Bulk operations:
    - Move many equipment from one line to another (UUIDs and identifiers preserved)
    - Bulk-edit MachineCode/ZTag/SAPID via an inline grid (validation per row)
    - Bulk-create equipment from CSV (one row per equipment; UUIDs auto-generated for new rows)
- **Tags panel**:
  - Tree view: by Equipment when in Equipment-ns; by `FolderPath` when in SystemPlatform-ns
  - Inline edit for individual tags (Name, DataType, AccessLevel, WriteIdempotent, PollGroupId, TagConfig JSON in a structured editor)
  - **Bulk operations**: select multiple tags → bulk edit (change poll group, access level, etc.)
  - **CSV import** schemas (one per namespace kind):
    - Equipment-ns: `(EquipmentId, Name, DataType, AccessLevel, WriteIdempotent, PollGroupId, TagConfig)`
    - SystemPlatform-ns: `(DriverInstanceId, DeviceId?, FolderPath, Name, DataType, AccessLevel, WriteIdempotent, PollGroupId, TagConfig)`
    - The preview shows additions/modifications/removals against the current draft, with row-level validation errors. The operator confirms or cancels.
  - **CSV export**: emits the matching shape from the current published generation
- **Equipment CSV import** (separate flow): bulk-create-or-update equipment. Columns: `(EquipmentUuid?, MachineCode, ZTag?, SAPID?, UnsAreaName, UnsLineName, Name, DriverInstanceId, DeviceId?, EquipmentClassRef?)`. **No `EquipmentId` column** (revised after adversarial review finding #4 — an operator-supplied EquipmentId would mint a duplicate equipment identity on typos):
  - **Row with `EquipmentUuid` set**: matches existing equipment by UUID and updates the matched row's editable fields (MachineCode/ZTag/SAPID/UnsLineId/Name/EquipmentClassRef/DeviceId/Enabled). A mismatched UUID is an error; the row is aborted.
  - **Row without `EquipmentUuid`**: creates new equipment. The system generates a fresh UUID and `EquipmentId = 'EQ-' + first 12 hex chars`. This path cannot update an existing row — the operator must include the UUID for updates.
  - UnsArea/UnsLine are resolved by name within the cluster (auto-created if not present, with a validation prompt).
  - Identifier uniqueness checks run row-by-row, with errors surfaced before commit. ZTag/SAPID are checked against `ExternalIdReservation` — collisions surface inline with the conflicting EquipmentUuid named.
  - Explicit "merge equipment A into B" or "rebind ZTag from A to B" operations are not in the CSV import path — see the Merge / Rebind operator flow below.
- **Validation panel**: runs `sp_ValidateDraft` continuously (debounced 500 ms) and surfaces FK errors, JSON schema errors, duplicate paths, missing references, UNS naming-rule violations, UUID-immutability violations, and driver-type-vs-namespace-kind mismatches. The publish button is disabled while errors exist.
- **Diff link** at top: opens the diff viewer comparing the draft against the current published generation
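
The segment and path rules used throughout this editor (`^[a-z0-9-]{1,32}$` per segment, full UNS path ≤ 200 chars) can be sketched as a small validator; the class and method names are hypothetical:

```csharp
using System.Linq;
using System.Text.RegularExpressions;

public static class UnsPathRules
{
    // Segment rule from this doc: lowercase alphanumerics and dashes, 1–32 chars.
    static readonly Regex Segment = new("^[a-z0-9-]{1,32}$", RegexOptions.Compiled);
    const int MaxPathLength = 200;   // full-path cap from this doc

    // Validates one UNS level (Enterprise, Site, Area, Line, or Equipment name).
    public static bool IsValidSegment(string s) => Segment.IsMatch(s);

    // Joins Enterprise/Site/Area/Line/Equipment and checks the length cap.
    public static bool TryBuildPath(string[] segments, out string path)
    {
        path = string.Join('/', segments);
        return segments.All(IsValidSegment) && path.Length <= MaxPathLength;
    }
}
```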
### Diff Viewer (`/clusters/{ClusterId}/draft/diff`)

Three-column compare: previous published | draft | summary. Per-table sections (drivers, devices, tags, poll groups) with rows colored by change type:

- Green: added in draft
- Red: removed in draft
- Yellow: modified (with field-level diff on hover/expand)

Includes a **publish dialog** triggered from this view: a required Notes field, plus a choice of "publish and apply now" vs. "publish and let nodes pick up on next poll" (the latter is the default; the former invokes a one-shot push notification, deferred per the existing plan).
### Generation History (`/clusters/{ClusterId}/generations`)

List of all generations for the cluster with: ID, status, published-by, published-at, notes, and a per-row "Roll back to this" action (FleetAdmin or ConfigEditor). Clicking a row opens the generation detail page (a read-only view of all rows in that generation, with a diff-against-current button).

Rollback flow:

1. Operator clicks "Roll back to this generation"
2. Modal: "This will create a new published generation cloned from generation N. Both nodes of this cluster will pick up the change on their next poll. Notes (required):"
3. Confirm → invokes `sp_RollbackToGeneration` → immediate UI feedback that a new generation was published
### Credential Management (`/credentials`)

FleetAdmin-only. Lists all `ClusterNodeCredential` rows fleet-wide, filterable by cluster/node/kind/enabled.

Operations: add a credential to a node, disable a credential, mark a credential rotated. Rotation is the most common operation — the UI provides a guided flow ("create new → confirm the node has used it once via a `LastAppliedAt` advance → disable old").

### Fleet Audit (`/audit`)

Searchable / filterable view of `ConfigAuditLog` across all clusters. Filters: cluster, node, principal, event type, date range. Export to CSV for compliance.
## Real-Time Updates

Blazor Server runs over SignalR by default. The Admin app uses two SignalR hubs:

| Hub | Purpose |
|-----|---------|
| `FleetStatusHub` | Push `ClusterNodeGenerationState` changes (LastSeenAt updates, applied-generation transitions, status changes) to any open Fleet Overview or Cluster Detail page |
| `AlertHub` | Push new sticky alerts (crash-loop circuit trips, failed applies) to all subscribed pages |

Updates fan out from a backend `IHostedService` that polls `ClusterNodeGenerationState` every 5 s and diffs against last-known state. Pages subscribe selectively (the Cluster Detail page subscribes to one cluster's updates; Fleet Overview subscribes to all). No polling from the browser.
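
The poll-and-diff fan-out can be sketched as follows — `FleetStatusHub` is this design's hub, but the DbContext name, entity property names, and client method name are illustrative, and real code would group clients per cluster rather than broadcast to all:

```csharp
// Sketch only — data-access and message names are assumptions.
public sealed class FleetStatusPoller(
    IHubContext<FleetStatusHub> hub,
    IServiceScopeFactory scopes) : BackgroundService
{
    private Dictionary<string, string> _lastKnown = new(); // NodeId → applied generation

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        using var timer = new PeriodicTimer(TimeSpan.FromSeconds(5)); // 5 s poll per this doc
        while (await timer.WaitForNextTickAsync(ct))
        {
            using var scope = scopes.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<ConfigDbContext>();
            var current = await db.ClusterNodeGenerationStates
                .ToDictionaryAsync(s => s.NodeId, s => s.AppliedGenerationId, ct);

            // Push only rows whose state changed since the last poll.
            foreach (var (nodeId, gen) in current)
                if (!_lastKnown.TryGetValue(nodeId, out var prev) || prev != gen)
                    await hub.Clients.All.SendAsync("NodeStateChanged", nodeId, gen, ct);

            _lastKnown = current;
        }
    }
}
```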
## UX Rules

- **Sticky alerts that don't auto-clear** — per the crash-loop circuit-breaker rule in `driver-stability.md`, alerts in the Active Alerts strip require explicit operator acknowledgment before clearing, regardless of whether the underlying state has recovered. "We crash-looped 3 times overnight" must remain visible the next morning.
- **The publish boundary is explicit** — there is no "edit in place" path. All changes go through draft → diff → publish. The diff viewer is required reading before the publish dialog enables.
- **Loud failures over silent fallbacks** — if validation fails, the publish button is disabled and the failures are listed; we never publish a generation with warnings hidden. If a node override path doesn't resolve in the draft, the override editor flags it red, not yellow.
- **No auto-rewrite of `ApplicationUri`** — see the Node Detail page above. The principle generalizes: any field that OPC UA clients pin trust to (`ApplicationUri`, certificate thumbprints) requires explicit operator action to change, never a silent update.
- **Bulk operations always preview before commit** — CSV imports, bulk tag edits, and rollbacks all show a diff and require confirmation. No "apply" buttons that act without preview.
## Per-Driver Config Screens (deferred)

Per decision #27, driver-specific config screens are added in each driver's implementation phase, not up front. The Admin app provides:

- A pluggable `IDriverConfigEditor` interface in `Configuration.Abstractions`
- Driver projects implement an editor that renders into a slot on the Driver Detail screen
- For drivers that don't yet have a custom editor, a generic JSON editor with schema-driven validation is used (better than nothing, ugly but functional)

The generic JSON editor uses the per-driver JSON schema from `DriverTypeRegistry`, so validation works even before a custom editor exists.

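A plausible shape for that plug-in contract. `IDriverConfigEditor` and its home in `Configuration.Abstractions` come from this document; every member below is an assumption:

```csharp
using Microsoft.AspNetCore.Components;

// Sketch only: the interface name is from this doc; the members are illustrative.
public interface IDriverConfigEditor
{
    // Driver type this editor handles, matching a DriverTypeRegistry entry.
    string DriverTypeId { get; }

    // Renders into the Driver Detail slot. Receives the current DriverConfig
    // JSON and a callback that writes edited JSON back into the open draft.
    RenderFragment Render(string driverConfigJson, EventCallback<string> onChanged);
}
```

Driver projects without an implementation simply fall back to the generic schema-validated JSON editor described above.
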
## Workflows

### Add a new cluster

1. FleetAdmin: `/clusters` → "New cluster"
2. Form: Name, **Enterprise** (UNS level 1; default-prefilled `zb` per the org-wide canonical value, validated `^[a-z0-9-]{1,32}$`), **Site** (UNS level 2, e.g. `warsaw-west`, same validation), NodeCount (1 or 2), RedundancyMode (auto-set based on NodeCount)
3. Save → cluster row created (`Enabled = 1`, no generations yet)
4. **Open initial draft** containing default namespaces:
   - Equipment-kind namespace (`NamespaceId = {ClusterName}-equipment`, `NamespaceUri = urn:{Enterprise}:{Site}:equipment`). The operator can edit the URI in the draft before publish.
   - Prompt: "Will this cluster host a Galaxy / System Platform driver?" → if yes, the draft also includes a SystemPlatform-kind namespace (`urn:{Enterprise}:{Site}:system-platform`). If no, skip — the operator can add it later via a draft.
5. Operator reviews the initial draft, optionally adds an initial set of drivers/equipment, then publishes generation 1. The cluster cannot serve any consumer until generation 1 is published (no namespaces exist before that).
6. Redirect to Cluster Detail; prompt to add nodes via the Node tab (cluster topology) — node addition itself remains cluster-level since `ClusterNode` rows are physical-machine topology, not consumer-visible content.

(Revised after adversarial review finding #2 — namespaces must travel through the publish boundary; the cluster-create flow no longer writes namespace rows directly.)

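The `^[a-z0-9-]{1,32}$` rule from step 2 above, as a small helper (the class name is hypothetical; the pattern is from this document):

```csharp
using System.Text.RegularExpressions;

public static class UnsNameValidator
{
    // The rule from the cluster form: 1–32 chars, lowercase letters, digits, or hyphens.
    private static readonly Regex UnsLevel = new(@"^[a-z0-9-]{1,32}$", RegexOptions.Compiled);

    public static bool IsValid(string value) => UnsLevel.IsMatch(value);
}

// UnsNameValidator.IsValid("warsaw-west") → true
// UnsNameValidator.IsValid("Warsaw West") → false (uppercase and space rejected)
```

The same helper would validate both Enterprise and Site, since the form applies one rule to both levels.
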
### Add a node to a cluster

1. Cluster Detail → "Add node"
2. Form: NodeId, RedundancyRole, Host (required), OpcUaPort (default 4840), DashboardPort (default 8081), ApplicationUri (auto-prefilled `urn:{Host}:OtOpcUa`), ServiceLevelBase (auto: Primary=200, Secondary=150)
3. Save
4. Prompt: "Add a credential for this node now?" → opens the credential add flow
5. The node won't be functional until at least one credential is added and provisioned on the node's machine (an out-of-band step documented in the deployment guide)

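The defaults from step 2 can be sketched as a form model. The record shape is an assumption; the values (ports 4840/8081, service levels 200/150, the `urn:{Host}:OtOpcUa` pattern) come from the form above:

```csharp
public enum RedundancyRole { Primary, Secondary }

// Hypothetical form model — only the default values are from this document.
public sealed record NewNodeForm(string NodeId, RedundancyRole Role, string Host)
{
    public int OpcUaPort { get; init; } = 4840;
    public int DashboardPort { get; init; } = 8081;

    // Prefilled but editable — and never auto-rewritten later (see UX Rules).
    public string ApplicationUri { get; init; } = $"urn:{Host}:OtOpcUa";

    // Primary = 200, Secondary = 150 per the form defaults.
    public int ServiceLevelBase => Role == RedundancyRole.Primary ? 200 : 150;
}
```
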
### Edit drivers/tags and publish

1. Cluster Detail → "Edit configuration" → opens the draft editor (creates a draft generation if none exists)
2. Operator edits drivers, devices, tags, poll groups
3. Validation panel updates live; publish is disabled while errors exist
4. Operator clicks "Diff" → diff viewer
5. Operator clicks "Publish" → modal asks for Notes, confirms
6. `sp_PublishGeneration` runs in a transaction; on success, the draft becomes the new published generation and the previous published generation becomes superseded
7. Within ~30 s (default poll interval), both nodes pick up the new generation; the Cluster Detail page shows live progress as `LastAppliedAt` advances on each node

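This doc names `sp_PublishGeneration` but not its signature; a hedged sketch of how the Admin backend might invoke it via EF Core, with an assumed parameter list and `ConfigDbContext` type:

```csharp
using Microsoft.EntityFrameworkCore;

public static class PublishService
{
    // Sketch only: the proc name is from this doc; parameters and context are assumptions.
    public static async Task PublishAsync(ConfigDbContext db, int clusterId,
                                          int draftGenerationId, string notes,
                                          string principal, CancellationToken ct)
    {
        // The proc itself runs in a transaction per step 6; wrapping here keeps
        // any app-side audit writes atomic with the publish.
        await using var tx = await db.Database.BeginTransactionAsync(ct);
        await db.Database.ExecuteSqlInterpolatedAsync(
            $"EXEC sp_PublishGeneration @ClusterId={clusterId}, @GenerationId={draftGenerationId}, @Notes={notes}, @Principal={principal}",
            ct);
        await tx.CommitAsync(ct);
    }
}
```

`ExecuteSqlInterpolatedAsync` parameterizes the interpolated values, so Notes text cannot inject into the statement.
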
### Roll back

1. Cluster Detail → Generations tab → find the target generation → "Roll back to this"
2. Modal: explains that a new generation will be created (a clone of the target) and published; requires Notes
3. Confirm → `sp_RollbackToGeneration` runs
4. Same propagation as a forward publish — both nodes pick up the new generation on the next poll

### Override a setting per node

1. Node Detail → Overrides sub-tab
2. Pick a driver instance from the dropdown → schema-driven editor shows the current `DriverConfig` keys
3. Add an override row: select the key path (validated against the driver's JSON schema), enter the override value
4. Save → updates `ClusterNode.DriverConfigOverridesJson`
5. **No new generation created** — overrides are per-node metadata, not generation-versioned. They take effect on the node's next config-apply cycle.

The "no new generation" choice is deliberate: overrides are operationally bound to a specific physical machine, not to the cluster's logical config evolution. A node-replacement scenario would copy the override to the replacement node via the credential/override migration flow, not by replaying generation history.

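A hypothetical `DriverConfigOverridesJson` payload. The column name is from this doc; the driver-instance key and the key paths are purely illustrative:

```json
{
  "driver-instance-01": {
    "Polling.DefaultIntervalMs": 1000,
    "Connection.Host": "10.0.12.7"
  }
}
```

Each key path would be one the driver's JSON schema recognizes, per the validation in step 3.
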
### Rotate a credential

1. Node Detail → Credentials sub-tab → "Add credential"
2. Pick Kind, enter Value, save → the new credential is enabled alongside the old
3. Wait for `LastAppliedAt` on the node to advance (proves the node is using the new credential — operator-side work to provision the new credential on the node's machine happens out-of-band)
4. Once verified, disable the old credential → only the new one is valid

### Release an external-ID reservation

When equipment is permanently retired and its `ZTag` or `SAPID` needs to be reusable by a different physical asset (a known-rare event):

1. FleetAdmin: navigate to the Equipment Detail of the retired equipment, or to a global "External ID Reservations" view
2. Select the reservation (Kind + Value), click "Release"
3. Modal requires: confirmation of the EquipmentUuid that currently holds the reservation, and a free-text **release reason** (compliance audit trail)
4. Confirm → `sp_ReleaseExternalIdReservation` runs: sets `ReleasedAt`, `ReleasedBy`, `ReleaseReason`. Audit-logged with `EventType = 'ExternalIdReleased'`.
5. The same `(Kind, Value)` can now be reserved by a different EquipmentUuid in a future publish. The released row stays in the table forever for audit.

This is the **only** path that allows ZTag/SAPID reuse — no implicit release on equipment disable, no implicit release on cluster delete. Requires explicit FleetAdmin action with a documented reason.

### Merge or rebind equipment (rare)

When operators discover that two `EquipmentRow`s in different generations actually represent the same physical asset (e.g. a typo created a duplicate) — or when an asset's identity has been incorrectly split across UUIDs — the resolution is **not** an in-place EquipmentId edit (which is now impossible per finding #4). Instead:

1. FleetAdmin: Equipment Detail of the row that should be retained → "Merge from another EquipmentUuid"
2. Pick the source EquipmentUuid (the one to retire); a modal shows a side-by-side diff of identifiers and signal counts
3. Confirm → opens a **draft** that:
   - Disables the source equipment row (`Enabled = 0`) and adds an `EventType = 'EquipmentMergedAway'` audit entry naming the target UUID
   - Re-points any tags currently on the source equipment to the target equipment
   - If the source held a ZTag/SAPID reservation that should move to the target: explicit release of the source's reservation followed by re-reservation under the target UUID, both audit-logged
4. Operator reviews the draft diff; publishes
5. Downstream consumers see the source EquipmentUuid disappear (joins on it return historical data only) and the target EquipmentUuid gain the merged tags

Merge is a destructive lineage operation — the source EquipmentUuid is never reused, but its history persists in old generations + the audit log. Rare by intent; the UI buries the action behind two confirmation prompts.

## Deferred / Out of Scope

- **Per-driver custom config editors** — added in each driver's implementation phase
- **Tag template / inheritance** — define a tag pattern once and apply it to many similar device instances; deferred until the bulk import path proves insufficient
- **Multi-cluster synchronized publish** — push a configuration change across many clusters atomically. Out of scope; orchestrate via per-cluster publishes from a script if needed.
- **Mobile / tablet layout** — desktop-only initially
- **Role grants editor in UI** — initial v2 manages LDAP group → admin role mappings via `appsettings.json`; UI editor surfaced later

## Decisions / Open Questions

**Decided** (captured in `plan.md` decision log):

- Blazor Server tech stack (vs. SPA + API)
- **Visual + auth parity with ScadaLink CentralUI** — Bootstrap 5, dark sidebar, server-rendered login form, cookie auth + JWT API endpoint, copied shared component set, reconnect overlay
- LDAP for operator auth via `LdapAuthService` + `RoleMapper` + `JwtTokenService` mirrored from `ScadaLink.Security`
- Three admin roles: FleetAdmin / ConfigEditor / ReadOnly, with cluster-scoped grants in v2.0 (mirrored from ScadaLink's site-scoped pattern)
- Draft → diff → publish is the only edit path; no in-place edits
- Sticky alerts require manual ack
- Per-node overrides are NOT generation-versioned
- **All content edits go through the draft → diff → publish boundary** — Namespaces, UNS Structure, Drivers, Devices, Equipment, Tags. The UNS Structure and Namespaces tabs are hybrid (read-only navigation over the published generation; click-to-edit opens the draft editor scoped to that node). No table is editable outside the publish boundary in v2.0 (revised after adversarial review finding #2 — an earlier draft mistakenly treated namespaces as cluster-level)
- **Equipment list defaults to ZTag sort** (primary browse identifier per the 3-year-plan handoff). All five identifiers (ZTag/MachineCode/SAPID/EquipmentId/EquipmentUuid) are searchable; typeahead disambiguates which field matched
- **EquipmentUuid is read-only forever** in the UI; never editable. Auto-generated UUIDv4 on equipment creation, displayed as a copyable badge

**Resolved Defaults**:

- **Styling: Bootstrap 5 vendored** (not MudBlazor or Fluent UI). Direct parity with ScadaLink CentralUI; standard component vocabulary; no Blazor-specific component-library dependency. Reverses an earlier draft choice — the cross-app consistency requirement outweighs MudBlazor's component conveniences.
- **Theme: light only (single theme matching ScadaLink).** ScadaLink ships light-only with the dark-sidebar / light-main pattern. Operators using both apps see one consistent aesthetic. Reverses an earlier draft choice that proposed both light and dark — cross-app consistency wins. Revisit only if ScadaLink adds dark mode.
- **CSV import dialect: strict CSV (RFC 4180), UTF-8 BOM accepted.** Excel "Save as CSV (UTF-8)" produces RFC 4180-compatible output and is the documented primary input format. TSV is not supported initially; add it only if operator feedback shows real friction with Excel CSV.
- **Push notification deferred to v2.1; polling is the initial model.** SignalR-from-DB-to-nodes would tighten apply latency from ~30 s to ~1 s but adds infrastructure (a SignalR backplane or SQL Service Broker) that isn't earning its keep at v2.0 scale. The publish dialog reserves a disabled **"Push now"** button labeled "Available in v2.1" so the future UX is anchored.
- **Auto-save drafts with explicit Discard button.** Every form-field change writes to the draft rows immediately (debounced 500 ms). The Discard button shows a confirmation dialog ("Discard all changes since last publish?") and rolls the draft generation back to empty. The Publish button is the only commit; auto-save does not publish.
- **Cluster-scoped admin grants in v2.0** (lifted from the v2.1 deferred list). ScadaLink already ships the equivalent site-scoped pattern, so we get cluster-scoped grants essentially for free by mirroring it. `RoleMapper` reads an `LdapGroupRoleMapping` table; cluster-scoped users carry `ClusterId` claims and see only their permitted clusters.

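The 500 ms debounced auto-save above could be sketched with a restartable timer. Apart from the 500 ms figure and the save/publish split, everything here is an assumption:

```csharp
using System;
using System.Threading.Tasks;

public sealed class DraftAutoSaver : IDisposable
{
    private readonly System.Timers.Timer _debounce = new(500) { AutoReset = false };
    private readonly Func<Task> _saveDraft; // hypothetical: writes pending edits to draft rows

    public DraftAutoSaver(Func<Task> saveDraft)
    {
        _saveDraft = saveDraft;
        _debounce.Elapsed += async (_, _) => await _saveDraft();
    }

    // Called from each field's change handler. Restarting the timer means the
    // save fires only after 500 ms with no further edits — never on every keystroke.
    public void OnFieldChanged()
    {
        _debounce.Stop();
        _debounce.Start();
    }

    public void Dispose() => _debounce.Dispose();
}
```

Note the save path only touches draft rows; per the rule above, publishing remains a separate explicit action.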