# Legacy Integrations Inventory
The authoritative list of **legacy point-to-point integrations** that currently run **outside ScadaBridge** and must be retired by the end of the plan (see `goal-state.md` → Success Criteria → pillar 3).
> This file is the **denominator** for the zero-count retirement target. If an integration is not captured here, it cannot be tracked to retirement — so capture first, argue about classification later.
## How to use this file
- **One row per integration.** If the same logical integration runs at multiple sites with the same code path, it's one row with multiple sites listed. If each site runs a meaningfully different variant, split into separate rows.
- **"Legacy" here means any IT↔OT integration path that crosses the ScadaBridge-central boundary without going through ScadaBridge.** In the target architecture, **ScadaBridge central is the sole IT↔OT crossing point**; anything that crosses that boundary via another path — Web API interfaces exposed by the Aveva System Platform primary cluster, custom services, scheduled jobs, file drops, direct DB links, etc. — is legacy and in scope for retirement.
- **Not in scope for this inventory:** OT-internal traffic. System Platform ↔ System Platform traffic over Global Galaxy, site-level ScadaBridge ↔ local equipment, site System Platform clusters ↔ central System Platform cluster — all of this stays on the OT side in the target architecture and is not tracked here.
- **Fields are described in the schema below.** Missing values are fine during discovery — mark them `_TBD_` rather than leaving them blank, so gaps are visible.
- **Do not remove rows.** Retired integrations stay in the file with `Status: Retired` and a retirement date, so pillar 3 progress is auditable.
## Field schema
| Field | Description |
|---|---|
| `ID` | Short stable identifier (e.g., `LEG-001`). Never reused. |
| `Name` | Human-readable name of the integration. |
| `Source System` | System the data comes from (e.g., System Platform primary cluster, specific SCADA node, MES, PLC). |
| `Target System` | System the data goes to (e.g., Camstar MES, Delmia DNC, a specific database, an external partner). |
| `Direction` | One of: `Source→Target`, `Target→Source`, `Bidirectional`. |
| `Transport` | How the integration moves data (e.g., Web API, direct DB link, file drop, OPC DA, scheduled SQL job, SOAP). |
| `Site(s)` | Sites where this integration runs. `All integrated sites` is acceptable shorthand. |
| `Traffic Volume` | Rough order-of-magnitude (e.g., ~10 req/min, ~1k events/day, ~50 MB/day, unknown). |
| `Business Purpose` | One sentence: what this integration exists to do. |
| `Current Owner` | Team or person responsible today. |
| `Dependencies` | Other integrations or systems that depend on this one continuing to work. |
| `Migration Target` | What ScadaBridge pattern replaces it (e.g., `ScadaBridge inbound Web API`, `ScadaBridge → EventHub topic X`, `ScadaBridge outbound Web API call`, `ScadaBridge DB write`). |
| `Retirement Criteria` | Concrete, testable conditions that must be true before this integration can be switched off (e.g., "all consumers reading topic `mes.workorder.started` in prod for 30 days"). |
| `Status` | One of: `Discovered`, `Planned`, `In Migration`, `Dual-Run`, `Retired`. |
| `Retirement Date` | Actual retirement date once `Status = Retired`. |
| `Notes` | Anything else — known risks, gotchas, related tickets. |
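
For teams that keep a machine-readable copy of this inventory, the schema above is easy to enforce mechanically. The sketch below is a minimal, hypothetical validator, assuming rows are exported as dicts: the field names, the `Status` vocabulary, and the `LEG-###` ID pattern come from the schema above, but the export itself and every function name here are assumptions, not an existing tool.

```python
import re

# Allowed Status values, per the field schema above.
STATUSES = {"Discovered", "Planned", "In Migration", "Dual-Run", "Retired"}
ID_PATTERN = re.compile(r"^LEG-\d{3}$")

def validate(row: dict) -> list[str]:
    """Return a list of problems with one inventory row (empty list = clean)."""
    problems = []
    if not ID_PATTERN.match(row.get("ID", "")):
        problems.append(f"bad ID: {row.get('ID')!r}")
    if row.get("Status") not in STATUSES:
        problems.append(f"bad Status: {row.get('Status')!r}")
    # Retired rows must keep a retirement date so pillar 3 stays auditable.
    if row.get("Status") == "Retired" and row.get("Retirement Date") in (None, "", "—"):
        problems.append("Retired row missing Retirement Date")
    return problems

def pillar3_progress(rows: list[dict]) -> tuple[int, int]:
    """(retired, denominator) — the zero-count target is met when they are equal."""
    retired = sum(1 for r in rows if r["Status"] == "Retired")
    return retired, len(rows)

rows = [
    {"ID": "LEG-001", "Status": "Discovered", "Retirement Date": "—"},
    {"ID": "LEG-002", "Status": "Discovered", "Retirement Date": "—"},
    {"ID": "LEG-003", "Status": "Discovered", "Retirement Date": "—"},
]
assert all(not validate(r) for r in rows)
print(pillar3_progress(rows))  # (0, 3) while all three are merely Discovered
```

Because retired rows are never deleted (see "How to use this file"), the denominator only ever grows, which keeps the progress fraction honest.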
## Current inventory
> **Discovery complete — denominator = 3.** Three legacy IT↔OT integrations are tracked: **LEG-001 Delmia DNC** (Aveva Web API), **LEG-002 Camstar MES** (Aveva Web API, Camstar-initiated), and **LEG-003 custom email notification service**. Field-level details for these three are still largely TBD and will be filled in as migration planning proceeds, but no further integrations are expected to be added — the inventory is closed as the pillar 3 denominator unless new evidence surfaces during migration work. See **Deliberately not tracked** below for categories explicitly carved out of the definition.
### LEG-001 — Aveva Web API ↔ Delmia DNC
| Field | Value |
|---|---|
| **ID** | LEG-001 |
| **Name** | Aveva System Platform Web API — Delmia DNC interface |
| **Source System** | Aveva System Platform primary cluster (initiates the download request) **and** Delmia DNC (initiates the completion notification back). |
| **Target System** | Delmia DNC (recipe library / download service) on one leg; Aveva System Platform primary cluster + equipment on the return leg. |
| **Direction** | **Bidirectional — orchestrated handshake.** <br>(1) **System Platform → Delmia:** System Platform triggers a Delmia download ("fetch recipe X"). <br>(2) **Delmia → System Platform:** when the download completes, Delmia notifies System Platform so System Platform can parse/load the recipe file to the equipment. <br>Both legs go over the Web API on the Aveva System Platform primary cluster. |
| **Transport** | Web API on the primary cluster — used in both directions (System Platform as client for leg 1, System Platform Web API as server for leg 2). |
| **Site(s)** | _TBD — presumably all sites with equipment that consumes Delmia recipe files, confirm_ |
| **Traffic Volume** | _TBD — recipe download events are typically low-frequency compared to tag data, order-of-magnitude unknown_ |
| **Business Purpose** | **Delmia DNC distributes recipe (NC program) files to equipment**, with **System Platform orchestrating the handshake**. The sequence is: System Platform triggers Delmia to download a recipe file → Delmia fetches it → Delmia notifies System Platform that the download is ready → System Platform parses/loads the recipe file to the equipment. Parsing is required for some equipment classes; pass-through for others. |
| **Current Owner** | _TBD_ |
| **Dependencies** | Equipment that consumes Delmia recipe files is the downstream dependency. Any equipment requiring the System Platform parse step is a harder-to-migrate case than equipment that accepts a raw file. |
| **Migration Target** | ScadaBridge replaces the System Platform Web API as the Delmia-facing surface on **both legs** of the handshake: <br>— **Leg 1 (trigger):** ScadaBridge calls Delmia via its **outbound Web API client** capability to trigger a recipe download (replacing the System Platform → Delmia call). <br>— **Leg 2 (notification):** ScadaBridge exposes an **inbound secure Web API** endpoint that Delmia calls to notify download completion (replacing the Delmia → System Platform Web API callback). <br>— **Parse/load step:** ScadaBridge **scripts** (C#/Roslyn) re-implement the recipe-parse logic that currently runs on System Platform; ScadaBridge then writes the parsed result to equipment via its OPC UA write path. <br>_TBD — whether the parse logic lives per-site or centrally; whether Delmia can target per-site ScadaBridge endpoints directly or must go through ScadaBridge central; how orchestration state (pending triggers, in-flight downloads) is held during the handshake, especially across a WAN outage that separates the trigger and the notification._ |
| **Retirement Criteria** | _TBD — almost certainly requires per-equipment-class validation that the ScadaBridge-parsed output matches the System Platform-parsed output byte-for-byte, for any equipment where parsing is involved._ |
| **Status** | Discovered |
| **Retirement Date** | — |
| **Notes** | One of the two existing Aveva Web API interfaces documented in `../current-state.md`. Unlike LEG-002 (which has a ready replacement in ScadaBridge's native Camstar path), **LEG-001 requires new work** — the recipe-parse logic currently on System Platform has to be re-implemented (most likely as ScadaBridge scripts) before the legacy path can be retired. Expected to be a **harder retirement** than LEG-002. |
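
The orchestration-state question flagged in the Migration Target row (pending triggers surviving the gap between the leg 1 trigger and the leg 2 notification, including across a WAN outage) is, at its core, a small durable state machine. The sketch below is a hypothetical illustration only, not ScadaBridge code: every name in it is invented, and it assumes pending triggers are persisted before the outbound call so the completion callback can still be matched after a crash or restart.

```python
import json
import os
import tempfile
import uuid

class RecipeHandshake:
    """Hypothetical two-leg handshake: record a download trigger (leg 1),
    then match Delmia's completion callback (leg 2) against durable state."""

    def __init__(self, state_path: str):
        self.state_path = state_path
        self.pending = self._load()  # trigger_id -> recipe name

    def _load(self) -> dict:
        if os.path.exists(self.state_path):
            with open(self.state_path) as f:
                return json.load(f)
        return {}

    def _save(self) -> None:
        with open(self.state_path, "w") as f:
            json.dump(self.pending, f)

    def trigger(self, recipe: str) -> str:
        """Leg 1: persist the pending download, then call out to Delmia."""
        trigger_id = str(uuid.uuid4())
        self.pending[trigger_id] = recipe
        self._save()  # persisted first, so an outage between legs can't lose it
        # ... the outbound Web API call to Delmia would go here ...
        return trigger_id

    def complete(self, trigger_id: str) -> str:
        """Leg 2: Delmia's completion callback; returns the recipe to parse/load."""
        recipe = self.pending.pop(trigger_id)  # KeyError = unknown or duplicate callback
        self._save()
        return recipe

# Surviving a restart between the two legs:
path = os.path.join(tempfile.mkdtemp(), "pending.json")
t = RecipeHandshake(path).trigger("recipe-X")
assert RecipeHandshake(path).complete(t) == "recipe-X"  # a fresh instance still matches it
```

Whether this state lives per-site or centrally is exactly the open TBD above; the sketch only shows why it must be durable rather than in-memory.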
### LEG-002 — Aveva Web API ↔ Camstar MES
| Field | Value |
|---|---|
| **ID** | LEG-002 |
| **Name** | Aveva System Platform Web API — Camstar MES interface |
| **Source System** | Aveva System Platform primary cluster (South Bend) — **hosts** the Web API. |
| **Target System** | Camstar MES — **is the caller** (Camstar initiates the interaction by calling into the System Platform Web API). |
| **Direction** | **Bidirectional, Camstar-initiated.** Camstar calls the Aveva System Platform primary-cluster Web API to initiate; request/response data flows in both directions over that call. _TBD — whether Camstar is primarily pulling data out of System Platform, pushing data into it, or doing a true two-way exchange during the call, and whether System Platform ever initiates back to Camstar through any other channel._ |
| **Transport** | Web API on the primary cluster — **inbound to System Platform** (Camstar is the HTTP client, System Platform Web API is the HTTP server). |
| **Site(s)** | _TBD — presumably all sites, confirm_ |
| **Traffic Volume** | _TBD_ |
| **Business Purpose** | MES integration — details _TBD_ |
| **Current Owner** | _TBD_ |
| **Dependencies** | _TBD_ |
| **Migration Target** | **Move callers off the System Platform Web API and onto ScadaBridge's existing direct Camstar integration.** ScadaBridge already calls Camstar directly (see `../current-state.md` → ScadaBridge downstream consumers), so the retirement work for this integration is about redirecting existing consumers, **not** building a new ScadaBridge→Camstar path. |
| **Retirement Criteria** | _TBD_ |
| **Status** | Discovered |
| **Retirement Date** | — |
| **Notes** | One of the two existing Aveva Web API interfaces documented in `../current-state.md`. ScadaBridge's native Camstar path exists today, which means LEG-002 is an "end-user redirect" problem rather than a "build something new" problem — typically cheaper and faster to retire than an integration that requires replacement work. **Important directional nuance:** because **Camstar** is the caller (not System Platform), retirement can't be done purely by changing System Platform's outbound code — it requires reconfiguring Camstar to call ScadaBridge instead of the System Platform Web API. That makes Camstar's change-control process a critical dependency for retirement planning. The deeper Camstar integration described in `goal-state.md` may still inform the long-term shape. |
### LEG-003 — System Platform → email notifications via custom service
| Field | Value |
|---|---|
| **ID** | LEG-003 |
| **Name** | Aveva System Platform → email notifications via custom in-house service |
| **Source System** | Aveva System Platform (emits notification events) — _TBD whether from the primary cluster only, from site clusters, or both_. |
| **Target System** | Enterprise email system, reached via a **custom in-house notification service** that sits between System Platform and email. |
| **Direction** | Source→Target. System Platform emits notification events; the custom service relays them to email. No known return path from email back to System Platform. |
| **Transport** | Custom in-house notification service. _TBD — how the service is fed from System Platform (scripted handler in System Platform, direct DB read, OPC UA / LmxOpcUa subscription, Web API push, file drop, …) and how it reaches email (SMTP to enterprise relay, Exchange Web Services, Graph API, …)._ |
| **Site(s)** | _TBD — confirm which clusters actually emit through this path and whether site clusters use the same service or have their own variants_ |
| **Traffic Volume** | _TBD_ |
| **Business Purpose** | Operator / engineering / maintenance notifications (alarms, state changes, threshold breaches — _TBD_ which categories) emitted from System Platform and delivered by email. |
| **Current Owner** | _TBD — service is in-house but ownership of the custom service codebase and its operational runbook needs to be confirmed._ |
| **Dependencies** | Enterprise email system availability; recipient distribution lists baked into the custom service's configuration; any downstream workflows (ticketing, on-call escalation, acknowledgement) that trigger off these emails — _TBD_ whether any such downstream automation exists. |
| **Migration Target** | **ScadaBridge native notifications.** ScadaBridge already provides contact-list-driven, transport-agnostic notifications over **email and Microsoft Teams** (see `../current-state.md` → ScadaBridge Capabilities → Notifications). Migration path: port the existing notification triggers to ScadaBridge scripts, recreate recipient lists as ScadaBridge contact lists, dual-run both notification paths for a window long enough to catch low-frequency alarms, then cut over and decommission the custom service. |
| **Retirement Criteria** | _TBD — at minimum: (1) **trigger parity** — every notification the custom service emits today is emitted by a ScadaBridge script; (2) **recipient parity** — every recipient on every distribution list exists on the equivalent ScadaBridge contact list; (3) **delivery SLO parity** — email delivery latency on the ScadaBridge path meets or beats today's; (4) **dual-run window** long enough to surface low-frequency notifications (monthly / quarterly events)._ |
| **Status** | Discovered |
| **Retirement Date** | — |
| **Notes** | **Easier retirement than LEG-001** — ScadaBridge's notification capability already exists in production, so this is a migration of **triggers and recipient lists**, not new build work. Closer in difficulty to LEG-002. **Open risk:** if the custom service also handles non-notification workflows (acknowledgement callbacks, escalation state, ticketing integration), those are out of scope for ScadaBridge's notification capability and would need a separate migration path — confirm during discovery whether the service is purely fire-and-forget email or has state. **Also — Teams as a by-product:** moving to ScadaBridge opens the door to routing the same notifications over Microsoft Teams without code changes (contact-list config only), which may be a quick win to surface with stakeholders during dual-run. |
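
The trigger- and recipient-parity criteria above lend themselves to a mechanical dual-run comparison. The sketch below is a hypothetical illustration, assuming notifications sent on each path can be exported as (trigger, recipient) pairs during the dual-run window; the export and every name here are assumptions, not ScadaBridge features.

```python
def parity_gaps(legacy: set, bridge: set) -> dict:
    """Compare (trigger, recipient) pairs observed on each path during dual-run.
    Anything the legacy service sent that ScadaBridge did not is a retirement blocker."""
    return {
        "missing_on_bridge": sorted(legacy - bridge),  # blocks retirement
        "bridge_only": sorted(bridge - legacy),        # new coverage; review, but not a blocker
    }

# Hypothetical dual-run observations: the monthly report never fired on the new path.
legacy = {("tank_high_alarm", "ops@example.com"), ("monthly_report", "eng@example.com")}
bridge = {("tank_high_alarm", "ops@example.com")}

gaps = parity_gaps(legacy, bridge)
print(gaps["missing_on_bridge"])  # [('monthly_report', 'eng@example.com')] -> not ready to retire
```

Note how the low-frequency `monthly_report` gap only shows up if the dual-run window is long enough to contain one, which is the point of retirement criterion (4).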
## Deliberately not tracked
Categories explicitly carved out of the "legacy integration" definition. These crossings may look like they cross the IT↔OT boundary, but they are **not** pillar 3 work and are **not** counted against the "drive to zero" target. The carve-outs are load-bearing — without them, pillar 3's zero target is ambiguous.
### Historian SQL reporting consumers (SAP BOBJ, Power BI, ad-hoc analyst SQL)
**Not legacy.** Aveva Historian exposes data via **MSSQL** (SQL Server linked-server / OPENQUERY views) as its **native consumption surface** — the supported, designed way to read historian data. A reporting tool hitting those SQL views is using Historian as intended, not a bespoke point-to-point integration grafted onto the side of it.

Pillar 3's "legacy integration" target covers **bespoke IT↔OT crossings** — Web API interfaces exposed by the System Platform primary cluster, custom services, file drops, direct DB links into internal stores that weren't designed for external reads. Consuming a system's own first-class SQL interface is categorically different and does not fit that definition.

The BOBJ → Power BI migration currently in flight (see `../current-state.md` → Aveva Historian → Current consumers) will reshape this surface independently of pillar 3. Whether Power BI ultimately reads from Historian's SQL interface, from Snowflake's dbt curated layer, or from both is a coordination question between the reporting team and this plan (tracked in `../status.md` as a top pending item) — but whichever way it lands, the resulting path is **not** tracked here as a retirement target.
> **Implication:** if at any point a reporting consumer stops using Historian's SQL views and instead starts talking to Historian via a bespoke side-channel (custom extract job, scheduled export, direct file read of the historian store, etc.), **that** side-channel **would** be legacy and would need a row in the inventory. The carve-out applies specifically to the native MSSQL surface.
### OT-internal traffic
Restating the rule from "How to use this file" above, for explicit visibility: System Platform ↔ System Platform traffic over Global Galaxy, site-level ScadaBridge ↔ local equipment, site System Platform clusters ↔ central System Platform cluster, and any other traffic that stays entirely within the OT side of the boundary is **not** legacy for pillar 3 purposes. The IT↔OT crossing at ScadaBridge central is the only boundary this inventory tracks.
## Open questions for the inventory itself
- Is there an existing system of record (e.g., a CMDB, an integration catalog, a SharePoint inventory) that this file should pull from or cross-reference rather than duplicate?
- Discovery approach: walk the Aveva System Platform cluster configs, interview site owners, scan network traffic, or some combination?
- ~~Definition boundary: does "legacy integration" include internal System Platform ↔ System Platform traffic over Global Galaxy, or only IT↔OT crossings?~~ **Resolved:** **only IT↔OT crossings at the central ScadaBridge boundary.** In the target architecture, System Platform traffic (including Global Galaxy, site↔site, and site↔central System Platform clusters) stays **entirely on the OT side** and is not subject to pillar 3 retirement. The **IT↔OT boundary** is specifically **ScadaBridge central ↔ enterprise integrations** — that crossing is the only one the inventory tracks and the only one subject to the "zero legacy paths" target.
- How often is this file reviewed, and by whom? (Needs to be at least quarterly to feed pillar 3 progress metrics.)
|