Initial commit: 3-year shopfloor IT/OT transformation plan

Core plan: current-state, goal-state (layered architecture, OtOpcUa,
Redpanda EventHub, SnowBridge, canonical model, UNS posture + naming
hierarchy, digital twin use cases absorbed), roadmap (7 workstreams x 3
years), and status bookmark.

Component detail files: legacy integrations inventory (3 integrations,
pillar 3 denominator closed), equipment protocol survey template (dual
mandate with UNS hierarchy snapshot), digital twin management brief
(conversation complete, outcome recorded).

Output generation pipeline: specs for 18-slide mixed-stakeholder PPTX
and faithful-typeset PDF, with README, design doc, and implementation
plan. No generated outputs yet — deferred until source data is stable.
Joseph Doherty
2026-04-17 09:12:35 -04:00
commit ec1dfe59e4
15 changed files with 2743 additions and 0 deletions


@@ -0,0 +1,232 @@
# Equipment Protocol Survey
The authoritative inventory of **equipment protocols in use across the estate**. Feeds scope decisions for the **OtOpcUa core driver library** (see [`../goal-state.md`](../goal-state.md) → OtOpcUa → Driver strategy).
> This file is the **input** to the Year 1 OtOpcUa decision: which protocols get a driver built **proactively** (core library) vs. which get built **on-demand** (long-tail) as each site onboards. Without this survey, the core driver library cannot be scoped — the whole Year 1 OtOpcUa workstream sits on top of it. See [`../roadmap.md`](../roadmap.md) → OtOpcUa → Year 1.
## Why this inventory exists (and why it's not the legacy-integrations inventory)
Different question, different denominator:
- [`legacy-integrations.md`](legacy-integrations.md) tracks **bespoke IT↔OT integrations that cross the ScadaBridge-central boundary outside ScadaBridge**. Denominator for pillar 3 retirement.
- **This file** tracks **equipment protocols on the OT side** — the native protocols that PLCs, controllers, and instruments actually speak on the shopfloor, which OtOpcUa has to translate into OPC UA. Denominator for the OtOpcUa core driver library scope.
They are unrelated lists. Equipment that already speaks OPC UA natively is **in scope for this survey** (because OtOpcUa still needs to connect to it), even though "OPC UA → OtOpcUa" is not a driver build.
## Companion deliverable: initial UNS hierarchy snapshot
**The discovery walk for this survey is the same walk that produces the initial UNS naming-hierarchy snapshot.** See [`../goal-state.md`](../goal-state.md) → **Target IT/OT Integration → Unified Namespace (UNS) posture → UNS naming hierarchy standard**.
The UNS hierarchy (Enterprise → Site → Area → Line → Equipment, with stable equipment UUIDs) and this protocol survey both require someone to walk System Platform IO config, Ignition OPC UA connections, and ScadaBridge templates across the estate. Running those walks once and producing two artifacts is dramatically cheaper than running them twice.
**Different granularity, same source:**
- **This file** captures data at **equipment-class granularity** — "approximately 40 three-axis CNC mills, Vendor X, Modbus TCP, across Warsaw West and Warsaw North" — because the core driver library scope is a per-protocol decision and individual instance detail would be noise at that level.
- **The UNS hierarchy snapshot** captures data at **equipment-instance granularity** — one entry per physical machine, with site / area / line / equipment-name / stable UUID — because the hierarchy is a per-instance addressing surface.
**Guidance for the walker:** at every step of the Discovery approach below, capture both levels of data in the same pass. Each equipment instance found becomes both (a) an increment to this file's equipment-class count for its protocol, and (b) a row in the initial UNS hierarchy snapshot for the `schemas` repo. Do not split into two discovery efforts.
The initial hierarchy snapshot is not stored in this file (it belongs in the `schemas` repo as part of the UNS canonical definition), but capturing it is a **required output** of the same discovery walk. If the walker produces only protocol-survey data and no hierarchy data, the work has to be redone for the UNS snapshot — unacceptable duplication.
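The dual-capture rule can be sketched as a single pass that emits both artifacts at once. All names here are illustrative, not part of any real tooling; the point is the shape: one observed machine increments a class-level tally (this file) and appends one instance-level row (the `schemas` repo snapshot).

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class HierarchyRow:
    """One instance-granularity row for the UNS snapshot (schemas repo)."""
    site: str
    area: str
    line: str
    equipment_name: str
    uuid: str

def walk_once(observations):
    """Single discovery pass emitting both artifacts.

    `observations` yields (equipment_class, protocol, HierarchyRow)
    triples, one per physical machine seen during the walk.
    """
    class_counts = Counter()   # protocol-survey side (this file)
    hierarchy_rows = []        # UNS snapshot side (schemas repo)
    for equipment_class, protocol, row in observations:
        class_counts[(equipment_class, protocol)] += 1  # (a) class tally
        hierarchy_rows.append(row)                      # (b) instance row
    return class_counts, hierarchy_rows
```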
## How to use this file
- **One row per (site, equipment class, protocol) combination.** An equipment class is a grouping of functionally equivalent machines — "3-axis CNC milling machines, Vendor X, 2015-2020 generation" is one class, regardless of how many individual machines fit that description. If the same equipment class appears at multiple sites speaking the same protocol, list all sites in the `Sites` field of a single row. If the same class speaks different protocols at different sites (e.g., older fleet on Modbus, newer fleet on OPC UA), split into separate rows.
- **Rows are discovery-grade.** Exact model numbers, exact counts, and exact firmware revisions are **not** required. The survey only has to be precise enough to scope driver work — "approximately 40 units across 3 sites, Modbus TCP" is enough; a CMDB dump is not.
- **Missing values are fine during discovery** — mark them `_TBD_` rather than leaving blank.
- **Rows are not removed once the driver ships.** A row represents "this protocol exists in the estate," which stays true even after OtOpcUa can speak it. Rows are removed only when the underlying equipment is decommissioned.
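The row-merging rules above amount to a fold keyed on (equipment class, protocol). A minimal sketch, assuming per-site findings arrive as (site, class, protocol, count) tuples; all names are hypothetical:

```python
def merge_rows(observations):
    """Fold per-site findings into survey rows keyed on
    (equipment_class, protocol): the same class at another site extends
    the Sites set of one row; the same class on a different protocol
    starts a new row."""
    rows = {}
    for site, equipment_class, protocol, approx_count in observations:
        key = (equipment_class, protocol)
        row = rows.setdefault(key, {"sites": set(), "approx_count": 0})
        row["sites"].add(site)
        row["approx_count"] += approx_count
    return rows
```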
## Field schema
| Field | Description |
|---|---|
| `ID` | Short stable identifier (e.g., `EQP-001`). Never reused. |
| `Equipment Class` | Human-readable grouping — machine type + vendor + generation. Precise enough to mean something to a shopfloor engineer, not precise enough to require a part number. |
| `Vendor(s)` | Equipment manufacturer(s) covered by this class. |
| `Native Protocol` | The protocol the equipment actually speaks on the wire. One of: `OPC UA`, `Modbus TCP`, `Modbus RTU`, `EtherNet/IP` (CIP), `Siemens S7` (ISO-on-TCP), `Profinet`, `Profibus`, `DeviceNet`, `MTConnect`, `Fanuc FOCAS`, `Heidenhain DNC`, `Siemens SINUMERIK OPC UA`, `ASCII serial`, `proprietary`, other (name it). |
| `Protocol Variant / Notes` | Sub-variant if relevant (e.g., `Modbus TCP with custom register map`, `EtherNet/IP explicit messaging only`, `S7-300 vs S7-1500`). |
| `Sites` | Sites where equipment in this class is present. `All integrated sites` is acceptable shorthand. |
| `Approx. Instance Count` | Rough order-of-magnitude across all listed sites (e.g., `~40`, `~10-20`, `>100`, `unknown`). A magnitude is enough. |
| `Current Access Path` | How equipment in this class is accessed **today** — direct OPC UA from Aveva System Platform / Ignition / ScadaBridge, via a site-level gateway, via a vendor-specific driver in System Platform, not connected at all, etc. |
| `OtOpcUa Driver Needed?` | One of: `No — already OPC UA` (OtOpcUa connects as an OPC UA client, no driver build), `Yes — core candidate` (protocol is broad enough to warrant a proactive driver), `Yes — long-tail` (protocol is narrow; driver built on-demand when the first site that needs it onboards), `Unknown` (survey incomplete). |
| `Driver Complexity (Estimate)` | `Low` / `Medium` / `High` / `Unknown`. Proxy for how much the driver will cost to build and maintain; influences core-vs-long-tail decision. |
| `Priority Site(s)` | If this driver is on the critical path for onboarding a specific site or cluster, name it. Drives sequencing. |
| `Notes` | Vendor docs availability, known quirks, existing LMX or ScadaBridge work that could be reused, anything else. |
## Classification: core vs long-tail
A protocol becomes a **core library driver** if it meets **any one** of these criteria:
1. **Breadth** — present at **three or more sites**, regardless of instance count.
2. **Volume** — present at **any number of sites** with a combined instance count **above ~25** across the estate.
3. **Blocker** — needed to onboard a site that is on the roadmap for Year 1 or Year 2, regardless of how narrowly the protocol is used elsewhere.
4. **Strategic vendor** — the protocol belongs to a vendor whose equipment is expected to grow in the estate (e.g., the vendor is winning new purchases), even if today's footprint is small. This is a **judgment call**, not a hard rule — use sparingly.
A protocol is **long-tail** by default if none of the above apply. Long-tail drivers are built **on-demand** when the first site that needs the protocol reaches onboarding.
**Protocols already OPC UA** are **neither core nor long-tail** — OtOpcUa speaks OPC UA natively and the work is a connection configuration, not a driver build.
**Tiebreakers:**
- When a protocol narrowly misses a threshold, **err toward long-tail.** Core drivers are a commitment to maintain; long-tail drivers are one-off builds. The cost of building a long-tail driver later is bounded; the cost of committing to maintain a core driver forever is not.
- When a protocol narrowly makes a threshold but is **known to be retiring** from the estate (old generation equipment scheduled for replacement), **err toward long-tail** and make a note about the retirement horizon.
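The four criteria and the tiebreakers reduce to a small decision function. This is a sketch, with two stated assumptions: the "blocker" criterion is approximated as "protocol present at a Year 1/Year 2 roadmap site", and the strategic-vendor judgment call is passed in explicitly rather than inferred. Field names are illustrative.

```python
def classify(row, roadmap_sites, strategic_vendors):
    """row: dict with 'protocol', 'vendor', 'sites' (set),
    'approx_count' (int, 0 if unknown), 'retiring' (bool)."""
    if row["protocol"] == "OPC UA":
        return "already OPC UA"      # neither core nor long-tail
    if row.get("retiring"):
        return "long-tail"           # tiebreaker: retiring fleet
    if len(row["sites"]) >= 3:
        return "core"                # 1. breadth: three or more sites
    if row["approx_count"] > 25:
        return "core"                # 2. volume: above ~25 estate-wide
    if row["sites"] & roadmap_sites:
        return "core"                # 3. blocker: Y1/Y2 roadmap site
    if row.get("vendor") in strategic_vendors:
        return "core"                # 4. strategic vendor (judgment call)
    return "long-tail"               # default: err toward long-tail
```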
## Current inventory
> **Discovery not started.** Populate as the protocol survey is conducted. The rows below are pre-seeded with **expected categories** based on typical discrete-manufacturing estates — they are **placeholders** to make the shape of the table obvious, not confirmed observations. Remove or merge them as real discovery data arrives. **Nothing in this section is authoritative until it has a non-TBD `Approx. Instance Count` and at least one confirmed `Sites` entry.**
### EQP-001 — OPC UA-native equipment
| Field | Value |
|---|---|
| **ID** | EQP-001 |
| **Equipment Class** | Equipment speaking OPC UA natively (mixed vendors and generations) |
| **Vendor(s)** | _TBD — almost certainly a mix, name the top 3-5 vendors once known_ |
| **Native Protocol** | OPC UA |
| **Protocol Variant / Notes** | _TBD — security modes actually in use (`None` / `Sign` / `SignAndEncrypt`), profiles (`Basic256Sha256`, `Aes256_Sha256_RsaPss`, etc.), auth tokens (anonymous vs UserName vs certificate)_ |
| **Sites** | _TBD_ |
| **Approx. Instance Count** | _TBD_ |
| **Current Access Path** | Mixed — direct OPC UA sessions from Aveva System Platform, Ignition, and/or ScadaBridge depending on the equipment. See [`../current-state.md`](../current-state.md) → Equipment OPC UA. |
| **OtOpcUa Driver Needed?** | **No — already OPC UA.** OtOpcUa acts as an OPC UA client to these devices; no driver build, but connection configuration and auth setup still required. |
| **Driver Complexity (Estimate)** | N/A — connection work only. |
| **Priority Site(s)** | N/A |
| **Notes** | Will be the **easiest** equipment class to bring onto OtOpcUa once OtOpcUa ships — no driver work, just redirect the client-side connection. Expected to be a meaningful fraction of the estate given the "OPC UA-first" posture of most equipment vendors over the last decade. Survey should **still** capture this category because the count informs how much of the tier-1 ScadaBridge cutover is "redirect an existing OPC UA client" vs "bridge through a new driver." |
### EQP-002 — Siemens PLC family (S7)
| Field | Value |
|---|---|
| **ID** | EQP-002 |
| **Equipment Class** | Siemens S7 PLCs (S7-300 / S7-400 / S7-1200 / S7-1500) |
| **Vendor(s)** | Siemens |
| **Native Protocol** | Siemens S7 (ISO-on-TCP); newer S7-1500 also speaks OPC UA natively |
| **Protocol Variant / Notes** | _TBD — mix of S7 generations determines whether the S7 driver is actually needed or whether OPC UA covers most units_ |
| **Sites** | _TBD_ |
| **Approx. Instance Count** | _TBD_ |
| **Current Access Path** | _TBD — likely a mix of System Platform S7 IO drivers and direct OPC UA on newer units_ |
| **OtOpcUa Driver Needed?** | **Unknown.** Depends on whether the S7-1500 OPC UA footprint has displaced older S7 generations. If S7-300/400 still dominate → **core candidate**. If fleet is mostly S7-1500 → `No — already OPC UA` for most units, long-tail for the residual older generations. |
| **Driver Complexity (Estimate)** | Medium — S7 protocol is well-documented; multiple open-source implementations exist; the work is in matching existing System Platform semantics. |
| **Priority Site(s)** | _TBD_ |
| **Notes** | Strong core candidate on first-principles grounds — Siemens is a common PLC vendor in discrete manufacturing. Confirm or refute with a specific count during discovery. |
### EQP-003 — Allen-Bradley / Rockwell PLC family (EtherNet/IP)
| Field | Value |
|---|---|
| **ID** | EQP-003 |
| **Equipment Class** | Allen-Bradley / Rockwell ControlLogix, CompactLogix, MicroLogix, SLC families |
| **Vendor(s)** | Rockwell Automation |
| **Native Protocol** | EtherNet/IP (CIP) |
| **Protocol Variant / Notes** | _TBD — implicit vs explicit messaging, tag-based vs legacy data table access_ |
| **Sites** | _TBD_ |
| **Approx. Instance Count** | _TBD_ |
| **Current Access Path** | _TBD — likely System Platform EtherNet/IP IO driver_ |
| **OtOpcUa Driver Needed?** | **Unknown — likely core candidate.** Rockwell equipment does not generally speak OPC UA natively at the controller level, so if Rockwell has any meaningful footprint in the estate, an EtherNet/IP driver is a core candidate. |
| **Driver Complexity (Estimate)** | Medium-to-high — CIP is a large protocol family; scope depends on which message classes are actually needed. |
| **Priority Site(s)** | _TBD_ |
| **Notes** | Paired with EQP-002 (Siemens) as the two most likely dominant PLC protocol families. Confirm scope during discovery. |
### EQP-004 — Generic Modbus devices
| Field | Value |
|---|---|
| **ID** | EQP-004 |
| **Equipment Class** | Modbus TCP and Modbus RTU devices across instruments, sensors, power meters, older PLCs, variable-frequency drives |
| **Vendor(s)** | Mixed — Modbus is multi-vendor |
| **Native Protocol** | Modbus TCP, Modbus RTU (often bridged to TCP via a gateway) |
| **Protocol Variant / Notes** | _TBD — per-device register map is the real work, not the protocol itself_ |
| **Sites** | _TBD_ |
| **Approx. Instance Count** | _TBD_ |
| **Current Access Path** | _TBD — likely a mix of System Platform Modbus driver and site-level gateways_ |
| **OtOpcUa Driver Needed?** | **Likely core candidate.** Modbus is the most common low-cost protocol in the estate on first-principles grounds. The driver itself is simple; the long-tail is **per-device register maps**. |
| **Driver Complexity (Estimate)** | Low for the protocol; Medium-to-High for the register-map configuration surface (needs to be editable per-device without code changes). |
| **Priority Site(s)** | _TBD_ |
| **Notes** | **Register map configuration is the real work.** The core driver should ship a configuration mechanism (UI or templates) for register-map definition so new Modbus devices don't require code changes. Strong candidate for core library. |
### EQP-005 — Fanuc CNC controllers (FOCAS)
| Field | Value |
|---|---|
| **ID** | EQP-005 |
| **Equipment Class** | Fanuc CNC machine controls |
| **Vendor(s)** | Fanuc |
| **Native Protocol** | Fanuc FOCAS (proprietary library, not a wire protocol) |
| **Protocol Variant / Notes** | FOCAS1 / FOCAS2 library versions |
| **Sites** | _TBD_ |
| **Approx. Instance Count** | _TBD_ |
| **Current Access Path** | _TBD — likely direct or via a vendor-specific driver_ |
| **OtOpcUa Driver Needed?** | **Core candidate if any significant CNC footprint exists.** FOCAS is the de-facto API for Fanuc CNCs and does not come "for free" with OPC UA. |
| **Driver Complexity (Estimate)** | High — FOCAS is a C library with platform-specific bindings and licensing considerations; wrapping it into a .NET driver carries non-trivial ops/licensing work. |
| **Priority Site(s)** | _TBD — Warsaw campuses or TMT likely candidates if they have CNC machining centers_ |
| **Notes** | **Decouple from core library decision until the CNC count is known.** If CNC is a meaningful fraction of the estate, FOCAS is unavoidable. If CNC is a handful of machines, build on-demand as long-tail. A separate alternative — MTConnect — is worth asking about (some modern CNCs expose MTConnect, which is a simpler target). |
### EQP-006 — Other long-tail equipment
| Field | Value |
|---|---|
| **ID** | EQP-006 |
| **Equipment Class** | Everything else — instruments, ovens, vision systems, stand-alone controllers, legacy proprietary devices |
| **Vendor(s)** | Mixed |
| **Native Protocol** | Various — ASCII serial, proprietary, vendor-specific |
| **Protocol Variant / Notes** | _TBD — catalog as encountered_ |
| **Sites** | _TBD_ |
| **Approx. Instance Count** | _TBD_ |
| **Current Access Path** | _TBD — often bespoke per-device integration work in System Platform today_ |
| **OtOpcUa Driver Needed?** | **Long-tail — build on-demand.** This is the category the "on-demand long-tail" driver strategy exists to serve. |
| **Driver Complexity (Estimate)** | Low-to-High per device — varies wildly. |
| **Priority Site(s)** | Wherever the first blocker appears. |
| **Notes** | Track individual long-tail cases as separate rows (EQP-007, EQP-008, …) as discovery identifies them. The placeholder above exists only to anchor the category. |
### _Further rows TBD — add as discovery progresses_
## Rollup views
These views are **derived** from the row-level data above. Regenerate as rows are updated.
### By protocol — drives core library scope
| Native Protocol | Row IDs | Total Approx. Instances | Sites | Core / Long-tail / Already OPC UA |
|---|---|---|---|---|
| OPC UA | EQP-001 | _TBD_ | _TBD_ | Already OPC UA — no driver needed |
| Siemens S7 | EQP-002 | _TBD_ | _TBD_ | _TBD — depends on S7-1500 fraction_ |
| EtherNet/IP | EQP-003 | _TBD_ | _TBD_ | _TBD — likely core_ |
| Modbus TCP/RTU | EQP-004 | _TBD_ | _TBD_ | _TBD — likely core_ |
| Fanuc FOCAS | EQP-005 | _TBD_ | _TBD_ | _TBD — depends on CNC count_ |
| Long-tail mix | EQP-006+ | _TBD_ | _TBD_ | Long-tail — on-demand |
**Decision output of this table:** the **core driver library scope** for Year 1 OtOpcUa. A protocol row tagged `Core` becomes a Year 1 build commitment; `Long-tail` becomes a Year 2+ on-demand build budget; `Already OPC UA` becomes connection configuration work only.
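Regenerating the by-protocol view from row-level data could look like the following sketch. Field names are illustrative; the one deliberate design choice shown is that a TBD count poisons the protocol total, so a partial sum never masquerades as a real denominator.

```python
def rollup_by_protocol(rows):
    """rows: dicts with 'id', 'protocol', 'sites' (set), and
    'approx_count' (int, or None while still TBD). Returns one rollup
    entry per protocol."""
    view = {}
    for r in rows:
        entry = view.setdefault(r["protocol"],
                                {"row_ids": [], "sites": set(), "total": 0})
        entry["row_ids"].append(r["id"])
        entry["sites"] |= r["sites"]
        if entry["total"] is None or r["approx_count"] is None:
            entry["total"] = None   # any TBD row makes the total TBD
        else:
            entry["total"] += r["approx_count"]
    return view
```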
### By site — drives onboarding sequencing
| Site | Row IDs present | Protocols present | Blockers for tier-1 cutover |
|---|---|---|---|
| South Bend DC (primary cluster) | _TBD_ | _TBD_ | _TBD_ |
| Warsaw West | _TBD_ | _TBD_ | _TBD_ |
| Warsaw North | _TBD_ | _TBD_ | _TBD_ |
| Shannon | _TBD_ | _TBD_ | _TBD_ |
| Galway | _TBD_ | _TBD_ | _TBD_ |
| TMT | _TBD_ | _TBD_ | _TBD_ |
| Ponce | _TBD_ | _TBD_ | _TBD_ |
**Decision output of this table:** **tier-1 ScadaBridge cutover sequencing.** The roadmap commits tier 1 at large sites first; this view identifies which sites can cut over with which drivers available, so that sequencing is driven by driver availability, not just site size.
## Discovery approach
Recommended path — not prescriptive. **Each step produces both protocol-survey rows (equipment-class granularity) and UNS hierarchy snapshot rows (equipment-instance granularity) in the same pass** — see "Companion deliverable" above.
1. **Walk the Aveva System Platform IO configuration** at the primary cluster and each site cluster. System Platform's IO server layer is the most complete existing inventory of "what protocols are we talking to what equipment with." Every configured IO object is a row candidate — and its parent galaxy / area / cluster assignment maps directly onto UNS Site and Area levels.
2. **Walk the Ignition OPC UA connections** at the central Ignition footprint in South Bend. These are the equipment Ignition talks to directly today. Every distinct endpoint is a row candidate; Ignition's tag browse tree typically groups endpoints by site and production area, which feeds the UNS hierarchy walk directly.
3. **Walk ScadaBridge template configuration** for equipment-facing templates. Less complete than System Platform IO, but captures anything already templated on ScadaBridge. Template groupings also surface line-level organization where it exists.
4. **Walk any existing site asset registers or CMDBs** if present. Lower priority — often stale, rarely matches reality — but may surface equipment not yet integrated with any SCADA layer, and may be the best source for a stable enterprise equipment ID that can seed the UNS UUID.
5. **Interview site controls/automation engineers** at each large site. They are the ground truth for "what's actually out there" — especially for long-tail equipment that never made it into a central inventory. Interviews are also the best source for **Line-level structure**, which is frequently implicit in operator knowledge and absent from System Platform or Ignition configuration.
6. **Cross-check against Camstar's equipment master data** (if Camstar tracks equipment at that granularity) as a sanity check against what manufacturing operations believes is on the floor — and as a tiebreak when different configuration sources name the same machine differently.
**Order matters.** Steps 1-3 are cheap and produce most of the signal for both outputs. Steps 4-6 are interview-driven and time-consuming; reserve them for gap-filling after the System Platform / Ignition / ScadaBridge walks are done.
**Dual output at each step:** the walker should carry two notebooks (or two sheets of a spreadsheet) and capture entries in both as they go — one equipment instance observed produces one row in each notebook. Reconciliation between the two outputs happens at the end of the walk, not during it. If reconciliation reveals mismatches (a machine shows up in the protocol survey but not the hierarchy, or vice versa), that's a walker error to chase down, not a data difference to split the difference on.
## Open questions for the survey itself
- **Who owns the survey?** The OtOpcUa workstream lead, a dedicated discovery resource, or a rotating responsibility across site automation engineers? _TBD._
- **Deadline.** Year 1 OtOpcUa work cannot scope the core driver library without this survey — it's a **Year 1 prerequisite**. Aim to complete at least steps 1-3 (System Platform / Ignition / ScadaBridge walks) within the first quarter of Year 1 so the core driver library build can start by quarter 2. Interview-driven gap-filling can extend through the rest of Year 1 in parallel with the core driver build.
- **How often is this file reviewed after Year 1?** Recommended: quarterly during Year 1 (active discovery), annually thereafter (to catch drift from new equipment purchases). Add a row when a new equipment class shows up in the estate; do **not** remove rows when drivers ship.
- **Relationship to Site Onboarding workstream.** When a smaller site (Berlin, Winterthur, Jacksonville, …) is onboarded in Year 2, its equipment should be added to this file as part of the onboarding checklist — that way the core-vs-long-tail decision is re-evaluated as the estate grows.
- **MTConnect.** Modern CNCs often expose MTConnect as an alternative to FOCAS. If MTConnect coverage is broad enough, it may replace FOCAS as the CNC driver choice. Worth an explicit "is MTConnect in play here?" question during discovery.


@@ -0,0 +1,126 @@
# Legacy Integrations Inventory
The authoritative list of **legacy point-to-point integrations** that currently run **outside ScadaBridge** and must be retired by end of plan (see `goal-state.md` → Success Criteria → pillar 3).
> This file is the **denominator** for the zero-count retirement target. If an integration is not captured here, it cannot be tracked to retirement — so capture first, argue about classification later.
## How to use this file
- **One row per integration.** If the same logical integration runs at multiple sites with the same code path, it's one row with multiple sites listed. If each site runs a meaningfully different variant, split into separate rows.
- **"Legacy" here means any IT↔OT integration path that crosses the ScadaBridge-central boundary without going through ScadaBridge.** In the target architecture, **ScadaBridge central is the sole IT↔OT crossing point**; anything that crosses that boundary via another path — Web API interfaces exposed by the Aveva System Platform primary cluster, custom services, scheduled jobs, file drops, direct DB links, etc. — is legacy and in scope for retirement.
- **Not in scope for this inventory:** OT-internal traffic. System Platform ↔ System Platform traffic over Global Galaxy, site-level ScadaBridge ↔ local equipment, site System Platform clusters ↔ central System Platform cluster — all of this stays on the OT side in the target architecture and is not tracked here.
- **Fields are described in the schema below.** Missing values are fine during discovery — mark them `_TBD_` rather than leaving blank, so gaps are visible.
- **Do not remove rows.** Retired integrations stay in the file with `Status: Retired` and a retirement date, so pillar 3 progress is auditable.
## Field schema
| Field | Description |
|---|---|
| `ID` | Short stable identifier (e.g., `LEG-001`). Never reused. |
| `Name` | Human-readable name of the integration. |
| `Source System` | System the data comes from (e.g., System Platform primary cluster, specific SCADA node, MES, PLC). |
| `Target System` | System the data goes to (e.g., Camstar MES, Delmia DNC, a specific database, an external partner). |
| `Direction` | One of: `Source→Target`, `Target→Source`, `Bidirectional`. |
| `Transport` | How the integration moves data (e.g., Web API, direct DB link, file drop, OPC DA, scheduled SQL job, SOAP). |
| `Site(s)` | Sites where this integration runs. `All integrated sites` is acceptable shorthand. |
| `Traffic Volume` | Rough order-of-magnitude (e.g., ~10 req/min, ~1k events/day, ~50 MB/day, unknown). |
| `Business Purpose` | One sentence: what this integration exists to do. |
| `Current Owner` | Team or person responsible today. |
| `Dependencies` | Other integrations or systems that depend on this one continuing to work. |
| `Migration Target` | What ScadaBridge pattern replaces it (e.g., `ScadaBridge inbound Web API`, `ScadaBridge → EventHub topic X`, `ScadaBridge outbound Web API call`, `ScadaBridge DB write`). |
| `Retirement Criteria` | Concrete, testable conditions that must be true before this integration can be switched off (e.g., "all consumers reading topic `mes.workorder.started` in prod for 30 days"). |
| `Status` | One of: `Discovered`, `Planned`, `In Migration`, `Dual-Run`, `Retired`. |
| `Retirement Date` | Actual retirement date once `Status = Retired`. |
| `Notes` | Anything else — known risks, gotchas, related tickets. |
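The `Status` values read as a lifecycle. A minimal guard could enforce it; this sketch assumes the ordering listed in the schema is forward-only (the schema itself only enumerates the values), and enforces the auditability rule that `Retired` requires a retirement date:

```python
STATUS_ORDER = ["Discovered", "Planned", "In Migration", "Dual-Run", "Retired"]

def validate_transition(current, new, retirement_date=None):
    """Allow only forward movement through the lifecycle (assumed, not
    stated in the schema) and require a Retirement Date once a row
    reaches Retired."""
    if STATUS_ORDER.index(new) < STATUS_ORDER.index(current):
        raise ValueError(f"cannot move back from {current} to {new}")
    if new == "Retired" and retirement_date is None:
        raise ValueError("Status Retired requires a Retirement Date")
    return True
```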
## Current inventory
> **Discovery complete — denominator = 3.** Three legacy IT↔OT integrations are tracked: **LEG-001 Delmia DNC** (Aveva Web API), **LEG-002 Camstar MES** (Aveva Web API, Camstar-initiated), **LEG-003 custom email notification service**. Row-level field details on these three are still largely TBD and get filled in as migration planning proceeds, but no further integrations are expected to be added — the inventory is closed as the pillar 3 denominator unless new evidence surfaces during migration work. See **Deliberately not tracked** below for categories explicitly carved out of the definition.
### LEG-001 — Aveva Web API ↔ Delmia DNC
| Field | Value |
|---|---|
| **ID** | LEG-001 |
| **Name** | Aveva System Platform Web API — Delmia DNC interface |
| **Source System** | Aveva System Platform primary cluster (initiates the download request) **and** Delmia DNC (initiates the completion notification back). |
| **Target System** | Delmia DNC (recipe library / download service) on one leg; Aveva System Platform primary cluster + equipment on the return leg. |
| **Direction** | **Bidirectional — orchestrated handshake.** <br>(1) **System Platform → Delmia:** System Platform triggers a Delmia download ("fetch recipe X"). <br>(2) **Delmia → System Platform:** when the download completes, Delmia notifies System Platform so System Platform can parse/load the recipe file to the equipment. <br>Both legs go over the Web API on the Aveva System Platform primary cluster. |
| **Transport** | Web API on the primary cluster — used in both directions (System Platform as client for leg 1, System Platform Web API as server for leg 2). |
| **Site(s)** | _TBD — presumably all sites with equipment that consumes Delmia recipe files, confirm_ |
| **Traffic Volume** | _TBD — recipe download events are typically low-frequency compared to tag data, order-of-magnitude unknown_ |
| **Business Purpose** | **Delmia DNC distributes recipe (NC program) files to equipment**, with **System Platform orchestrating the handshake**. The sequence is: System Platform triggers Delmia to download a recipe file → Delmia fetches it → Delmia notifies System Platform that the download is ready → System Platform parses/loads the recipe file to the equipment. Parsing is required for some equipment classes; pass-through for others. |
| **Current Owner** | _TBD_ |
| **Dependencies** | Equipment that consumes Delmia recipe files is the downstream dependency. Any equipment requiring the System Platform parse step is a harder-to-migrate case than equipment that accepts a raw file. |
| **Migration Target** | ScadaBridge replaces the System Platform Web API as the Delmia-facing surface on **both legs** of the handshake: <br>— **Leg 1 (trigger):** ScadaBridge calls Delmia via its **outbound Web API client** capability to trigger a recipe download (replacing the System Platform → Delmia call). <br>— **Leg 2 (notification):** ScadaBridge exposes an **inbound secure Web API** endpoint that Delmia calls to notify download completion (replacing the Delmia → System Platform Web API callback). <br>— **Parse/load step:** ScadaBridge **scripts** (C#/Roslyn) re-implement the recipe-parse logic that currently runs on System Platform; ScadaBridge then writes the parsed result to equipment via its OPC UA write path. <br>_TBD — whether the parse logic lives per-site or centrally; whether Delmia can target per-site ScadaBridge endpoints directly or must go through ScadaBridge central; how orchestration state (pending triggers, in-flight downloads) is held during the handshake, especially across a WAN outage that separates the trigger and the notification._ |
| **Retirement Criteria** | _TBD — almost certainly requires per-equipment-class validation that the ScadaBridge-parsed output matches the System Platform-parsed output byte-for-byte, for any equipment where parsing is involved._ |
| **Status** | Discovered |
| **Retirement Date** | — |
| **Notes** | One of the two existing Aveva Web API interfaces documented in `../current-state.md`. Unlike LEG-002 (which has a ready replacement in ScadaBridge's native Camstar path), **LEG-001 requires new work** — the recipe-parse logic currently on System Platform has to be re-implemented (most likely as ScadaBridge scripts) before the legacy path can be retired. Expected to be a **harder retirement** than LEG-002. |
### LEG-002 — Aveva Web API ↔ Camstar MES
| Field | Value |
|---|---|
| **ID** | LEG-002 |
| **Name** | Aveva System Platform Web API — Camstar MES interface |
| **Source System** | Aveva System Platform primary cluster (South Bend) — **hosts** the Web API. |
| **Target System** | Camstar MES — **is the caller** (Camstar initiates the interaction by calling into the System Platform Web API). |
| **Direction** | **Bidirectional, Camstar-initiated.** Camstar calls the Aveva System Platform primary-cluster Web API to initiate; request/response data flows in both directions over that call. _TBD — whether Camstar is primarily pulling data out of System Platform, pushing data into it, or doing a true two-way exchange during the call, and whether System Platform ever initiates back to Camstar through any other channel._ |
| **Transport** | Web API on the primary cluster — **inbound to System Platform** (Camstar is the HTTP client, System Platform Web API is the HTTP server). |
| **Site(s)** | _TBD — presumably all sites, confirm_ |
| **Traffic Volume** | _TBD_ |
| **Business Purpose** | MES integration — details _TBD_ |
| **Current Owner** | _TBD_ |
| **Dependencies** | _TBD_ |
| **Migration Target** | **Move callers off the System Platform Web API and onto ScadaBridge's existing direct Camstar integration.** ScadaBridge already calls Camstar directly (see `../current-state.md` → ScadaBridge downstream consumers), so the retirement work for this integration is about redirecting existing consumers, **not** building a new ScadaBridge→Camstar path. |
| **Retirement Criteria** | _TBD_ |
| **Status** | Discovered |
| **Retirement Date** | — |
| **Notes** | One of the two existing Aveva Web API interfaces documented in `../current-state.md`. ScadaBridge's native Camstar path exists today, which means LEG-002 is an "end-user redirect" problem rather than a "build something new" problem — typically cheaper and faster to retire than an integration that requires replacement work. **Important directional nuance:** because **Camstar** is the caller (not System Platform), retirement can't be done purely by changing System Platform's outbound code — it requires reconfiguring Camstar to call ScadaBridge instead of the System Platform Web API. That makes Camstar's change-control process a critical dependency for retirement planning. The deeper Camstar integration described in `../goal-state.md` may still inform the long-term shape. |
### LEG-003 — System Platform → email notifications via custom service
| Field | Value |
|---|---|
| **ID** | LEG-003 |
| **Name** | Aveva System Platform → email notifications via custom in-house service |
| **Source System** | Aveva System Platform (emits notification events) — _TBD whether from the primary cluster only, from site clusters, or both_. |
| **Target System** | Enterprise email system, reached via a **custom in-house notification service** that sits between System Platform and email. |
| **Direction** | Source→Target. System Platform emits notification events; the custom service relays them to email. No known return path from email back to System Platform. |
| **Transport** | Custom in-house notification service. _TBD — how the service is fed from System Platform (scripted handler in System Platform, direct DB read, OPC UA / LmxOpcUa subscription, Web API push, file drop, …) and how it reaches email (SMTP to enterprise relay, Exchange Web Services, Graph API, …)._ |
| **Site(s)** | _TBD — confirm which clusters actually emit through this path and whether site clusters use the same service or have their own variants_ |
| **Traffic Volume** | _TBD_ |
| **Business Purpose** | Operator / engineering / maintenance notifications (alarms, state changes, threshold breaches — _TBD_ which categories) emitted from System Platform and delivered by email. |
| **Current Owner** | _TBD — the service is in-house, but ownership of the custom service codebase and of its operational runbook both need to be confirmed._ |
| **Dependencies** | Enterprise email system availability; recipient distribution lists baked into the custom service's configuration; any downstream workflows (ticketing, on-call escalation, acknowledgement) that trigger off these emails — _TBD_ whether any such downstream automation exists. |
| **Migration Target** | **ScadaBridge native notifications.** ScadaBridge already provides contact-list-driven, transport-agnostic notifications over **email and Microsoft Teams** (see `../current-state.md` → ScadaBridge Capabilities → Notifications). Migration path: port the existing notification triggers to ScadaBridge scripts, recreate recipient lists as ScadaBridge contact lists, dual-run both notification paths for a window long enough to catch low-frequency alarms, then cut over and decommission the custom service. |
| **Retirement Criteria** | _TBD — at minimum: (1) **trigger parity** — every notification the custom service emits today is emitted by a ScadaBridge script; (2) **recipient parity** — every recipient on every distribution list exists on the equivalent ScadaBridge contact list; (3) **delivery SLO parity** — email delivery latency on the ScadaBridge path meets or beats today's; (4) **dual-run window** long enough to surface low-frequency notifications (monthly / quarterly events)._ |
| **Status** | Discovered |
| **Retirement Date** | — |
| **Notes** | **Easier retirement than LEG-001** — ScadaBridge's notification capability already exists in production, so this is a migration of **triggers and recipient lists**, not new build work. Closer in difficulty to LEG-002. **Open risk:** if the custom service also handles non-notification workflows (acknowledgement callbacks, escalation state, ticketing integration), those are out of scope for ScadaBridge's notification capability and would need a separate migration path — confirm during discovery whether the service is purely fire-and-forget email or has state. **Also — Teams as a by-product:** moving to ScadaBridge opens the door to routing the same notifications over Microsoft Teams without code changes (contact-list config only), which may be a quick win to surface with stakeholders during dual-run. |
## Deliberately not tracked
Categories explicitly carved out of the "legacy integration" definition. These flows may look like IT↔OT boundary crossings, but they are **not** pillar 3 work and are **not** counted against the "drive to zero" target. The carve-outs are load-bearing — without them, pillar 3's zero target is ambiguous.
### Historian SQL reporting consumers (SAP BOBJ, Power BI, ad-hoc analyst SQL)
**Not legacy.** Aveva Historian exposes data via **MSSQL** (SQL Server linked-server / OPENQUERY views) as its **native consumption surface** — the supported, designed way to read historian data. A reporting tool hitting those SQL views is using Historian as intended, not a bespoke point-to-point integration grafted onto the side of it.
Pillar 3's "legacy integration" target covers **bespoke IT↔OT crossings** — Web API interfaces exposed by the System Platform primary cluster, custom services, file drops, direct DB links into internal stores that weren't designed for external reads. Consuming a system's own first-class SQL interface is categorically different and does not fit that definition.
The BOBJ → Power BI migration currently in flight (see `../current-state.md` → Aveva Historian → Current consumers) will reshape this surface independently of pillar 3. Whether Power BI ultimately reads from Historian's SQL interface, from Snowflake's dbt curated layer, or from both is a coordination question between the reporting team and the owners of this plan (tracked in `../status.md` as a top pending item) — but whichever way it lands, the resulting path is **not** tracked here as a retirement target.
> **Implication:** if at any point a reporting consumer stops using Historian's SQL views and instead starts talking to Historian via a bespoke side-channel (custom extract job, scheduled export, direct file read of the historian store, etc.), **that** side-channel **would** be legacy and would need a row in the inventory. The carve-out applies specifically to the native MSSQL surface.
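For contrast with a bespoke side-channel, the native surface is an ordinary linked-server read. A representative query, assuming the conventional `INSQL` linked-server name and an illustrative tag — both are site-specific and shown only to make the carve-out concrete:

```sql
-- Native consumption surface: reading Historian through the
-- SQL Server linked server (linked-server name and tag are examples).
SELECT DateTime, TagName, Value
FROM OPENQUERY(INSQL, '
    SELECT DateTime, TagName, Value
    FROM Runtime.dbo.History
    WHERE TagName = ''Line1.Temp''
      AND DateTime >= DATEADD(hour, -1, GETDATE())
      AND wwRetrievalMode = ''Cyclic''
');
```

Anything a reporting tool can express this way stays out of the inventory; a scheduled extract job or file read of the historian store that bypasses this surface would not.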
### OT-internal traffic
Restating the rule from "How to use this file" above, for explicit visibility: System Platform ↔ System Platform traffic over Global Galaxy, site-level ScadaBridge ↔ local equipment, site System Platform clusters ↔ central System Platform cluster, and any other traffic that stays entirely within the OT side of the boundary is **not** legacy for pillar 3 purposes. The IT↔OT crossing at ScadaBridge central is the only boundary this inventory tracks.
## Open questions for the inventory itself
- Is there an existing system of record (e.g., a CMDB, an integration catalog, a SharePoint inventory) that this file should pull from or cross-reference rather than duplicate?
- Discovery approach: walk the Aveva System Platform cluster configs, interview site owners, scan network traffic, or some combination?
- ~~Definition boundary: does "legacy integration" include internal System Platform ↔ System Platform traffic over Global Galaxy, or only IT↔OT crossings?~~ **Resolved:** **only IT↔OT crossings at the central ScadaBridge boundary.** In the target architecture, System Platform traffic (including Global Galaxy, site↔site, and site↔central System Platform clusters) stays **entirely on the OT side** and is not subject to pillar 3 retirement. The **IT↔OT boundary** is specifically **ScadaBridge central ↔ enterprise integrations** — that crossing is the only one the inventory tracks and the only one subject to the "zero legacy paths" target.
- How often is this file reviewed, and by whom? (Needs to be at least quarterly to feed pillar 3 progress metrics.)