# Digital Twin — Management Conversation Brief
A walk-into-the-meeting artifact for the **management conversation** that turns the ask ("we want digital twins") into a scoped response.
> This brief is a **meeting prep document**, not plan content. The authoritative plan position on digital twin lives in [`../goal-state.md`](../goal-state.md) → **Strategic Considerations (Adjacent Asks)** → **Digital twin** — this file exists to prepare for the clarification conversation referenced there.
## Outcome — conversation complete (2026-04-15)
**Status: the conversation has happened.** Management delivered three concrete high-level use cases as their complete answer — that is all the requirements framing they can provide. Source document: [`../digital_twin_usecases.md.txt`](../digital_twin_usecases.md.txt).

**The three use cases management delivered:**
1. **Standardized Equipment State / Metadata Model** — raw signals → meaningful canonical state (`Running` / `Idle` / `Faulted` / `Starved` / `Blocked`), cycle-time accuracy, top-fault derivation.
2. **Virtual Testing / Simulation** — emulate equipment signals/states for automation-logic testing, FAT, integration validation, replay of historical and synthetic scenarios.
3. **Cross-System Data Normalization / Canonical Model** — common semantic layer with standardized equipment/production/event structures and uniform event definitions across systems.
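
Use case 1's raw-signals-to-state mapping can be sketched in miniature. This is a hedged illustration only: the signal names, the precedence order, and the Python `Enum` standing in for the enum the plan would publish in the central `schemas` repo are all assumptions, not plan commitments.

```python
from enum import Enum

class MachineState(Enum):
    """Canonical machine state vocabulary (use case 1); additions TBD."""
    RUNNING = "Running"
    IDLE = "Idle"
    FAULTED = "Faulted"
    STARVED = "Starved"
    BLOCKED = "Blocked"

def derive_state(fault_active: bool, cycle_active: bool,
                 upstream_empty: bool, downstream_full: bool) -> MachineState:
    # Precedence (fault > blocked > starved > running) is an illustrative
    # assumption; the real derivation happens at layer 3 against live signals.
    if fault_active:
        return MachineState.FAULTED
    if downstream_full:
        return MachineState.BLOCKED
    if upstream_empty:
        return MachineState.STARVED
    return MachineState.RUNNING if cycle_active else MachineState.IDLE
```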

**Bucket resolution — splits across use cases, does not land in a single bucket:**

| Use case | Bucket | Plan response |
|---|---|---|
| 1 — Standardized state model | **#1 with a small addition** — plan absorbs it. | Commit to a canonical machine state vocabulary (`Running / Idle / Faulted / Starved / Blocked` + TBD additions like `Changeover`, `Maintenance`). Derived at layer 3, published as an enum in the central `schemas` repo, consumed uniformly across Redpanda events and dbt curated views. See [`../goal-state.md`](../goal-state.md) → Async Event Backbone → **Canonical Equipment, Production, and Event Model** → **Canonical machine state vocabulary**. |
| 2 — Virtual testing / simulation | **#4 — served minimally, full scope exploratory.** | Replay-based simulation-lite enabled by Redpanda's `analytics`-tier retention (30 days); OtOpcUa's namespace architecture can accommodate a future `simulated` namespace without reshaping the component. Full commissioning-grade FAT / integration simulation stays **out of scope** for this plan. If a funded simulation initiative materializes, this plan's foundation supports it — no new workstream until then. |
| 3 — Cross-system canonical model | **#1 with a framing commitment** — plan absorbs it. | The plan already builds the pieces (OtOpcUa equipment namespace, Redpanda topic taxonomy, Protobuf schemas in central `schemas` repo, dbt curated layer). Commit to declaring these pieces as **the** canonical equipment/production/event model that consumers are entitled to treat as an integration interface. See [`../goal-state.md`](../goal-state.md) → Async Event Backbone → **Canonical Equipment, Production, and Event Model**. |
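
Use case 2's replay-based simulation-lite reduces, at its simplest, to retention arithmetic: a requested replay window is serviceable only where it overlaps the 30 days the `analytics` tier retains. A minimal sketch under that assumption, with all consumer wiring omitted and the function shape itself hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Assumed retention of the analytics tier, per the plan: 30 days.
ANALYTICS_RETENTION = timedelta(days=30)

def replay_window(start: datetime, end: datetime,
                  now: datetime) -> tuple[datetime, datetime]:
    """Clamp a requested replay window to what retention still holds.

    Raises ValueError if the whole window has already aged out.
    """
    earliest = now - ANALYTICS_RETENTION
    if end <= earliest:
        raise ValueError("requested window is older than the 30-day retention")
    return (max(start, earliest), end)
```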

**What this meeting did NOT produce** (deliberately, because management could not provide these details and the plan does not require them to move forward):
- A named sponsor for a separately funded digital twin initiative.
- A budget or timeline for use case 2 (simulation).
- A specific vendor product selection.
- A "kind of twin" framing (equipment twin vs line twin vs genealogy twin vs simulation twin) — the three use cases above cut across multiple categories from the brief's Q2, which is fine given how the plan absorbs them.
- Any decision that would add a workstream to [`../roadmap.md`](../roadmap.md).

**What comes next:**
- **Use cases 1 and 3 are now plan commitments** and get implemented under existing workstreams (Redpanda EventHub for the schemas/vocabulary, Snowflake dbt Transform Layer for the curated-view side). See [`../roadmap.md`](../roadmap.md) → Year 1 updates.
- **Use case 2 remains open as an exploratory item.** The narrower open question carried forward is tracked in [`../status.md`](../status.md) → Top pending items: "Simulation initiative (digital twin use case 2) — exploratory; no plan action until/unless a funded initiative materializes with a sponsor."

**This brief is retained for reference.** The pre-meeting framing (question priority, interpretation table, decision tree, four-bucket framework) remains useful if a follow-up conversation is needed — especially around use case 2 (simulation scoping) or if management surfaces additional use cases beyond the three above. The rest of the document continues below unchanged for that purpose.

---
## Goal of the meeting
Come out with enough information to place the ask into **one of four buckets**:
1. **Already delivered by this plan.** The "real" need is a Snowflake-backed historical / predictive view of equipment health and performance. Recommendation: no new workstream; the first twin use case lands in Year 2 or Year 3 as one of pillar 2's "not possible before" analytics use cases. This is the predicted outcome (see `goal-state.md` → Digital twin → "Likely outcome of this conversation").
2. **Adjacent initiative, consumes this plan's foundation.** A funded, sponsored, separately-scoped twin effort runs alongside this plan and consumes OtOpcUa, Redpanda, Snowflake, and the SnowBridge as its data substrate. Recommendation: no changes to this plan's pillars; digital twin team owns delivery; this plan commits to keeping the foundation consumable.
3. **Folded into a future version of this plan.** A twin capability becomes a new pillar in a v2 of this plan — not today. Recommendation: document the agreement, park until the next planning cycle.
4. **Genuinely undefined — exploratory ask.** Management wants us to "look at it" but has no problem statement, sponsor, or timeline. Recommendation: run a scoped proof-of-concept (one equipment class, one site) on OtOpcUa's new equipment namespace as an inexpensive, low-commitment response; defer the bigger question.

Any outcome other than these four means the conversation did not converge; schedule a follow-up rather than try to commit on the spot.
## Suggested opener
> "Thanks for raising digital twin as something you want us to look at. Before we commit anything into the 3-year plan, we want to make sure what we build actually lands against what you're after — 'digital twin' covers enough different things that it's worth an hour to sharpen the ask. We've come with a short list of clarifying questions. Good news up front: most of the likely shapes of this ask are already served by the foundation we're building for analytics and AI enablement, so this conversation is more likely to end with 'here's how you already get it' than 'we need a new workstream.'"

This framing is deliberately **not** defensive. The plan already shapes its components for a prospective digital twin layer; we're not pushing back, we're helping the ask land in a form we can execute against.
## Question priority grouping
The 8 questions in `goal-state.md` are all useful, but they are not equally diagnostic for placing the ask into one of the four buckets. Use this order:
### Must-answer (drive the bucket decision)
These three typically resolve the entire conversation:
- **Q1. What problem are you trying to solve?** — The single most diagnostic question. If the answer is framed in terms of downtime, predictive maintenance, quality yields, or compliance evidence, the likely bucket is #1 (Snowflake-backed) or #2 (adjacent initiative on this foundation). If it is framed in terms of operator training or line simulation, the likely bucket is #2 (adjacent, probably vendor product) or #4 (exploratory). If there is no problem — "we just need to be doing digital twin" — the bucket is #4.
- **Q7. Is there a named sponsor and funding?** — Hard gate between buckets. Sponsor + funding → bucket #2. No sponsor, no funding → bucket #4. Future plan cycle → bucket #3. This question also controls how much time it's worth spending on the other seven.
- **Q8. Is this connected to an initiative already underway?** — If yes (operational excellence, predictive maintenance pilot, AI/ML platform, sustainability dashboards), the "real" ask is that parent initiative and we should talk to it directly. Finding the parent is often the fastest path to bucket #1.
### Nice-to-have (sharpen the scope once the bucket is known)
Once the bucket is known, these refine the response:
- **Q2. Which *kind* of digital twin?** — Pins the architectural fit. Equipment/asset twin → OtOpcUa + real-time layer. Line/cell twin → Snowflake + dbt. Product/genealogy twin → Camstar MES, **not** this plan. Simulation twin → vendor product. Predictive/AI twin → Snowflake + dbt + an ML layer.
- **Q4. Real-time, historical, predictive, or simulation?** — Overlaps with Q2 but is useful as a sanity-check if the answer to Q2 is "a bit of everything" (which usually means "undefined").
- **Q5. Scope and timing?** — Converts an abstract ask into something you can actually say yes or no to. Also the easiest question to get a "someday" answer on, which is itself informative.
### Skip if time is short
- **Q3. Who uses it?** — Helpful if answered crisply, usually vague if not. Can be deferred to a follow-up.
- **Q6. Assumed product?** — Only relevant if the bucket is #2 and build-vs-buy is on the table. Irrelevant if we're in bucket #1, #3, or #4.
## Interpretation table — likely answer patterns and what they mean
| If the answer sounds like... | The real ask is probably... | Bucket | Response |
|---|---|---|---|
| "Reduce unplanned downtime on our critical equipment" | Predictive maintenance on historical equipment data | #1 | "This is a pillar 2 use case. Year 2–3 delivery on the dbt curated layer." |
| "See equipment state in real time from anywhere" | Real-time equipment dashboard | #1 or #2 | Year 2+ on Ignition + Snowflake (pillar 2) if enterprise-read-only; separate initiative if interactive/bidirectional. |
| "Train operators without touching real equipment" | Simulation / process twin | #2 | Vendor product (Aveva Digital Twin, DELMIA, Siemens NX). Separate initiative — this plan provides the data substrate only. |
| "Track every part through the factory with its full history" | Product / genealogy twin | Not this plan | Camstar MES territory — direct management to the Camstar owner. |
| "Forecast future equipment failures from sensor data" | Predictive / AI twin | #1 | Pillar 2 use case. Year 2–3 on the curated layer + an ML layer. |
| "We saw a demo of \<specific product\> and want to evaluate it" | Vendor-driven exploration | #4 or #2 | Proof-of-concept, scoped to one equipment class on OtOpcUa's equipment namespace. |
| "The board wants to hear about our digital transformation" | No concrete ask; political positioning | #4 | Reframe as "here's what we're already doing that counts as digital transformation" rather than building something new. |
| "\<Parent initiative\> needs a digital twin component" | The parent initiative is the real ask | Depends on parent | Route the conversation to the parent initiative's sponsor. |
## Decision tree
Use this in the moment to place the ask:
```
Is there a named sponsor and funding? (Q7)
├── No → Is there a concrete problem? (Q1)
│   ├── No → Bucket #4 (exploratory). Offer: PoC on one equipment class, deferred bigger decision.
│   └── Yes → Does it fit pillar 2? (Q1, Q4)
│       ├── Yes → Bucket #1. Already delivered; Year 2–3 use case.
│       └── No → Bucket #3. Park for next planning cycle.
└── Yes → Is there a parent initiative? (Q8)
    ├── Yes → Route to parent initiative owner. Out of this plan's hands.
    └── No → Does it fit the foundation this plan delivers? (Q2, Q4)
        ├── Yes → Bucket #2. Adjacent, consumes this plan's foundation.
        └── No → Bucket #2 anyway, but flag that the foundation gap may need to be filled.
```
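
The tree above can also be transcribed into an executable form for quick checks. This sketch is a direct translation; the boolean parameters stand in for the answers to the questions cited in the tree:

```python
def place_ask(has_sponsor_and_funding: bool, has_concrete_problem: bool,
              fits_pillar_2: bool, has_parent_initiative: bool,
              fits_foundation: bool) -> str:
    """Place the ask into a bucket, following the decision tree verbatim."""
    if not has_sponsor_and_funding:
        if not has_concrete_problem:
            return "#4: exploratory — offer a scoped PoC"
        # A concrete problem without sponsor/funding: pillar 2 fit decides.
        return "#1: already delivered" if fits_pillar_2 else "#3: park for next cycle"
    if has_parent_initiative:
        return "route to parent initiative owner"
    if fits_foundation:
        return "#2: adjacent, consumes this plan's foundation"
    return "#2: adjacent, but flag the foundation gap"
```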
## Non-negotiables to hold in the conversation
Whatever the bucket turns out to be, these are already committed positions of the plan and should not be renegotiated in the meeting:
- **Any twin must consume equipment data through OtOpcUa.** No direct equipment OPC UA sessions.
- **Any twin must consume historical/analytical data through Snowflake + dbt.** No direct Historian pulls, no bespoke pipelines.
- **Any twin must consume event streams through Redpanda.** No parallel messaging bus.
- **Any twin must stay within the IT↔OT boundary** — enterprise-hosted twins cross through ScadaBridge central and the SnowBridge like every other enterprise consumer.

These are spelled out in `goal-state.md` → Digital twin → "Design constraints this imposes." Restate them if the conversation drifts toward a parallel integration path.
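
The same constraints can be treated as a checkable rule rather than prose. A minimal sketch; the binding keys, source labels, and function shape are hypothetical illustration, not an interface the plan defines:

```python
# Approved consumption paths from the non-negotiables above.
# Keys and labels are hypothetical stand-ins for real config.
APPROVED_PATHS = {
    "equipment": "OtOpcUa",
    "historical": "Snowflake+dbt",
    "events": "Redpanda",
}

def violations(bindings: dict[str, str]) -> list[str]:
    """Return one message per proposed binding that bypasses an approved path."""
    return [
        f"{kind}: {source} bypasses {APPROVED_PATHS[kind]}"
        for kind, source in bindings.items()
        if kind in APPROVED_PATHS and source != APPROVED_PATHS[kind]
    ]
```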
## Outputs of the meeting
Bring back:
1. The **bucket assignment** (or a reason the conversation did not converge and needs a follow-up).
2. The **sponsor and funding** status, if known.
3. Any **parent initiative** identified.
4. A **one-line summary** of the actual problem the ask exists to solve, in management's own words — this is the quotable thing you'll use to explain the decision later.
5. Agreement on the **next action**: file the use case into pillar 2, stand up a PoC, park until next planning cycle, or route to a parent initiative owner.

If you come back without (1) and (5), the meeting did not do its job — schedule the follow-up before leaving the room.
## What to do after the meeting
- If **bucket #1**: update `goal-state.md` → Digital twin section with a one-line pointer noting "resolved to pillar 2 analytics use case" and a date. Add the use case to the pillar 2 candidate list. Remove the top-pending-item entry from `../status.md`.
- If **bucket #2**: update `goal-state.md` with the sponsor, scope, and foundation touchpoints. No changes to pillars. Keep this brief on file for the adjacent initiative's kickoff.
- If **bucket #3**: note the agreement in `goal-state.md` and move on. Surface in the next planning cycle.
- If **bucket #4**: document the PoC scope in `goal-state.md` (one equipment class, one site, one quarter) and kick it off as a Year 1 side activity on OtOpcUa. Do **not** add a workstream to `roadmap.md` — PoCs don't belong on the grid.

---
**Related:**
- [`../goal-state.md`](../goal-state.md) → Strategic Considerations → Digital twin — plan position and design constraints.
- [`../goal-state.md`](../goal-state.md) → OtOpcUa — "any future consumers such as a prospective digital twin layer."
- [`../status.md`](../status.md) → Top pending items — where this meeting sits in the open-work queue.