lmxopcua/docs/v2/dev-environment.md
Joseph Doherty 32b872d5c7 scripts+docs: Refresh-Services.ps1 for alarm-rig deploy refresh (PR D.1)
Seventeenth PR of the alarms-over-gateway epic
(docs/plans/alarms-over-gateway.md). Lands the script that the
plan calls for in Track D — the actual smoke-run validation
on the dev rig (publish, restart, fire alarms, capture artifacts)
remains operator work; this PR ships the automation that the
operator drives.

scripts/install/Refresh-Services.ps1 — single-shot refresh
script. Designed to run elevated on the deploy host
(DESKTOP-6JL3KKO today; production uses a separate runbook).
The script:

- Stops services in reverse-dependency order (OtOpcUa →
  OtOpcUaWonderwareHistorian → MxAccessGw) and force-kills any
  residual processes (avoids the publish-time MSB3027 file-lock
  the original install script hit).
- Snapshots existing C:\publish trees to
  C:\publish\.backup-YYYY-MM-DD-HHMMSS\ for rollback (skip with
  -SkipBackup).
- Builds + copies mxaccessgw worker (x86 net48) + server (net10.0)
  binaries from the sibling repo.
- Publishes OtOpcUa Server + Wonderware historian sidecar from
  this repo.
- Ensures OTOPCUA_HISTORIAN_ALARM_WRITE_ENABLED=true is set on
  the historian service env block (PR C.2 toggle).
- Starts services in forward-dependency order with the
  inter-service waits the original install used.
- Smoke-verifies (service status, listening ports 5120 / 4840
  / 4841, recent log tails).

Supports -WhatIf for dry-run inspection without touching the
running services.

docs/v2/dev-environment.md — new "Service Refresh —
Refresh-Services.ps1" section between Credential Management
and Test Data Seed. Cross-references the plan's Track D
functional verification scenarios.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 21:11:27 -04:00


Development Environment — OtOpcUa v2

Status: DRAFT — concrete inventory + setup plan for every external resource the v2 build needs. Companion to test-data-sources.md (which catalogues the simulator/stub strategy per driver) and implementation/overview.md (which references the dev environment in entry-gate checklists).

Branch: v2. Created: 2026-04-17.
Updated 2026-04-28: Docker workloads moved off the Windows dev VM to a shared Linux Docker host at 10.100.0.35 so the dev VM can have its GPU re-attached via ESXi passthrough (Hyper-V/WSL2 was blocking it). The two-tier model below is updated accordingly: per-developer Docker Desktop is gone; SQL Server + driver fixtures all live on the central Linux host, identifiable via docker ps --filter label=project=lmxopcua.

Scope

Every external resource a developer needs on their machine, plus the dedicated integration host that runs the heavier simulators per CI tiering decision #99. Includes Docker container images, ports, default credentials (dev only — production overrides documented), and ownership.

Not in scope here: production deployment topology (separate doc when v2 ships), CI pipeline configuration (separate ops concern), individual developer's IDE / editor preferences.

Two Environment Tiers

Per decision #99 (updated 2026-04-28):

| Tier | Purpose | Where it runs | Resources |
| --- | --- | --- | --- |
| PR-CI / inner-loop dev | Fast, runs on minimal Windows + Linux build agents and developer laptops | Each developer's machine; CI runners | Pure-managed in-process simulators (NModbus, OPC Foundation reference server, FOCAS TCP stub from test project). No Docker, no VMs locally. |
| Integration / nightly CI | Full driver-stack validation against real wire protocols | Shared Linux Docker host at 10.100.0.35 (Debian 13, Docker 29.2.1) — one host for all developers; replaces the former per-developer Docker Desktop + Hyper-V model | All Docker simulators (pymodbus, ab_server, python-snap7, opc-plc) + central SQL Server, all running as /opt/otopcua-<driver>/ stacks with the project=lmxopcua label. TwinCAT XAR + the Galaxy/mxaccessgw stack stay on the Windows dev VM (license + Hyper-V constraints unchanged) |

The Linux Docker host is shared because (a) only one team member needs it active at a time, (b) it removes the per-developer Docker Desktop install, and (c) the dev VM no longer needs Hyper-V/WSL2 — freeing it for GPU passthrough.

Installed Inventory — Dev VM (DESKTOP-6JL3KKO)

Running record of v2 dev services on the Windows dev VM. Updated on every install / config change. Credentials here are dev-only per decision #137 — production uses Integrated Security / gMSA per decision #46 and never any value in this table.

Last updated: 2026-04-28 — Docker Desktop + WSL2 removed; Docker workloads now live on the Linux Docker host (see next section).

Host

| Attribute | Value |
| --- | --- |
| Machine name | DESKTOP-6JL3KKO (10.100.0.48) |
| User | dohertj2 (local Administrators) |
| VM platform | VMware ESXi |
| CPU | Intel Xeon E5-2697 v4 @ 2.30GHz (3 vCPUs) |
| OS | Windows 10 Enterprise (10.0.19045) |
| GPU | (Re-attached after WSL2/Hyper-V removal) |

Toolchain

| Tool | Version | Location | Install method |
| --- | --- | --- | --- |
| .NET SDK | 10.0.201 | C:\Program Files\dotnet\sdk\ | Pre-installed |
| .NET AspNetCore runtime | 10.0.5 | C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App\ | Pre-installed |
| .NET NETCore runtime | 10.0.5 | C:\Program Files\dotnet\shared\Microsoft.NETCore.App\ | Pre-installed |
| .NET WindowsDesktop runtime | 10.0.5 | C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App\ | Pre-installed |
| .NET Framework 4.8 SDK | — | — | Optional — only needed when building the mxaccessgw worker (sibling repo, x86 net48) |
| Git | Pre-installed | — | Standard |
| PowerShell 7 | Pre-installed | — | Standard |
| winget | v1.28.220 | — | Standard Windows feature |
| Docker CLI (standalone, no daemon) | 29.3.1 | %USERPROFILE%\bin\docker.exe | Static binary from download.docker.com (2026-04-28) |
| Docker Compose CLI plugin | latest | %USERPROFILE%\.docker\cli-plugins\docker-compose.exe | Direct download from github.com/docker/compose (2026-04-28) |
| lmxopcua-fix.ps1 helper | n/a | %USERPROFILE%\bin\lmxopcua-fix.ps1 | See "Docker host" section below |
| dotnet-ef CLI | 10.0.6 | %USERPROFILE%\.dotnet\tools\dotnet-ef.exe | dotnet tool install --global dotnet-ef --version 10.0.* (2026-04-17) |
| Docker Desktop | — | — | Removed 2026-04-28 — replaced by remote Linux Docker host |
| WSL2 (docker-desktop distro) | — | — | Removed 2026-04-28 (frees Hyper-V for GPU passthrough) |

Services

| Service | Container / Process | Version | Host:Port | Credentials (dev-only) | Data location | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Central config DB | Docker container otopcua-mssql on the Linux Docker host (image mcr.microsoft.com/mssql/server:2022-latest) | 16.0.4250.1 (RTM-CU24-GDR, KB5083252) | 10.100.0.35:14330 → 1433 (container) — port 14330 retained from the previous local-container setup so connection-string ports don't churn | User sa / Password OtOpcUaDev_2026! | Docker named volume otopcua-mssql-data on the Docker host | Running on Docker host (/opt/otopcua-mssql/) since 2026-04-28; carries project=lmxopcua label |
| Dev Galaxy (AVEVA System Platform) | Local install on this dev box — full ArchestrA + Historian + OI-Server stack | v1 baseline | Local COM via MXAccess (C:\Program Files (x86)\ArchestrA\Framework\bin\ArchestrA.MXAccess.dll); Historian via aaH* services; SuiteLink via slssvc | Windows Auth | Galaxy repository DB ZB on local SQL Server (separate instance from otopcua-mssql — legacy v1 Galaxy DB, not related to v2 config DB) | Fully available — Phase 2 lift unblocked. 27 ArchestrA / AVEVA / Wonderware services running incl. aaBootstrap, aaGR (Galaxy Repository), aaLogger, aaUserValidator, aaPim, ArchestrADataStore, AsbServiceManager, AutoBuild_Service; full Historian set (aahClientAccessPoint, aahGateway, aahInSight, aahSearchIndexer, aahSupervisor, InSQLStorage, InSQLConfiguration, InSQLEventSystem, InSQLIndexing, InSQLIOServer, InSQLManualStorage, InSQLSystemDriver, HistorianSearch-x64); slssvc (Wonderware SuiteLink); OI-Gateway install present at C:\Program Files (x86)\Wonderware\OI-Server\OI-Gateway\ (decision #142 AppServer-via-OI-Gateway smoke test now also unblocked) |
| GLAuth (LDAP) | Local install at C:\publish\glauth\ | v2.4.0 | localhost:3893 (LDAP) / 3894 (LDAPS, disabled) | Direct-bind cn={user},dc=lmxopcua,dc=local per auth.md; users readonly/writeop/writetune/writeconfig/alarmack/admin/serviceaccount (passwords in glauth.cfg as SHA-256) | C:\publish\glauth\ | Running (NSSM service GLAuth). Phase 1 Admin uses GroupToRole map ReadOnly→ConfigViewer, WriteOperate→ConfigEditor, AlarmAck→FleetAdmin. v2-rebrand to dc=otopcua,dc=local is a future cosmetic change |
| OPC Foundation reference server | Not yet built | — | 10.100.0.35:62541 (target) | user1 / password1 (reference-server defaults) | — | Pending (needed for Phase 5 OPC UA Client driver testing) |
| FOCAS TCP stub | Not yet built | — | 10.100.0.35:8193 (target) | n/a | — | Pending (built in Phase 5; runs on Docker host) |
| Modbus simulator (otopcua-pymodbus:3.13.0) | Docker compose at /opt/otopcua-modbus/ on Docker host | pinned 3.13.0 | 10.100.0.35:5020 | n/a | n/a | Stack staged; bring up with lmxopcua-fix up modbus <profile> from this VM |
| AB CIP fixture (otopcua-ab-server:libplctag-release) | Docker compose at /opt/otopcua-abcip/ on Docker host | source-pinned release tag | 10.100.0.35:44818 | n/a | n/a | Stack staged; bring up with lmxopcua-fix up abcip <profile> from this VM |
| S7 fixture (otopcua-python-snap7:1.0) | Docker compose at /opt/otopcua-s7/ on Docker host | python-snap7 ≥2.0 | 10.100.0.35:1102 | n/a | n/a | Stack staged; bring up with lmxopcua-fix up s7 s7_1500 from this VM |
| OPC UA simulator (mcr.microsoft.com/iotedge/opc-plc:2.14.10) | Docker compose at /opt/otopcua-opcuaclient/ on Docker host | pinned 2.14.10 | 10.100.0.35:50000 | anonymous | n/a | Stack staged; bring up with lmxopcua-fix up opcuaclient from this VM |
| TwinCAT XAR VM | TBD via Hyper-V on a separate Windows host (NOT this dev VM) | — | — | TwinCAT default route creds | — | Pending — Hyper-V removed from this dev VM; XAR will live on a separate dedicated Windows machine if needed |

Connection strings for appsettings.Development.json

Copy-paste-ready. The checked-in appsettings.json defaults already point at the Docker host (10.100.0.35,14330), so appsettings.Development.json is only needed for per-developer overrides.

{
  "ConfigDatabase": {
    "ConnectionString": "Server=10.100.0.35,14330;Database=OtOpcUaConfig_Dev;User Id=sa;Password=OtOpcUaDev_2026!;TrustServerCertificate=true;Encrypt=false;"
  },
  "Authentication": {
    "Ldap": {
      "Host": "localhost",
      "Port": 3893,
      "UseLdaps": false,
      "BindDn": "cn=admin,dc=otopcua,dc=local",
      "BindPassword": "<see glauth-otopcua.cfg — pending seeding>"
    }
  }
}

LDAP host stays localhost because GLAuth still runs as a native NSSM service on this dev VM (not yet migrated to the Docker host).
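The direct-bind shape GLAuth expects can be sketched in shell. The DN template cn={user},dc=lmxopcua,dc=local is from the inventory above; the user name below is one of the seeded dev accounts:

```shell
# Build the direct-bind DN for a GLAuth dev user
# (template from the inventory: cn={user},dc=lmxopcua,dc=local)
user="writeop"
bind_dn="cn=${user},dc=lmxopcua,dc=local"
echo "$bind_dn"
```

A tool such as ldapsearch can then bind with -D "$bind_dn" -w <password> against localhost:3893 to verify the account; the actual password lives in glauth.cfg.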

For xUnit test fixtures that need a throwaway DB per test run, build connection strings with Database=OtOpcUaConfig_Test_{timestamp} to avoid cross-run pollution.
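A sketch of that convention in shell — the OtOpcUaConfig_Test_ prefix is from the paragraph above; the exact timestamp format is an assumption (any collision-free suffix works):

```shell
# Derive a throwaway per-run database name so parallel or repeated
# test runs never share state
ts=$(date -u +%Y%m%d%H%M%S)
db="OtOpcUaConfig_Test_${ts}"
cs="Server=10.100.0.35,14330;Database=${db};User Id=sa;Password=OtOpcUaDev_2026!;TrustServerCertificate=true;Encrypt=false;"
echo "$cs"
```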

Container management quick reference

All commands SSH into the Docker host. The standalone Windows docker.exe on this VM has no daemon — every operation runs server-side via the helper.

# Status / log / lifecycle from this VM
lmxopcua-fix ls                            # list lmxopcua-tagged containers + status
lmxopcua-fix logs mssql                    # SQL Server log tail
ssh dohertj2@10.100.0.35 'docker stop otopcua-mssql; docker start otopcua-mssql'
ssh dohertj2@10.100.0.35 'docker logs otopcua-mssql --tail 50'

# sqlcmd inside the container (run on the Docker host)
ssh dohertj2@10.100.0.35 'docker exec otopcua-mssql /opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P "OtOpcUaDev_2026!" -C -Q "SELECT @@VERSION"'

# Nuclear reset (destroys dev DB data)
ssh dohertj2@10.100.0.35 'cd /opt/otopcua-mssql && docker compose down -v && docker compose up -d'
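For orientation, the helper's contract reduces to "ssh + docker". The sketch below is an illustrative shell re-implementation of that contract — the real helper is lmxopcua-fix.ps1, and its actual subcommands and flags may differ:

```shell
# Hypothetical shell equivalent of the lmxopcua-fix helper: every
# subcommand is just an ssh into the Docker host (illustrative shape,
# not the shipped lmxopcua-fix.ps1)
DOCKER_HOST_SSH="dohertj2@10.100.0.35"
lmxopcua_fix() {
  case "$1" in
    ls)   ssh "$DOCKER_HOST_SSH" 'docker ps --filter label=project=lmxopcua' ;;
    logs) ssh "$DOCKER_HOST_SSH" "docker logs otopcua-$2 --tail 50" ;;
    up)   ssh "$DOCKER_HOST_SSH" "cd /opt/otopcua-$2 && docker compose ${3:+--profile $3} up -d" ;;
    *)    echo "usage: lmxopcua_fix ls | logs <svc> | up <stack> [profile]" >&2; return 2 ;;
  esac
}
```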

Credential rotation

Dev credentials in this inventory are convenience defaults, not secrets. Change them at will per developer — just update this doc + each developer's appsettings.Development.json. There is no shared secret store for dev.

Resource Inventory

A. Always-required (every developer + integration host)

| Resource | Purpose | Type | Default port | Default credentials | Owner |
| --- | --- | --- | --- | --- | --- |
| .NET 10 SDK | Build all .NET 10 x64 projects | OS install | n/a | n/a | Developer |
| .NET Framework 4.8 SDK + targeting pack | Optional — build the mxaccessgw worker (sibling repo, x86 net48) | Windows install | n/a | n/a | Developer |
| Visual Studio 2022 17.8+ or Rider 2024+ | IDE (any C# IDE works; these are the supported configs) | OS install | n/a | n/a | Developer |
| Git | Source control | OS install | n/a | n/a | Developer |
| PowerShell 7.4+ | Compliance scripts (phase-N-compliance.ps1) | OS install | n/a | n/a | Developer |
| Repo clones | lmxopcua (this repo), scadalink-design (UI/auth reference per memory file scadalink_reference.md), 3yearplan (handoff + corrections) | Git clone | n/a | n/a | Developer |

B. Inner-loop dev (developer machines + PR-CI)

| Resource | Purpose | Type | Default port | Default credentials | Owner |
| --- | --- | --- | --- | --- | --- |
| SQL Server 2022 dev edition | Central config DB; integration tests against Configuration project | Local install OR Docker container mcr.microsoft.com/mssql/server:2022-latest | 1433 default, or 14330 when a native MSSQL instance (e.g. the Galaxy ZB host) already occupies 1433 | sa / OtOpcUaDev_2026! (dev only — production uses Integrated Security or gMSA per decision #46) | Developer (per machine) |
| GLAuth (LDAP server) | Admin UI authentication tests; data-path ACL evaluation tests | Local binary at C:\publish\glauth\ per existing CLAUDE.md | 3893 (LDAP) / 3894 (LDAPS) | Service principal: cn=admin,dc=otopcua,dc=local / OtOpcUaDev_2026!; test users defined in GLAuth config | Developer (per machine) |
| Local dev Galaxy (Aveva System Platform) | Galaxy driver tests; v1 IntegrationTests parity | Existing on dev box per CLAUDE.md | n/a (local COM) | Windows Auth | Developer (already present per project setup) |

C. Integration host (one dedicated Windows machine the team shares)

| Resource | Purpose | Type | Default port | Default credentials | Owner |
| --- | --- | --- | --- | --- | --- |
| Docker Desktop for Windows | Host for every driver test-fixture simulator (Modbus / AB CIP / S7 / OpcUaClient) + SQL Server | Install (Hyper-V required; not compatible with TwinCAT runtime — see TwinCAT row below for the workaround) | n/a | — | Integration host admin |
| Modbus fixture — otopcua-pymodbus:3.13.0 | Modbus driver integration tests | Docker image (local build, see tests/ZB.MOM.WW.OtOpcUa.Driver.Modbus.IntegrationTests/Docker/); 4 compose profiles: standard / dl205 / mitsubishi / s7_1500 | 5020 (non-privileged) | n/a (no auth in protocol) | Developer (per machine) |
| AB CIP fixture — otopcua-ab-server:libplctag-release | AB CIP driver integration tests | Docker image (multi-stage build of libplctag's ab_server from source, pinned to the release tag; see tests/ZB.MOM.WW.OtOpcUa.Driver.AbCip.IntegrationTests/Docker/); 4 compose profiles: controllogix / compactlogix / micro800 / guardlogix | 44818 (CIP / EtherNet/IP) | n/a | Developer (per machine) |
| S7 fixture — otopcua-python-snap7:1.0 | S7 driver integration tests | Docker image (local build, python-snap7>=2.0; see tests/ZB.MOM.WW.OtOpcUa.Driver.S7.IntegrationTests/Docker/); 1 compose profile: s7_1500 | 1102 (non-privileged; driver honours S7DriverOptions.Port) | n/a | Developer (per machine) |
| OPC UA Client fixture — mcr.microsoft.com/iotedge/opc-plc:2.14.10 | OpcUaClient driver integration tests | Docker image (Microsoft-maintained, pinned; see tests/ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient.IntegrationTests/Docker/) | 50000 (OPC UA) | Anonymous (--daa off); auto-accept certs (--aa) | Developer (per machine) |
| TwinCAT XAR runtime VM | TwinCAT ADS testing (per test-data-sources.md §5; Beckhoff XAR cannot coexist with Hyper-V on the same OS) | Hyper-V VM with Windows + TwinCAT XAR installed under 7-day renewable trial | 48898 (ADS over TCP) | TwinCAT default route credentials configured per Beckhoff docs | Integration host admin |
| Rockwell Studio 5000 Logix Emulate | AB CIP golden-box tier — closes UDT / ALMD / AOI / GuardLogix-safety / CompactLogix-ConnectionSize gaps the ab_server simulator can't cover. Loads the L5X project documented at tests/.../AbCip.IntegrationTests/LogixProject/README.md. Tests gated on AB_SERVER_PROFILE=emulate + AB_SERVER_ENDPOINT=<ip>:44818; see docs/drivers/AbServer-Test-Fixture.md §Logix Emulate golden-box tier | Windows-only install; Hyper-V conflict — can't coexist with Docker Desktop's WSL 2 backend on the same OS, same story as TwinCAT XAR. Runs on a dedicated Windows PC reachable on the LAN | 44818 (CIP / EtherNet/IP) | None required at the CIP layer; Studio 5000 project credentials per Rockwell install | Integration host admin (license + install); Developer (per session — open Emulate, load L5X, click Run) |
| FOCAS TCP stub (Driver.Focas.TestStub) | FOCAS functional testing (per test-data-sources.md §6) | Local .NET 10 console app from this repo | 8193 (FOCAS) | n/a | Developer / integration host (run on demand) |
| FOCAS FaultShim (Driver.Focas.FaultShim) | FOCAS native-fault injection (per test-data-sources.md §6) | Test-only native DLL named Fwlib64.dll, loaded via DLL search path in the test fixture | n/a (in-process) | n/a | Developer / integration host (test-only) |

Docker fixtures — quick reference

Every driver's integration-test simulator ships as a Docker image (or pulls one from MCR). Start the one you need, run dotnet test, stop it. Container lifecycle is always manual — fixtures TCP-probe at collection init + skip cleanly when nothing's running.

| Driver | Fixture image | Compose file | Bring up |
| --- | --- | --- | --- |
| Modbus | local-build otopcua-pymodbus:3.13.0 | tests/ZB.MOM.WW.OtOpcUa.Driver.Modbus.IntegrationTests/Docker/docker-compose.yml | docker compose -f <compose> --profile <standard\|dl205\|mitsubishi\|s7_1500> up -d |
| AB CIP | local-build otopcua-ab-server:libplctag-release | tests/ZB.MOM.WW.OtOpcUa.Driver.AbCip.IntegrationTests/Docker/docker-compose.yml | docker compose -f <compose> --profile <controllogix\|compactlogix\|micro800\|guardlogix> up -d |
| S7 | local-build otopcua-python-snap7:1.0 | tests/ZB.MOM.WW.OtOpcUa.Driver.S7.IntegrationTests/Docker/docker-compose.yml | docker compose -f <compose> --profile s7_1500 up -d |
| OpcUaClient | mcr.microsoft.com/iotedge/opc-plc:2.14.10 (pinned) | tests/ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient.IntegrationTests/Docker/docker-compose.yml | docker compose -f <compose> up -d |

First build of a local-build image takes 15 minutes; subsequent runs use layer cache. ab_server is the slowest (multi-stage build clones libplctag + compiles C). Stop with docker compose -f <compose> --profile <…> down.

Endpoint overrides — every fixture respects an env var to point at a real PLC instead of the simulator:

  • MODBUS_SIM_ENDPOINT (default localhost:5020)
  • AB_SERVER_ENDPOINT (no default; overrides the local container endpoint)
  • S7_SIM_ENDPOINT (default localhost:1102)
  • OPCUA_SIM_ENDPOINT (default opc.tcp://localhost:50000)
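As a sketch of how such a host:port override is consumed — the splitting shown here is illustrative, not the fixtures' actual parsing code (note the OPC UA variable carries a scheme, so it wouldn't split this way):

```shell
# Pretend a developer pointed the S7 fixture at a bench PLC
S7_SIM_ENDPOINT="192.168.10.20:102"
host="${S7_SIM_ENDPOINT%%:*}"   # text before the first ':'
port="${S7_SIM_ENDPOINT##*:}"   # text after the last ':'
echo "$host $port"
```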

No native launchers — Docker is the only supported path for these fixtures. A fresh clone needs Docker Desktop and nothing else; fixture TCP probes skip tests cleanly when the container isn't running.

See each driver's docs/drivers/*-Test-Fixture.md for the full coverage map + gap inventory.

D. Cloud / external services

| Resource | Purpose | Type | Access | Owner |
| --- | --- | --- | --- | --- |
| Gitea at gitea.dohertylan.com | Hosts lmxopcua, 3yearplan, scadalink-design repos | HTTPS git | Existing org credentials | Org IT |
| Anthropic API (for Codex adversarial reviews) | /codex:adversarial-review invocations during exit gates | HTTPS via Codex companion script | API key in developer's ~/.claude/... config | Developer (per codex:setup skill) |

Network Topology (integration host)

                 ┌────────────────────────────────────────┐
                 │   Integration Host (Windows + Docker)  │
                 │                                        │
                 │   Docker Desktop (Linux containers):   │
                 │   ┌───────────────────────────────┐    │
                 │   │  oitc/modbus-server  :502/tcp │    │
                 │   └───────────────────────────────┘    │
                 │                                        │
                 │   WSL2 (Snap7 + ab_server, separate    │
                 │   from Docker Desktop's HyperV):       │
                 │   ┌───────────────────────────────┐    │
                 │   │  snap7-server        :102/tcp │    │
                 │   │  ab_server         :44818/tcp │    │
                 │   └───────────────────────────────┘    │
                 │                                        │
                 │   Hyper-V VM (Windows + TwinCAT XAR):  │
                 │   ┌───────────────────────────────┐    │
                 │   │  TwinCAT XAR        :48898    │    │
                 │   └───────────────────────────────┘    │
                 │                                        │
                 │   Native processes:                    │
                 │   ┌───────────────────────────────┐    │
                 │   │  ConsoleReferenceServer :62541│    │
                 │   │  FOCAS TestStub          :8193│    │
                 │   └───────────────────────────────┘    │
                 │                                        │
                 │   SQL Server 2022 (local install):     │
                 │   ┌───────────────────────────────┐    │
                 │   │  OtOpcUaConfig_Test    :1433  │    │
                 │   └───────────────────────────────┘    │
                 └────────────────────────────────────────┘
                              ▲
                              │ tests connect via the host's hostname or 127.0.0.1
                              │
                 ┌────────────────────────────────────────┐
                 │   Developer / CI machine running       │
                 │   `dotnet test --filter Category=...`  │
                 └────────────────────────────────────────┘

Bootstrap Order — Inner-loop Developer Machine

Order matters because some installs have prerequisites and several need admin elevation (UAC). ~60–90 min total on a fresh Windows machine, including reboots.

Admin elevation appears at: WSL2 install (step 4a), Docker Desktop install (step 4b), and any wsl --install -d call. winget will prompt UAC interactively when these run; accept it. There is no fully-silent admin-free install path on Windows for Docker Desktop's prerequisites.

  1. Install .NET 10 SDK (https://dotnet.microsoft.com/) — required to build anything

    winget install --id Microsoft.DotNet.SDK.10 --accept-package-agreements --accept-source-agreements
    
  2. Install .NET Framework 4.8 SDK + targeting pack — optional, only needed when building the mxaccessgw worker (sibling repo, x86 net48). Not required by anything in this repo.

    winget install --id Microsoft.DotNet.Framework.DeveloperPack_4 --accept-package-agreements --accept-source-agreements
    
  3. Install Git + PowerShell 7.4+

    winget install --id Git.Git --accept-package-agreements --accept-source-agreements
    winget install --id Microsoft.PowerShell --accept-package-agreements --accept-source-agreements
    
  4. Install Docker Desktop (with WSL2 backend per decision #134, leaves Hyper-V free for the future TwinCAT XAR VM):

    4a. Enable WSL2 — UAC required:

    wsl --install
    

    Reboot when prompted. After reboot, the default Ubuntu distro launches and asks for a username/password — set them (these are WSL-internal, not used for Docker auth).

    Verify after reboot:

    wsl --status
    wsl --list --verbose
    

    Expected: Default Version: 2, at least one distro (typically Ubuntu) with STATE Running or Stopped.

    4b. Install Docker Desktop — UAC required:

    winget install --id Docker.DockerDesktop --accept-package-agreements --accept-source-agreements
    

    The installer adds you to the docker-users Windows group. Sign out and back in (or reboot) so the group membership takes effect.

    4c. Configure Docker Desktop — open it once after sign-in:

    • Settings → General: confirm "Use the WSL 2 based engine" is checked (decision #134 — coexists with future Hyper-V VMs)
    • Settings → General: confirm "Use Windows containers" is NOT checked (we use Linux containers for mcr.microsoft.com/mssql/server, oitc/modbus-server, etc.)
    • Settings → Resources → WSL Integration: enable for the default Ubuntu distro
    • (Optional, large fleets) Settings → Resources → Advanced: bump CPU / RAM allocation if you have headroom

    Verify:

    docker --version
    docker ps
    

    Expected: version reported, docker ps returns an empty table (no containers running yet, but the daemon is reachable).

  5. Clone repos:

    git clone https://gitea.dohertylan.com/dohertj2/lmxopcua.git
    git clone https://gitea.dohertylan.com/dohertj2/scadalink-design.git
    git clone https://gitea.dohertylan.com/dohertj2/3yearplan.git
    
  6. Start SQL Server (Linux container; runs in the WSL2 backend):

    docker run --name otopcua-mssql `
        -e "ACCEPT_EULA=Y" `
        -e "MSSQL_SA_PASSWORD=OtOpcUaDev_2026!" `
        -p 14330:1433 `
        -v otopcua-mssql-data:/var/opt/mssql `
        -d mcr.microsoft.com/mssql/server:2022-latest
    

    The host port is 14330, not 1433, to coexist with the native MSSQL14 instance that hosts the Galaxy ZB DB on port 1433. Both the native instance and Docker's port-proxy will happily bind 0.0.0.0:1433, but only one of them catches any given connection — which is effectively non-deterministic and produces confusing "Login failed for user 'sa'" errors when the native instance wins. Using 14330 eliminates the race entirely.

    The -v otopcua-mssql-data:/var/opt/mssql named volume preserves database files across container restarts and docker rm — drop it only if you want a strictly throwaway instance.

    Verify:

    docker ps --filter name=otopcua-mssql
    docker exec -it otopcua-mssql /opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P "OtOpcUaDev_2026!" -C -Q "SELECT @@VERSION"
    

    Expected: container STATUS Up, SELECT @@VERSION returns Microsoft SQL Server 2022 (...).

    To stop / start later:

    docker stop otopcua-mssql
    docker start otopcua-mssql
    
  7. Install GLAuth at C:\publish\glauth\ per existing CLAUDE.md instructions; populate glauth-otopcua.cfg with the test users + groups (template in docs/v2/dev-environment-glauth-config.md — to be added in the setup task)

  8. Install EF Core CLI (used to apply migrations against the SQL Server container starting in Phase 1 Stream B):

    dotnet tool install --global dotnet-ef --version 10.0.*
    
  9. Run dotnet restore in the lmxopcua repo

  10. Run dotnet build ZB.MOM.WW.OtOpcUa.slnx (post-Phase-0) or ZB.MOM.WW.LmxOpcUa.slnx (pre-Phase-0) — verifies the toolchain

  11. Run dotnet test with the inner-loop filter — should pass on a fresh machine

Bootstrap Order — Integration Host

Order matters more here because of Hyper-V conflicts. ~half-day on a fresh machine.

  1. Install Windows Server 2022 or Windows 11 Pro (Hyper-V capable)
  2. Enable Hyper-V + WSL2
  3. Install Docker Desktop for Windows, configure to use WSL2 backend (NOT Hyper-V backend — leaves Hyper-V free for the TwinCAT XAR VM)
  4. Set up WSL2 distro (Ubuntu 22.04 LTS) for native Linux binaries that conflict with Docker Desktop
  5. Pull / start Modbus simulator:
    docker run -d --name modbus-sim -p 502:502 -v ${PWD}/modbus-config.yaml:/server_config.yaml oitc/modbus-server
    
  6. Build + start ab_server (in WSL2):
    git clone https://github.com/libplctag/libplctag
    cd libplctag/src/tests
    make ab_server
    ./ab_server --plc=ControlLogix --port=44818  # default tags loaded from a config file
    
  7. Build + start Snap7 Server (in WSL2):
  8. Set up TwinCAT XAR VM:
    • Create a Hyper-V VM (Gen 2, Windows 11)
    • Install TwinCAT 3 XAE + XAR (download from Beckhoff, free for dev/test)
    • Activate the 7-day trial; document the rotation schedule
    • Configure ADS routes for the integration host to reach the VM
    • Deploy the test PLC project from test-data-sources.md §5 ("a tiny test project — MAIN (PLC code) + GVL")
  9. Build + start OPC Foundation reference server:
    git clone https://github.com/OPCFoundation/UA-.NETStandard
    cd UA-.NETStandard/Applications/ConsoleReferenceServer
    dotnet run --port 62541
    
  10. Install SQL Server 2022 dev edition (or run the Docker container as on developer machines)
  11. Build + run FOCAS TestStub (from this repo, post-Phase-5):
    dotnet run --project src/ZB.MOM.WW.OtOpcUa.Driver.Focas.TestStub -- --port 8193
    
  12. Verify by running dotnet test --filter Category=Integration from a developer machine pointed at the integration host

Credential Management

Dev environment defaults

The defaults in this doc are for dev environments only. They're documented here so a developer can stand up a working setup without hunting; they're not secret.

Production overrides

For any production deployment:

  • SQL Server: Integrated Security with gMSA (decision #46) — never SQL login with shared password
  • LDAP: production GLAuth or AD instance with proper service principal
  • TwinCAT: paid license (per-runtime), not the 7-day trial
  • All other services: deployment-team's credential management process; documented in deployment-guide.md (separate doc, post-v2.0)
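For contrast with the dev connection string earlier in this doc, the production shape under decision #46 looks like the fragment below — the server and database names are placeholders, and the authoritative string belongs to deployment-guide.md:

```
Server=<prod-sql-host>;Database=OtOpcUaConfig;Integrated Security=true;TrustServerCertificate=true;
```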

Storage

For dev defaults:

  • SQL Server SA password: stored in each developer's local appsettings.Development.json (gitignored)
  • GLAuth bind DN/password: stored in glauth-otopcua.cfg (gitignored)
  • Docker secrets / volumes: developer-local

For production:

  • gMSA / cert-mapped principals — no passwords stored anywhere
  • Per-NodeId credentials in ClusterNodeCredential table (per decision #83)
  • Admin app uses LDAP (no SQL credential at all on the user-facing side)

Service Refresh — Refresh-Services.ps1

The deploy host hosts three NSSM-wrapped services (MxAccessGw, OtOpcUaWonderwareHistorian, OtOpcUa) that consume binaries from C:\publish\. After landing changes in either repo, refresh the deployed bits with scripts\install\Refresh-Services.ps1:

# Default invocation (dev rig).
& C:\Users\dohertj2\Desktop\lmxopcua\scripts\install\Refresh-Services.ps1

# Skip the timestamped backup (faster on iterative dev cycles).
& Refresh-Services.ps1 -SkipBackup

# Dry-run — print the actions without doing them.
& Refresh-Services.ps1 -WhatIf

The script:

  1. Stops services in reverse-dependency order (OtOpcUa → OtOpcUaWonderwareHistorian → MxAccessGw) and force-kills any residual processes.
  2. Snapshots the existing C:\publish\mxaccessgw\ and C:\publish\lmxopcua\ trees to C:\publish\.backup-<timestamp>\ for rollback (skip with -SkipBackup).
  3. Builds + copies mxaccessgw worker (x86 net48) + server (net10.0) binaries from the sibling repo.
  4. Runs dotnet publish for the OtOpcUa server + Wonderware historian sidecar from this repo.
  5. Ensures OTOPCUA_HISTORIAN_ALARM_WRITE_ENABLED=true is set on the historian service env block (PR C.2 toggle).
  6. Starts services in forward-dependency order (MxAccessGw → OtOpcUaWonderwareHistorian → OtOpcUa).
  7. Smoke-verifies — service status, listening ports (5120 / 4840 / 4841), recent log tails.
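Step 7's port check can be reproduced by hand when a refresh looks suspect. A portable sh sketch (on the Windows deploy host itself the equivalent is typically Test-NetConnection -Port <n>; the /dev/tcp redirection below is a bash-ism):

```shell
# Probe the three listeners the refresh script expects (5120 / 4840 / 4841)
for p in 5120 4840 4841; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
    echo "port $p: listening"
  else
    echo "port $p: NOT listening"
  fi
done
```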

Functional verification (alarm raise / scripted alarm historian round-trip / sub-attribute fallback) is the operator's next step after the refresh; see docs/plans/alarms-over-gateway.md §Track D for the scenarios.

Test Data Seed

Each environment needs a baseline data set so cross-developer tests are reproducible. Lives in tests/ZB.MOM.WW.OtOpcUa.IntegrationTests/SeedData/:

  • GLAuth users: test-readonly@otopcua.local (in OtOpcUaReadOnly), test-operator@otopcua.local (OtOpcUaWriteOperate + OtOpcUaAlarmAck), test-fleetadmin@otopcua.local (OtOpcUaAdmins)
  • Central config DB: a seed cluster TEST-CLUSTER-01 with 1 node + 1 namespace + 0 drivers (other tests add drivers)
  • Modbus sim: YAML config preloading the addresses from test-data-sources.md §1 (HR 0–9 constants, ramp at HR 100, etc.)
  • TwinCAT XAR: the test PLC project deployed; symbols match test-data-sources.md §5
  • OPC Foundation reference server: starts with built-in test address space; tests don't modify it

Seeds are idempotent (re-runnable) and gitignored where they contain credentials.
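The idempotency contract, reduced to a minimal shell sketch — the path is throwaway and the marker-file mechanism is illustrative; the real seeds live in SeedData/ and are driver-specific:

```shell
# Re-runnable seed: the second invocation is a no-op because the
# marker already exists
seed_dir=$(mktemp -d)
seed() {
  [ -f "$seed_dir/.seeded" ] && return 0   # already seeded: do nothing
  echo "seeding $seed_dir"
  touch "$seed_dir/.seeded"
}
seed   # prints "seeding ..."
seed   # prints nothing
```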

Setup Plan (executable)

Step 1 — Inner-loop dev environment (each developer, ~1 day with documentation)

Owner: developer. Prerequisite: Bootstrap order steps 1–10 above (note: steps 4a, 4b, and any later wsl --install -d call require admin elevation / UAC interaction — there is no fully-silent admin-free install path on Windows for Docker Desktop's prerequisites). Acceptance:

- dotnet test ZB.MOM.WW.OtOpcUa.slnx passes
- A test that touches the central config DB succeeds (proves SQL Server reachable)
- A test that authenticates against GLAuth succeeds (proves LDAP reachable)
- docker ps --filter name=otopcua-mssql shows the SQL Server container STATUS Up
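The four checks can be driven from one PowerShell session (a sketch — the solution file and container name are the ones this doc uses; the --filter test names are illustrative, not real category names from the suite):

```powershell
# 1) Full inner-loop suite.
dotnet test ZB.MOM.WW.OtOpcUa.slnx

# 2 + 3) Targeted DB / LDAP proof-points (filter expressions are hypothetical).
dotnet test ZB.MOM.WW.OtOpcUa.slnx --filter "FullyQualifiedName~CentralConfig"
dotnet test ZB.MOM.WW.OtOpcUa.slnx --filter "FullyQualifiedName~Glauth"

# 4) SQL Server container is Up.
docker ps --filter "name=otopcua-mssql" --format "{{.Names}}: {{.Status}}"
```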

#### Troubleshooting (common Windows install snags)

- wsl --install says "Windows Subsystem for Linux has no installed distributions" after first reboot — open a fresh PowerShell and run wsl --install -d Ubuntu (the -d form forces a distro install if the prereq-only install ran first).
- Docker Desktop install completes but docker --version reports "command not found" — PATH doesn't pick up the new Docker shims until a new shell is opened. Open a fresh PowerShell, or sign out/in, and retry.
- docker ps reports "permission denied" or "Cannot connect to the Docker daemon" — your user account isn't in the docker-users group yet. Sign out and back in (group membership is loaded at login). Verify with whoami /groups | findstr docker-users.
- Docker Desktop refuses to start with "WSL 2 installation is incomplete" — open the WSL2 kernel update from https://aka.ms/wsl2kernel, install, then restart Docker Desktop. (Modern wsl --install ships the kernel automatically; this is mostly a legacy problem.)
- SQL Server container starts but immediately exits — SA password complexity. The default OtOpcUaDev_2026! meets the requirement (≥8 chars, upper + lower + digit + symbol); if you change it, keep complexity. Check docker logs otopcua-mssql for the exact failure.
- docker run fails with "image platform does not match host platform" — your Docker is configured for Windows containers. Switch to Linux containers in Docker Desktop tray menu ("Switch to Linux containers"), or recheck Settings → General per step 4c.
- Hyper-V conflict when later setting up TwinCAT XAR VM — confirm Docker Desktop is on the WSL 2 backend, not Hyper-V backend. The two coexist only when Docker uses WSL 2.
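Several of the checks above condense into one diagnostic pass (a sketch; all commands are standard Windows / Docker CLI):

```powershell
wsl --status                                  # WSL present + default version
whoami /groups | findstr docker-users         # group membership after re-login
docker info --format "{{.OSType}}"            # must print "linux", not "windows"
docker logs otopcua-mssql --tail 20           # SA-password / startup failures
```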

### Step 2 — Integration host (one-time, ~1 week)

**Owner:** DevOps lead

**Prerequisite:** dedicated Windows machine, hardware specs ≥ 8 cores / 32 GB RAM / 500 GB SSD

**Acceptance:**

- Each simulator (Modbus, AB, S7, TwinCAT, OPC UA reference) responds to a probe from a developer machine
- A nightly CI job runs dotnet test --filter Category=Integration against the integration host and passes
- Service-account permissions reviewed by security lead
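Simulator reachability can be probed from any developer machine with Test-NetConnection. A sketch — the hostname is hypothetical, 502 and 4840 are the ports this doc names, and the remaining ports are the protocols' conventional defaults, not values confirmed by this repo:

```powershell
# Hypothetical integration-host name — substitute the real DNS entry.
$target = "otopcua-int01"
$probes = [ordered]@{
    "Modbus"           = 502     # port named in this doc
    "OPC UA reference" = 4840    # port named in this doc
    "S7 (ISO-on-TCP)"  = 102     # conventional default
    "AB (EtherNet/IP)" = 44818   # conventional default
    "TwinCAT (ADS)"    = 48898   # conventional default
}
foreach ($p in $probes.GetEnumerator()) {
    $ok = Test-NetConnection $target -Port $p.Value -InformationLevel Quiet
    "{0,-18} {1,5}  {2}" -f $p.Key, $p.Value, $(if ($ok) { "OK" } else { "FAIL" })
}
```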

### Step 3 — TwinCAT XAR VM trial rotation automation (one-time, half-day)

**Owner:** DevOps lead

**Prerequisite:** Step 2 complete

**Acceptance:**

- A scheduled task on the integration host either re-activates the 7-day trial automatically OR alerts the team 24h before expiry; cycle tested
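One way to wire the daily check as a scheduled task (a sketch — the helper-script path and task name are hypothetical; the cmdlets are the standard Windows ScheduledTasks module):

```powershell
# Hypothetical helper that re-activates the trial or alerts the team.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
           -Argument "-NoProfile -File C:\ops\Check-TwinCatTrial.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 6:00AM
Register-ScheduledTask -TaskName "TwinCAT-Trial-Watch" `
    -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest
```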

### Step 4 — Per-developer GLAuth config sync (recurring, when test users change)

**Owner:** developer (each)

**Acceptance:**

- A script in the repo (scripts/sync-glauth-dev-config.ps1) updates the local GLAuth config from a template; documented in CLAUDE.md
- Test users defined in the template work on every developer machine
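A typical sync pass, assuming the script name from the acceptance criterion (the container name and the restart step are assumptions — adjust to however GLAuth runs on your machine):

```powershell
# Regenerate the local GLAuth config from the repo template.
.\scripts\sync-glauth-dev-config.ps1

# Hypothetical: restart the local GLAuth container so it reloads the config.
docker restart otopcua-glauth
```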

### Step 5 — Docker simulator config (per-developer, ~30 min)

**Owner:** developer (each)

**Acceptance:**

- The Modbus simulator container is reachable at 127.0.0.1:502 from the developer's test runner (only needed if the developer is debugging Modbus driver work; not required for Phase 0/1)
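Quick reachability check once the container is up (502 is the port named above):

```powershell
# Returns True when the simulator answers on the Modbus port.
Test-NetConnection 127.0.0.1 -Port 502 -InformationLevel Quiet
```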

### Step 6 — Codex companion setup (per-developer, ~5 min)

**Owner:** developer (each)

**Acceptance:**

- /codex:setup skill confirms readiness; /codex:adversarial-review works against a small test diff

## Operational Risks

| Risk | Mitigation |
|---|---|
| TwinCAT 7-day trial expires mid-CI run | Step 3 automation; alert before expiry; license budget approved as fallback for production-grade pre-release validation |
| Docker Desktop license terms change for org use | Track Docker pricing; budget approved or fall back to Podman if license becomes blocking |
| Integration host single point of failure | Document the setup so a second host can be provisioned in <2 days; test fixtures pin to a hostname so failover changes one DNS entry |
| GLAuth dev config drifts between developers | Sync script + template (Step 4) keep configs aligned; periodic review |
| Galaxy / MXAccess licensing for non-dev machines | Galaxy stays on the dev machines that already have Aveva licenses; integration host does NOT run Galaxy (the mxaccessgw worker requires the AVEVA stack and runs on the dev box, not the shared host) |
| Long-lived dev env credentials in dev appsettings.Development.json | Gitignored; documented as dev-only; production never uses these |

## Decisions to Add to plan.md

| # | Decision | Rationale |
|---|---|---|
| 133 | Two-tier dev environment: inner-loop (in-process simulators on developer machines) + integration (Docker / VM / native simulators on a single dedicated Windows host) | Per decision #99. Concrete inventory + setup plan in dev-environment.md |
| 134 | Docker Desktop with WSL2 backend (not Hyper-V backend) on integration host so TwinCAT XAR VM can run in Hyper-V alongside Docker | TwinCAT runtime cannot coexist with Hyper-V-mode Docker Desktop; WSL2 backend leaves Hyper-V free for the XAR VM. Documented operational constraint |
| 135 | TwinCAT XAR runs only in a dedicated VM on the integration host; developer machines do NOT run XAR locally | The 7-day trial reactivation needs centralized management; the VM is shared infrastructure |
| 136 | Galaxy / MXAccess testing happens on developer machines that have local Aveva installs, NOT on the shared integration host | Aveva licensing scoped to dev workstations; integration host doesn't carry the license. v1 IntegrationTests parity (Phase 2) runs on developer boxes. |
| 137 | Dev env credentials are documented openly in dev-environment.md; production credentials use Integrated Security / gMSA per decision #46 | Dev defaults are not secrets; they're convenience. Production never uses these values |