Development Environment — OtOpcUa v2
Status: DRAFT — concrete inventory + setup plan for every external resource the v2 build needs. Companion to `test-data-sources.md` (which catalogues the simulator/stub strategy per driver) and `implementation/overview.md` (which references the dev environment in entry-gate checklists).
Branch: `v2`
Created: 2026-04-17
Scope
Every external resource a developer needs on their machine, plus the dedicated integration host that runs the heavier simulators per CI tiering decision #99. Includes Docker container images, ports, default credentials (dev only — production overrides documented), and ownership.
Not in scope here: production deployment topology (separate doc when v2 ships), CI pipeline configuration (separate ops concern), individual developer's IDE / editor preferences.
Two Environment Tiers
Per decision #99:
| Tier | Purpose | Where it runs | Resources |
|---|---|---|---|
| PR-CI / inner-loop dev | Fast, runs on minimal Windows + Linux build agents and developer laptops | Each developer's machine; CI runners | Pure-managed in-process simulators (NModbus, OPC Foundation reference server, FOCAS TCP stub from test project). No Docker, no VMs. |
| Nightly / integration CI | Full driver-stack validation against real wire protocols | One dedicated Windows host with Docker Desktop + Hyper-V + a TwinCAT XAR VM | All Docker simulators (oitc/modbus-server, ab_server, Snap7), TwinCAT XAR VM, Galaxy.Host installer + dev Galaxy access, FOCAS TCP stub binary, FOCAS FaultShim assembly |
The tier split keeps developer onboarding fast (no Docker required for first build) while concentrating the heavy simulator setup on one machine the team maintains.
Installed Inventory — This Machine
Running record of every v2 dev service stood up on this developer machine. Updated on every install / config change. Credentials here are dev-only per decision #137 — production uses Integrated Security / gMSA per decision #46 and never any value in this table.
Last updated: 2026-04-17
Host
| Attribute | Value |
|---|---|
| Machine name | DESKTOP-6JL3KKO |
| User | dohertj2 (member of local Administrators + docker-users) |
| VM platform | VMware (VMware20,1), nested virtualization enabled |
| CPU | Intel Xeon E5-2697 v4 @ 2.30GHz (3 vCPUs) |
| OS | Windows (WSL2 + Hyper-V Platform features installed) |
Toolchain
| Tool | Version | Location | Install method |
|---|---|---|---|
| .NET SDK | 10.0.201 | `C:\Program Files\dotnet\sdk\` | Pre-installed |
| .NET AspNetCore runtime | 10.0.5 | `C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App\` | Pre-installed |
| .NET NETCore runtime | 10.0.5 | `C:\Program Files\dotnet\shared\Microsoft.NETCore.App\` | Pre-installed |
| .NET WindowsDesktop runtime | 10.0.5 | `C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App\` | Pre-installed |
| .NET Framework 4.8 SDK | — | Pending (needed for Phase 2 Galaxy.Host; not yet required) | — |
| Git | Pre-installed | Standard | — |
| PowerShell 7 | Pre-installed | Standard | — |
| winget | v1.28.220 | Standard Windows feature | — |
| WSL | Default v2; distro `docker-desktop`, state Running | — | `wsl --install --no-launch` (2026-04-17) |
| Docker Desktop | 29.3.1 (engine) / 4.68.0 (app) | Standard | `winget install --id Docker.DockerDesktop` (2026-04-17) |
| dotnet-ef CLI | 10.0.6 | `%USERPROFILE%\.dotnet\tools\dotnet-ef.exe` | `dotnet tool install --global dotnet-ef --version 10.0.*` (2026-04-17) |
Services
| Service | Container / Process | Version | Host:Port | Credentials (dev-only) | Data location | Status |
|---|---|---|---|---|---|---|
| Central config DB | Docker container `otopcua-mssql` (image `mcr.microsoft.com/mssql/server:2022-latest`) | 16.0.4250.1 (RTM-CU24-GDR, KB5083252) | `localhost:14330` (host) → 1433 (container) — remapped from 1433 to avoid collision with the native MSSQL14 instance that hosts the Galaxy `ZB` DB (both bind `0.0.0.0:1433`; whichever wins the race gets connections) | User `sa` / Password `OtOpcUaDev_2026!` | Docker named volume `otopcua-mssql-data` (mounted at `/var/opt/mssql` inside the container) | ✅ Running — InitialSchema migration applied, 16 entity tables live |
| Dev Galaxy (AVEVA System Platform) | Local install on this dev box — full ArchestrA + Historian + OI-Server stack | v1 baseline | Local COM via MXAccess (`C:\Program Files (x86)\ArchestrA\Framework\bin\ArchestrA.MXAccess.dll`); Historian via `aaH*` services; SuiteLink via `slssvc` | Windows Auth | Galaxy repository DB `ZB` on local SQL Server (separate instance from `otopcua-mssql` — legacy v1 Galaxy DB, not related to the v2 config DB) | ✅ Fully available — Phase 2 lift unblocked. 27 ArchestrA / AVEVA / Wonderware services running incl. aaBootstrap, aaGR (Galaxy Repository), aaLogger, aaUserValidator, aaPim, ArchestrADataStore, AsbServiceManager, AutoBuild_Service; full Historian set (aahClientAccessPoint, aahGateway, aahInSight, aahSearchIndexer, aahSupervisor, InSQLStorage, InSQLConfiguration, InSQLEventSystem, InSQLIndexing, InSQLIOServer, InSQLManualStorage, InSQLSystemDriver, HistorianSearch-x64); slssvc (Wonderware SuiteLink); OI-Gateway install present at `C:\Program Files (x86)\Wonderware\OI-Server\OI-Gateway\` (decision #142 AppServer-via-OI-Gateway smoke test now also unblocked) |
| GLAuth (LDAP) | Local install at `C:\publish\glauth\` | v2.4.0 | `localhost:3893` (LDAP) / 3894 (LDAPS, disabled) | Direct-bind `cn={user},dc=lmxopcua,dc=local` per auth.md; users readonly/writeop/writetune/writeconfig/alarmack/admin/serviceaccount (passwords in glauth.cfg as SHA-256) | `C:\publish\glauth\` | ✅ Running (NSSM service GLAuth). Phase 1 Admin uses GroupToRole map ReadOnly→ConfigViewer, WriteOperate→ConfigEditor, AlarmAck→FleetAdmin. v2 rebrand to `dc=otopcua,dc=local` is a future cosmetic change |
| OPC Foundation reference server | Not yet built | — | `localhost:62541` (target) | user1 / password1 (reference-server defaults) | — | Pending (needed for Phase 5 OPC UA Client driver testing) |
| FOCAS TCP stub | Not yet built | — | `localhost:8193` (target) | n/a | — | Pending (built in Phase 5) |
| Modbus simulator (`oitc/modbus-server`) | — | — | `localhost:502` (target) | n/a | — | Pending (needed for Phase 3 Modbus driver; moves to integration host per two-tier model) |
| libplctag `ab_server` | — | — | `localhost:44818` (target) | n/a | — | Pending (Phase 3/4 AB CIP and AB Legacy drivers) |
| Snap7 Server | — | — | `localhost:102` (target) | n/a | — | Pending (Phase 4 S7 driver) |
| TwinCAT XAR VM | — | — | `localhost:48898` (ADS) (target) | TwinCAT default route creds | — | Pending — runs in a Hyper-V VM, not on this dev box (per decision #135) |
Connection strings for appsettings.Development.json
Copy-paste-ready. Never commit these to the repo — they go in appsettings.Development.json (gitignored per the standard .NET convention) or in user-scoped dotnet secrets.
```json
{
  "ConfigDatabase": {
    "ConnectionString": "Server=localhost,14330;Database=OtOpcUaConfig_Dev;User Id=sa;Password=OtOpcUaDev_2026!;TrustServerCertificate=true;Encrypt=false;"
  },
  "Authentication": {
    "Ldap": {
      "Host": "localhost",
      "Port": 3893,
      "UseLdaps": false,
      "BindDn": "cn=admin,dc=otopcua,dc=local",
      "BindPassword": "<see glauth-otopcua.cfg — pending seeding>"
    }
  }
}
```
For xUnit test fixtures that need a throwaway DB per test run, build connection strings with Database=OtOpcUaConfig_Test_{timestamp} to avoid cross-run pollution.
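The per-run naming can be sketched as follows (illustrative Python stand-in — the real fixtures are C# xUnit; the `BASE` string and helper name here are assumptions that just mirror the dev connection string above):

```python
from datetime import datetime, timezone

# Dev base string from this doc, minus the Database= segment (illustrative).
BASE = ("Server=localhost,14330;User Id=sa;Password=OtOpcUaDev_2026!;"
        "TrustServerCertificate=true;Encrypt=false;")

def throwaway_db_connection_string(base: str = BASE) -> str:
    """Append a per-run database name so successive runs never collide."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{base}Database=OtOpcUaConfig_Test_{stamp};"
```

A fixture's teardown would then drop the timestamped database, so leftover DBs indicate an aborted run rather than cross-run pollution.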
Container management quick reference
```bash
# Start / stop the SQL Server container (survives reboots via Docker Desktop auto-start)
docker stop otopcua-mssql
docker start otopcua-mssql

# Logs (useful for diagnosing startup failures or login issues)
docker logs otopcua-mssql --tail 50

# Shell into the container (rarely needed; sqlcmd is the usual tool)
docker exec -it otopcua-mssql bash

# Query via sqlcmd inside the container (Git Bash needs MSYS_NO_PATHCONV=1 to avoid path mangling)
MSYS_NO_PATHCONV=1 docker exec otopcua-mssql /opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P "OtOpcUaDev_2026!" -C -Q "SELECT @@VERSION"

# Nuclear reset: drop the container + volume (destroys all DB data)
docker stop otopcua-mssql
docker rm otopcua-mssql
docker volume rm otopcua-mssql-data
# …then re-run the docker run command from Bootstrap Step 6
```
Credential rotation
Dev credentials in this inventory are convenience defaults, not secrets. Change them at will per developer — just update this doc + each developer's appsettings.Development.json. There is no shared secret store for dev.
Resource Inventory
A. Always-required (every developer + integration host)
| Resource | Purpose | Type | Default port | Default credentials | Owner |
|---|---|---|---|---|---|
| .NET 10 SDK | Build all .NET 10 x64 projects | OS install | n/a | n/a | Developer |
| .NET Framework 4.8 SDK + targeting pack | Build `Driver.Galaxy.Host` (Phase 2+) | Windows install | n/a | n/a | Developer |
| Visual Studio 2022 17.8+ or Rider 2024+ | IDE (any C# IDE works; these are the supported configs) | OS install | n/a | n/a | Developer |
| Git | Source control | OS install | n/a | n/a | Developer |
| PowerShell 7.4+ | Compliance scripts (`phase-N-compliance.ps1`) | OS install | n/a | n/a | Developer |
| Repo clones | `lmxopcua` (this repo), `scadalink-design` (UI/auth reference per memory file `scadalink_reference.md`), `3yearplan` (handoff + corrections) | Git clone | n/a | n/a | Developer |
B. Inner-loop dev (developer machines + PR-CI)
| Resource | Purpose | Type | Default port | Default credentials | Owner |
|---|---|---|---|---|---|
| SQL Server 2022 dev edition | Central config DB; integration tests against the Configuration project | Local install OR Docker container `mcr.microsoft.com/mssql/server:2022-latest` | 1433 default, or 14330 when a native MSSQL instance (e.g. the Galaxy `ZB` host) already occupies 1433 | `sa` / `OtOpcUaDev_2026!` (dev only — production uses Integrated Security or gMSA per decision #46) | Developer (per machine) |
| GLAuth (LDAP server) | Admin UI authentication tests; data-path ACL evaluation tests | Local binary at `C:\publish\glauth\` per existing CLAUDE.md | 3893 (LDAP) / 3894 (LDAPS) | Service principal `cn=admin,dc=otopcua,dc=local` / `OtOpcUaDev_2026!`; test users defined in the GLAuth config | Developer (per machine) |
| Local dev Galaxy (Aveva System Platform) | Galaxy driver tests; v1 IntegrationTests parity | Existing on dev box per CLAUDE.md | n/a (local COM) | Windows Auth | Developer (already present per project setup) |
C. Integration host (one dedicated Windows machine the team shares)
| Resource | Purpose | Type | Default port | Default credentials | Owner |
|---|---|---|---|---|---|
| Docker Desktop for Windows | Host for every driver test-fixture simulator (Modbus / AB CIP / S7 / OpcUaClient) + SQL Server | Install | n/a (Hyper-V required; not compatible with the TwinCAT runtime — see the TwinCAT row below for the workaround) | n/a | Integration host admin |
| Modbus fixture — `otopcua-pymodbus:3.13.0` | Modbus driver integration tests | Docker image (local build; see `tests/ZB.MOM.WW.OtOpcUa.Driver.Modbus.IntegrationTests/Docker/`); 4 compose profiles: standard / dl205 / mitsubishi / s7_1500 | 5020 (non-privileged) | n/a (no auth in protocol) | Developer (per machine) |
| AB CIP fixture — `otopcua-ab-server:libplctag-release` | AB CIP driver integration tests | Docker image (multi-stage build of libplctag's `ab_server` from source, pinned to the release tag; see `tests/ZB.MOM.WW.OtOpcUa.Driver.AbCip.IntegrationTests/Docker/`); 4 compose profiles: controllogix / compactlogix / micro800 / guardlogix | 44818 (CIP / EtherNet/IP) | n/a | Developer (per machine) |
| S7 fixture — `otopcua-python-snap7:1.0` | S7 driver integration tests | Docker image (local build, `python-snap7>=2.0`; see `tests/ZB.MOM.WW.OtOpcUa.Driver.S7.IntegrationTests/Docker/`); 1 compose profile: s7_1500 | 1102 (non-privileged; driver honours `S7DriverOptions.Port`) | n/a | Developer (per machine) |
| OPC UA Client fixture — `mcr.microsoft.com/iotedge/opc-plc:2.14.10` | OpcUaClient driver integration tests | Docker image (Microsoft-maintained, pinned; see `tests/ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient.IntegrationTests/Docker/`) | 50000 (OPC UA) | Anonymous (`--daa off`); auto-accept certs (`--aa`) | Developer (per machine) |
| TwinCAT XAR runtime VM | TwinCAT ADS testing (per `test-data-sources.md` §5; Beckhoff XAR cannot coexist with Hyper-V on the same OS) | Hyper-V VM with Windows + TwinCAT XAR installed under a 7-day renewable trial | 48898 (ADS over TCP) | TwinCAT default route credentials configured per Beckhoff docs | Integration host admin |
| FOCAS TCP stub (`Driver.Focas.TestStub`) | FOCAS functional testing (per `test-data-sources.md` §6) | Local .NET 10 console app from this repo | 8193 (FOCAS) | n/a | Developer / integration host (run on demand) |
| FOCAS FaultShim (`Driver.Focas.FaultShim`) | FOCAS native-fault injection (per `test-data-sources.md` §6) | Test-only native DLL named `Fwlib64.dll`, loaded via DLL search path in the test fixture | n/a (in-process) | n/a | Developer / integration host (test-only) |
Docker fixtures — quick reference
Every driver's integration-test simulator ships as a Docker image (or pulls one from MCR). Start the one you need, run `dotnet test`, stop it. Container lifecycle is always manual — fixtures TCP-probe at collection init and skip cleanly when nothing's running.
| Driver | Fixture image | Compose file | Bring up |
|---|---|---|---|
| Modbus | local-build `otopcua-pymodbus:3.13.0` | `tests/ZB.MOM.WW.OtOpcUa.Driver.Modbus.IntegrationTests/Docker/docker-compose.yml` | `docker compose -f <compose> --profile <standard\|dl205\|mitsubishi\|s7_1500> up -d` |
| AB CIP | local-build `otopcua-ab-server:libplctag-release` | `tests/ZB.MOM.WW.OtOpcUa.Driver.AbCip.IntegrationTests/Docker/docker-compose.yml` | `docker compose -f <compose> --profile <controllogix\|compactlogix\|micro800\|guardlogix> up -d` |
| S7 | local-build `otopcua-python-snap7:1.0` | `tests/ZB.MOM.WW.OtOpcUa.Driver.S7.IntegrationTests/Docker/docker-compose.yml` | `docker compose -f <compose> --profile s7_1500 up -d` |
| OpcUaClient | `mcr.microsoft.com/iotedge/opc-plc:2.14.10` (pinned) | `tests/ZB.MOM.WW.OtOpcUa.Driver.OpcUaClient.IntegrationTests/Docker/docker-compose.yml` | `docker compose -f <compose> up -d` |
First build of a local-build image takes 1–5 minutes; subsequent runs use the layer cache. `ab_server` is the slowest (its multi-stage build clones libplctag and compiles C). Stop with `docker compose -f <compose> --profile <…> down`.
Endpoint overrides — every fixture respects an env var to point at a real PLC instead of the simulator:

- `MODBUS_SIM_ENDPOINT` (default `localhost:5020`)
- `AB_SERVER_ENDPOINT` (no default; overrides the local container endpoint)
- `S7_SIM_ENDPOINT` (default `localhost:1102`)
- `OPCUA_SIM_ENDPOINT` (default `opc.tcp://localhost:50000`)
No native launchers — Docker is the only supported path for these fixtures. A fresh clone needs Docker Desktop and nothing else; fixture TCP probes skip tests cleanly when the container isn't running.
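The skip-don't-fail behaviour boils down to a single TCP probe at collection init; a minimal sketch (Python stand-in for the C# fixtures — the helper name is an assumption):

```python
import socket

def fixture_available(endpoint: str, timeout: float = 1.0) -> bool:
    """Probe once at collection init; False means the collection skips cleanly."""
    host, _, port = endpoint.rpartition(":")
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, timed out — all mean "skip"
        return False
```

The probe runs once per test collection, not per test, so a stopped container costs one connection attempt rather than a wall of failures.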
See each driver's docs/drivers/*-Test-Fixture.md for the full coverage map + gap inventory.
D. Cloud / external services
| Resource | Purpose | Type | Access | Owner |
|---|---|---|---|---|
| Gitea at `gitea.dohertylan.com` | Hosts the `lmxopcua`, `3yearplan`, `scadalink-design` repos | HTTPS git | Existing org credentials | Org IT |
| Anthropic API (for Codex adversarial reviews) | `/codex:adversarial-review` invocations during exit gates | HTTPS via the Codex companion script | API key in the developer's `~/.claude/...` config | Developer (per `codex:setup` skill) |
Network Topology (integration host)
```text
┌────────────────────────────────────────┐
│ Integration Host (Windows + Docker)    │
│                                        │
│ Docker Desktop (Linux containers):     │
│  ┌───────────────────────────────┐     │
│  │ oitc/modbus-server  :502/tcp  │     │
│  └───────────────────────────────┘     │
│                                        │
│ WSL2 (Snap7 + ab_server, separate      │
│ from Docker Desktop's HyperV):         │
│  ┌───────────────────────────────┐     │
│  │ snap7-server        :102/tcp  │     │
│  │ ab_server           :44818/tcp│     │
│  └───────────────────────────────┘     │
│                                        │
│ Hyper-V VM (Windows + TwinCAT XAR):    │
│  ┌───────────────────────────────┐     │
│  │ TwinCAT XAR         :48898    │     │
│  └───────────────────────────────┘     │
│                                        │
│ Native processes:                      │
│  ┌───────────────────────────────┐     │
│  │ ConsoleReferenceServer :62541 │     │
│  │ FOCAS TestStub         :8193  │     │
│  └───────────────────────────────┘     │
│                                        │
│ SQL Server 2022 (local install):       │
│  ┌───────────────────────────────┐     │
│  │ OtOpcUaConfig_Test  :1433     │     │
│  └───────────────────────────────┘     │
└────────────────────────────────────────┘
                 ▲
                 │ tests connect via the host's hostname or 127.0.0.1
                 │
┌────────────────────────────────────────┐
│ Developer / CI machine running         │
│ `dotnet test --filter Category=...`    │
└────────────────────────────────────────┘
```
Bootstrap Order — Inner-loop Developer Machine
Order matters because some installs have prerequisites and several need admin elevation (UAC). ~60–90 min total on a fresh Windows machine, including reboots.
Admin elevation appears at: WSL2 install (step 4a), Docker Desktop install (step 4b), and any wsl --install -d call. winget will prompt UAC interactively when these run; accept it. There is no fully-silent admin-free install path on Windows for Docker Desktop's prerequisites.
1. Install .NET 10 SDK (https://dotnet.microsoft.com/) — required to build anything

   ```powershell
   winget install --id Microsoft.DotNet.SDK.10 --accept-package-agreements --accept-source-agreements
   ```

2. Install .NET Framework 4.8 SDK + targeting pack — only needed when starting Phase 2 (Galaxy.Host); skip for Phase 0–1 if not yet there

   ```powershell
   winget install --id Microsoft.DotNet.Framework.DeveloperPack_4 --accept-package-agreements --accept-source-agreements
   ```

3. Install Git + PowerShell 7.4+

   ```powershell
   winget install --id Git.Git --accept-package-agreements --accept-source-agreements
   winget install --id Microsoft.PowerShell --accept-package-agreements --accept-source-agreements
   ```

4. Install Docker Desktop (with the WSL2 backend per decision #134, which leaves Hyper-V free for the future TwinCAT XAR VM):

   4a. Enable WSL2 — UAC required:

   ```powershell
   wsl --install
   ```

   Reboot when prompted. After reboot, the default Ubuntu distro launches and asks for a username/password — set them (these are WSL-internal, not used for Docker auth).

   Verify after reboot:

   ```powershell
   wsl --status
   wsl --list --verbose
   ```

   Expected: `Default Version: 2`, at least one distro (typically `Ubuntu`) with `STATE` `Running` or `Stopped`.

   4b. Install Docker Desktop — UAC required:

   ```powershell
   winget install --id Docker.DockerDesktop --accept-package-agreements --accept-source-agreements
   ```

   The installer adds you to the `docker-users` Windows group. Sign out and back in (or reboot) so the group membership takes effect.

   4c. Configure Docker Desktop — open it once after sign-in:

   - Settings → General: confirm "Use the WSL 2 based engine" is checked (decision #134 — coexists with future Hyper-V VMs)
   - Settings → General: confirm "Use Windows containers" is NOT checked (we use Linux containers for `mcr.microsoft.com/mssql/server`, `oitc/modbus-server`, etc.)
   - Settings → Resources → WSL Integration: enable for the default Ubuntu distro
   - (Optional, large fleets) Settings → Resources → Advanced: bump CPU / RAM allocation if you have headroom

   Verify:

   ```powershell
   docker --version
   docker ps
   ```

   Expected: version reported; `docker ps` returns an empty table (no containers running yet, but the daemon is reachable).

5. Clone repos:

   ```powershell
   git clone https://gitea.dohertylan.com/dohertj2/lmxopcua.git
   git clone https://gitea.dohertylan.com/dohertj2/scadalink-design.git
   git clone https://gitea.dohertylan.com/dohertj2/3yearplan.git
   ```

6. Start SQL Server (Linux container; runs in the WSL2 backend):

   ```powershell
   docker run --name otopcua-mssql `
     -e "ACCEPT_EULA=Y" `
     -e "MSSQL_SA_PASSWORD=OtOpcUaDev_2026!" `
     -p 14330:1433 `
     -v otopcua-mssql-data:/var/opt/mssql `
     -d mcr.microsoft.com/mssql/server:2022-latest
   ```

   The host port is 14330, not 1433, to coexist with the native MSSQL14 instance that hosts the Galaxy `ZB` DB on port 1433. Both the native instance and Docker's port-proxy will happily bind `0.0.0.0:1433`, but only one of them catches any given connection — which is effectively non-deterministic and produces confusing "Login failed for user 'sa'" errors when the native instance wins. Using 14330 eliminates the race entirely.

   The `-v otopcua-mssql-data:/var/opt/mssql` named volume preserves database files across container restarts and `docker rm` — drop it only if you want a strictly throwaway instance.

   Verify:

   ```powershell
   docker ps --filter name=otopcua-mssql
   docker exec -it otopcua-mssql /opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P "OtOpcUaDev_2026!" -C -Q "SELECT @@VERSION"
   ```

   Expected: container `STATUS` `Up`; `SELECT @@VERSION` returns `Microsoft SQL Server 2022 (...)`.

   To stop / start later: `docker stop otopcua-mssql` / `docker start otopcua-mssql`

7. Install GLAuth at `C:\publish\glauth\` per existing CLAUDE.md instructions; populate `glauth-otopcua.cfg` with the test users + groups (template in `docs/v2/dev-environment-glauth-config.md` — to be added in the setup task)

8. Install the EF Core CLI (used to apply migrations against the SQL Server container starting in Phase 1 Stream B):

   ```powershell
   dotnet tool install --global dotnet-ef --version 10.0.*
   ```

9. Run `dotnet restore` in the `lmxopcua` repo

10. Run `dotnet build ZB.MOM.WW.OtOpcUa.slnx` (post-Phase-0) or `ZB.MOM.WW.LmxOpcUa.slnx` (pre-Phase-0) — verifies the toolchain

11. Run `dotnet test` with the inner-loop filter — should pass on a fresh machine
Bootstrap Order — Integration Host
Order matters more here because of Hyper-V conflicts. ~half-day on a fresh machine.
- Install Windows Server 2022 or Windows 11 Pro (Hyper-V capable)
- Enable Hyper-V + WSL2
- Install Docker Desktop for Windows, configured to use the WSL2 backend (NOT the Hyper-V backend — leaves Hyper-V free for the TwinCAT XAR VM)
- Set up a WSL2 distro (Ubuntu 22.04 LTS) for native Linux binaries that conflict with Docker Desktop
- Pull / start the Modbus simulator:

  ```bash
  docker run -d --name modbus-sim -p 502:502 -v ${PWD}/modbus-config.yaml:/server_config.yaml oitc/modbus-server
  ```

- Build + start ab_server (in WSL2):

  ```bash
  git clone https://github.com/libplctag/libplctag
  cd libplctag/src/tests
  make ab_server
  ./ab_server --plc=ControlLogix --port=44818   # default tags loaded from a config file
  ```

- Build + start Snap7 Server (in WSL2):
  - Download Snap7 from https://snap7.sourceforge.net/
  - Build the example server; run on port 102 with the test DB layout from `test-data-sources.md` §4
- Set up the TwinCAT XAR VM:
  - Create a Hyper-V VM (Gen 2, Windows 11)
  - Install TwinCAT 3 XAE + XAR (download from Beckhoff, free for dev/test)
  - Activate the 7-day trial; document the rotation schedule
  - Configure ADS routes so the integration host can reach the VM
  - Deploy the test PLC project from `test-data-sources.md` §5 ("a tiny test project — `MAIN` (PLC code) + `GVL`")
- Build + start the OPC Foundation reference server:

  ```bash
  git clone https://github.com/OPCFoundation/UA-.NETStandard
  cd UA-.NETStandard/Applications/ConsoleReferenceServer
  dotnet run --port 62541
  ```

- Install SQL Server 2022 dev edition (or run the Docker container as on developer machines)
- Build + run the FOCAS TestStub (from this repo, post-Phase-5):

  ```bash
  dotnet run --project src/ZB.MOM.WW.OtOpcUa.Driver.Focas.TestStub -- --port 8193
  ```

- Verify by running `dotnet test --filter Category=Integration` from a developer machine pointed at the integration host
Credential Management
Dev environment defaults
The defaults in this doc are for dev environments only. They're documented here so a developer can stand up a working setup without hunting; they're not secret.
Production overrides
For any production deployment:
- SQL Server: Integrated Security with gMSA (decision #46) — never SQL login with shared password
- LDAP: production GLAuth or AD instance with proper service principal
- TwinCAT: paid license (per-runtime), not the 7-day trial
- All other services: deployment-team's credential management process; documented in deployment-guide.md (separate doc, post-v2.0)
Storage
For dev defaults:

- SQL Server SA password: stored in each developer's local `appsettings.Development.json` (gitignored)
- GLAuth bind DN/password: stored in `glauth-otopcua.cfg` (gitignored)
- Docker secrets / volumes: developer-local
For production:

- gMSA / cert-mapped principals — no passwords stored anywhere
- Per-NodeId credentials in the `ClusterNodeCredential` table (per decision #83)
- Admin app uses LDAP (no SQL credential at all on the user-facing side)
Test Data Seed
Each environment needs a baseline data set so cross-developer tests are reproducible. Lives in `tests/ZB.MOM.WW.OtOpcUa.IntegrationTests/SeedData/`:

- GLAuth users: `test-readonly@otopcua.local` (in `OtOpcUaReadOnly`), `test-operator@otopcua.local` (`OtOpcUaWriteOperate` + `OtOpcUaAlarmAck`), `test-fleetadmin@otopcua.local` (`OtOpcUaAdmins`)
- Central config DB: a seed cluster `TEST-CLUSTER-01` with 1 node + 1 namespace + 0 drivers (other tests add drivers)
- Modbus sim: YAML config preloading the addresses from `test-data-sources.md` §1 (HR 0–9 constants, ramp at HR 100, etc.)
- TwinCAT XAR: the test PLC project deployed; symbols match `test-data-sources.md` §5
- OPC Foundation reference server: starts with the built-in test address space; tests don't modify it

Seeds are idempotent (re-runnable) and gitignored where they contain credentials.
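The idempotency requirement amounts to keying every seed row on its natural key so a re-run is a no-op; a minimal sketch (sqlite3 stand-in for the real MSSQL seed — the table and column names are illustrative, not the actual schema):

```python
import sqlite3

def apply_seed(conn: sqlite3.Connection) -> None:
    """Re-runnable seed: the PRIMARY KEY turns a duplicate insert into a no-op."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS Cluster (Name TEXT PRIMARY KEY, Nodes INTEGER)")
    conn.execute(
        "INSERT OR IGNORE INTO Cluster (Name, Nodes) VALUES ('TEST-CLUSTER-01', 1)")
    conn.commit()
```

On SQL Server the same pattern would typically use `MERGE` or `IF NOT EXISTS … INSERT`; the point is that running the seed twice leaves exactly one `TEST-CLUSTER-01` row.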
Setup Plan (executable)
Step 1 — Inner-loop dev environment (each developer, ~1 day with documentation)
Owner: developer
Prerequisite: Bootstrap order steps 1–10 above (note: steps 4a, 4b, and any later wsl --install -d call require admin elevation / UAC interaction — there is no fully-silent admin-free install path on Windows for Docker Desktop's prerequisites)
Acceptance:

- `dotnet test ZB.MOM.WW.OtOpcUa.slnx` passes
- A test that touches the central config DB succeeds (proves SQL Server reachable)
- A test that authenticates against GLAuth succeeds (proves LDAP reachable)
- `docker ps --filter name=otopcua-mssql` shows the SQL Server container `STATUS Up`
Troubleshooting (common Windows install snags)
- `wsl --install` says "Windows Subsystem for Linux has no installed distributions" after first reboot — open a fresh PowerShell and run `wsl --install -d Ubuntu` (the `-d` form forces a distro install if the prereq-only install ran first).
- Docker Desktop install completes but `docker --version` reports "command not found" — `PATH` doesn't pick up the new Docker shims until a new shell is opened. Open a fresh PowerShell, or sign out/in, and retry.
- `docker ps` reports "permission denied" or "Cannot connect to the Docker daemon" — your user account isn't in the `docker-users` group yet. Sign out and back in (group membership is loaded at login). Verify with `whoami /groups | findstr docker-users`.
- Docker Desktop refuses to start with "WSL 2 installation is incomplete" — open the WSL2 kernel update from https://aka.ms/wsl2kernel, install, then restart Docker Desktop. (Modern `wsl --install` ships the kernel automatically; this is mostly a legacy problem.)
- SQL Server container starts but immediately exits — SA password complexity. The default `OtOpcUaDev_2026!` meets the requirement (≥8 chars, upper + lower + digit + symbol); if you change it, keep complexity. Check `docker logs otopcua-mssql` for the exact failure.
- `docker run` fails with "image platform does not match host platform" — your Docker is configured for Windows containers. Switch to Linux containers in the Docker Desktop tray menu ("Switch to Linux containers"), or recheck Settings → General per step 4c.
- Hyper-V conflict when later setting up the TwinCAT XAR VM — confirm Docker Desktop is on the WSL 2 backend, not the Hyper-V backend. The two coexist only when Docker uses WSL 2.
Step 2 — Integration host (one-time, ~1 week)
Owner: DevOps lead
Prerequisite: dedicated Windows machine, hardware specs ≥ 8 cores / 32 GB RAM / 500 GB SSD
Acceptance:

- Each simulator (Modbus, AB, S7, TwinCAT, OPC UA reference) responds to a probe from a developer machine
- A nightly CI job runs `dotnet test --filter Category=Integration` against the integration host and passes
- Service-account permissions reviewed by security lead
Step 3 — TwinCAT XAR VM trial rotation automation (one-time, half-day)
Owner: DevOps lead
Prerequisite: Step 2 complete
Acceptance:

- A scheduled task on the integration host either re-activates the 7-day trial automatically OR alerts the team 24h before expiry; cycle tested
Step 4 — Per-developer GLAuth config sync (recurring, when test users change)
Owner: developer (each)
Acceptance:

- A script in the repo (`scripts/sync-glauth-dev-config.ps1`) updates the local GLAuth config from a template; documented in CLAUDE.md
- Test users defined in the template work on every developer machine
Step 5 — Docker simulator config (per-developer, ~30 min)
Owner: developer (each)
Acceptance:

- The Modbus simulator container is reachable at `127.0.0.1:502` from the developer's test runner (only needed if the developer is debugging Modbus driver work; not required for Phase 0/1)
Step 6 — Codex companion setup (per-developer, ~5 min)
Owner: developer (each)
Acceptance:

- The `/codex:setup` skill confirms readiness; `/codex:adversarial-review` works against a small test diff
Operational Risks
| Risk | Mitigation |
|---|---|
| TwinCAT 7-day trial expires mid-CI run | Step 3 automation; alert before expiry; license budget approved as fallback for production-grade pre-release validation |
| Docker Desktop license terms change for org use | Track Docker pricing; budget approved or fall back to Podman if license becomes blocking |
| Integration host single point of failure | Document the setup so a second host can be provisioned in <2 days; test fixtures pin to a hostname so failover changes one DNS entry |
| GLAuth dev config drifts between developers | Sync script + template (Step 4) keep configs aligned; periodic review |
| Galaxy / MXAccess licensing for non-dev-machine | Galaxy stays on the dev machines that already have Aveva licenses; integration host does NOT run Galaxy (Galaxy.Host integration tests run on the dev box, not the shared host) |
| Long-lived dev env credentials in dev `appsettings.Development.json` | Gitignored; documented as dev-only; production never uses these |
Decisions to Add to plan.md
| # | Decision | Rationale |
|---|---|---|
| 133 | Two-tier dev environment: inner-loop (in-process simulators on developer machines) + integration (Docker / VM / native simulators on a single dedicated Windows host) | Per decision #99. Concrete inventory + setup plan in dev-environment.md |
| 134 | Docker Desktop with WSL2 backend (not Hyper-V backend) on integration host so TwinCAT XAR VM can run in Hyper-V alongside Docker | TwinCAT runtime cannot coexist with Hyper-V-mode Docker Desktop; WSL2 backend leaves Hyper-V free for the XAR VM. Documented operational constraint |
| 135 | TwinCAT XAR runs only in a dedicated VM on the integration host; developer machines do NOT run XAR locally | The 7-day trial reactivation needs centralized management; the VM is shared infrastructure |
| 136 | Galaxy / MXAccess testing happens on developer machines that have local Aveva installs, NOT on the shared integration host | Aveva licensing scoped to dev workstations; integration host doesn't carry the license. v1 IntegrationTests parity (Phase 2) runs on developer boxes. |
| 137 | Dev env credentials are documented openly in dev-environment.md; production credentials use Integrated Security / gMSA per decision #46 | Dev defaults are not secrets; they're convenience. Production never uses these values |