lmxopcua/CLAUDE.md
Joseph Doherty 77229dfaf3 chore: post-audit cleanup — gr/ relocated, scratch + PR-body snapshots removed
- gr/ folder moved to sibling repo at C:\Users\dohertj2\Desktop\graccess\gr;
  the SQL queries + DDL captures belong with the graccess CLI work, not
  with the OPC UA server. PR 7.2 retired direct Galaxy-DB access from this
  repo (mxaccessgw owns those queries server-side now).
- Drop the now-obsolete "Galaxy Repository Database" section in CLAUDE.md
  for the same reason — server no longer queries the DB directly.
- Delete root scratch files surfaced by the doc audit (runtimestatus.md,
  service_info.md) — abandoned plan + operational scratch.
- Delete docs/v2/implementation/pr-{1,2,4}-body.md — ephemeral PR-body
  snapshots from the v2-mxgw rollout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 09:36:13 -04:00

CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Goal

Build an OPC UA server (.NET 10) that exposes AVEVA System Platform (Wonderware) Galaxy tags. The server mirrors the Galaxy object hierarchy as an OPC UA address space, translating between contained-name browse paths and tag-name runtime references. Galaxy access flows through the in-process GalaxyDriver (src/ZB.MOM.WW.OtOpcUa.Driver.Galaxy/) talking gRPC to a separately installed mxaccessgw gateway process. The gateway owns the MXAccess COM bitness constraint (its worker is x86 net48); everything in this repo is .NET 10. PR 7.2 retired the legacy in-process Galaxy.Host / Galaxy.Proxy / Galaxy.Shared projects + the OtOpcUaGalaxyHost Windows service.

See lmx_mxgw.md for the migration design and docs/v2/Galaxy.Performance.md for the runtime perf surface (tracing, metrics, soak harness).

Architecture Overview

Data Flow

  1. Galaxy Repository DB (ZB) — SQL Server database holding the deployed object hierarchy and attribute definitions. The gateway's GalaxyRepositoryClient queries it server-side; the driver consumes the materialised hierarchy over gRPC through IGalaxyHierarchySource.
  2. MXAccess (via mxaccessgw) — Live read/write/subscribe over a gRPC session. The gateway owns the COM apartment + STA pump server-side; the driver speaks MxCommand / MxEvent protos exclusively.
  3. OPC UA Server — Exposes the hierarchy as browse nodes and attributes as variable nodes. Clients browse via contained names but reads/writes are translated to tag_name.AttributeName format for MXAccess.

Key Concept: Contained Name vs Tag Name

Galaxy objects have two names:

  • contained_name — human-readable name scoped to parent (used for OPC UA browse tree)
  • tag_name — globally unique system name (used for MXAccess read/write)

Example: browsing TestMachine_001/DelmiaReceiver/DownloadPath translates to MXAccess reference DelmiaReceiver_001.DownloadPath.
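The translation above can be sketched as a lookup plus string composition. This is a minimal illustration, not the server's actual code: the contained_name to tag_name table is invented sample data, and the real server resolves it from the materialised Galaxy hierarchy.

```shell
# Sketch of the browse-path -> MXAccess reference translation.
resolve_mx_ref() {
  local browse_path="$1"            # e.g. TestMachine_001/DelmiaReceiver/DownloadPath
  local attr="${browse_path##*/}"   # leaf segment = attribute name
  local parent="${browse_path%/*}"
  parent="${parent##*/}"            # owning object's contained_name
  case "$parent" in                 # contained_name -> tag_name (invented sample data)
    DelmiaReceiver) echo "DelmiaReceiver_001.${attr}" ;;
    *)              echo "UNRESOLVED" ;;
  esac
}

resolve_mx_ref "TestMachine_001/DelmiaReceiver/DownloadPath"
# -> DelmiaReceiver_001.DownloadPath
```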

Data Type Mapping

Galaxy mx_data_type values map to OPC UA types (Boolean, Int32, Float, Double, String, DateTime, etc.). Array attributes use ValueRank=1 with ArrayDimensions from the Galaxy attribute definition. The driver-side mapping lives in src/ZB.MOM.WW.OtOpcUa.Driver.Galaxy/Browse/DataTypeMap.cs.

Change Detection

DeployWatcher (src/ZB.MOM.WW.OtOpcUa.Driver.Galaxy/Browse/DeployWatcher.cs) polls the gateway's deploy-event signal and raises IRediscoverable.OnRediscoveryNeeded when the Galaxy redeploys. The server's DriverHost consumes the signal and rebuilds the address space.

mxaccessgw

The gateway lives in a sibling repo at c:\Users\dohertj2\Desktop\mxaccessgw\. See docs/v2/Galaxy.ParityRig.md for the gw setup recipe (build, API key provisioning via apikey create-key, env-var overrides for HTTP/2 cleartext + worker path). The gw's MXAccess Toolkit reference (its gateway.md) is the canonical MxAccess API doc; the standalone mxaccess_documentation.md previously kept in this repo was retired in PR 7.3.

Build Commands

dotnet restore ZB.MOM.WW.OtOpcUa.slnx
dotnet build ZB.MOM.WW.OtOpcUa.slnx
dotnet test ZB.MOM.WW.OtOpcUa.slnx                          # all tests
dotnet test tests/ZB.MOM.WW.OtOpcUa.Tests                    # unit tests only
dotnet test tests/ZB.MOM.WW.OtOpcUa.IntegrationTests         # integration tests only
dotnet test --filter "FullyQualifiedName~MyTestClass.MyMethod"  # single test

Docker Workflow (driver fixtures + central SQL Server)

Migrated 2026-04-28: Docker config + host moved off this dev VM (DESKTOP-6JL3KKO) onto the shared Linux Docker host (DOCKER, 10.100.0.35) so the dev VM could shed WSL2/Hyper-V and have its GPU re-attached via ESXi passthrough. Docker Desktop is no longer installed here. All checked-in appsettings.json defaults, fixture-class default endpoints, and e2e-config.sample.json were rewritten to target 10.100.0.35. The driver fixture compose files under tests/.../Docker/docker-compose.yml now carry a project: lmxopcua label on every service. See docs/v2/dev-environment.md for the full rewrite (header dated 2026-04-28).

Docker workloads run on a shared Linux host at 10.100.0.35 — not on this VM. Stacks live at /opt/otopcua-<driver>/ on the host and carry the project=lmxopcua label so they're discoverable via docker ps --filter label=project=lmxopcua.

docker -H ssh://... does NOT work from this VM. Windows OpenSSH ↔ docker.exe stdio bridging hangs (docker system dial-stdio runs server-side but no API data flows). Use the helper below — it SSHes into the docker host and runs docker compose server-side.

Use lmxopcua-fix.ps1 (in ~/bin) to control fixtures from this VM:

lmxopcua-fix ls                            # list all lmxopcua-tagged containers on the host
lmxopcua-fix up   modbus      standard     # bring a profile up
lmxopcua-fix up   abcip       controllogix
lmxopcua-fix up   s7          s7_1500
lmxopcua-fix up   opcuaclient                # single-service stack, no profile arg
lmxopcua-fix down modbus                   # tear stack down
lmxopcua-fix logs modbus
lmxopcua-fix sync modbus                   # rsync this repo's tests/.../Docker/ → /opt/otopcua-modbus/

sync is the deployment step. When you edit a fixture's compose file or Dockerfile under tests/.../Docker/, run lmxopcua-fix sync <driver> to push the changes to the docker host before bringing the stack up. The repo files are the source of truth; /opt/otopcua-<driver>/ is a mirrored deployment.

Endpoints (defaults already point at the docker host):

  • SQL Server (always-on): 10.100.0.35,14330 — used by appsettings.json for ConfigDb.
  • Modbus: 10.100.0.35:5020 (MODBUS_SIM_ENDPOINT)
  • AB CIP: 10.100.0.35:44818 (AB_SERVER_ENDPOINT)
  • S7: 10.100.0.35:1102 (S7_SIM_ENDPOINT)
  • OPC UA reference (opc-plc): opc.tcp://10.100.0.35:50000 (OPCUA_SIM_ENDPOINT)

Override any endpoint via its env var to point at a real PLC. The local OtOpcUa server runs on this VM at opc.tcp://localhost:4840; that one is not on the docker host.
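For example, overriding the Modbus fixture endpoint for a single test run might look like the following. The PLC address is a placeholder; MODBUS_SIM_ENDPOINT is the variable named above.

```shell
# Point the Modbus driver tests at a real PLC instead of the simulator.
# 192.168.10.20:502 is an illustrative address, not a real device.
export MODBUS_SIM_ENDPOINT="192.168.10.20:502"

# dotnet test tests/ZB.MOM.WW.OtOpcUa.IntegrationTests   # would now use the override
echo "$MODBUS_SIM_ENDPOINT"
```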

See docs/v2/dev-environment.md for the full inventory and rationale.

Build & Runtime Constraints

  • Language: C#, .NET 10, AnyCPU. The MXAccess COM bitness constraint is owned by the mxaccessgw worker (x86 net48), not by anything in this repo.
  • The gateway's MXAccess worker requires a deployed ArchestrA Platform on the machine running the gateway. The OtOpcUa server itself does not.

Transport Security

The server supports configurable OPC UA transport security via the Security section in appsettings.json. Phase 1 profiles: None (default), Basic256Sha256-Sign, Basic256Sha256-SignAndEncrypt. Security profiles are resolved by SecurityProfileResolver at startup. The server certificate is always created even for None-only deployments because UserName token encryption depends on it. See docs/security.md for the full guide.
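A sketch of what the Security section might look like. Only the section name and the three Phase 1 profile identifiers come from the text above; the key layout ("Profiles" as an array) is an assumption, so check the actual appsettings.json schema in docs/security.md.

```json
{
  "Security": {
    "Profiles": [
      "None",
      "Basic256Sha256-Sign",
      "Basic256Sha256-SignAndEncrypt"
    ]
  }
}
```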

Redundancy

The server supports non-transparent warm/hot redundancy via the Redundancy section in appsettings.json. Two instances share the same Galaxy DB and the same mxaccessgw (under distinct MxAccess.ClientName values) but have unique ApplicationUri values. Each exposes RedundancySupport, ServerUriArray, and a dynamic ServiceLevel based on role and runtime health. The primary advertises a higher ServiceLevel than the secondary. See docs/Redundancy.md for the full guide.
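As a hedged sketch, one instance's configuration might pair the sections like this. The requirement for a distinct MxAccess.ClientName and a unique ApplicationUri comes from the text above, but every key name and value here is illustrative; consult docs/Redundancy.md for the real schema.

```json
{
  "Redundancy": {
    "Role": "Primary"
  },
  "MxAccess": {
    "ClientName": "OtOpcUa_A"
  },
  "ApplicationUri": "urn:hostA:OtOpcUa"
}
```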

LDAP Authentication

The server uses LDAP-based user authentication via the Authentication.Ldap section in appsettings.json. When enabled, credentials are validated by LDAP bind against a GLAuth server (installed at C:\publish\glauth\), and LDAP group membership maps to OPC UA permissions: ReadOnly (browse/read), WriteOperate (write FreeAccess/Operate attributes), WriteTune (write Tune attributes), WriteConfigure (write Configure attributes), AlarmAck (alarm acknowledgment). LdapUserAuthenticator (src/ZB.MOM.WW.OtOpcUa.Server/Security/LdapUserAuthenticator.cs) implements IUserAuthenticator. See docs/Security.md for the full guide and C:\publish\glauth\auth.md for LDAP user/group reference.

Library Preferences

  • Logging: Serilog with rolling daily file sink
  • Unit tests: xUnit + Shouldly for assertions
  • Service hosting (Server, Admin): .NET generic host with AddWindowsService (decision #30 — replaced TopShelf in v2; see src/ZB.MOM.WW.OtOpcUa.Server/OpcUaServerService.cs)
  • OPC UA: OPC Foundation UA .NET Standard stack (https://github.com/opcfoundation/ua-.netstandard) — NuGet: OPCFoundation.NetStandard.Opc.Ua.Server

OPC UA .NET Standard Documentation

Use the DeepWiki MCP (mcp__deepwiki) to query documentation for the OPC UA .NET Standard stack: https://deepwiki.com/OPCFoundation/UA-.NETStandard. Tools: read_wiki_structure, read_wiki_contents, and ask_question with repo OPCFoundation/UA-.NETStandard.

Testing

Use the Client CLI at src/ZB.MOM.WW.OtOpcUa.Client.CLI/ for manual testing against the running OPC UA server. Supports connect, read, write, browse, subscribe, historyread, alarms, and redundancy commands. See docs/Client.CLI.md for full documentation.

dotnet run --project src/ZB.MOM.WW.OtOpcUa.Client.CLI -- connect -u opc.tcp://localhost:4840
dotnet run --project src/ZB.MOM.WW.OtOpcUa.Client.CLI -- browse -u opc.tcp://localhost:4840 -r -d 3
dotnet run --project src/ZB.MOM.WW.OtOpcUa.Client.CLI -- read -u opc.tcp://localhost:4840 -n "ns=2;s=SomeNode"
dotnet run --project src/ZB.MOM.WW.OtOpcUa.Client.CLI -- subscribe -u opc.tcp://localhost:4840 -n "ns=2;s=SomeNode" -i 500