Document LmxProxy protocol in DCL, strengthen plan generation traceability guards, and add UI constraints
- Replace "custom protocol" placeholder with full LmxProxy details (gRPC transport, SDK API mapping, session management, keep-alive, TLS, batch ops)
- Add bullet-level requirement traceability, design constraint traceability (52 KDD + 6 CD), split-section tracking, and a post-generation orphan check to the plan framework
- Resolve Q9 (LmxProxy), Q11 (REST test server), Q13 (solo dev), Q14 (self-test), Q15 (Machine Data DB out of scope)
- Set Central UI constraints: Blazor Server + Bootstrap only, no heavy frameworks, custom components, clean corporate design
@@ -10,7 +10,7 @@ Site clusters only. Central does not interact with machines directly.

## Responsibilities

- Manage data connections defined at the site level (OPC UA servers, LmxProxy endpoints).
- Establish and maintain connections to data sources based on deployed instance configurations.
- Subscribe to tag paths as requested by Instance Actors (based on attribute data source references in the flattened configuration).
- Deliver tag value updates to the requesting Instance Actors.

@@ -19,7 +19,7 @@ Site clusters only. Central does not interact with machines directly.

## Common Interface

Both OPC UA and LmxProxy implement the same interface:

```
IDataConnection
@@ -34,15 +34,65 @@ IDataConnection

Additional protocols can be added by implementing this interface.

### Concrete Type Mappings

| IDataConnection | OPC UA SDK | LmxProxy SDK (`LmxProxyClient`) |
|---|---|---|
| `Connect()` | OPC UA session establishment | `ConnectAsync()` → gRPC `ConnectRequest`, server returns `SessionId` |
| `Disconnect()` | Close OPC UA session | `DisconnectAsync()` → gRPC `DisconnectRequest` |
| `Subscribe(tagPath, callback)` | OPC UA monitored items | `SubscribeAsync(addresses, onUpdate)` → server-streaming gRPC (`IAsyncEnumerable<VtqMessage>`) |
| `Unsubscribe(id)` | Remove monitored item | `ISubscription.DisposeAsync()` (cancels the streaming RPC) |
| `Read(tagPath)` | OPC UA Read | `ReadAsync(address)` → `Vtq` |
| `Write(tagPath, value)` | OPC UA Write | `WriteAsync(address, value)` |
| `Status` | OPC UA session state | `IsConnected` property + keep-alive heartbeat (30-second interval via `GetConnectionStateAsync`) |

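The mapping table above implies a thin adapter. The following is a minimal sketch, assuming an `IDataConnection` shape with async members and taking the SDK surface (`LmxProxyClient`, `ISubscription`, `Vtq`) at the names used in this document; signatures are illustrative, not the SDK's verbatim API:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch: wrapping the LmxProxy SDK behind IDataConnection, following the
// mapping table above. Member names on both sides are assumptions drawn
// from this document, not verified SDK signatures.
public sealed class LmxProxyDataConnection : IDataConnection
{
    private readonly LmxProxyClient _client;   // SDK client from the NuGet package
    private readonly ConcurrentDictionary<Guid, ISubscription> _subs = new();

    public LmxProxyDataConnection(LmxProxyClient client) => _client = client;

    // Session management lives inside the SDK; the SessionId returned by
    // ConnectAsync is reused for all subsequent operations.
    public Task ConnectAsync() => _client.ConnectAsync();
    public Task DisconnectAsync() => _client.DisconnectAsync();

    public async Task<Guid> SubscribeAsync(string tagPath, Action<Vtq> callback)
    {
        // Server-streaming gRPC under the hood (IAsyncEnumerable<VtqMessage>).
        ISubscription sub = await _client.SubscribeAsync(new[] { tagPath }, callback);
        var id = Guid.NewGuid();
        _subs[id] = sub;
        return id;
    }

    public async Task UnsubscribeAsync(Guid id)
    {
        if (_subs.TryRemove(id, out var sub))
            await sub.DisposeAsync();          // cancels the streaming RPC
    }

    public Task<Vtq> ReadAsync(string tagPath) => _client.ReadAsync(tagPath);
    public Task WriteAsync(string tagPath, object? value) => _client.WriteAsync(tagPath, value);

    // Kept current by the SDK's 30-second keep-alive heartbeat.
    public bool IsConnected => _client.IsConnected;
}
```

The adapter stays stateless beyond subscription bookkeeping; retry, keep-alive, and metrics remain inside the SDK.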
### Common Value Type

Both protocols produce the same value tuple consumed by Instance Actors:

| Concept | ScadaLink Design | LmxProxy SDK (`Vtq`) |
|---|---|---|
| Value container | `{value, quality, timestamp}` | `Vtq(Value, Timestamp, Quality)` — readonly record struct |
| Quality | good / bad / uncertain | `Quality` enum (byte, OPC UA compatible: Good=0xC0, Bad=0x00, Uncertain=0x40) |
| Timestamp | UTC | `DateTime` (UTC) |
| Value type | object | `object?` (parsed: double, bool, string) |

## Supported Protocols

### OPC UA

- Standard OPC UA client implementation.
- Supports subscriptions (monitored items) and read/write operations.

### LmxProxy (Custom Protocol)

LmxProxy is a gRPC-based protocol for communicating with LMX data servers. An existing client SDK (`LmxProxyClient` NuGet package) provides a production-ready implementation.

**Transport & Connection**:

- gRPC over HTTP/2, using protobuf-net code-first contracts (service: `scada.ScadaService`).
- Default port: **5050**.
- Session-based: `ConnectAsync` returns a `SessionId` used for all subsequent operations.
- Keep-alive: 30-second heartbeat via `GetConnectionStateAsync`. On failure, the client marks itself disconnected and disposes its subscriptions.

**Authentication & TLS**:

- API key-based authentication (sent in `ConnectRequest`).
- Full TLS support: TLS 1.2/1.3, mutual TLS (client cert + key in PEM), custom CA trust, self-signed cert allowance for dev.

**Subscriptions**:

- Server-streaming gRPC (`IAsyncEnumerable<VtqMessage>`).
- Configurable sampling interval (default: 1000 ms; 0 = on-change).
- Wire format: `VtqMessage { Tag, Value (string), TimestampUtcTicks (long), Quality (string: "Good"/"Uncertain"/"Bad") }`.
- Subscriptions are disposed via `ISubscription.DisposeAsync()`.

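The wire format above is string-typed. A hypothetical mapping from `VtqMessage` to the typed `Vtq` record, assuming the shapes given in the tables in this document (the SDK's own converter may differ):

```csharp
using System;
using System.Globalization;

// Sketch: converting the string-typed wire message into the typed Vtq record.
// VtqMessage/Vtq/Quality mirror the tables in this document; treat them as
// illustrative, not the SDK's exact definitions.
public enum Quality : byte { Bad = 0x00, Uncertain = 0x40, Good = 0xC0 }

public readonly record struct Vtq(object? Value, DateTime Timestamp, Quality Quality);

public sealed record VtqMessage(string Tag, string Value, long TimestampUtcTicks, string Quality);

public static class VtqMapper
{
    public static Vtq ToVtq(VtqMessage msg) => new(
        ParseValue(msg.Value),
        new DateTime(msg.TimestampUtcTicks, DateTimeKind.Utc),  // ticks are UTC
        Enum.Parse<Quality>(msg.Quality));                      // "Good" / "Uncertain" / "Bad"

    // Values arrive as strings and are parsed to double, bool, or left as string.
    private static object? ParseValue(string raw) =>
        double.TryParse(raw, NumberStyles.Float, CultureInfo.InvariantCulture, out var d) ? d
        : bool.TryParse(raw, out var b) ? b
        : raw;
}
```

Parsing with `CultureInfo.InvariantCulture` avoids locale-dependent decimal separators on the wire.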
**Additional Capabilities (beyond IDataConnection)**:

- `ReadBatchAsync(addresses)` — bulk read in a single gRPC call.
- `WriteBatchAsync(values)` — bulk write in a single gRPC call.
- `WriteBatchAndWaitAsync(values, flagAddress, flagValue, responseAddress, responseValue, timeout)` — write-and-poll pattern for handshake protocols (default timeout: 30 s, poll interval: 100 ms).
- Built-in retry policy via Polly: exponential backoff (base delay × 2^attempt), configurable max attempts (default: 3), applied to reads. Transient errors: `Unavailable`, `DeadlineExceeded`, `ResourceExhausted`, `Aborted`.
- Operation metrics: count, errors, p95/p99 latency (ring buffer of the last 1000 samples per operation).
- Correlation ID propagation for distributed tracing (configurable header name).
- DI integration: `AddLmxProxyClient(IConfiguration)` binds to the `"LmxProxy"` config section in `appsettings.json`.

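For orientation, an `appsettings.json` section of that shape might look as follows. Only the section name, default port, keep-alive interval, sampling default, and retry defaults come from this document; every key name here is an assumed illustration, not the SDK's actual binding schema:

```json
{
  "LmxProxy": {
    "Host": "lmx-server.local",
    "Port": 5050,
    "ApiKey": "<secret>",
    "KeepAliveSeconds": 30,
    "DefaultSamplingIntervalMs": 1000,
    "Retry": { "MaxAttempts": 3, "BaseDelayMs": 200 },
    "Tls": {
      "Enabled": true,
      "ClientCertPemPath": "certs/client.pem",
      "ClientKeyPemPath": "certs/client.key",
      "AllowSelfSigned": false
    }
  }
}
```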
**SDK Reference**: The client SDK source is at `LmxProxyClient` in the ScadaBridge repository. The DCL's LmxProxy adapter wraps this SDK behind the `IDataConnection` interface.
## Subscription Management
@@ -77,12 +127,14 @@ Each data connection is managed by a dedicated connection actor that uses the Ak
This pattern ensures no messages are lost during connection transitions and is the standard Akka.NET approach for actors with I/O lifecycle dependencies.
**LmxProxy-specific notes**: The LmxProxy connection actor holds the `SessionId` returned by `ConnectAsync` and passes it to all subsequent operations. On entering the **Connected** state, the actor starts the 30-second keep-alive timer. Subscriptions use server-streaming gRPC — the actor processes the `IAsyncEnumerable<VtqMessage>` stream and forwards updates to Instance Actors. On keep-alive failure, the actor transitions to **Reconnecting** and the client automatically disposes active subscriptions.
## Connection Lifecycle & Reconnection
The DCL manages connection lifecycle automatically:
1. **Connection drop detection**: When a connection to a data source is lost, the DCL immediately pushes a value update with quality `bad` for **every tag subscribed on that connection**. Instance Actors and their downstream consumers (alarms, scripts checking quality) see the staleness immediately.
2. **Auto-reconnect with fixed interval**: The DCL retries the connection at a configurable fixed interval (e.g., every 5 seconds). The retry interval is defined **per data connection**. This is consistent with the fixed-interval retry philosophy used throughout the system. **Note on LmxProxy**: The LmxProxy SDK includes its own retry policy (exponential backoff via Polly) for individual operations (reads). The DCL's fixed-interval reconnect owns **connection-level** recovery (re-establishing the gRPC session after a keep-alive failure or disconnect). The SDK's retry policy handles **operation-level** transient failures within an active session. These are complementary — the DCL does not disable the SDK's retry policy.
3. **Connection state transitions**: The DCL tracks each connection's state as `connected`, `disconnected`, or `reconnecting`. All transitions are logged to Site Event Logging.
4. **Transparent re-subscribe**: On successful reconnection, the DCL automatically re-establishes all previously active subscriptions for that connection. Instance Actors require no action — they simply see quality return to `good` as fresh values arrive from restored subscriptions.
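
Steps 1 through 4 can be sketched as the connection actor's recovery routine. This is a sketch under assumed names (`PublishBadQuality`, the `IDataConnection` members, the subscription bookkeeping), not the DCL implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Sketch of the connection-level recovery described above. Operation-level
// retries (Polly, inside the SDK) stay enabled; this owns session recovery.
public sealed class ConnectionSupervisor
{
    private readonly TimeSpan _retryInterval = TimeSpan.FromSeconds(5); // per-connection setting
    private readonly List<string> _activeTagPaths = new();

    public async Task OnConnectionLostAsync(IDataConnection conn, CancellationToken ct)
    {
        // 1. Drop detection: immediately mark every subscribed tag stale so
        //    downstream consumers (alarms, quality-checking scripts) see it.
        foreach (var tagPath in _activeTagPaths)
            PublishBadQuality(tagPath);

        // 2. Auto-reconnect at a fixed interval (state: reconnecting).
        while (!ct.IsCancellationRequested)
        {
            try
            {
                await conn.ConnectAsync();          // state: reconnecting -> connected
                break;
            }
            catch
            {
                await Task.Delay(_retryInterval, ct); // retry every 5 s
            }
        }

        // 4. Transparent re-subscribe: Instance Actors take no action; quality
        //    returns to "good" as fresh values arrive on restored streams.
        foreach (var tagPath in _activeTagPaths)
            await conn.SubscribeAsync(tagPath, OnTagUpdate);
    }

    private void PublishBadQuality(string tagPath) { /* push {value, bad, now} to subscribers */ }
    private void OnTagUpdate(Vtq update) { /* forward to Instance Actors */ }
}
```

Step 3 (state-transition logging to Site Event Logging) would hook into the same transitions and is omitted here for brevity.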